Science topic

Image Segmentation - Science topic

Explore the latest questions and answers in Image Segmentation, and find Image Segmentation experts.
Questions related to Image Segmentation
  • asked a question related to Image Segmentation
Question
1 answer
I am experimenting with retinal optical coherence tomography. However, the region of interest, named 'IRF', has a very small area, and I am using Dice loss for segmentation with U-Nets.
However, I am not getting satisfactory results, as the input images are noisy and the ROI is very small. Can anyone suggest a suitable loss for this kind of challenge?
Relevant answer
Answer
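For small, noisy regions such as IRF, a Tversky-style loss that weights false negatives more heavily than false positives often works better than plain Dice, and its focal variant further emphasizes hard examples. A minimal NumPy sketch of the idea (not tied to any particular U-Net framework; the parameter values are illustrative, not tuned):

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.3, beta=0.7, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: beta > alpha penalizes false negatives more,
    which helps with very small regions of interest such as IRF.
    gamma < 1 focuses training on hard, poorly segmented examples."""
    y_true = y_true.ravel().astype(float)
    y_pred = y_pred.ravel().astype(float)
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1.0 - y_pred))
    fp = np.sum((1.0 - y_true) * y_pred)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky) ** gamma

# Tiny ROI in a 64x64 mask, mimicking a small IRF region.
mask = np.zeros((64, 64))
mask[30:34, 30:34] = 1.0
loss_perfect = focal_tversky_loss(mask, mask)        # near 0 for a perfect match
loss_disjoint = focal_tversky_loss(mask, 1.0 - mask) # near 1 for no overlap
```

In practice you would express the same formula in your framework's tensor ops so it stays differentiable, and tune alpha, beta, and gamma on a validation split.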
  • asked a question related to Image Segmentation
Question
6 answers
Is there any post-hoc, easily applicable explainable-AI tool for detections carried out by complex CNN-based architectures, e.g. Mask R-CNN?
Relevant answer
Answer
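One fully model-agnostic post-hoc option is occlusion sensitivity: repeatedly mask part of the input and measure how much the detector's score drops. It works with any black-box model, including Mask R-CNN, if you wrap a detection confidence in a scoring function. A NumPy sketch with a toy score function standing in for the real network:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, stride=8, fill=0.0):
    """Model-agnostic saliency: slide an occluding patch over the image
    and record how much the detection score drops at each location.
    `score_fn` is a stand-in for e.g. a Mask R-CNN instance's confidence."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // stride, w // stride))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - score_fn(occluded)  # large drop = important region
    return heat

# Toy "detector": its score is the mean intensity of one fixed box.
score = lambda img: img[16:24, 16:24].mean()
img = np.ones((32, 32))
heat = occlusion_map(img, score)
# The hottest cell coincides with the box the toy detector looks at.
```

The same loop applies unchanged to a real detector: replace `score` with a function that runs inference and returns the confidence of the detection you want to explain.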
  • asked a question related to Image Segmentation
Question
2 answers
I'm working with a radar image and using the SNIC algorithm to segment it in Google Earth Engine. From what I have read, the best result can be achieved by viewing the region of interest and changing specific parameter values; am I right?
I read the article about the algorithm, but it is still not very clear which parameters are better to pin and which to change depending on the results of each test performed.
How to set the optimal values of the parameters of the SNIC algorithm?
Thanks in advance
Relevant answer
Answer
You can follow the following steps:
ee.Algorithms.Image.Segmentation.SNIC(image, size, compactness, connectivity, neighborhoodSize, seeds)
  • image (Image): The input image for clustering.
  • size (Integer, default: 5): The superpixel seed location spacing, in pixels. If a 'seeds' image is provided, no grid is produced.
  • compactness (Float): Compactness factor. Larger values cause clusters to be more compact (square). Setting this to 0 disables spatial distance weighting.
  • connectivity (Integer, default: 8): Connectivity. Either 4 or 8.
  • neighborhoodSize (Integer, default: null): Tile neighborhood size (to avoid tile boundary artifacts). Defaults to 2 * size.
  • seeds (Image, default: null): If provided, any non-zero valued pixels are used as seed locations. Pixels that touch (as specified by 'connectivity') are considered to belong to the same cluster.
  • (from the Google Earth Engine tutorial)
  • asked a question related to Image Segmentation
Question
8 answers
Is there anyone who would like to do research together in medical image segmentation?
Relevant answer
Answer
Medical image segmentation is the task of segmenting objects of interest in a medical image.
Regards,
Shafagat
  • asked a question related to Image Segmentation
Question
4 answers
Hello dear researchers
How to do the watershed algorithm in eCognition developer software?
Do I need to add a plugin to watershed segmentation? How? There is no watershed segmentation plugin in my software.
Thank you all
Relevant answer
Answer
The algorithm works on a grayscale image. During the successive flooding of the grey-value relief, watersheds with adjacent catchment basins are constructed. A set of markers, pixels where the flooding shall start, is chosen, and each is given a different label.
Regards,
Shafagat
  • asked a question related to Image Segmentation
Question
7 answers
Hi, I'm trying to implement domain-adaptation segmentation from the DeepGlobe dataset to the Vaihingen dataset.
But the two class sets differ, so to align the classes I have to merge the 'Roads' and 'Buildings' labels into one label in the Vaihingen ground-truth dataset.
I have to assign the same label, (0, 255, 0), to Roads and Buildings so they are treated as the same class.
Please suggest a way to merge the classes and assign them the same label.
Is there any tool to achieve this, e.g. QGIS or ArcGIS? Since I'm new to this, kindly tell me the steps.
Relevant answer
Answer
Follow these steps:
1. Click Modify in the Features group on the Edit tab.
2. Expand Construct and then choose Merge.
3. Select the Existing Feature option.
4. Click the Select button.
5. Modify the selection in the window by adding or removing features.
6. Configure the merged feature's attribute fields and field values.
7. Click Merge.
Kind Regards
Qamar Ul Islam
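If you prefer to do the merge programmatically rather than in a GIS, recoloring the label rasters is a few lines of NumPy. The class colors below are assumptions for illustration; check your dataset's legend:

```python
import numpy as np

# Hypothetical class colors; replace with your dataset's actual legend.
ROAD = (255, 255, 255)
BUILDING = (0, 0, 255)
MERGED = (0, 255, 0)   # the single class both should map to

def merge_classes(label_img, src_colors, dst_color):
    """Recolor every pixel matching any source class to the merged class."""
    out = label_img.copy()
    for c in src_colors:
        mask = np.all(label_img == np.array(c, dtype=label_img.dtype), axis=-1)
        out[mask] = dst_color
    return out

# Tiny demo raster: row 0 is road, row 1 is building, the rest background.
labels = np.zeros((4, 4, 3), dtype=np.uint8)
labels[0] = ROAD
labels[1] = BUILDING
merged = merge_classes(labels, [ROAD, BUILDING], MERGED)
```

You would load each ground-truth tile (e.g. with rasterio or PIL), apply the function, and write it back out before training.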
  • asked a question related to Image Segmentation
Question
3 answers
For my master's thesis, I'm going to automate landfill and landslide detection from satellite images in Python using U-Net semantic segmentation and TensorFlow, with a supervised approach, so I need to train the model with labeled images. In your opinion, would I be able to extract those training images, together with their labels, directly from GEE (Google Earth Engine) so that they are usable in TensorFlow? There are some public datasets for training deep learning approaches; do you know of any datasets suitable for remote sensing?
Any advice would be appreciated, thanks in advance.
Relevant answer
  • asked a question related to Image Segmentation
Question
2 answers
I have searched for the REVIEW database but cannot find it for download. Besides, the VAMPIRE dataset is not a public database, so I sent a VAMPIRE non-commercial license request to use the annotation tool, but they did not answer.
Could anybody send me the datasets?
Relevant answer
Answer
So thanks Mr.
J. Rafiee
  • asked a question related to Image Segmentation
Question
7 answers
I am researching handwriting analysis using image processing techniques.
Relevant answer
Answer
It depends on the application. In some cases, you may need to do that. Check this paper; it will help you understand this.
  • asked a question related to Image Segmentation
Question
3 answers
Can anybody recommend a tool that could extract (segment) the pores (lineolae) from the following image of a diatom valve?
I mean an ImageJ or Fiji plugin, or any other software that can solve this task.
Relevant answer
Answer
First of all, diatoms look amazing! If you happen to have any pretty pictures you'd be happy to share, I'm looking for a new desktop wallpaper :)
To answer your question: if you are looking for a reasonably easy but robust approach, you could try the Trainable Weka Segmentation plugin (https://imagej.net/plugins/tws/), which uses machine learning to extract the relevant parts of your image. You use a set of training images that you annotate yourself to train the classifier, and then you can apply the classifier to a comparable set of images.
Hope this helps
  • asked a question related to Image Segmentation
Question
3 answers
Hi,
In my research, I have created a new way of weak edge enhancement. I wanted to try my method on the image dataset to compare it with the active contour philosophy.
So, I was looking for images with masks, as shown in the below paper.
If you can help me to get this data, it would be a great help.
Thanks and Regards,
Gunjan Naik
Relevant answer
Answer
The best way to obtain the dataset is to request it from the authors.
  • asked a question related to Image Segmentation
Question
4 answers
I need to know: should I calculate the above metrics by evaluating every available trained weight file on all the test data, or only for the best weight file I chose as my final model on the complete test set? The reason for asking is that, in the reference below, the author of the repository calculates both metrics by taking every weight file over the complete test set.
Please refer to line 152.
  • asked a question related to Image Segmentation
Question
6 answers
I need to know whether I should pay attention to the number of labels or the number of images to increase model performance. In my case, a single image can contain multiple annotated labels. I know that increasing the number of training images raises the model's accuracy, but should I focus on maximizing the number of labels in the dataset or the number of images to get better accuracy?
Ex:
Case 1:
1 image contains 4 annotated labels(2 for ClassA , 2 for ClassB)
|
|
Total: 60 images
Case 2:
image_b_1 contains 1 annotated label(1 for ClassA)
image_b_2 contains 1 annotated label(1 for ClassB)
|
|
Total: 200 images
Which case will give the maximum accuracy results during the training?
Relevant answer
Answer
Your approach is OK. I would also suggest focusing on model selection (an appropriate statistical model). You can gradually enhance the accuracy as more data becomes available and labeled. A clustering model, e.g. the von Mises-Fisher distribution on the unit hypersphere, may be one initial choice.
  • asked a question related to Image Segmentation
Question
8 answers
I need to calculate the accuracy, precision, recall, specificity, and F1 score for my Mask R-CNN model. Hence, I hope to calculate the confusion matrix for the whole dataset first to get the TP, FP, TN, and FN values. But I noticed that almost all the available solutions for calculating the confusion matrix only output the TP, FP, and FN values. So how can I calculate metrics like accuracy and specificity, which include TN as a parameter? Should I consider TN to be 0 during the calculations?
Relevant answer
Answer
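One common convention for segmentation-style evaluation is to define TN at the pixel level: everything that is neither TP, FP, nor FN. Setting TN = 0 would distort accuracy and specificity. A plain-Python sketch (the counts below are illustrative):

```python
def metrics_from_counts(tp, fp, fn, tn):
    """Standard metrics from confusion-matrix counts, guarding against
    division by zero for degenerate cases."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, specificity, f1

def tn_from_pixels(n_pixels, tp, fp, fn):
    """Pixel-level TN for one image: everything that is neither TP, FP nor FN."""
    return n_pixels - tp - fp - fn

# Illustrative counts for one 512x512 image.
tn = tn_from_pixels(512 * 512, tp=900, fp=100, fn=50)
acc, prec, rec, spec, f1 = metrics_from_counts(900, 100, 50, tn)
```

Note that with pixel-level TN, specificity is usually close to 1 for small objects, which is why many detection papers report only precision/recall-based metrics.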
  • asked a question related to Image Segmentation
Question
11 answers
Dear colleagues, I am working on a project on object detection with bounding-box regression (on Keras, TensorFlow), but I can't find decent sources and code samples. My task involves 200 classes, and each picture can contain from 10 to 20 classes that need to be detected. It is a medical task: more precisely, the recognition of types of medical blood tests.
I found some works that are close to mine, but I want to implement my task for the Russian language and on the Keras & TensorFlow framework. I am sincerely grateful for any advice and support!
Relevant answer
Answer
  • asked a question related to Image Segmentation
Question
7 answers
I am interested to acquire Z-stacks via brightfield microscopy of spheroids comprised of Head and Neck cancer cells. My objective is to acquire sphericity and volume on day 0 of the experiment (72h after seeding into ultra-low attachment 96-well plates), on day 1 (24h after treatment with drug of interest), day 2 (48h after treatment) and day 3 (72h after treatment). Due to several time points, it is important that the spheroids must not be fixed for sphericity and volume measurement. After some literature research I was not able to find a microscope model, where I can put my 96-well plate for acquiring z-stacks.
Maybe someone here has encountered a similar problem and could help me; I would be very thankful for any help in this matter. I have already found the software I would need for the measurement (arivis Vision 4D) but still do not know which microscope model I should use to acquire the images. Is it even possible to acquire z-stacks of spheroids without putting them on a slide?
Relevant answer
Answer
Hi Sam
I am not sure you can successfully define the volume of a spheroid using brightfield imaging; I'd prefer fluorescence microscopy and, depending on the sample size required, an optical sectioning capability.
For the small sample size, I suppose Fahimeh's suggestion works. For the larger ones, where you keep the sample in multiwell plates, most high-content screening setups that have confocal capability will be suitable, with the better imaging result in the lower part of the object towards the objective on the inverted setup (depending on size you may capture more or less of the total, but after 72h you might only get a very small part of the control).
There are also solutions providing very low phototoxicity based on light-sheet microscopy; I am not sure if you can purchase all of them yet (e.g. oblique plane microscopy, SCAPE), but commercially available light-sheet microscopes exist (e.g. from Zeiss), though you might need to establish a different sample mounting method. A commercially available solution for multiplexed 3D light-sheet imaging has been developed by Luxendo, now Bruker.
There is a lot to think about, and much depends on your assay size, the resolution you require, and your financial resources. Don't take my suggestions as complete.
Best
Heiko
  • asked a question related to Image Segmentation
Question
3 answers
I am working on medical image segmentation.
Can anyone help me? I want to understand the working principle of the U-Net.
Relevant answer
Answer
thank you so much for your response Amin Honarmandi Shandiz
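At its core, the U-Net is an encoder that halves spatial resolution stage by stage, a decoder that doubles it back, and skip connections that concatenate each encoder stage onto the matching decoder stage so fine spatial detail survives. A shape-only NumPy sketch of that idea (pooling and nearest-neighbour upsampling stand in for the learned convolutions; this is not a trainable network):

```python
import numpy as np

def pool(x):
    """2x2 max-pool: stand-in for a downsampling encoder stage."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling: stand-in for a decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.default_rng(0).random((64, 64))   # input "feature map"

# Encoder: progressively halve resolution, keeping each stage for the skips.
e1 = x                # 64x64
e2 = pool(e1)         # 32x32
e3 = pool(e2)         # 16x16  (the "bottleneck")

# Decoder: upsample, then concatenate the matching encoder stage (the skip
# connection), so detail lost by pooling can be recovered.
d2 = np.stack([upsample(e3), e2])           # two "channels" of 32x32
d1 = np.stack([upsample(d2.mean(0)), e1])   # two "channels" of 64x64
```

In the real U-Net, each stage is a pair of convolutions, the channel count doubles on the way down and halves on the way up, and a final 1x1 convolution maps to per-pixel class scores.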
  • asked a question related to Image Segmentation
Question
5 answers
Hello,
Has anyone already worked with an MR image dataset? If so, is there a model to remove motion artifacts from MR images that contain them? What should we do if we have an MR image with motion artifacts? Please give me your suggestions on whether it is possible to remove artifacts once the scan has been produced.
Thanks in advance,
Dhanunjaya, Mitta
Relevant answer
Answer
Interesting
  • asked a question related to Image Segmentation
Question
3 answers
Hello everyone. Thank you for considering this question.
I have some tiff image stacks from MicroCT. I am trying to isolate porosity using a variety of tools available to me including FIJI ImageJ, Avizo, and Python: OpenCV and SciKit Image packages. Most of my X-ray stacks are easily segmented using the histogram, but these high resolution stacks are particularly difficult. Whilst edges are quite clearly defined, pore space and material solid are within identical intensity ranges, therefore using histograms to threshold is producing large pores as solid, or on the other side of the threshold, large pores within the solid material.
As preparation of these image stacks, I have used a median filter to preserve edges and then attempted a variety of thresholding techniques, all of which produce the errors mentioned above. AI is obviously the next step, and I have used TensorFlow in Python to write my own U-Net, but unfortunately I cannot find a suitable thresholded microCT dataset to train on. This left me in a catch-22: in order to use AI to threshold my dataset, I needed to threshold my dataset to train the AI, which would in turn threshold my already thresholded dataset.
I have included an example from the stack below. Please could anyone suggest an alternative method? As a last resort, I plan to use the histogram to threshold and then correct the pores by painting them in image by image. Time consuming, but at this stage, I am feeling like it is my only option.
Any help or suggestions welcome.
Relevant answer
Answer
Hi Ben,
I see two possibilities to solve your problem:
  1. You could use marker-based watershed segmentation instead of thresholding. If you use the gradient image as height map, edges in your data will be important, not absolute grey-values. You would need to set markers for your two materials (material and pore space). You could use thresholding for that. The markers for the pore-space you could put at the very low values, which are also occurring at the edges of the large pores, and the markers for the material you can set to the very high values. Or -- in Avizo -- you could use "2D Histogram Segmentation" for setting the markers and doing the watershed segmentation all within one module.
  2. The second option would be the better one in my opinion: your problem arises from "phase contrast". If you have access to the projection images before reconstruction of the 3D data, you could apply a "single-distance phase-retrieval" algorithm and use the resulting images for reconstruction. A list of available software packages can be found, for instance, in the introduction of this paper: https://arxiv.org/abs/2005.03660v2.
Kind regards,
Andreas
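Option 1 above can be sketched with SciPy's IFT-based watershed: flooding starts from the markers and the regions meet at the bright "edge" values. A toy example, with a synthetic one-pixel ring standing in for the gradient image:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic "gradient" image: a bright ring marks the pore boundary,
# everything else (pore interior and surrounding material) is dark.
grad = np.zeros((7, 7), dtype=np.uint8)
grad[1:6, 1:6] = 1
grad[2:5, 2:5] = 0   # interior back to 0, leaving a one-pixel ring

# Markers: label 1 = material (outside), label 2 = pore (inside), 0 = unknown.
markers = np.zeros((7, 7), dtype=np.int8)
markers[0, 0] = 1
markers[3, 3] = 2

# Flooding from both seeds meets at the bright ring, so the inside of the
# ring is assigned to marker 2 and the outside to marker 1.
labels = ndi.watershed_ift(grad, markers)
```

On real data you would use a gradient-magnitude image (e.g. from a Sobel filter) as `grad` and place the markers via conservative thresholds, exactly as described in option 1.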
  • asked a question related to Image Segmentation
Question
9 answers
Hi
What are the deep learning techniques for medical image segmentation?
Can anyone help me?
thanks
Relevant answer
Answer
J. Rafiee , Hao Liu, Shafagat Mahmudova thanks a lot for your help
  • asked a question related to Image Segmentation
Question
6 answers
Hi everyone,
I am relatively new to fuzzy-based segmentation. I have read a few articles in this field and came across a few terms I should ask for clarification on. The first term is fuzzy c-means, and another is fuzzy connectedness (relative connectedness and absolute connectedness). It would be great if any experts could give me some understandable insight into these approaches. Thanks in advance for your inputs.
Relevant answer
Answer
This is also a presentation of Fuzzy connectedness
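For intuition, fuzzy c-means differs from hard k-means in that each pixel receives a graded membership in every cluster rather than a single label. A compact NumPy sketch on 1-D intensities (the fuzzifier m and iteration count are illustrative):

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D data x (e.g. pixel intensities).
    Returns cluster centers and the membership matrix U (n x c),
    where U[i, j] is the degree to which sample i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        um = u ** m                              # fuzzified memberships
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))             # closer center -> higher membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated intensity populations.
x = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
centers, u = fuzzy_cmeans(x)
```

Fuzzy connectedness is a different idea: it assigns each pixel the strength of its best path to a seed, where a path is only as strong as its weakest link; relative connectedness compares those strengths between competing seeds.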
  • asked a question related to Image Segmentation
Question
9 answers
Hello everyone,
I need to annotate MRI images (in .nii.gz format). Each image includes several slices. What I need is an annotation tool that can annotate the area of interest and propagate it across all slices (considering that the location of the object changes between slices).
Thank you all in advance.
Relevant answer
Answer
I would recommend looking at these two tools:
  • ITK-SNAP (http://www.itksnap.org): This is a relatively simple tool to use for segmenting images.
  • 3D Slicer (https://www.slicer.org/): This is a very versatile tool with an active community and a sizeable selection of plug-ins.
  • asked a question related to Image Segmentation
Question
5 answers
I have a 3D image of size 100x100x100. I have features (feature dimension 25) extracted for each pixel by considering a small patch around it. So the input to the 1D CNN is 100x100x100x25, reshaped to 1000000x25 in the form n x f, where n is the number of samples (in this case, pixels) and f is the feature dimension. Is a CNN ideal for this scenario, or is a different deep learning model necessary?
  • asked a question related to Image Segmentation
Question
1 answer
Hello!
I've stained some brightfield Arginase-1 Hematoxylin slides, but I'm having a hard time finding any computed-histology predicates for evaluating the intensity of the cytoplasm versus the nuclei, as everything seems to be done manually.
I've attached some 20x images of the slides. But I'm concerned about edge detection of the cells and haven't had much luck with segmentation in ImageJ; I lose a lot of edges in between some of the cells.
  • Does anyone have any other software or ImageJ plugins you'd recommend?
  • Any specific computed histology guides or methods of hepatocyte specific antigens or something similar?
  • Any other guidance?
Thank you for your responses.
Relevant answer
Answer
You might find CellProfiler useful. Agreed, the cell borders are very faint. If you exclude nuclei and quantify staining within cells, you might not need to segment individual cells. Good luck.
  • asked a question related to Image Segmentation
Question
34 answers
Good morning, everyone. As part of my research work, I designed a network that extracts the leaf region from real field images. When I searched for performance evaluation metrics for segmentation, I found a lot of metrics. Here is the list:
1. Similarity Index = 2*TP/(2*TP+FP+FN)
2. Correct Detection Ratio = TP/(TP+FN)
3. Segmentation errors (OSE, USE, TSE)
4. Hausdorff Distance
5. Average Surface distance
6. Accuracy = (TP+TN)/(FN+FP+TP+TN);
7. Recall = TP/(TP+FN);
8. Precision = TP/(TP+FP);
9. Fmeasure = 2*TP/(2*TP+FP+FN);
10. MCC = (TP*TN-FP*FN)/sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN));
11. Dice = 2*TP/(2*TP+FP+FN);
12. Jaccard = Dice/(2-Dice);
13. Specificity = TN/(TN+FP);
14. Sensitivity = TP/(TP+FN);
Please suggest which performance evaluation metrics are best suited for my work. Thank you.
Relevant answer
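Several of the listed metrics can be computed directly from a predicted and a ground-truth binary mask. Note that the Similarity Index, F-measure, and Dice entries in the list are the same quantity, and Jaccard follows from Dice. A NumPy sketch:

```python
import numpy as np

def mask_metrics(gt, pred):
    """Dice, Jaccard, and MCC from two binary masks of equal shape."""
    gt, pred = gt.astype(bool).ravel(), pred.astype(bool).ravel()
    tp = np.sum(gt & pred)
    tn = np.sum(~gt & ~pred)
    fp = np.sum(~gt & pred)
    fn = np.sum(gt & ~pred)
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = dice / (2 - dice)   # identical to TP / (TP + FP + FN)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return dice, jaccard, mcc

# Two partially overlapping 4x4 squares in an 8x8 frame.
gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True
dice, jaccard, mcc = mask_metrics(gt, pred)
```

For leaf segmentation against cluttered backgrounds, Dice/Jaccard plus a boundary metric (Hausdorff or average surface distance) is a common, well-rounded combination.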
  • asked a question related to Image Segmentation
Question
4 answers
What are new/novel objective functions for optimal multilevel-thresholding image segmentation, other than Kapur's and Otsu's objective functions?
Relevant answer
Answer
  • asked a question related to Image Segmentation
Question
4 answers
Dear All,
Can anyone suggest a few datasets for histopathological breast cancer image segmentation?
Thanks
Relevant answer
Answer
Thanks Elhadad
  • asked a question related to Image Segmentation
Question
7 answers
Dear community, I am currently working on emotion recognition. As a first step, I'm trying to extract features. While checking some resources, I found that they used the SEED dataset, which contains EEG signals of 15 subjects recorded while the subjects were watching emotional film clips. Each subject was asked to carry out the experiment in 3 sessions, so there are 45 experiments in this dataset in total. Different film clips (positive, neutral, and negative emotions) were chosen to achieve the highest match across participants. The length of each film clip is about 4 minutes.
The EEG signals of each subject were recorded as separate files containing the name of the subject and the date. These files contain a preprocessed, down-sampled, and segmented version of the EEG data. The data was down-sampled to 200 Hz, and a bandpass frequency filter from 0-75 Hz was applied. The EEG segments associated with every movie were extracted. There are a total of 45 .mat files, one for each experiment; every person carried out the experiment three times within a week. Every subject file includes 16 arrays: 15 arrays contain the preprocessed and segmented EEG data of the 15 trials in one experiment, and an array named LABELS contains the corresponding emotional labels (-1 for negative, 0 for neutral, and +1 for positive).
I found that they loaded each class separately (negative, neutral, positive), fixed the length of each signal at 4096 and the number of signals per class at 100, and fixed the number of features extracted from wavelet packet decomposition at 83. My question is: why did they select exactly 83, 4096, and 100?
I know that my question is a bit long but I tried to explain clearly the situation , I appreciate your help thank you .
Relevant answer
Answer
As far as the problem is concerned, when DWT is executed as a feature extraction method, lots of wavelet coefficients are generated as the decomposition level increases. Therefore, choosing the necessary features from DWT coefficients becomes crucial for the effectiveness and conciseness of the classifier. Generally, this issue can be solved by manual selection of the features. However, to determine the practicability of DWT an active feature selection (ASF) strategy for DWT coefficients can be adapted. There are two phases contained in the ASF process. In phase 1, to reduce and provide a uniform feature dimensionality, we use relative power ratio (RPR) to pre-process each DWT coefficient obtained from digital responses. Further, in phase 2, the optimal DWT coefficients for classification with an automatic search method are identified to ensure that the outcomes of the whole searching process are the most useful features for the classification. Through the above phases, the original DWT features are refined and selected according to their RPR values. Concise features are finally achieved in an active manner with no human designation.
  • asked a question related to Image Segmentation
Question
7 answers
What is the best machine learning method to segment and extract the length, angle, and size of collagen bundles from images of the human cornea?
Relevant answer
Answer
It will depend on your database. You can start with traditional computer vision, extracting features such as filters, LBP, SURF, etc., and if the result is not what you expected, you can try methods that require more processing, such as artificial neural networks and, finally, deep learning, if you have a robust database and a GPU.
  • asked a question related to Image Segmentation
Question
10 answers
This research is about applying PSO-based segmentation to an image. The main idea is to find the best value of the radius of the object in an image after PSO is applied. Is there any specific fitness function for image segmentation to be applied in the PSO algorithm?
Relevant answer
Answer
Can any of you suggest links for PSO using Python?
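A common choice of fitness is simply a thresholding criterion evaluated at the particle's position. Here is a NumPy sketch of PSO maximizing Otsu's between-class variance over a 1-D threshold (the inertia and acceleration coefficients are illustrative, not tuned):

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu's criterion, used here as the PSO fitness (to be maximized)."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    i = np.arange(len(hist))
    mu0 = (i[:t] * p[:t]).sum() / w0
    mu1 = (i[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold(hist, n_particles=20, iters=60, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(1, 255, n_particles)     # each particle is a threshold
    vel = np.zeros(n_particles)
    fit = lambda x: between_class_variance(hist, int(np.clip(x, 1, 255)))
    pbest = pos.copy()
    pbest_f = np.array([fit(x) for x in pos])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, 255)
        f = np.array([fit(x) for x in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()]
    return int(gbest)

# Synthetic bimodal histogram; the optimum lies between the two modes.
hist = np.zeros(256)
hist[30:70] = 50
hist[150:230] = 40
t = pso_threshold(hist)
```

For the radius-fitting problem in the question, the same loop applies: make each particle a candidate radius (or circle parameters) and replace the fitness with a measure of fit between the candidate circle and the image edges.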
  • asked a question related to Image Segmentation
Question
15 answers
How do I get started with the implementation of label propagation (multi-atlas based image segmentation) using sparse coding? The same has been implemented in the paper.
Are there any resources I can refer to that have implemented this approach? Can somebody guide me on this?
Relevant answer
Answer
Thank you for the reply. I did go through the research paper you mentioned. I found the explanation complicated, and the paper also seems to implement a different method. I am interested in implementing label propagation using sparse coding like the one implemented in the paper,
and a basic approach would be helpful for me to get started, as I am very new to this topic.
Regarding the NITRC website, it's very resourceful; thank you for recommending it. However, I wasn't able to find a similar method on that website. Could you maybe provide a link to it?
  • asked a question related to Image Segmentation
Question
7 answers
I am interested in using MATLAB to extract texture features using LBP for each pixel in an image and clustering them using the K-means algorithm. Anyone with relevant knowledge or MATLAB code, please assist.
Relevant answer
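The question asks for MATLAB, but the pipeline (per-pixel LBP codes followed by k-means) is easy to sketch. Here is a simplified illustration in Python/NumPy with a basic 8-neighbour LBP and a small 1-D k-means; the same steps translate directly to MATLAB:

```python
import numpy as np

def lbp(image):
    """Basic 8-neighbour LBP code for every interior pixel: set a bit
    wherever the neighbour is >= the centre pixel."""
    c = image[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = image[1 + dy:image.shape[0] - 1 + dy, 1 + dx:image.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def kmeans_1d(x, k=2, iters=20):
    """Tiny 1-D k-means with deterministic quantile initialization."""
    centers = np.quantile(x, np.linspace(0, 1, k)).astype(float)
    for _ in range(iters):
        lab = np.abs(x[:, None] - centers[None, :]).argmin(1)
        for j in range(k):
            if (lab == j).any():
                centers[j] = x[lab == j].mean()
    return lab, centers

# Demo: a flat half (code 255 everywhere) next to a noisy, textured half.
img = np.zeros((12, 12), dtype=int)
img[:, :6] = 50
img[:, 6:] = np.random.default_rng(0).integers(0, 256, (12, 6))
codes = lbp(img)
lab, centers = kmeans_1d(codes.ravel().astype(float), k=2)
```

In practice you would cluster LBP histograms computed over local windows rather than raw per-pixel codes, which is what makes LBP a texture descriptor rather than an edge detector.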
  • asked a question related to Image Segmentation
Question
13 answers
I plan to implement 3D image segmentation. I have features extracted from an unsupervised feature extraction method; the features have a lower dimension than the input image. Which segmentation method suits this use case best? I plan to implement it in Python.
Relevant answer
Answer
  • asked a question related to Image Segmentation
Question
25 answers
I want to detect circular-like (blob) objects in an image.
1) What are all the possible features which can be used to recognize these objects?
2) How can I find the eigenvalues for each component in the image alone?
Any MATLAB code would be really helpful.
Relevant answer
Answer
  • asked a question related to Image Segmentation
Question
12 answers
Hi, guys
My duty now is processing soil CT images. I've read a lot of papers related to segmentation, and I found that kriging segmentation seems to be a common and effective method. In papers, a software package named 3DMA-Rock is widely used for kriging segmentation. I googled the software; it seems to be unavailable now.
So, does anyone have the code for kriging segmentation? Could you share it with me? Or do you have any other software suggestions for kriging segmentation?
Thanks a lot!
Mengying
Relevant answer
Answer
Read these articles:
1-Advantages of multi-region kriging over bi-region techniques for computed tomography-scan segmentation
2-X-ray µCT: how soil pore space description can be altered by image processing
  • asked a question related to Image Segmentation
Question
7 answers
Which method is better for saving time and improving performance in image segmentation: active contour or level set?
Relevant answer
Answer
Active contour is a boundary-based segmentation: the snake is perhaps the classical model. The level set is also a contour model, based on curve evolution.
  • asked a question related to Image Segmentation
Question
14 answers
I am trying to implement a CNN (U-Net) for semantic segmentation of similar large grayscale ~4600x4600 px medical images. The area I want to segment is the empty space (gap) between a round object in the middle of the picture and an outer, ring-shaped object that contains the round object. This gap is "thin" and makes up only a small proportion of the whole image. In my problem, having a small gap is good, since then the two objects have a good connection to each other.
My questions:
  1. Is it possible to feed such large images on a CNN? Downscaling the images seems a bad idea since the gap is thin and most of the relevant information will be lost. From what I've seen CNN are trained on much smaller images.
  2. Since the problem is symmetric in some sense is it a good idea to split the image in 4 (or more) smaller images?
  3. Are CNNs able to detect such small regions in such a huge image? From what I've seen in the literature, mostly larger objects are segmented such as organs etc.
I would appreciate some ideas and help. It is my first post on the site so hopefully I didn't make any mistakes.
Cheers
Relevant answer
Answer
The Fully Convolutional Network (FCN) implementation is the most popular architecture for semantic segmentation. FCNs have only convolutional and pooling layers, which gives them the ability to make predictions on arbitrary-sized inputs. Another option is R-CNN (Regions with CNN features), which performs semantic segmentation based on object detection results. R-CNN first utilizes selective search to extract a large quantity of object proposals and then computes CNN features for each of them. Finally, it classifies each region using class-specific linear SVMs. Compared with traditional CNN structures, which are mainly intended for image classification, R-CNN can address more complicated tasks, such as object detection and image segmentation, and it has even become an important basis for both fields.
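Regarding questions 1 and 2: a standard way to handle ~4600x4600 px inputs without destructive downscaling is to predict on overlapping tiles and stitch the outputs, averaging the overlaps to avoid seams. A NumPy sketch, with an identity function standing in for the trained network:

```python
import numpy as np

def predict_tiled(image, predict, tile=512, overlap=64):
    """Run `predict` (e.g. a U-Net forward pass; stand-in here) on
    overlapping tiles and average overlapping predictions back into a
    full-size output map."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    weight = np.zeros((h, w), dtype=float)
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y2, x2 = min(y + tile, h), min(x + tile, w)
            out[y:y2, x:x2] += predict(image[y:y2, x:x2])
            weight[y:y2, x:x2] += 1.0
    return out / np.maximum(weight, 1)

# An identity "network" reconstructs the image exactly, showing that the
# tiling and stitching introduce no seams.
img = np.random.default_rng(0).random((1000, 1000))
rec = predict_tiled(img, lambda t: t, tile=256, overlap=32)
```

Combined with a class-imbalance-aware loss (e.g. Dice or Tversky) for the thin gap, this usually works better than splitting the image into a fixed 2x2 grid, because the overlap prevents the gap from being cut at tile borders.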
  • asked a question related to Image Segmentation
Question
12 answers
Hi everyone. Can anyone please suggest the best segmentation algorithm for segmenting the leaf region from real field images? Consider that real field images contain many unwanted objects, such as other leaves, branches, and human parts (fingers, shoes, etc.). And also, kindly provide MATLAB code for this if possible. Thank you.
  • asked a question related to Image Segmentation
Question
4 answers
Dear all, I am trying to implement pose normalization for face images using piece-wise affine warping. I am using delaunayTriangulation to construct a face mesh based on 68 detected landmarks for two images: one with a frontal face and the other with a non-frontal face. The resulting meshes do not have the same number of triangles and also have triangles that differ in direction and location.
Could anyone help please? Thanks.
------------------------------------------------------------
% Construct mesh for frontal face image
filename1 = '0409';
img1 = imread([filename1 '.bmp']);
figure, imshow(img1); hold on;
pts1 = load([filename1 '.mat']); % Load 68-landmarks
DT1 = delaunayTriangulation(pts1.pts);
triplot(DT1,'cyan');
% Construct mesh for non-frontal face image
filename2 = '0411';
img2 = imread([filename2 '.bmp']);
figure, imshow(img2); hold on;
pts2 = load([filename2 '.mat']); % Load 68-landmarks
DT2 = delaunayTriangulation(pts2.pts);
triplot(DT2,'cyan');
Relevant answer
Answer
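The meshes differ because each point set is triangulated independently. The usual fix is to triangulate only one landmark set and reuse its connectivity for the other, so corresponding triangles match one-to-one; in the MATLAB code above, that means taking DT1.ConnectivityList and using it to index pts2 (e.g. triplot(DT1.ConnectivityList, pts2(:,1), pts2(:,2))) instead of building DT2. The same idea sketched in Python with SciPy, using random points as stand-ins for the 68 landmarks:

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-ins for the two 68-landmark sets (frontal and non-frontal).
pts_frontal = np.random.default_rng(1).random((68, 2))
pts_profile = pts_frontal + 0.01 * np.random.default_rng(2).random((68, 2))

# Key step: triangulate ONE set and reuse its connectivity for the other.
tri = Delaunay(pts_frontal)
triangles = tri.simplices        # (n_triangles, 3) indices into the 68 points

# The same index triples now address corresponding triangles in both sets,
# which is exactly what the per-triangle affine warp needs.
frontal_tris = pts_frontal[triangles]
profile_tris = pts_profile[triangles]
assert frontal_tris.shape == profile_tris.shape
```

With matched triangle pairs, each affine map follows from the three vertex correspondences of that pair.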
  • asked a question related to Image Segmentation
Question
14 answers
We are looking for a free software that will allow us to detect and help to classify specific tumors in jaw.
Relevant answer
Answer
  • asked a question related to Image Segmentation
Question
3 answers
I am trying to calculate area fractions of elastin and collagen in histological images stained using Weigert's elastic stain and a collagen counterstain (I am actually uncertain whether it is merely eosin or some other counterstain, like van Gieson). Attached is an example region, where elastin is "black"/dark violet and collagen is pink/magenta.
My idea is to apply color deconvolution and train a simple two-layer perceptron to do pixel classification. The results appear to be acceptable in terms of accuracy, but without proper validation, it is difficult to say for sure. However, manually segmenting even a small area is an encumbering task that would take a disproportionately long time to finish.
My question is whether it is possible to validate the results in another way without creating ground truths for comparison?
Relevant answer
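For reference, color deconvolution itself is a small computation: convert RGB to optical density (Beer-Lambert) and multiply by the inverse of the stain matrix. A NumPy sketch; the stain vectors below are illustrative values in the style of Ruifrok-Johnston and should be replaced with ones measured from your own single-stain control slides:

```python
import numpy as np

# Illustrative stain OD vectors (rows); measure real ones from controls.
stains = np.array([[0.65, 0.70, 0.29],    # dark stain (elastin-like)
                   [0.07, 0.99, 0.11],    # counterstain (collagen-like)
                   [0.27, 0.57, 0.78]])   # residual channel
stains = stains / np.linalg.norm(stains, axis=1, keepdims=True)

def deconvolve(rgb):
    """Ruifrok-Johnston style unmixing: convert to optical density and
    project onto the inverse of the stain matrix to get per-pixel
    stain concentrations."""
    od = -np.log10(np.clip(rgb, 1, 255) / 255.0)       # Beer-Lambert OD
    return od.reshape(-1, 3) @ np.linalg.inv(stains)   # (n_pixels, 3)

# A pixel made purely of the first stain unmixes to (c, ~0, ~0).
pure = 255 * 10 ** (-0.8 * stains[0])   # synthesize OD = 0.8 * stain 1
conc = deconvolve(pure.reshape(1, 1, 3))
```

On the validation question: without ground truth, one pragmatic check is consistency, e.g. comparing area fractions across serial sections or against a second, independent method (thresholding the deconvolved channels) rather than against manual segmentation.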
  • asked a question related to Image Segmentation
Question
14 answers
Greetings
I need MATLAB code for blood vessel segmentation.
The link shows an example image from which the vessels should be extracted.
Thank you in advance.
  • asked a question related to Image Segmentation
Question
16 answers
When I started reviewing the literature, I observed that most authors used pre-trained CNN models for the identification and classification of plant leaf diseases. Why have none of the authors concentrated on designing a customised CNN model for this problem? Is there any particular reason?
Relevant answer
Answer
Making a robust CNN model requires a large number of pre-labeled data sets, which are quite difficult to obtain when tackling a practical problem. Moreover, training a model from scratch is time-consuming and requires large storage space, which is costly in general.
You are right that a pre-trained model is highly constrained by its parameters and size, but this has little impact once it is fine-tuned on a large data set.
  • asked a question related to Image Segmentation
Question
16 answers
Is there any open-source software that can be used to segment image data (DICOM), where the segmented data can be exported as volume or mesh data for further FE analysis?
Relevant answer
Answer
  • asked a question related to Image Segmentation
Question
10 answers
We invite you to apply for the PhD position on the topic "Automatic identification of structures in biomedical mega-images" using the following link: https://tinyurl.com/rqh6qf9
It will be a collaboration between myself and Prof. Ben Giepmans (UMCG), and you will have the opportunity to work on large biomedical images in nanotomy generated by a state-of-the-art electron microscope (EM).
Relevant answer
Answer
The Ph.D. scholarship posting can no longer be found. I would be glad if you could let me know of any similar position.
  • asked a question related to Image Segmentation
Question
7 answers
I'm looking for a program that will do automatic approximate image segmentation for prostate T2 MRI images.
The program should be:
  1. Fast
  2. Automatic without the need for user supplied landmarks
  3. Robust in the sense of always producing an answer in a reasonable timeframe
  4. Reliable in that it will always approximately outline the prostate
  5. Freely available
The program does not need to be:
  1. Accurate. I only need an approximate answer, accuracy is not as important as reliability
  2. Open source
Sample Prostate MRI Image with segmentation attached
Relevant answer
Answer
We have developed an online segmentation tool with implemented AI-models, one of which can segment the prostate gland automatically. Find it here: www.medseg.ai.
A somewhat late answer but perhaps useful for future projects!
  • asked a question related to Image Segmentation
Question
3 answers
Hi, I'm looking to label some objects (irregularly shaped cellular aggregates) to train a network to identify, count, and measure these objects in images. I'm hoping someone knows of a tool similar to MATLAB's Image Labeler, where you can label objects by pixel or with a paintbrush-style tool. labelImg seems to be the go-to for many people, but it appears to only support drawing a bounding box around the object rather than custom polygons or pixel-wise labels. A big plus would be if I could label the objects in a handful of images and have the tool try to label similar structures in the rest of the training images, again like MATLAB's labeler.
Python- or Julia-based tools are preferred but not a hard requirement. Does anyone know of a tool/package like this?
Relevant answer
Answer
For generating data for image segmentation I often use LabelMe. It supports the Pascal VOC format for encoding segmentation-class polygons. You can draw both bounding boxes (rectangles) and polygons for different use cases such as semantic segmentation, instance segmentation, video annotation, etc.
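Once labelled, a polygon such as those in a LabelMe JSON export can be rasterized into a pixel-wise training mask without extra dependencies; a NumPy sketch using the even-odd rule (the `shape` dictionary below mimics one LabelMe "shapes" entry and is purely illustrative):

```python
import numpy as np

def polygon_to_mask(points, height, width):
    """Rasterize a polygon (list of [x, y] vertices) into a boolean mask
    by toggling 'inside' each time a horizontal ray crosses an edge."""
    pts = np.asarray(points, dtype=float)
    xs = np.arange(width) + 0.5            # sample at pixel centres
    ys = np.arange(height) + 0.5
    X, Y = np.meshgrid(xs, ys)
    inside = np.zeros((height, width), dtype=bool)
    n = len(pts)
    # Horizontal edges produce a 0/0 division that is masked out anyway.
    with np.errstate(divide="ignore", invalid="ignore"):
        for i in range(n):
            x0, y0 = pts[i]
            x1, y1 = pts[(i + 1) % n]
            crosses = ((y0 <= Y) != (y1 <= Y)) & \
                      (X < x0 + (Y - y0) * (x1 - x0) / (y1 - y0))
            inside ^= crosses
    return inside

# A hypothetical entry as found in a LabelMe JSON export.
shape = {"label": "aggregate", "points": [[2, 2], [8, 2], [8, 8], [2, 8]]}
mask = polygon_to_mask(shape["points"], height=10, width=10)
print(mask.sum())  # 36 pixel centres fall inside the 6x6 square
```

For production use, LabelMe itself ships a `labelme_export_json` style conversion, but the sketch shows what that step actually computes.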
  • asked a question related to Image Segmentation
Question
3 answers
I am a new researcher, so it is very hard for me to understand how the research process works and how to choose the best technique. In particular, I can't work out which segmentation algorithm I should use for MRI images.
Relevant answer
Answer
The first essential step of research is exploring. If you're new to the segmentation field, first read some review papers about image segmentation. Then you can search for a specific algorithm to segment MRI images. To create a new algorithm, you should be aware of the limitations of the latest techniques and try to overcome them. This comes only with experience.
Best of luck
  • asked a question related to Image Segmentation
Question
7 answers
I am looking for visual stimuli that produce a similar effect than the well know “Dalmatian dog illusion” (see Figure attached).
If you look briefly at the Dalmatian dog illusion for the first time, it looks like a pattern of meaningless black and white stains (left panel). However, once a priming cue is briefly presented (the red contour in the right panel), the representation of a Dalmatian dog in a field becomes apparent. Once "seen", this representation remains apparent even after the priming cue is removed and can't be unseen (look at the left panel again without the red contour).
Do you know other types of visual stimuli containing a hidden object shape that never pops out before, and always pops out after, the transient presentation of a priming stimulus?
Thank you!
Relevant answer
Answer
Here is a similar one, which you will never be able to not see it again once you've found it:
  • asked a question related to Image Segmentation
Question
3 answers
I want to do texture segmentation. I have used 2-D wavelet decomposition and then calculated energy as the feature; I have calculated the feature vector of each pixel of a multi-textured image. Now, from these feature vectors, how can I achieve segmentation of a multi-texture image?
Relevant answer
Answer
It is very difficult to perform texture segmentation using feature vectors representing single pixels. I suggest you take advantage of spatial redundancy and use spatial tiling. You may then use a clustering algorithm to cluster the resulting vectors.
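The tiling-plus-clustering idea can be sketched in NumPy as follows; the raw-energy feature, tile size, and two-cluster k-means are illustrative choices (in the wavelet setting the energies would come from the subband coefficients instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-texture image: smooth left half, noisy right half.
img = np.zeros((64, 64))
img[:, 32:] = rng.normal(0.0, 1.0, (64, 32))

def tile_energy(img, tile=8):
    """Mean squared value ('energy') over non-overlapping tiles --
    the spatial-tiling step suggested above."""
    h, w = img.shape
    t = img[: h - h % tile, : w - w % tile]
    blocks = t.reshape(h // tile, tile, w // tile, tile)
    return (blocks ** 2).mean(axis=(1, 3))

def kmeans_1d(x, k=2, iters=20):
    """Minimal k-means on scalar features."""
    centres = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()
    return labels

feats = tile_energy(img)                          # 8x8 grid of tile energies
labels = kmeans_1d(feats.ravel()).reshape(feats.shape)
# Smooth (left) and noisy (right) tiles should land in different clusters.
print(labels)
```

With real wavelet features the same clustering step applies, just on a multi-dimensional feature vector per tile.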
  • asked a question related to Image Segmentation
Question
5 answers
I am working on image segmentation using the Python OpenCV package but am unable to work with it because of the error "ModuleNotFoundError: No module named 'cv2'"; the same error is shown in the image.
Can anyone suggest how to fix this issue so that the OpenCV package works properly?
Relevant answer
Answer
Actually, the OpenCV package was missing from the Python installation. I fixed the issue by manually adding the package to Python, as suggested by Pavlo Pavliuchenko, and it is working well. Thank you very much for your responses, Cao Zhiwei and Hejar Shahabi.
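A frequent cause of this error is that `opencv-python` was installed into a different interpreter than the one running the script. A small standard-library sketch that shows which interpreter needs the package:

```python
import importlib.util
import sys

# If cv2 is missing, install it for exactly this interpreter:
#     <sys.executable> -m pip install opencv-python
has_cv2 = importlib.util.find_spec("cv2") is not None
print(sys.executable, "-> cv2 available:", has_cv2)
```

Running `python -m pip install opencv-python` with the same `python` that executes the script avoids the multiple-interpreter mix-up.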
  • asked a question related to Image Segmentation
Question
4 answers
In the object-based classification method, the image is first segmented into a number of objects, and these objects are then used as training, validation and testing data. Do I need data sets of images of the same scene, or can I perform supervised classification with OBIA using a single image?
Relevant answer
Answer
I agree with Alaa and Miguel. Best.
  • asked a question related to Image Segmentation
Question
4 answers
I am trying to run some statistics on the contour outline of imaged zooplankton specimens from the LOKI system (https://www.researchgate.net/publication/236931182_Imaging_of_plankton_specimens_with_the_lightframe_on-sight_keyspecies_investigation_LOKI_system ).
The LOKI system extracts Regions Of Interest (ROI) directly in the underwater unit. Thus, each file generally contains the image of an individual specimen. It can be generally assumed, that the largest object in the image is the ROI to evaluate.
To get the contour information I use some functions of the EBImage package (see some sample code below). This works fine for most images.
Nevertheless, in some images the background is heterogeneous and a little brighter. Thus, the background may partially contribute as a “false positive” to the ROI. In other cases, parts of the background become structures of their own, sometimes even larger than the ROI (see the two upper rows of the attached panel image).
I already tried a couple of things like Otsu’s thresholding method or the filter2 functions. I also tried to first subtract a certain value and finally spread the range (multiplication or power function). So far I was not able to find a universal and simple approach.
The ROI is generally among the brightest structures in the image, but specimen appendages may vary in brightness and be close to the background level. As these structures are also important, the background cannot simply be darkened, as that would also lose some of them (see the last row of the panel image).
Thus, an interesting approach might be to start from the brightest structures and to identify pixels above a local, adaptive threshold; something like a modified Hoshen-Koppelman algorithm.
Now my question is whether someone here can give a hint, an approach, or even some lines of code.
As I finally want to process >100k images, it is of course advantageous to keep the solution as computationally cheap as possible; however, a good solution takes precedence.
Below I attached some quick code to get an idea.
Also included are some images for testing…
Best regards,
Jan
# short test on selected contour outlines of zooplankton specimens
# Dr. Jan Schulz, ICBM, University of Oldenburg
# data retrieved from a LOKI device (DOI: 10.2971/jeos.2010.10017s)
# file date: 30.09.2019
library ("EBImage")
library ("png")  # provides readPNG, used below
library ("bmp")  # provides read.bmp, used below
#-------------------------------------------------------------------------------
# create a small fake matrix easily to distinguish from real images
# and to ensure proper functionality when an image could not be loaded correctly
# in subsequent image processing it will be used to emulate a small 7*7 pixel image,
# with black background and a 3*3 pixels white dot in its centre
fakeMatrix = matrix (0, nrow = 7, ncol = 7)
fakeMatrix [3:5, 3:5] = 1
# initialise the greyIMG matrix with the tiny fake matrix
# just for security reasons, when import fails the routines calculate a very small
# image that can be identified pretty easily
greyIMG = fakeMatrix
aFileList = list ( "D:/Daten/HE434Test/0067_HE 434-067/Haul 1/LOKI_10001.01/Pictures/2014.10.20 11 25/20141020 112511 943 000000 2037 0102.png",
"D:/Daten/HE434Test/0067_HE 434-067/Haul 1/LOKI_10001.01/Pictures/2014.10.20 11 24/20141020 112403 993 000000 2073 0456.bmp",
"D:/Daten/HE434Test/0067_HE 434-067/Haul 1/LOKI_10001.01/Pictures/2014.10.20 11 24/20141020 112434 218 000000 0108 0933.bmp")
# create a file for the plot output
png (paste ("D:/Daten/HE434Test/0067_HE 434-067/Haul 1/Test", ".png"),
width = 21,
height = 29.7,
units = 'cm',
res = 310)
par (mfrow = c (3, 2), # display different plots as an 3x2 array
oma = c (4, 2, 3, 2), # define outer margins as lines of text
mar = c (5, 4, 2, 2))
for (i in 1:length (aFileList)) {
aFileFullPath = aFileList [[i]] [1]
# handle BMP images
if (toupper (tools::file_ext (aFileFullPath)) == "BMP")
{
#read the BMP image
aSourceImage = read.bmp (aFileFullPath)
# use the number of colour in the attributes to re-scale values to [0..1]
aSourceImage = aSourceImage [ , ] / max (attr (aSourceImage, "header")$ncolors)
greyIMG = channel (aSourceImage, "luminance")
}
#handle PNG images
if (toupper (tools::file_ext (aFileFullPath)) == "PNG")
{
# read the PNG file
aSourceImage = readPNG (aFileFullPath)
# convert the RGB data to a single luminance channel
greyIMG = channel (aSourceImage, "luminance")
}
#-------------------------- improve image and create a binary silhouette image
# do some image enhancing
# adding a value -> increase brightness
# multiplication -> increasing contrast
# power -> gamma correction
# greater as -> filter for pixels being bright enough
if (toupper (tools::file_ext (aFileFullPath)) == "BMP")
{
#enhancedgreyIMG = greyIMG [ , ] ^ 0.8
enhancedgreyIMG = (greyIMG [ , ]- min (greyIMG))*8
}
if (toupper (tools::file_ext (aFileFullPath)) == "PNG")
{
#enhancedgreyIMG = ((greyIMG [ , ] + 0.007) * 16) ^ 4
#enhancedgreyIMG = greyIMG [ , ] ^ 0.8
enhancedgreyIMG = ((greyIMG [ , ] - min (greyIMG))*8 )
}
# now find the objects in the image
IMGmask = fillHull (enhancedgreyIMG)
# use a small disk to close gaps in the structures and
# finally create the primary binary matrix from values still >0.3
# this matrix contains all objects, including dizzy background objects
IMGmask = closing (IMGmask, makeBrush (9, shape = 'diamond')) > 0.3
# now find all individual objects being not connected and
# give them an individual ID, thus IMGmask becomes an integer matrix
IMGmask = bwlabel (IMGmask)
# identify contours of all the objects having an ID
IMGcontour = ocontour (IMGmask)
# now get the ID of longest contour
# It is assumed that the largest object in the image is the
# focus object extracted by the LOKI system and we expect
# that this is our object to investigate
IDofLongestContour = which.max (lengths (IMGcontour))
# now we switch back to binary mask
# just containing the largest object
IMGmask = (IMGmask == IDofLongestContour)
# get the dimension of the image for convenient reasons
tmp_IMGwidth = dim (greyIMG) [2]
tmp_IMGheigth = dim (greyIMG) [1]
# create the first plot with the original image
plot (x = c (1, tmp_IMGwidth),
y = c (1, tmp_IMGheigth),
main = "Original image (distorted aspect ratio)",
xlab = "Horizontal pixel",
ylab = "Vertical pixels",
type = "n")
rasterImage (aSourceImage/max (aSourceImage) , 1, 1, tmp_IMGwidth, tmp_IMGheigth)
plot (x = c (1, tmp_IMGwidth),
y = c (1, tmp_IMGheigth),
main = "Processed image and extracted contour",
xlab = "Horizontal pixel",
ylab = "Vertical pixels",
type = "n")
rasterImage (enhancedgreyIMG/max (enhancedgreyIMG) , 1, 1, tmp_IMGwidth, tmp_IMGheigth)
points ( IMGcontour[[IDofLongestContour]] [ , 2] ,
-IMGcontour[[IDofLongestContour]] [ , 1] + tmp_IMGheigth,
pch = 20,
cex = 0.1,
col = "red")
}
dev.off ()
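The "local, adaptive threshold" idea from the question can be sketched in Python/NumPy with an integral image (summed-area table), which keeps the per-pixel cost independent of the window size; that matters for >100k images. The window size, offset, and synthetic test image below are illustrative, not tuned for LOKI data:

```python
import numpy as np

def adaptive_threshold(img, win=15, offset=0.02):
    """Local mean thresholding: a pixel is foreground if it exceeds the
    mean of its win x win neighbourhood by 'offset'. Local means come
    from an integral image, so each pixel costs O(1)."""
    h, w = img.shape
    pad = win // 2
    p = np.pad(img, pad + 1, mode="edge").astype(float)
    ii = p.cumsum(axis=0).cumsum(axis=1)
    # Four-corner summed-area lookups for every window simultaneously.
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    local_mean = s / (win * win)
    return img > local_mean + offset

# Hypothetical test image: a dim brightness gradient with a brighter blob,
# mimicking a specimen on a heterogeneous background.
y, x = np.mgrid[0:64, 0:64]
img = 0.2 + 0.002 * x                  # slowly varying background
img[20:30, 20:30] += 0.3               # the "specimen"
mask = adaptive_threshold(img, win=15, offset=0.05)
print(mask[25, 25], mask[5, 55])  # True False
```

A global threshold would flag the bright-side background at (5, 55); the local rule does not, which is exactly the behaviour the question is after.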
Relevant answer
Answer
You can try MATLAB image processing.
MATLAB is a very good tool for medical image processing and real-time image processing.
  • asked a question related to Image Segmentation
Question
19 answers
I implemented segmentation on my dataset and compared it with manual segmentation to find the segmentation accuracy. Are there other methods for evaluation?
Relevant answer
Answer
Great discussion point. Following the updates!
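Beyond visual comparison with the manual reference, overlap scores such as the Dice coefficient and the Jaccard index (IoU) are standard; a minimal NumPy sketch on hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def iou(a, b):
    """Jaccard index / intersection-over-union: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Hypothetical example: automatic mask shifted two pixels from the manual one.
manual = np.zeros((20, 20), bool); manual[5:15, 5:15] = True
auto = np.zeros((20, 20), bool);   auto[7:17, 5:15] = True
print(round(dice(manual, auto), 3), round(iou(manual, auto), 3))  # 0.8 0.667
```

Both metrics range over [0, 1]; Dice weights the intersection more heavily, so it is always at least as large as IoU for the same pair of masks.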
  • asked a question related to Image Segmentation
Question
1 answer
Scope
Biomedical imaging has emerged as a major technological platform for disease diagnosis, clinical research, drug development and related domains, being non-invasive and producing multi-dimensional data. An abundance of imaging data is collected, but this wealth of information has not been utilized to its full extent. Therefore, there is a need to introduce biomedical image analysis techniques that are accurate, reproducible and generalizable to large-scale datasets. Identifying imaging patterns in an anatomical site or organ at the macroscopic or microscopic scale may guide the characterization of abnormalities, estimation of disease risk, analysis of large-scale clinical datasets, and assessment of intervention therapies.
This special track solicits original, high-quality papers on delineating, identifying and characterizing novel biomedical imaging patterns. These methods may aim at segmentation, identification and quantification of anatomies, characterization of their properties, and classification of disease. The approaches may be applied to radiological imaging such as CT, MRI and ultrasound, or to imaging at the cellular scale such as microscopy and digital pathology.
Topics
Topics of interest include, but are not limited, to the following areas:
Machine/Deep Learning for Computer-aided Diagnosis and Prognosis
Biomedical Image Segmentation and Registration
Radiomics, Immunotherapies, Digital Pathology
Cell Segmentation and Tracking
Relevant answer
Answer
It's a great topic.
  • asked a question related to Image Segmentation
Question
6 answers
I want to detect blocks of text in images. I want the model to detect each block as a paragraph, address, signature, table, or document title.
I would like to use machine learning/deep learning with Python and the available open-source libraries.
I am a novice at machine learning, so please help me out. If possible, please share sample code for better understanding.
Thank you
Relevant answer
Answer
Varun Sk, you may try pytesseract; it is quite simple for beginners in machine learning, yet powerful enough to extract text and images from scanned documents.
  • asked a question related to Image Segmentation
Question
9 answers
I am using MATLAB for image segmentation; the watershed algorithm has been applied successfully. I want to ask how I can further segment each cell image, segment each blood cell, and label each of them. I need complete MATLAB code.
Relevant answer
Answer
Dear Kalyan Acharjya,
I want to detect and segment each blood cell in the blood-cell image using nuclei segmentation, but I want to use SBF and the Laplacian of Gaussian algorithm.
  • asked a question related to Image Segmentation
Question
12 answers
Dear all, I am currently looking for techniques that can help in extracting a single object from overlapping objects. Can you please suggest any that you know of? Kindly see the example image attached.
Relevant answer
Answer
You can use clustering, or watershed segmentation will be helpful as well. Try this:
  • asked a question related to Image Segmentation
Question
8 answers
I am doing my project in MATLAB in the remote sensing area. I am using a split-and-merge method for segmentation, and that algorithm uses the total frequency content of an image.
Relevant answer
Answer
How do I find the frequency content of an image in MATLAB?
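The frequency content of an image is usually obtained with a 2-D Fourier transform (`fft2` in MATLAB); an equivalent NumPy sketch on a synthetic single-frequency image:

```python
import numpy as np

# Hypothetical image containing a single horizontal spatial frequency:
# 8 cycles across the width, constant down each column.
N = 64
x = np.arange(N)
img = np.cos(2 * np.pi * 8 * x / N) * np.ones((N, 1))

spectrum = np.abs(np.fft.fft2(img))
# Energy concentrates in the (0, 8) bin and its conjugate mirror (0, 56).
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print(peak)  # (0, 8) or its mirror (0, 56)
```

For a natural image the spectrum is broad-band; summing `spectrum` over rings of radii gives the radial frequency profile often used to characterise "total frequency" in split-and-merge criteria.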
  • asked a question related to Image Segmentation
Question
4 answers
I am doing research on segmenting 3D medical volumes using different techniques.
I recently used the NEMA IEC body phantom and the CIRS abdominal phantom, model 057, in my research.
I am looking for the best predefined medical phantom to validate my results.
Relevant answer
Answer
Not sure there's such a thing as the "best" medical phantom, but here goes another set: https://brainweb.bic.mni.mcgill.ca/anatomic_normal_20.html
  • asked a question related to Image Segmentation
Question
5 answers
Below I have attached an original image with the segmented output. I need to generate an exact mask of the original image, which should have a proper boundary with the interior filled with white-intensity pixels (just like a template).
If any existing algorithm, deep learning model, or paper is available, kindly suggest it.
Relevant answer
Answer
Sayeed Shafayet Chowdhury .. Good answer
  • asked a question related to Image Segmentation
Question
10 answers
How can I do a quantitative evaluation of image segmentation other than precision, recall and the ROC curve? Also, how can I get these performance metrics in the case of object detection, and what can the performance metrics be in the case of object recognition?
Relevant answer
Answer
There are several; comparison of the segmentation result with the one produced by a human expert is the best.
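Besides overlap-based scores, boundary-distance metrics are common complements, since they are sensitive to contour outliers that overlap scores average away. A minimal NumPy sketch of the symmetric Hausdorff distance on hypothetical contour point sets:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (n x 2 arrays):
    the worst-case distance from a point in one set to the other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Hypothetical boundaries: identical squares except one stray contour point.
A = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], float)
B = np.vstack([A, [[15, 10]]])    # same corners plus a single outlier
print(hausdorff(A, B))  # 5.0 -- driven entirely by the outlier
```

The brute-force pairwise matrix is O(n·m) in memory, fine for contour-length point sets; dedicated implementations exist for dense masks.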
  • asked a question related to Image Segmentation
Question
4 answers
I have annotated my images using the MATLAB Image Labeler for semantic segmentation, but I am having difficulty exporting them to work in Python. Any help on how to do this?
Relevant answer
Answer
Hi Oscar Julian Perdomo Charry, thank you very much. Actually, my supervisor wrote this code for me to export the labels outside MATLAB:
imagefiles = dir('.imageLabelingSession_SessionData\*.png');
nfiles = length(imagefiles); % Number of files found
for ii = 1 : nfiles
    currentfilename = imagefiles(ii).name;
    im = imread(['.imageLabelingSession_SessionData\' currentfilename]);
    v = double(im);                 % label indices as doubles for the rescale below
    map = colormap;                 % colormap of the current figure
    minv = min(v(:));
    maxv = max(v(:));
    ncol = size(map,1);
    % rescale the label values onto the colormap index range [1, ncol]
    s = round(1 + (ncol-1)*(v-minv)/(maxv-minv));
    rgb_image = ind2rgb(s, map);
    % 'labels' is a new directory in which to save the exported images
    imwrite(rgb_image, ['labels\' 'rgb' currentfilename]);
end
Credit to my supervisor
  • asked a question related to Image Segmentation
Question
13 answers
I am working on a survey of feature descriptors, so does anyone know where I can find a HOG or GLOH implementation in MATLAB?
Relevant answer
Answer
You can use features = extractHOGFeatures(I); all details can be found in the link below.
  • asked a question related to Image Segmentation
Question
4 answers
I removed the skull and tissue from the brain MRI using morphological operations. After that, I tried this code for segmentation:
bw = im2bw(binaryImage, 0.7);        % threshold the skull-stripped image
label = bwlabel(bw);                 % label connected components
stats = regionprops(label, 'Solidity', 'Area');
density = [stats.Solidity];
area = [stats.Area];
high_dense_area = density > 0.5;     % keep only sufficiently solid regions
max_area = max(area(high_dense_area));
tumor_label = find(area == max_area);
tumor = ismember(label, tumor_label);
subplot(2,3,6);
imshow(tumor, []);
title('Tumor Area');
But this code detects the tumor in only a few MR images, and sometimes it does not give the exact tumor region. How can I fix this code?
Relevant answer
Answer
Dear,
In my opinion, better methods for brain labeling are fuzzy clustering and MRF; you can try them.
Regards
  • asked a question related to Image Segmentation
Question
6 answers
Hello,
Has anyone already worked with unsupervised image segmentation? If so, please give me your suggestions. I am using an autoencoder for unsupervised image segmentation, and someone suggested I use Normalized Cut to segment the image. Is there any such algorithm other than Normalized Cut? Also, please suggest some reconstruction losses that are efficient to use.
Thanks in advance,
Dhanunjaya, Mitta
Relevant answer
Answer
You can take a look here as well; it is a very interesting lecture:
  • asked a question related to Image Segmentation
Question
5 answers
I want to label the features of a brain MRI, such as fluid, midline, etc., in MATLAB after segmentation. The image is segmented with the Canny algorithm for better results. Any ideas, please?
Relevant answer
Answer
Hi Rai,
I suggest you use the K-means or Fuzzy C-means algorithm to segment the brain image. After that, you can easily label all the classes.
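The suggested fuzzy C-means step can be sketched on scalar intensities in NumPy; the cluster count, fuzzifier `m`, and the synthetic two-class data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def fcm_1d(x, k=2, m=2.0, iters=50):
    """Minimal fuzzy C-means on scalar intensities.
    Returns the cluster centres and the membership matrix U (n x k)."""
    centres = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12   # n x k distances
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)                # rows sum to 1
        um = U ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centres, U

# Hypothetical intensities: two tissue classes around 0.2 and 0.8.
x = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.8, 0.05, 200)])
centres, U = fcm_1d(x)
labels = U.argmax(axis=1)
print(np.sort(centres).round(2))  # roughly [0.2, 0.8]
```

Unlike hard K-means, the membership matrix `U` retains per-pixel class probabilities, which is useful at tissue boundaries in MRI.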