Image Segmentation - Science topic
Explore the latest questions and answers in Image Segmentation, and find Image Segmentation experts.
Questions related to Image Segmentation
Recently, I discovered that the dimensions of a SOM network turn out to be the number of data clusters when it is used for data clustering, or the number of segments when it is used for image segmentation.
For example, if the dimension of the SOM is 7 x 7, the number of clusters (segments) is 49; if the dimension is 2 x 1, the number of clusters (segments) is 2.
1. Are there techniques for determining the dimension?
2. What should be the basis/yardstick for picking the dimension?
3. If knowledge of the data is the basis/yardstick for picking the dimension, is that not a version of K-means?
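For illustration, a minimal sketch with the MiniSom package (toy data assumed; the grid size is the only thing fixing the maximum number of clusters):

import numpy as np
from minisom import MiniSom

data = np.random.rand(500, 4)                       # toy data: 500 samples, 4 features
som = MiniSom(7, 7, 4, sigma=1.0, learning_rate=0.5)
som.train_random(data, 1000)

# each sample's cluster is the grid coordinate of its best-matching unit,
# so a 7x7 grid yields at most 49 distinct clusters
labels = [som.winner(x) for x in data]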
2024 4th International Conference on Image Processing and Intelligent Control (IPIC 2024) will be held from May 10 to 12, 2024 in Kuala Lumpur, Malaysia.
Conference Website: https://ais.cn/u/ZBn2Yr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Image Processing
- Image Enhancement and Recovery
- Target detection and tracking
- Image segmentation and labeling
- Feature extraction and image recognition
- Image compression and coding
......
◕ Intelligent Control
- Sensors in Intelligent Photovoltaic Systems
- Sensors and Laser Control Technology
- Optical Imaging and Image Processing in Intelligent Control
- Fiber-optic sensing technology as applied in intelligent photoelectric systems
......
All accepted papers will be published in the conference proceedings and submitted to EI Compendex, Inspec, and Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 19, 2024
Registration Deadline: May 3, 2024
Final Paper Submission Date: May 3, 2024
Conference Dates: May 10-12, 2024
For more details, please visit the conference website: https://ais.cn/u/ZBn2Yr
Invitation code: AISCONF
*Using the invitation code on the submission/registration system gives your paper priority review and feedback
I would like to ask some questions that have had me stuck for some time. I have an imbalance problem in my segmentation data for landslide modelling using U-Net (my landslide data is far smaller than my non-landslide data). My questions are:
1. Should I look for a loss function suited to the imbalance problem, or should I focus on balancing the data to improve my model?
2. Some suggest using SMOTE (oversampling), but since my data are images (3D), I have found that SMOTE is not suitable for them. Any other suggestions?
Thank you,
Your suggestions will be appreciated.
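One common starting point for question 1, sketched here as a minimal Keras loss (assuming binary masks and sigmoid outputs, not necessarily the poster's exact setup): combine Dice with cross-entropy, since the Dice term is largely insensitive to how dominant the background class is.

import tensorflow as tf

def dice_bce_loss(y_true, y_pred, smooth=1e-6):
    y_t = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_p = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_t * y_p)
    dice = (2.0 * intersection + smooth) / (tf.reduce_sum(y_t) + tf.reduce_sum(y_p) + smooth)
    bce = tf.keras.losses.binary_crossentropy(y_t, y_p)
    return 1.0 - dice + bce      # Dice handles imbalance; BCE stabilises gradients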
Hi, I need to know the best type of satellite image segmentation, according to your experience.
Best Wishes
Is it a good idea to extract features from a pre-trained U-Net/convolutional autoencoder with the last 1x1 convolution removed? The data will be similar, and the model will be trained for image segmentation. I know everybody suggests that freezing the encoder is the best option, but I think there is feature extraction in the decoder part too (in both the convolutional autoencoder and the U-Net). Those decoder layers are high-level feature extractors; if my data were different, a frozen decoder wouldn't be a good idea. But what if my data is very similar?
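In Keras, dropping the final 1x1 convolution of a trained model amounts to rebuilding it up to the penultimate layer; a minimal sketch (the file name and the `images` array are hypothetical):

import tensorflow as tf

unet = tf.keras.models.load_model("unet.h5")                 # hypothetical trained model
feature_extractor = tf.keras.Model(inputs=unet.input,
                                   outputs=unet.layers[-2].output)
features = feature_extractor.predict(images)                 # decoder-level feature maps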
Image segmentation has often been defined as the problem of localizing regions of an image relative to content (e.g., image homogeneity). However, recent image segmentation
approaches have provided interactive methods that implicitly define the segmentation problem relative to a particular task of content localization.
The random walker algorithm determines the segmentation of an image from a set of markers labeling several phases (2 or more). An anisotropic diffusion equation is solved with tracers initiated at the markers’ position. The local diffusivity coefficient is greater if neighboring pixels have similar values, so that diffusion is difficult across high gradients. The label of each unknown pixel is attributed to the label of the known marker that has the highest probability to be reached first during this diffusion process. In this example, two phases are clearly visible, but the data are too noisy to perform the segmentation from the histogram only. We determine markers of the two phases from the extreme tails of the histogram of gray values, and use the random walker for the segmentation.
Sources:
Leo Grady, "Random Walks for Image Segmentation"
https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_random_walker_segmentation.html
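A condensed sketch of that scikit-image example (synthetic noisy two-phase data standing in for a real image; the marker cutoffs are illustrative):

import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
data = np.zeros((128, 128))
data[40:90, 30:100] = 1.0
data += rng.normal(scale=0.35, size=data.shape)    # too noisy for histogram thresholding

# markers from the extreme tails of the grey-value histogram
markers = np.zeros(data.shape, dtype=np.uint8)
markers[data < -0.6] = 1
markers[data > 1.6] = 2

labels = random_walker(data, markers, beta=10, mode='bf')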
EM509 Stochastic Processes Project: Random Walks for Image Segmentation. First, random walks are introduced in order: 1D (without and with barriers), then 2D (without and with barriers). Next, the Markov property is explained; it is proved for the simple 1D case and for a more complicated 2D case with both absorbing and reflecting barriers. The image segmentation problem is introduced from an application point of view and converted into a mathematical formulation. The random walker algorithm for image segmentation is introduced, and it is proven to be a Markov process. The study concludes by implementing the random walker algorithm and testing it by segmenting a set of images.
Source:
"24 - Random Walker segmentation in Python" - https://www.youtube.com/watch?v=6P8YhJa2V6o
Hello everyone,
I want to perform dendrite, Sholl, and branch-point analyses. I have looked at ImageJ, Imaris, rebuild, and LasX. I have used all of that software manually but have yet to have a good experience automating the process.
Based on your experience, which one do you recommend?
Thank you,
Ignacio
In one of my projects, using Python (Jupyter Notebook), I applied Mask R-CNN to mask pomegranate fruit and classify it based on its size. During real-time application, the frame rate drops due to system limitations (8 GB RAM, 500 GB SSD, no GPU). Could someone please suggest how to build a fast model from scratch that can segment the fruit in less time for online application?
I have a photo of bunches of walnut fruit in rows, and I want to develop a semi-automated ImageJ workflow to label them and create a new image from the edges of each selected ROI.
What I have done until now: segment the walnuts from the background with a suitable threshold, then select all of the walnuts as a single ROI.
Now I need to know how I can label the different regions of the ROI, count them, and add them to the ROI manager. Finally, these ROIs must be cropped at their edges, and a new image of each walnut saved individually.
Thoughts on how to do this, as well as tips on the code to do so, would be great.
Thanks!
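If a Python route is acceptable alongside ImageJ, the label-count-crop-save loop is a few lines with scikit-image (the file names and the area cutoff are assumptions):

import numpy as np
from skimage import io, filters, measure, img_as_ubyte

img = io.imread("walnuts.png", as_gray=True)            # hypothetical input
binary = img < filters.threshold_otsu(img)              # assumes walnuts darker than background
labels = measure.label(binary)                          # one integer label per walnut

for region in measure.regionprops(labels):
    if region.area < 200:                               # skip small noise blobs
        continue
    minr, minc, maxr, maxc = region.bbox
    io.imsave(f"walnut_{region.label}.png", img_as_ubyte(img[minr:maxr, minc:maxc]))

In ImageJ itself, the equivalent steps are Analyze Particles with "Add to Manager" checked, then Duplicate and save for each ROI in the manager.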
Hi all. I want to develop a 2D U-Net model based on a pretrained backbone, `resnet34`.
This backbone will be used as the encoder of my U-Net. I attached the `encoder` figure.
Then, after adding the encoder, I would like to construct a decoder corresponding to my encoder. For the `decoder` part, is there any way to figure out how many layers or residual blocks would be better, and which type of layer we should add at the corresponding skip connections, i.e., is a residual block (conv2d->batch_norm->relu->conv2d) better, or a simple Conv2D?
My dataset has 3000 medical images. Is the number of blocks or layers related to the number of samples in my dataset?
I attached my reconstruction decoder image.
In some methods, a skip connection is taken from the first layer, with shape 512x512x3; in other methods, it is taken from the second layer, with shape 256x256x64. Which method is better to use?
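If building the decoder by hand proves fiddly, the third-party `segmentation_models` package pairs a ResNet34 encoder with a matched decoder automatically; a minimal sketch (the input shape and single-class output are assumptions):

import segmentation_models as sm

model = sm.Unet("resnet34", encoder_weights="imagenet",
                input_shape=(256, 256, 3), classes=1, activation="sigmoid")
model.compile(optimizer="adam", loss=sm.losses.dice_loss)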
I am working on meta-heuristic optimization algorithms. I would like to solve image segmentation using Otsu's method and my algorithm, but I could not understand how to use a meta-heuristic in image segmentation. Please help me in this regard; I am from a maths background. If anybody has MATLAB code for this, please share it with me. I will be grateful to you.
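The usual coupling is: the meta-heuristic proposes threshold vectors, and Otsu's between-class variance scores them as the fitness. A minimal Python sketch with SciPy's differential evolution standing in for the poster's own algorithm (toy image; three thresholds assumed):

import numpy as np
from scipy.optimize import differential_evolution

def neg_between_class_variance(thresholds, hist, levels):
    # Otsu's criterion generalised to several thresholds, negated for a minimiser
    edges = np.concatenate(([0], np.sort(thresholds.astype(int)), [256]))
    p = hist / hist.sum()
    mu_total = (p * levels).sum()
    var = 0.0
    for k in range(len(edges) - 1):
        pk = p[edges[k]:edges[k + 1]]
        w = pk.sum()
        if w > 0:
            mu = (pk * levels[edges[k]:edges[k + 1]]).sum() / w
            var += w * (mu - mu_total) ** 2
    return -var

img = np.random.randint(0, 256, (128, 128))            # stand-in for a real grayscale image
hist, _ = np.histogram(img, bins=256, range=(0, 256))
levels = np.arange(256)
result = differential_evolution(neg_between_class_variance,
                                bounds=[(1, 254)] * 3,  # 3 thresholds -> 4 classes
                                args=(hist, levels), seed=0)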
I have a T1 weighted MRI scan (please see the attached). It has regions of tumour+peritumoral edema. I am trying to develop mask images (.nii) of
1. CSF
2. white matter
3. Grey matter
4. tumour region
5. One single .nii file comprised of 1-4.
I need your expert opinions and recommendations on the best way to get this task done. Thank you in advance!
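For step 5, once the four masks exist (e.g., from a tissue segmentation tool plus a tumour mask), merging them into one labelled .nii is straightforward with nibabel; a sketch with hypothetical file names:

import numpy as np
import nibabel as nib

mask_files = ["csf.nii", "wm.nii", "gm.nii", "tumour.nii"]   # hypothetical binary masks
ref = nib.load(mask_files[0])

combined = np.zeros(ref.shape, dtype=np.uint8)
for label, fname in enumerate(mask_files, start=1):          # 1=CSF, 2=WM, 3=GM, 4=tumour
    combined[nib.load(fname).get_fdata() > 0] = label

nib.save(nib.Nifti1Image(combined, ref.affine, ref.header), "combined_masks.nii")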
I am experimenting with retinal optical coherence tomography. The region of interest, named 'IRF', has a very small area, and I am using Dice loss for the segmentation in U-Nets.
However, I am not getting satisfactory results, as the input images are noisy and the ROI is very small. Can anyone suggest a suitable loss for this kind of challenge?
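A loss often suggested for very small structures is the focal Tversky loss, which lets you penalise false negatives more heavily than plain Dice does; a minimal Keras sketch (the hyperparameter values are common defaults, not tuned for OCT):

import tensorflow as tf

def focal_tversky_loss(alpha=0.7, beta=0.3, gamma=0.75, smooth=1e-6):
    # alpha > beta penalises false negatives more, which helps tiny ROIs
    def loss(y_true, y_pred):
        y_t = tf.reshape(tf.cast(y_true, tf.float32), [-1])
        y_p = tf.reshape(y_pred, [-1])
        tp = tf.reduce_sum(y_t * y_p)
        fn = tf.reduce_sum(y_t * (1.0 - y_p))
        fp = tf.reduce_sum((1.0 - y_t) * y_p)
        tversky = (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)
        return tf.pow(1.0 - tversky, gamma)
    return loss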
Is there any post-hoc, easily applicable explainable-AI tool for detections carried out by complex CNN-based architectures, e.g., Mask R-CNN?
I'm working with a radar image and using the SNIC algorithm to segment it in Google Earth Engine. From what I have read, the best result is achieved by viewing the region of interest and changing specific parameter values; am I right?
I read the article about the algorithm, but it is still not very clear which parameters are better to pin down and which to change depending on the results of each test performed.
How to set the optimal values of the parameters of the SNIC algorithm?
Thanks in advance
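For reference, the SNIC call in the GEE Python API looks like the sketch below; `size` (seed spacing) and `compactness` are usually the two parameters worth sweeping, while connectivity and neighbourhood size can often stay fixed. The input image here is a stand-in:

import ee
ee.Initialize()

radar_image = ee.ImageCollection("COPERNICUS/S1_GRD").first()   # stand-in for your radar image
seeds = ee.Algorithms.Image.Segmentation.seedGrid(36)
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=radar_image,
    size=36,               # seed spacing in pixels -> superpixel scale
    compactness=1,         # higher -> more compact, square-ish clusters
    connectivity=8,
    neighborhoodSize=64,
    seeds=seeds)
clusters = snic.select("clusters")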
Is there anyone who would like to do research together in medical image segmentation?
Hello, dear researchers.
How can I run the watershed algorithm in eCognition Developer?
Do I need to add a plugin for watershed segmentation? How? There is no watershed segmentation plugin in my software.
Thank you all
Hi, I'm trying to implement domain-adaptation segmentation from the DeepGlobe dataset to the Vaihingen dataset.
But the two class sets are different, so to align the classes I have to merge the 'Roads' and 'Buildings' labels into one label in the Vaihingen ground-truth dataset.
I have to assign the same label, (0, 255, 0), to Roads and Buildings so that they are treated as the same class.
Please suggest a way to merge the classes and assign them the same label.
Is there any tool to achieve this, e.g., QGIS or ArcGIS? Since I'm new to this, kindly tell me the steps.
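If the ground truths are plain RGB label images, the merge can be done without GIS software at all; a minimal numpy/PIL sketch (the white/blue colours for impervious surfaces and buildings follow the usual ISPRS Vaihingen palette, so verify them against your files):

import numpy as np
from PIL import Image

gt = np.array(Image.open("vaihingen_gt.png").convert("RGB"))   # hypothetical file
merged = gt.copy()
for rgb in [(255, 255, 255), (0, 0, 255)]:      # assumed: roads = white, buildings = blue
    merged[np.all(gt == rgb, axis=-1)] = (0, 255, 0)
Image.fromarray(merged).save("vaihingen_gt_merged.png")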
For my master's thesis, I'm going to automate landfill and landslide detection from satellite images in Python using U-Net, semantic segmentation, and TensorFlow, with a supervised approach; therefore I need to train the model with labelled images. In your opinion, would I be able to extract those training images, directly with labels, from GEE (Google Earth Engine) so that they would be usable in TensorFlow? There are some public datasets for training deep learning approaches; do you know of any datasets suitable for remote sensing?
Any advice would be appreciated, thanks in advance.
I have searched for the REVIEW database, but I can't find it for download. Besides, the VAMPIRE dataset is not a public database, so I sent a request for the VAMPIRE non-commercial licence to use the annotation tool, but they didn't answer.
Could anybody send me the datasets?
I am researching handwriting analysis using image processing techniques.
Can anybody recommend a tool that could extract (segment) the pores (lineolae) from the following image of a diatom valve?
I mean an ImageJ or Fiji plugin, or any other software that can solve this task.
I need to calculate the accuracy, precision, recall, specificity, and F1 score for my Mask R-CNN model. Hence, I hope to calculate the confusion matrix for the whole dataset first to get the TP, FP, TN, and FN values. But I noticed that almost all the available solutions for calculating the confusion matrix only output the TP, FP, and FN values. So how can I calculate metrics such as accuracy and specificity, which include TN as a parameter? Should I consider TN to be 0 during the calculations?
Hi,
In my research, I have created a new method of weak-edge enhancement. I wanted to try my method on an image dataset to compare it with the active contour philosophy.
So, I was looking for images with masks, as shown in the paper below.
If you can help me to get this data, it would be a great help.
Thanks and Regards,
Gunjan Naik
I need to know: do I need to calculate the above metrics for every available trained weight file, with all the test data, to evaluate the model performance? Or do I only need to calculate those metrics for the best weight file, the one I chose as my final model, on the complete test set? The reason for asking is that, in the reference below, the author of the repository calculates both metrics by taking every weight file over the complete test set.
Please refer to line 152.
I need to know whether I should consider the number of available labels or the number of available images when trying to increase model performance. In my case, a single image contains multiple annotated labels. I know that when I increase the number of training images, the accuracy of the model goes up. But in my case, should I pay attention to maximizing the number of labels in the dataset or the number of images in the dataset to get better accuracy?
Ex:
Case 1:
1 image contains 4 annotated labels (2 for ClassA, 2 for ClassB)
|
|
Total: 60 images
Case 2:
image_b_1 contains 1 annotated label (1 for ClassA)
image_b_2 contains 1 annotated label (1 for ClassB)
|
|
Total: 200 images
Which case will give the maximum accuracy results during the training?
Dear colleagues, I am working on a project on object detection with bounding-box regression (in Keras/TensorFlow), but I can't find decent sources and code samples. My task involves 200 classes, and each picture can contain from 10 to 20 classes that need to be detected. It is a medical task: more precisely, the recognition of types of medical blood tests.
I have found some works that are close to mine, but I want to implement my task for the Russian language and on the Keras & TensorFlow framework. I am sincerely grateful for any advice and support!
I am interested in acquiring Z-stacks via brightfield microscopy of spheroids composed of head and neck cancer cells. My objective is to measure sphericity and volume on day 0 of the experiment (72 h after seeding into ultra-low-attachment 96-well plates), on day 1 (24 h after treatment with the drug of interest), day 2 (48 h after treatment), and day 3 (72 h after treatment). Because of the several time points, it is important that the spheroids are not fixed for the sphericity and volume measurements. After some literature research, I was not able to find a microscope model in which I can put my 96-well plate to acquire z-stacks.
Maybe someone here has encountered a similar problem and could help me; I would be very thankful for any help in this matter. I have already found the software I would need for the measurement (arivis Vision4D) but still do not know with which microscope model I should acquire the images. Is it even possible to acquire z-stacks of spheroids without putting them on a slide?
I am working on medical image segmentation.
Can anyone help me? I want to understand the working principle of the U-Net.
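As a starting point, the whole idea fits in a few lines: a contracting path that learns context, an expanding path that restores resolution, and skip connections that re-inject spatial detail. A deliberately shortened Keras sketch (one level deep; real U-Nets stack four or five):

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = tf.keras.Input((128, 128, 1))
c1 = conv_block(inputs, 32)                 # encoder: extract features
p1 = layers.MaxPooling2D()(c1)              # downsample: gain context, lose detail
c2 = conv_block(p1, 64)                     # bottleneck
u1 = layers.UpSampling2D()(c2)              # decoder: restore resolution
u1 = layers.Concatenate()([u1, c1])         # skip connection: re-inject fine detail
c3 = conv_block(u1, 32)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)
model = tf.keras.Model(inputs, outputs)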
Hello,
Has anyone already worked with an MR image dataset? If so, is there any model to remove motion artifacts from MR images when present? What should we do if we have an MR image with motion artifacts? Please give me your suggestions on whether it is possible to remove artifacts once the scan has been produced.
Thanks in advance,
Dhanunjaya, Mitta
Hello everyone. Thank you for considering this question.
I have some TIFF image stacks from micro-CT. I am trying to isolate porosity using a variety of tools available to me, including Fiji/ImageJ, Avizo, and Python (the OpenCV and scikit-image packages). Most of my X-ray stacks are easily segmented using the histogram, but these high-resolution stacks are particularly difficult. While edges are quite clearly defined, pore space and solid material fall within identical intensity ranges, so histogram thresholding renders large pores as solid or, on the other side of the threshold, produces large pores within the solid material.
To prepare these image stacks, I used a median filter to preserve edges and then attempted a variety of thresholding techniques, all of which produce the errors mentioned above. AI is obviously the next step, and I have used TensorFlow in Python to write my own U-Net, but unfortunately I cannot find a suitable thresholded micro-CT dataset to train it on. This leaves me in a catch-22: in order to use AI to threshold my dataset, I need a thresholded dataset to train the AI, which would in turn just re-threshold an already thresholded dataset.
I have included an example from the stack below. Could anyone suggest an alternative method? As a last resort, I plan to threshold using the histogram and then correct the pores by painting them in, image by image. Time-consuming, but at this stage it feels like my only option.
Any help or suggestions welcome.
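One thing worth trying before AI: since the edges are reliable even where pore and solid intensities overlap, a marker-based watershed uses the gradient as the landscape and floods only from confident seeds. A sketch per slice (the file name and seed cutoffs are assumptions to tune):

import numpy as np
from skimage import io, filters, segmentation

ct_slice = io.imread("slice_0001.tif", as_gray=True)   # hypothetical slice
elevation = filters.sobel(ct_slice)                    # edges form the watershed landscape

markers = np.zeros_like(ct_slice, dtype=np.uint8)
markers[ct_slice < 0.2] = 1                            # assumed confident pore seeds
markers[ct_slice > 0.8] = 2                            # assumed confident solid seeds

seg = segmentation.watershed(elevation, markers)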
Hi
What are the deep learning techniques for medical image segmentation?
Can anyone help me?
thanks
Hi everyone,
I am relatively new to fuzzy-based segmentation. I have read a few articles in this field and came across a couple of terms I should ask for clarification on. The first term is fuzzy c-means, and the other is fuzzy connectedness (relative and absolute connectedness). It would be great if any experts could give me some understandable insight into these approaches. Thanks in advance for your input.
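On the fuzzy c-means side, a minimal sketch with the scikit-fuzzy package may make the idea concrete: each pixel receives a membership degree for every cluster, and a hard segmentation is just the argmax of those memberships (the file name and c=3 are assumptions):

import numpy as np
import skfuzzy as fuzz
from skimage import io

img = io.imread("mri_slice.png", as_gray=True)          # hypothetical image
pixels = img.reshape(1, -1).astype(float)               # cmeans expects (features, samples)
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(pixels, c=3, m=2.0,
                                               error=1e-5, maxiter=200)
labels = np.argmax(u, axis=0).reshape(img.shape)        # hard labels from memberships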
Hello everyone,
I need to annotate MRI images (in .nii.gz format). Each image includes several slices. What I need is an annotation tool that can annotate the area of interest and propagate it across all slices (considering that the location of the object changes from slice to slice).
Thank you all in advance.
I have a 3D image of size 100x100x100, with features (of dimension 25) extracted for each pixel by considering a small patch around it. So the input to the 1D CNN is 100x100x100x25, reshaped to 1000000x25 in the form n x f, where n is the number of samples (in this case, pixels) and f is the feature dimension. Is using a CNN ideal for this scenario, or is a different deep learning model necessary?
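One observation: once each pixel is reduced to an unordered 25-dimensional feature vector, a 1D convolution has no meaningful neighbourhood to slide over, so a plain dense network is arguably the more natural baseline. A minimal Keras sketch (the class count is assumed):

import tensorflow as tf

num_classes = 4                                   # assumed number of classes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(25,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")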
Hello!
I've stained some brightfield Arginase-1/haematoxylin slides, but I'm having a hard time finding any computed-histology precedents for evaluating the intensity of the cytoplasm versus the nuclei, as everything seems to be done manually.
I've attached some 20x images of the slides. I'm concerned about edge detection of the cells and haven't had much luck with segmentation in ImageJ; I lose a lot of edges between some of the cells.
- Does anyone have any other software or ImageJ plugins you'd recommend?
- Any specific computed histology guides or methods of hepatocyte specific antigens or something similar?
- Any other guidance?
Thank you for your responses.
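For separating the haematoxylin signal from the counterstain before any edge detection, colour deconvolution in scikit-image is one option; a sketch (the file name is hypothetical, and the built-in H-E-DAB stain matrix may need adjusting to your actual stains):

from skimage import io
from skimage.color import rgb2hed

rgb = io.imread("slide_20x.png")      # hypothetical 20x image
hed = rgb2hed(rgb)                    # haematoxylin / eosin / DAB stain space
haematoxylin = hed[:, :, 0]           # nuclei channel, ready for thresholding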
Good morning, everyone. As part of my research work, I designed a network that extracts the leaf region from real-field images. When I searched for performance evaluation metrics for segmentation, I found a lot of metrics; here is the list:
1. Similarity Index = 2*TP/(2*TP+FP+FN)
2. Correct Detection Ratio = TP/(TP+FN)
3. Segmentation errors (OSE, USE, TSE)
4. Hausdorff Distance
5. Average Surface distance
6. Accuracy = (TP+TN)/(FN+FP+TP+TN);
7. Recall = TP/(TP+FN);
8. Precision = TP/(TP+FP);
9. Fmeasure = 2*TP/(2*TP+FP+FN);
10. MCC = (TP*TN-FP*FN)/sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN));
11. Dice = 2*TP/(2*TP+FP+FN);
12. Jaccard = Dice/(2-Dice);
13. Specificity = TN/(TN+FP);
14. Sensitivity = TP/(TP+FN);
Please suggest which performance evaluation metrics are best suited for my work. Thank you.
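Note that items 1, 9, and 11 in the list are the same formula (Dice), and item 12 follows from it. For binary leaf masks, the overlap metrics reduce to a few lines; a sketch:

import numpy as np

def dice_jaccard(pred, gt):
    # pred, gt: binary masks (leaf vs background)
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2 * tp / (pred.sum() + gt.sum())
    return dice, dice / (2 - dice)      # Jaccard derived from Dice, as in item 12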
What are new or novel objective functions for optimal multilevel thresholding image segmentation, other than Kapur's and Otsu's objective functions?
Dear All,
Can anyone suggest a few datasets for histopathological breast cancer image segmentation?
Thanks
Dear community, I am currently working on emotion recognition, and as a first step I am trying to extract features. While checking some resources, I found that they used the SEED dataset, which contains EEG signals of 15 subjects recorded while the subjects were watching emotional film clips. Each subject carried out the experiment in 3 sessions, so there are 45 experiments in this dataset in total. Different film clips (positive, neutral, and negative emotions) were chosen to achieve the highest match across participants. The length of each film clip is about 4 minutes. The EEG signals of each subject were recorded as separate files containing the subject's name and the date. These files contain a preprocessed, down-sampled, and segmented version of the EEG data: the data was down-sampled to 200 Hz, a bandpass filter of 0-75 Hz was applied, and the EEG segments associated with each movie were extracted. There are a total of 45 .mat files, one for each experiment; every person carried out the experiment three times within a week. Every subject file includes 16 arrays: 15 arrays contain the preprocessed and segmented EEG data of the 15 trials in one experiment, and an array named LABELS contains the corresponding emotional labels (-1 for negative, 0 for neutral, and +1 for positive). I found that they loaded each class separately (negative, neutral, positive), fixed the signal length at 4096 and the number of signals per class at 100, and fixed the number of features extracted from wavelet packet decomposition at 83. My question is: why did they select exactly 83, 4096, and 100?
I know my question is a bit long, but I tried to explain the situation clearly. I appreciate your help; thank you.
What is the best machine learning method to segment and extract the length, angle, and size of collagen bundles in images obtained from the human cornea?
This research is about applying PSO-based segmentation to an image. The main idea is to find the best value of the radius of the object in an image after PSO is applied. Is there any specific fitness function for image segmentation to apply in the PSO algorithm?
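A standard choice is to let each PSO particle encode the threshold (or radius) vector and score it with a histogram criterion such as Kapur's entropy; a minimal sketch of that fitness, to be maximised (grayscale histogram assumed):

import numpy as np

def kapur_entropy(thresholds, hist):
    # total entropy of the classes defined by the thresholds; higher is better
    p = hist / hist.sum()
    edges = np.concatenate(([0], np.sort(thresholds.astype(int)), [len(p)]))
    total = 0.0
    for k in range(len(edges) - 1):
        pk = p[edges[k]:edges[k + 1]]
        w = pk.sum()
        if w > 0:
            q = pk[pk > 0] / w
            total += -(q * np.log(q)).sum()
    return total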
How do I get started with implementing label propagation (multi-atlas-based image segmentation) using sparse coding? This has been implemented in the paper:
Conference Paper Representation Learning: A Unified Deep Learning Framework f...
Are there any resources I can refer to that have implemented this approach? Could somebody guide me on this?
I am interested in using MATLAB to extract texture features using LBP for each pixel in an image and to cluster them using the K-means algorithm. Anyone with relevant knowledge or MATLAB code, please assist.
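If Python is also an option, the per-pixel LBP plus K-means pipeline is a short sketch with scikit-image and scikit-learn (in MATLAB, the rough counterparts are extractLBPFeatures and kmeans); the file name and cluster count are assumptions:

import numpy as np
from skimage import io, img_as_ubyte
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

gray = img_as_ubyte(io.imread("texture.png", as_gray=True))    # hypothetical image
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")   # per-pixel LBP codes
labels = KMeans(n_clusters=3, n_init=10).fit_predict(lbp.reshape(-1, 1))
seg = labels.reshape(gray.shape)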
I plan to implement 3D image segmentation. I have features extracted by an unsupervised feature extraction method; the features have a lower dimension than the input image. Which segmentation method suits this use case best? I plan to implement it in Python.
I want to detect circular-like objects (blobs) in an image.
1) What are all the possible features which can be used to recognize these objects?
2) How can I find the eigenvalues for each component in the image alone?
Any MATLAB code would be really helpful.
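For question 1, scale-space blob detectors (Laplacian/determinant of Hessian) directly give position and size; a Python sketch with scikit-image, if MATLAB is not a hard requirement (parameters are illustrative):

import numpy as np
from skimage import io
from skimage.feature import blob_log

gray = io.imread("blobs.png", as_gray=True)     # hypothetical image
blobs = blob_log(gray, min_sigma=2, max_sigma=15, num_sigma=10, threshold=0.1)
radii = blobs[:, 2] * np.sqrt(2)                # rows are (row, col, sigma)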
Hi, guys
My task now is processing soil CT images. I've read a lot of papers related to segmentation, and I found that kriging segmentation seems to be a common and effective method. In those papers, a software package named 3DMA-Rock is widely used for kriging segmentation. I googled the software; it seems to be unavailable now.
So does anyone have code for kriging segmentation that they could share with me? Or do you have any other software suggestions for performing kriging segmentation?
Thanks a lot!
Mengying
Which method is better for saving time while maintaining performance in image segmentation: active contours or level sets?
I am trying to implement a CNN (U-Net) for semantic segmentation of similar, large (~4600x4600 px) greyscale medical images. The area I want to segment is the empty space (gap) between a round object in the middle of the picture and an outer, ring-shaped object that contains the round object. This gap is "thin" and makes up only a small proportion of the whole image. In my problem, having a small gap is good, since then the two objects have a good connection to each other.
My questions:
- Is it possible to feed such large images to a CNN? Downscaling the images seems a bad idea, since the gap is thin and most of the relevant information would be lost. From what I've seen, CNNs are trained on much smaller images.
- Since the problem is symmetric in some sense, is it a good idea to split the image into 4 (or more) smaller images?
- Are CNNs able to detect such small regions in such a huge image? From what I've seen in the literature, mostly larger objects such as organs are segmented.
I would appreciate some ideas and help. It is my first post on the site so hopefully I didn't make any mistakes.
Cheers
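On the first two questions, the standard workaround is patch-based training: tile the full-resolution image into overlapping crops so the thin gap keeps its native resolution, then stitch the predictions back together. A minimal tiling sketch (patch size and stride are assumptions):

import numpy as np

def tile(img, size=512, stride=256):
    # overlapping crops preserve full resolution for thin structures
    tiles = []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            tiles.append(img[y:y + size, x:x + size])
    return np.stack(tiles)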
Hi, everyone. Can anyone please suggest the best segmentation algorithm for segmenting the leaf region from real-field images? Consider that real-field images contain too many unwanted objects, such as other leaves, branches, and human parts (fingers, shoes, etc.). Kindly also provide MATLAB code for this if possible. Thank you.
Dear all, I am trying to implement pose normalization for face images using piecewise affine warping. I am using delaunayTriangulation to construct a face mesh based on 68 detected landmarks for two images: one with a frontal face and the other with a non-frontal face. The resulting meshes do not have the same number of triangles and also contain triangles that differ in direction and location.
Could anyone help please? Thanks.
------------------------------------------------------------
% Construct mesh for frontal face image
filename1 = '0409';
img1 = imread([filename1 '.bmp']);
figure, imshow(img1); hold on;
pts1 = load([filename1 '.mat']); % Load 68-landmarks
DT1 = delaunayTriangulation(pts1.pts);
triplot(DT1,'cyan');
% Construct mesh for non-frontal face image
filename2 = '0411';
img2 = imread([filename2 '.bmp']);
figure, imshow(img2); hold on;
pts2 = load([filename2 '.mat']); % Load 68-landmarks
DT2 = delaunayTriangulation(pts2.pts);
triplot(DT2,'cyan');
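One common fix, sketched below on the assumption that both landmark sets share the same 68-point ordering: triangulate only the frontal landmarks and re-use that connectivity for the non-frontal points, so both meshes have identical triangles in identical order.
% Re-use the frontal mesh connectivity for the non-frontal landmarks
tri = DT1.ConnectivityList;
figure, imshow(img2); hold on;
triplot(tri, pts2.pts(:,1), pts2.pts(:,2), 'cyan');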
How is image segmentation performed on biomedical images using the level set method?
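As one concrete entry point, scikit-image ships a morphological approximation of the Chan-Vese level-set model; a sketch (the file name is hypothetical, and the iteration argument is named `iterations` in older scikit-image versions):

from skimage import io
from skimage.segmentation import morphological_chan_vese

img = io.imread("cell.png", as_gray=True)       # hypothetical biomedical image
seg = morphological_chan_vese(img, num_iter=100,
                              init_level_set="checkerboard", smoothing=3)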
We are looking for free software that will allow us to detect and help classify specific tumors in the jaw.
I am trying to calculate area fractions of elastin and collagen in histological images stained using Weigert's elastic stain and a collagen counterstain (I am actually uncertain whether it is merely eosin or some other counterstain, like van Gieson). Attached is an example region, where elastin is "black"/dark violet and collagen is pink/magenta.
My idea is to apply color deconvolution and train a simple two-layer perceptron to do pixel classification. The results appear to be acceptable in terms of accuracy, but without proper validation, it is difficult to say for sure. However, manually segmenting even a small area is an encumbering task that would take a disproportionately long time to finish.
My question is whether it is possible to validate the results in another way without creating ground truths for comparison?
How much accuracy can we achieve in retinal blood vessel segmentation?
Greetings
I need MATLAB code for blood vessel segmentation.
The link shows an example image from which the vessels should be extracted.
Thank You in Advance
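If Python is acceptable instead of MATLAB, a classic baseline is Frangi vesselness filtering on the green channel followed by thresholding (MATLAB's fibermetric is the rough counterpart); a sketch with illustrative values:

import numpy as np
from skimage import io
from skimage.filters import frangi

rgb = io.imread("fundus.png")                   # hypothetical fundus image
green = rgb[:, :, 1]                            # vessels contrast best in green
vesselness = frangi(green)
vessels = vesselness > np.percentile(vesselness, 95)   # crude cutoff; tune per image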
When I started the literature review, I observed that most authors used pre-trained CNN models for the identification and classification of plant leaf diseases. Why has no author concentrated on designing a customised CNN model for this problem? Is there any particular reason?
We invite you to apply for the PhD position with the topic "Automatic identification of structures in biomedical mega-images" using the following link: https://tinyurl.com/rqh6qf9
It will be a collaboration between myself and Prof. Ben Giepmans (UMCG), with the opportunity to work on large biomedical nanotomy images generated by a state-of-the-art electron microscope (EM).
I'm looking for a program that will do automatic approximate image segmentation for prostate T2 MRI images.
The program should be:
- Fast
- Automatic without the need for user supplied landmarks
- Robust in the sense of always producing an answer in a reasonable timeframe
- Reliable in that it will always approximately outline the prostate
- Freely available
The program does not need to be:
- Accurate. I only need an approximate answer, accuracy is not as important as reliability
- Open source
Sample Prostate MRI Image with segmentation attached
I'm going to work on hand detection using image segmentation. I am thinking of using a Kinect, a Leap Motion, or even normal cameras, but I can't decide which one is better. I wanted to know if anyone has worked with these cameras or participated in any related work, and whether there is anything I should know.
Hi, I'm looking to label some objects (irregularly shaped cellular aggregates) to train a network to identify, count, and measure these objects in images. I'm hoping someone knows of a tool similar to MATLAB's Image Labeler, where you can label objects by pixel or with a paintbrush-style tool. labelImg seems to be the go-to for many people, but it appears to only support drawing a bounding box around an object rather than custom polygons or pixel-wise labels. A big plus would be if I could label the objects in a handful of images and have the tool try to label similar structures in the rest of the training images, again like MATLAB's labeler.
Python- or Julia- based tools are preferred but not a hard requirement. Does anyone know of a tool/package like this?
I am a new researcher, so it is very hard for me to understand how the research procedure develops and how to choose the best technique. In these circumstances, I can't work out which segmentation algorithm I should use for MRI images.
I am looking for visual stimuli that produce a similar effect to the well-known "Dalmatian dog illusion" (see attached figure).
If you look briefly at the Dalmatian dog illusion for the first time, it looks like a pattern of meaningless black and white stains (left panel). However, once a priming cue is briefly presented (the red contour in the right panel), the representation of a Dalmatian dog in a field becomes apparent. Once "seen", this representation remains apparent even after the priming cue is removed and can't be unseen (look at the left panel again without the red contour).
Do you know other types of visual stimuli containing a hidden object shape that never pops-out before and always pops-out after the transient presentation of a priming stimulus?
Thank you!
I want to do texture segmentation. I have used 2-D wavelet decomposition and then calculated energy as the feature vector, computing the feature vector of each pixel from a multi-textured image. Now, from these feature vectors, how can I achieve segmentation of the multi-texture image?
I am working on image segmentation using the Python OpenCV package, but I am unable to work with it because of an error indicating "ModuleNotFoundError: No module named 'cv2'"; the same error is shown in the image.
Can anyone suggest how to fix this issue so that the OpenCV package works properly?
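For what it's worth, this error normally just means the OpenCV bindings are missing from the active Python environment, and installing them is usually the whole fix:

pip install opencv-python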
In the object-based classification method, the image is first segmented into a number of objects, and these objects are then used as training, validation, and testing data. Do I need datasets of images of the same scene, or can I perform supervised classification with OBIA using a single image?
I am trying to run some statistics on the contour outline of imaged zooplankton specimens from the LOKI system (https://www.researchgate.net/publication/236931182_Imaging_of_plankton_specimens_with_the_lightframe_on-sight_keyspecies_investigation_LOKI_system ).
The LOKI system extracts Regions Of Interest (ROI) directly in the underwater unit. Thus, each file generally contains the image of an individual specimen. It can be generally assumed, that the largest object in the image is the ROI to evaluate.
To get the contour information I use some functions of the EBImage package (see some sample code below). This works fine for most images.
Nevertheless, in some images the background is heterogeneous and a little brighter. Thus, the background may partially contribute as a "false positive" to the ROI. In other cases, parts of the background become structures of their own, sometimes even larger than the ROI (see the two upper rows of the attached panel image).
I have already tried a couple of things, like Otsu's thresholding method and the filter2 functions. I also tried first subtracting a certain value and finally spreading the range (multiplication or power function). So far I have not been able to find a universal and simple approach.
The ROI is generally part of the brightest structures in the image, but specimen appendages may vary in brightness and be close to the background level. As these structures are also important, the background cannot simply be darkened, as that would also lose some of these structures (see the last row of the panel image).
Thus, an interesting approach might be to start from the brightest structures and identify pixels above a local, adaptive threshold; something like a modified Hoshen-Kopelman algorithm.
Now my question is whether there is someone here who can give a hint, an approach, or even some lines of code.
As I finally want to process >100k images, it is of course advantageous to keep the solution as (computationally) cheap as possible. However, a good solution is the preference.
Below I attached some quick code to get an idea.
Also included are some images for testing…
Best regards,
Jan
# short test on selected contour outlines of zooplankton specimens
# Dr. Jan Schulz, ICBM, University of Oldenburg
# data retrieved from a LOKI device (DOI: 10.2971/jeos.2010.10017s)
# file date: 30.09.2019
library ("EBImage")
#-------------------------------------------------------------------------------
# create a small fake matrix, easy to distinguish from real images,
# to ensure proper functionality when an image could not be loaded correctly;
# in subsequent image processing it will be used to emulate a small 7*7 pixel image
# with a black background and a 3*3 pixel white dot in its centre
fakeMatrix = matrix (0, nrow = 7, ncol = 7)
fakeMatrix [3:5, 3:5] = 1
# initialise the greyIMG matrix with the tiny fake matrix
# just for security reasons, when import fails the routines calculate a very small
# image that can be identified pretty easily
greyIMG = fakeMatrix
aFileList = list ( "D:/Daten/HE434Test/0067_HE 434-067/Haul 1/LOKI_10001.01/Pictures/2014.10.20 11 25/20141020 112511 943 000000 2037 0102.png",
"D:/Daten/HE434Test/0067_HE 434-067/Haul 1/LOKI_10001.01/Pictures/2014.10.20 11 24/20141020 112403 993 000000 2073 0456.bmp",
"D:/Daten/HE434Test/0067_HE 434-067/Haul 1/LOKI_10001.01/Pictures/2014.10.20 11 24/20141020 112434 218 000000 0108 0933.bmp")
# create a file for the plot output
png (paste0 ("D:/Daten/HE434Test/0067_HE 434-067/Haul 1/Test", ".png"), # paste0 avoids a stray space in the file name
width = 21,
height = 29.7,
units = 'cm',
res = 310)
par (mfrow = c (3, 2), # display different plots as an 3x2 array
oma = c (4, 2, 3, 2), # define outer margins as lines of text
mar = c (5, 4, 2, 2))
for (i in 1:length (aFileList)) {
aFileFullPath = aFileList [[i]] [1]
# handle BMP images
if (toupper (tools::file_ext (aFileFullPath)) == "BMP")
{
#read the BMP image
aSourceImage = read.bmp (aFileFullPath)
# use the number of colour in the attributes to re-scale values to [0..1]
aSourceImage = aSourceImage [ , ] / max (attr (aSourceImage, "header")$ncolors)
greyIMG = channel (aSourceImage, "luminance")
}
#handle PNG images
if (toupper (tools::file_ext (aFileFullPath)) == "PNG")
{
# read the PNG file
aSourceImage = readPNG (aFileFullPath)
aSourceImage = channel (aSourceImage, "luminance")
greyIMG = aSourceImage # channel () already returns a 2-D luminance matrix; indexing [ , , 1] would fail here
}
#-------------------------- improve image and create a binary silhouette image
# do some image enhancing
# adding a value -> increase brightness
# multiplication -> increasing contrast
# power -> gamma correction
# greater as -> filter for pixels being bright enough
if (toupper (tools::file_ext (aFileFullPath)) == "BMP")
{
#enhancedgreyIMG = greyIMG [ , ] ^ 0.8
enhancedgreyIMG = (greyIMG [ , ]- min (greyIMG))*8
}
if (toupper (tools::file_ext (aFileFullPath)) == "PNG")
{
#enhancedgreyIMG = ((greyIMG [ , ] + 0.007) * 16) ^ 4
#enhancedgreyIMG = greyIMG [ , ] ^ 0.8
enhancedgreyIMG = ((greyIMG [ , ] - min (greyIMG))*8 )
}
# now find the objects in the image
IMGmask = fillHull (enhancedgreyIMG)
# use a small disk to close gaps in the structures and
# finally create the primary binary matrix from values still >0.3
# this matrix contains all objects, including dizzy background objects
IMGmask = closing (IMGmask, makeBrush (9, shape = 'diamond')) > 0.3
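# NOTE (suggestion): for heterogeneous, slightly bright backgrounds a locally
# adaptive threshold may beat the fixed 0.3 cutoff above; EBImage provides one,
# e.g. (window size and offset to be tuned):
# IMGmask = thresh (enhancedgreyIMG, w = 15, h = 15, offset = 0.05)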
# now find all individual objects being not connected and
# give them an individual ID, thus IMGmask becomes an integer matrix
IMGmask = bwlabel (IMGmask)
# identify contours of all the objects having an ID
IMGcontour = ocontour (IMGmask)
# now get the ID of longest contour
# It is assumed that the largest object in the image is the
# focus object extracted by the LOKI system and we expect
# that this is our object to investigate