Questions related to Image Data Analysis
I have over 50 datasets in a single forest plot. I am having trouble adjusting the font size and colors when designing high-resolution forest plots. Are there any tutorials or remedies?
To get a rough estimate of the depth accuracy of a stereo camera system, I want to use the formula shown in the attached picture. What I am not sure about is what is usually taken as the assumption for the disparity error delta_d. Is it 1 pixel? A certain fraction of a pixel? Why?
Visual data is becoming increasingly used in qualitative research, ranging from participant-created art (e.g., drawings, photos) to pop-culture texts (e.g., film, TV, advertisements). What approaches have you found most meaningful, useful, and/or appropriate?
I have downloaded the BRATS 2015 training dataset, including ground truth, for my project on brain tumor segmentation in MRI. The files in .mha format contain the T1C and T2 modalities along with the OT (ground-truth) labels. Please suggest how to read these files in MATLAB and how to proceed with the segmentation.
I am developing an adaptive thresholding method based on integral sum images and the Sauvola method. I need a dataset with ground truth to test my method against. Any help is appreciated.
I have an image (attached) with some overlapping objects, and I want to segment it with the watershed algorithm using the distance transform, but I am unable to achieve it. Steps followed: distance transform on the binary image, then dilation and opening, and finally watershed.
Does anyone know of methods (preferably using Fiji) to quantify double-positive cells? I have images from the FITC/TRITC/DAPI channels; not all of my signals overlap, and I would like a method to show what percentage of cells are double positive.
Are there any plugins or other tools besides Fiji for this?
Any help would be greatly appreciated.
I am looking for a publicly available video/image dataset of indoor spaces such as stores or malls. I came across the Multiple Object Tracking (MOT) dataset; however, I am looking for something more specific.
I want to train a kNN classifier to segment an image into four classes. I train the classifier with four classes, but when testing I get only two classes. What could cause this?
I have started my research on glaucoma detection using image processing techniques. I need more colour fundus image datasets for training and testing the neural network. Where can I find them?
To find the homography transformation matrix, I have the intrinsic parameters of the camera and the latitude and longitude angles of two images, and I need the rotation matrix between the views in order to build the homography.
How can I find the rotation matrix from the latitude and longitude angles of the two images?
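For a purely rotating camera the two views are related by the homography H = K R K⁻¹, where R is the relative rotation built from the two pairs of angles. A minimal sketch in Python (the axis conventions for "latitude" and "longitude" are my assumption and may need to be swapped for your rig):

```python
import numpy as np

def rot_y(a):  # rotation about the camera y-axis ("longitude"/yaw)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_x(a):  # rotation about the camera x-axis ("latitude"/pitch)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def rotation_homography(K, lat1, lon1, lat2, lon2):
    """Homography between two views of a purely rotating camera: H = K R K^-1,
    where R is the relative rotation from view 1 to view 2."""
    R1 = rot_x(lat1) @ rot_y(lon1)
    R2 = rot_x(lat2) @ rot_y(lon2)
    R = R2 @ R1.T
    return K @ R @ np.linalg.inv(K)
```

For identical angles the relative rotation is the identity, so H reduces to the identity matrix.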
Does anyone have a fuzzy method to resolve mixed pixels in crop (sugarcane/rice) classification for a district-level area?
I am staining lung tissue with H&E stain for a school project and have to quantify inflammation using ImageJ. Does anyone know how to do that or have literature that explains it? Thank you in advance.
I am working on a LANDSAT 7 surface reflectance (16-bit signed) image and face the SRC problem. This is a common problem in LANDSAT 7 (8-bit signed) imagery that can be solved with the focal analysis tool in ERDAS, but when I am dealing with the LANDSAT 7 surface reflectance (16-bit signed) product it does not work.
Is there any solution for the SRC problem?
I am currently working on the 1-D local binary pattern (LBP) of an EEG signal. After computing the local binary pattern of the signal, I need to extract histogram features. Please help me extract them using MATLAB.
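For reference, a 1-D LBP compares each sample with its neighbours and encodes the comparisons as a binary code; the feature vector is then the normalised histogram of those codes. A sketch in Python (the MATLAB version is a direct translation; the neighbourhood size p is an assumed parameter):

```python
import numpy as np

def lbp_1d(signal, p=4):
    """1-D local binary patterns with p neighbours on each side of the centre."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    codes = []
    for i in range(p, n - p):
        center = signal[i]
        neighbours = np.concatenate([signal[i - p:i], signal[i + 1:i + p + 1]])
        bits = (neighbours >= center).astype(int)          # threshold at the centre
        codes.append(int((bits * (2 ** np.arange(2 * p))).sum()))
    return np.array(codes)

def lbp_histogram(codes, p=4):
    """Normalised histogram over all possible codes -- the feature vector."""
    hist, _ = np.histogram(codes, bins=np.arange(2 ** (2 * p) + 1))
    return hist / max(hist.sum(), 1)
```

On a monotonically increasing signal every code is the same (left neighbours below the centre, right neighbours above), so the histogram has a single non-zero bin.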
I've been working with RTKLIB for the first time and am unsure about combining data files. For example, I want to combine three different days of data for processing. I've tried using wildcards, but it didn't work.
Can anyone suggest how to write mean shift code for image segmentation in MATLAB?
I have gone through http://saravananthirumuruganathan.wordpress.com/2010/04/01/introduction-to-mean-shift-algorithm/ , but it doesn't work for images.
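As a starting point, mean shift segmentation can be prototyped by clustering the pixel colour vectors with an off-the-shelf mean shift implementation; the MATLAB version would follow the same structure. A sketch using scikit-learn (the bandwidth is an assumed parameter to tune per image; spatial coordinates could be appended as extra features):

```python
import numpy as np
from sklearn.cluster import MeanShift

def mean_shift_segment(image, bandwidth=10.0):
    """Segment an image by clustering its pixel colours with mean shift.

    image: H x W x C float array. Returns an H x W integer label map.
    """
    h, w = image.shape[:2]
    features = image.reshape(h * w, -1).astype(float)   # one colour vector per pixel
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    labels = ms.fit_predict(features)
    return labels.reshape(h, w)
```

On an image with two well-separated colour populations this yields exactly two segments.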
I want to take a deeper look at the facial expressions of people with profound intellectual and multiple disabilities (PIMD) to analyse their emotional expressions.
For this purpose, I am searching for a ready-to-use software, which combines image processing (e.g., OpenPose, OpenFace) and machine learning. In addition, I would prefer a software that is free (i.e., Open Source) or at least for non-commercial research purposes.
So, I am not looking for methods, but for ready-to-use software, which includes a feature to train my own models. The reason is that every person with PIMD shows very unique behaviour signals and, therefore, you need one model for each emotion of a person.
Finally, I do not need a GUI or visualization; a simple command-line application would be enough.
A hint would be very helpful.
I have two stationary time series and I want to determine whether they are correlated. I decided to use cross-correlation; there is a good answer in "Correlation between two time series", but I don't know how to calculate the p-value (as http://stats.stackexchange.com/users/11032/michael-chernick suggested) and decide whether the correlation is significant. I want to code it in C++ and use it for simulation in OMNeT++. Could someone help me find a way, or point me to a document explaining the mathematics, so I can program it in C++?
P.S.: My data are the numbers of packets sent from two nodes to an OLT in a network, sampled continuously. Something like this:
from time 0 to 10:
series 1: 2 5 3 7 9 2 0 1 8 4
series 2: 3 4 6 0 5 9 2 5 3 7
Series 2 has a 6-minute delay; in another case there may be no delay. I must also estimate the delay (I can measure delay with OMNeT++, but I still don't know whether the series are correlated in the end).
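One common recipe is to compute the Pearson correlation of the overlapping samples at every candidate lag, keep the lag with the largest |r|, and use its p-value for the significance decision. A Python sketch of the logic, straightforward to port to C++ (max_lag is an assumed search range):

```python
import numpy as np
from scipy import stats

def best_lag_correlation(x, y, max_lag):
    """Scan lags, compute Pearson r on the overlapping parts of the two
    series, and return (lag, r, p) for the lag with the largest |r|."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = (0, 0.0, 1.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:                      # x delayed relative to y
            a, b = x[lag:], y[:len(y) - lag]
        else:                             # y delayed relative to x
            a, b = x[:len(x) + lag], y[-lag:]
        if len(a) < 3:
            continue
        r, p = stats.pearsonr(a, b)
        if abs(r) > abs(best[1]):
            best = (lag, r, p)
    return best
```

If the best p-value is below your significance threshold (e.g. 0.05), you would treat the series as correlated at that delay.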
I want to compare different facial expression recognition approaches on a (not yet) publicly available dataset.
For this purpose I need at least one software package that I can apply to my dataset videos out of the box. Software based on a state-of-the-art method would be perfect. I would also prefer software that is free, at least for non-commercial research purposes.
I am not looking for methods, but for ready-to-use software including the trained recognition models.
Update: I do not need a fancy GUI or visualization; a simple command-line application that reads a video file and writes a text file with FACS scores for each frame would be most helpful.
Suppose I have a DICOM file with pixel size (0.5, 0.5), and I want to change the pixel size to (0.7, 0.7), or vice versa. How can I change/modify the pixel size of a DICOM file?
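Changing the pixel size means resampling the pixel array so the physical field of view stays the same, then updating the PixelSpacing tag (e.g. with pydicom, after reading the pixel array from the file). A nearest-neighbour sketch of the resampling step, assuming a 2-D image:

```python
import numpy as np

def resample_to_spacing(pixels, old_spacing, new_spacing):
    """Nearest-neighbour resample of a 2-D pixel array so that the physical
    field of view is preserved at the new (row, column) pixel spacing."""
    old = np.asarray(old_spacing, float)
    new = np.asarray(new_spacing, float)
    new_shape = np.round(np.array(pixels.shape) * old / new).astype(int)
    # map each new pixel centre back to the nearest old pixel index
    rows = np.minimum((np.arange(new_shape[0]) * new[0] / old[0]).astype(int),
                      pixels.shape[0] - 1)
    cols = np.minimum((np.arange(new_shape[1]) * new[1] / old[1]).astype(int),
                      pixels.shape[1] - 1)
    return pixels[np.ix_(rows, cols)]
```

Going from 0.5 mm to 0.7 mm spacing shrinks a 10x10 array to roughly 7x7; for smoother results a linear or spline interpolator (e.g. scipy.ndimage.zoom) can replace the nearest-neighbour lookup.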
I tried to rectify three uncalibrated images of an object (I don't have any data about the cameras, no matrices); from this rectification I obtained the mutual information (MI) for all images.
But now I would like to reconstruct a 3D image from these data. I tried triangulation, but I got a flat, 2D-like result.
Could anyone help, please?
In the FBP algorithm, the ramp filter |w| has zero response at zero frequency, so its output has a zero DC component; the average value is zero, and negative values appear to compensate the generally positive values. Since the negative values are introduced by the ramp filtering and the image is generally positive, how should one perform normalization before quantitative evaluation against the reference image?
This question is related to my previous one:
I have an image processing question. I would greatly appreciate your help: how can I count the number of pixels within a known range of RGB values in an image in MATLAB?
For example, in the attached picture, how can I count the number of pixels in the leaves?
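Counting pixels in an RGB range reduces to a per-channel comparison and a logical AND; the same logic translates directly to MATLAB logical indexing. A Python sketch (the range bounds are placeholders to adapt to the leaf colour):

```python
import numpy as np

def count_pixels_in_range(image, low, high):
    """Count pixels whose (R, G, B) values all fall inside [low, high]
    per channel. Returns the count and the boolean mask."""
    image = np.asarray(image)
    low, high = np.asarray(low), np.asarray(high)
    mask = np.all((image >= low) & (image <= high), axis=-1)
    return int(mask.sum()), mask
```

For "green leaf" pixels one might use something like low = (0, 100, 0) and high = (80, 255, 80); the returned mask can also be displayed to verify the range visually.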
I am trying to code an image forgery detector proposed in a scientific paper. The approach uses the mean squared error inconsistency between an image and its re-interpolated version, using a set of known Color Filter Arrays (CFA). However, I don't know how to perform the first step of the algorithm, which is re-sampling an image with a CFA pattern (they have four available). Please take a look at the cfa.png and cfa2.png images. Can you explain how to use the CFA patterns to resample an image as described? My first thought was that these CFA patterns could be used as convolution kernels, but convolution normally involves image intensities, whereas in cfa1.png they talk about locations (x, y). Can you help me understand how this is performed?
P.S: You can find the paper at http://isis.poly.edu/~forensics/pubs/dirik_icip09.pdf
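I have not seen the paper's exact patterns, but CFA resampling generally means keeping, at each location (x, y), only the colour channel the pattern assigns to that location, producing a single-channel mosaic that is then re-interpolated (demosaiced). That is why the figure talks about locations rather than convolution kernels. A sketch using the standard Bayer RGGB pattern as a stand-in:

```python
import numpy as np

# A 2x2 CFA pattern given as channel indices (0 = R, 1 = G, 2 = B).
BAYER_RGGB = np.array([[0, 1],
                       [1, 2]])

def apply_cfa(image, pattern=BAYER_RGGB):
    """Sample one colour channel per pixel location according to the CFA,
    as a real single-sensor camera would record the scene.

    image: H x W x 3 array. Returns an H x W mosaic."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    channel = pattern[ys % pattern.shape[0], xs % pattern.shape[1]]
    return image[ys, xs, channel]
```

The MSE inconsistency would then be computed between the original image and the demosaiced version of this mosaic, for each of the candidate patterns.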
I'm trying to analyse images of animal tissues produced with a 3DHistech slide scanner. I usually use ImageJ but have problems fitting large enough images in memory. Are there any other free alternatives, or tricks that could be used with ImageJ? My analysis is simple: detecting and measuring circular holes of a certain size (adipocytes). The images are about 25000 x 25000 pixels.
I am working on disparity estimation using the sum of squared differences (SSD) method, applied to two images with different views. I obtained a disparity map containing zeros, ones, and twos. My assumption is that the regions covered with 0s in the map indicate similarity, and the regions with 1s and 2s indicate dissimilarity. I would like to measure the similarity in the map in terms of the number of 1s covering the regions.
Is my methodology correct? Please advise.
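For context, a value in an SSD disparity map is the horizontal shift (in pixels) that best matches the two views, not a similarity label: 0 means no shift, 1 and 2 mean shifts of one and two pixels, which typically correspond to scene depth rather than dissimilarity. A minimal block-matching sketch (block size and disparity range are assumed parameters):

```python
import numpy as np

def ssd_disparity(left, right, max_disp=8, block=5):
    """Block-matching disparity: for each pixel of the left image, find the
    horizontal shift whose block in the right image has the lowest SSD.
    The returned map holds pixel shifts, not similarity labels."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                ssd = ((patch - cand) ** 2).sum()
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right view is the left view shifted by 3 pixels, the interior of the map comes out as a constant 3.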
I am working on ultrasound images, calculating the grey-level co-occurrence matrix (GLCM) and grey-level difference matrix (GLDM) at four orientations (0, 45, 90, 135 degrees). Suppose we compute the contrast at 0, 45, 90, and 135 degrees. Is there any specific reason why we need to average the contrast over all directions and use it as a single feature, or can we keep the four values as separate features? And does the averaging have to be done in both cases, i.e., GLDM and GLCM? Please guide.
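Averaging over the four offsets yields a single rotation-tolerant feature, while keeping the four values preserves directional information at the cost of a larger feature vector; the same trade-off applies to GLDM. A sketch of the per-direction GLCM contrast (normalisation and non-symmetric counting are my assumptions):

```python
import numpy as np

OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def glcm_contrast(image, levels, angle):
    """Contrast of the grey-level co-occurrence matrix for one direction.

    image: 2-D array of integer grey levels in [0, levels)."""
    dy, dx = OFFSETS[angle]
    glcm = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[image[y, x], image[y2, x2]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float((glcm * (i - j) ** 2).sum())
```

The averaged feature is then mean(glcm_contrast(img, L, a) for a in (0, 45, 90, 135)); keeping the list instead gives four directional features.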
As part of my PhD work, I have been working on content-based image retrieval systems based on image feature extraction using local patterns. What are other possible research areas where such feature extraction methods can be employed?
I'm working on the diagnosis of eye diseases using image segmentation. I need papers or previous work done in that area.
I am currently working on a psychology research project which uses a dual-video task comprised of anxiety-provoking and positive videos to be shown side-by-side. I really want to try and match up the videos as much as possible by perceptual characteristics. For example, sizes of objects on screen, colours, textures, etc. Does anyone know of an algorithm, program, or app which could be used for this purpose?
1) For each key point, I took a 5x5 neighbourhood at the detected key point's scale and computed the magnitude and orientation of every point in that neighbourhood. I then used a 36-bin histogram and assigned the peak orientation as the orientation of the key point.
I've stored the peak orientations of the key points in an array. How will this array be useful in later stages?
2) For every key point, I took a 16x16 neighbourhood and subdivided it into 16 4x4 grids. For each grid I created an 8-bin histogram, yielding a 128-dimensional descriptor for each key point.
How is this descriptor robust to orientation? Please explain this point with some intuition.
Are the above steps correct ? Did I miss any step in between ?
I've computed the same steps for the 2nd image.
Now, can anyone explain how to match the key points, with some intuition behind it?
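Key points are usually matched by comparing descriptors: for each descriptor in image 1, find its nearest and second-nearest neighbours in image 2, and accept the match only if the nearest is clearly closer than the second nearest (Lowe's ratio test) -- the intuition being that an ambiguous match has two almost equally close candidates. A sketch (the 0.8 threshold is the commonly used value):

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match two sets of descriptors with Lowe's ratio test.

    Returns a list of (index_in_desc1, index_in_desc2) pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        # accept only when the best match beats the runner-up decisively
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

This is where the orientation normalisation pays off: because each descriptor was built relative to its key point's peak orientation, two views of the same point yield nearby descriptors even when the image is rotated.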
Scanned images are typically 2000+ pixels wide and often larger than 5 MB, which badly hurts a website's page load speed. So I want to introduce a feature that reduces an image's width to 800 px, compromising on image quality but not too much.
Any suggestions for this work?
Can anyone suggest standard interpretation tables for PSNR (peak signal-to-noise ratio) and the Kappa coefficient (accuracy assessment parameters) in image processing, to check whether the value generated by the proposed module falls within an acceptable range?
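I cannot point to a universal table, but PSNR itself is straightforward to compute, and values roughly in the 30-50 dB range are commonly treated as acceptable for 8-bit images (a rule of thumb, not a standard). A sketch:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float('inf')              # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Identical images give infinite PSNR; the maximum possible error (all pixels off by max_val) gives 0 dB, which bounds the interpretable range.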
Spectral matting is a method that permits segmentation of the foreground while taking into consideration all of its details.
I'm trying to develop an effective reporting method for surface roughness of colony biofilms. Currently, I only have simple 2-D images available from a scanner, but would be open to suggestions for better imaging methods as well. I've attached a representative image for the kinds of colony biofilms I'm dealing with. These images are representative of the kinds of differences I'd like to describe quantitatively: http://imgur.com/a/G90AC
Thanks for taking the time to read!
Let's say we find SIFT key points in the grey-scale image (the average of the RGB channels), but we have also computed several feature images (of the same size as the original image), and we want to compute SIFT descriptors for all of them at the same image locations. That is, we do not want to end up with a different number and location of points for each feature image. I could not do this, at least with the vl_feat implementation. Is there any implementation (preferably in MATLAB) that accepts key-point locations as input for extracting SIFT descriptors?
I want to calibrate real-world coordinates to the image, but I cannot figure out how to proceed.
I need to identify the height of a moving object in a video, assuming a static background as in the reference image, and check whether it matches the real-world height. My camera is also fixed at a particular height.
Please suggest an approach.
Using the HOG transform I obtained a feature vector for each image. Now, how do I classify these images with a scikit-learn classification algorithm (kNN) using the obtained feature vectors?
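A minimal end-to-end sketch with scikit-learn, using random vectors as stand-ins for the real HOG features (the feature dimension 36 and the two classes are assumptions for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: one HOG feature vector per image, plus a class label.
rng = np.random.default_rng(42)
hog_features = np.vstack([rng.normal(0.0, 0.1, (20, 36)),   # class 0 images
                          rng.normal(1.0, 0.1, (20, 36))])  # class 1 images
labels = np.array([0] * 20 + [1] * 20)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(hog_features, labels)                    # training images' HOG vectors
pred = knn.predict(rng.normal(1.0, 0.1, (1, 36)))  # HOG vector of a new image
```

In practice you would replace the random arrays with the vectors produced by your HOG extractor, one row per image, and keep a held-out set to check accuracy.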
Basically, SSIM and PSNR are mainly used to compare reconstructed images with the original image, but in the literature these two criteria are frequently used to compare segmented images with the original image.
I am working on vein recognition and localization. I found that it is difficult to distinguish capillaries from veins, especially blurred veins. I use the Frangi filter to extract the veins, but dark, thin capillaries become thick after filtering, and my algorithm is then confused about which vessel is the thick vein. I want to recognize the capillaries so that I can make judgements about vein thickness. Can anyone give some advice?
The image below was captured by a common industrial camera with an infrared filter under infrared light.
How do I calculate a confidence level in computer vision? I work on object detection, specifically airplane door detection, and have detected relevant features such as the door window, door handle, text boxes, and door frame lines. First, I detect the individual features; then, at a second level, I perform a logical organisation of the features and eliminate falsely detected ones. At the end I have some final checks, after which only features belonging to the object should remain. My question is: with what confidence level can I declare that this is the object I want to detect? Any help?
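One simple scheme is to assign each feature a weight reflecting how discriminative it is and report the fraction of verified weight as the confidence. A sketch (the weights are hypothetical and would be tuned or learned from validation data):

```python
# Hypothetical weights expressing how discriminative each door cue is.
WEIGHTS = {"door_window": 0.3, "door_handle": 0.3,
           "text_box": 0.2, "frame_lines": 0.2}

def detection_confidence(detected):
    """Confidence = weight of the features that survived verification,
    divided by the total available weight."""
    return sum(WEIGHTS[f] for f in detected) / sum(WEIGHTS.values())
```

You would then declare a door when the confidence exceeds a threshold chosen on validation images (e.g. 0.7); probabilistic alternatives such as a logistic regression over the feature indicators give a calibrated score instead.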
Hi everyone, I am working on script identification from multilingual document images. I need to know the best feature extraction method for extracting features from grayscale document images.
I have an image with a green background and some shadows around my materials. I want to remove the background and shadows, then convert the RGB colours to Lab and get the mean L*, a*, and b* values separately, to use these values in the total colour change (delta E).
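The RGB-to-Lab conversion can be done from scratch (or with skimage.color.rgb2lab); after masking out the green background, average L*, a*, b* over the remaining pixels and compute delta E against the reference colour. A sketch assuming sRGB inputs in [0, 1] and a D65 white point:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, float)
    # inverse sRGB gamma
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])          # D65 reference white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(lab1, lab2):
    """CIE76 colour difference between two Lab triples."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```

For the background mask, a simple heuristic is to drop pixels where the green channel clearly dominates (e.g. G > R and G > B by some margin) before averaging.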
I am working on camera-based document image analysis, where I have to identify the script in multilingual document images, and hence I need to extract features. Is AlexNet suitable for extracting document image features?
The original data are time series of x and y coordinates describing the displacement of an object or subject on the field. The data were collected with a semi-automatic tracking system at 30 fps, after which a 6 Hz Butterworth low-pass filter was applied.
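The filtering step can be reproduced with SciPy: a Butterworth low-pass at 6 Hz, applied forward and backward (filtfilt) so the displacement data keep zero phase lag. A sketch with a synthetic trajectory (the 2nd-order choice is an assumption; the original study may have used a different order):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0          # tracking system frame rate (Hz)
cutoff = 6.0       # low-pass cutoff (Hz)

# 2nd-order Butterworth; filtfilt applies it forward and backward (zero phase).
b, a = butter(2, cutoff / (fs / 2))

t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 0.5 * t)                 # slow displacement component
noisy = x + 0.1 * np.sin(2 * np.pi * 12 * t)    # 12 Hz measurement noise
smoothed = filtfilt(b, a, noisy)
```

The 0.5 Hz movement passes essentially unchanged while the 12 Hz jitter is strongly attenuated; the same two lines (butter + filtfilt) are applied to the x and y series independently.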
I have a few thousand brain MRIs (incl. 3D T1, FLAIR, SWI and others) with a 512x512 matrix. I need to resize these to 256x256. Mango performs well, but it's hardly practical to convert images one at a time. What would be the easiest way to batch resize?
What are the fields of application for high resolution (> =256*256 pixels) face detection ?
The most common application is in surveillance systems. It may also be used for identity-card checks or similar cases.
But is there any other application?
I found only three relevant papers; I attached links to them. But they are not exactly what I am looking for.
The images are obtained from a video sequence captured with my camera.
I am planning to implement the algorithm in OpenCV (C++). I would prefer not to use machine learning approaches.
I have a question about the relation between CNN kernel size and how far a feature map can be shifted.
Given three consecutive 7x7 convolutions, whose combined receptive field equals that of a single 19x19 kernel, can a feature map be shifted by up to 19 pixels using the three consecutive 7x7 kernels?
Hope to hear from you.
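For reference, the combined receptive field of stacked convolutions grows by (k - 1) x jump per layer, where jump is the product of the strides so far; three stride-1 7x7 layers therefore see a 19x19 window (7 + 6 + 6), not 11x11. A small helper:

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of a stack of convolutions: each layer adds
    (k - 1) * jump, where jump is the product of the strides so far."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf
```

This mirrors the familiar fact that two stacked 3x3 convolutions cover the same 5x5 window as one 5x5 kernel.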
Can anyone explain the features that can be extracted from fonts?
Some are pixel-based, some are curvature-based...
I work in visual speech recognition. After extracting the mouth region and performing a 2-D DCT, I want to extract the 24 highest-energy DCT coefficients using MATLAB.
How can I identify the highest-energy locations in the DCT matrix?
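One way is to flatten the DCT matrix, sort by absolute value, and keep the top-k coefficients together with their locations; note that many pipelines instead keep the low-frequency coefficients in zigzag order. A Python sketch of the energy-based variant (in MATLAB, abs + sort achieve the same):

```python
import numpy as np
from scipy.fft import dctn

def top_dct_features(region, k=24):
    """2-D DCT of the mouth region, then keep the k coefficients with the
    largest magnitude (highest energy) and their (row, col) locations."""
    coeffs = dctn(np.asarray(region, float), norm='ortho')
    flat = coeffs.ravel()
    idx = np.argsort(np.abs(flat))[::-1][:k]        # top-k energies
    return flat[idx], np.unravel_index(idx, coeffs.shape)
```

For a constant region all the energy sits in the DC coefficient at (0, 0), which is a quick sanity check of the location bookkeeping.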
I'm working with ImageJ on analyzing MRI images of Na+ in soil and roots. I have a calibration image: the whiter the image, the higher the concentration of Na+. I want to be able to analyze an image and estimate the Na+ concentration in the soil and in/around the root. Thanks, Adi
I am facing a problem with thresholding while using ImageJ. I am analyzing 2160 images as one stack. I usually divide them into 3 groups (720 images each), but this time I got three different threshold values when converting the images to black and white. Does anyone have an idea why?
Hello all,
I am working on edge detection, and I must first apply a smoothing filter to the image before any other processing. The problem is that the chosen mask gives good results on some images and bad results on others; if I change the mask, I get the opposite. Which image property measures the noise? That way, I could change the mask automatically depending on the image noise.
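One practical answer is Immerkaer's fast noise-variance estimate: convolving with a Laplacian-difference kernel cancels most image structure, so what remains is mostly noise, and its scaled mean absolute value estimates the noise standard deviation. A sketch (valid for roughly Gaussian noise; strongly textured images bias it upward):

```python
import numpy as np

def estimate_noise_sigma(image):
    """Immerkaer's fast noise estimate for a 2-D greyscale image."""
    img = np.asarray(image, float)
    k = np.array([[1, -2, 1],
                  [-2, 4, -2],
                  [1, -2, 1]])          # difference of two Laplacians
    h, w = img.shape
    acc = np.zeros((h - 2, w - 2))
    for dy in range(3):                 # 'valid' convolution, written out
        for dx in range(3):
            acc += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(np.sqrt(np.pi / 2) / (6 * (h - 2) * (w - 2)) * np.abs(acc).sum())
```

The estimated sigma can then drive the smoothing-mask choice, e.g. a larger Gaussian kernel when the estimate is high and a smaller one when the image is already clean.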
Once the Haar classifier detects something in an image, I try to calculate the average intensity of that region and compare it with the intensity of a region where a human is present. I am not sure whether this will work.
Or should I try passing an adaptively thresholded image to the classifier?
I am working on digital image classification: given an image, I need to classify whether it is a computer-generated (CG) image or a natural image.