Science topic

Image Processing - Science topic

All kinds of image processing approaches.
Questions related to Image Processing
  • asked a question related to Image Processing
Question
4 answers
I am trying to make generalizations about which layers to freeze. I know that I must freeze feature extraction layers, but some feature extraction layers should not be frozen (for example, in the transformer architecture, the encoder and the multi-head attention part of the decoder are feature extraction layers that should not be frozen). Which layers should I call "feature extraction layers" in that sense? What kind of "feature extraction" layers should I freeze?
Relevant answer
Answer
No problem Muhammedcan Pirinççi I am glad it helped you.
In my humble opinion, we should first consider the difference between transfer learning and fine-tuning and then decide which one better fits our problem. In this regard, I found this link very informative and useful: https://stats.stackexchange.com/questions/343763/fine-tuning-vs-transferlearning-vs-learning-from-scratch#:~:text=Transfer%20learning%20is%20when%20a,the%20model%20with%20a%20dataset.
Afterward, once you decide which approach to use, there are plenty of built-in functions and frameworks to do this for you. I am not sure I understood your question completely, but I have tried to address it a little. If anything is still vague, please don't hesitate to ask.
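As a concrete illustration, here is a minimal PyTorch sketch of freezing a pretrained feature extractor while keeping a new classification head trainable (the torchvision ResNet-18 backbone and num_classes are assumptions, not from the question):

import torch.nn as nn
import torchvision.models as models

num_classes = 10  # hypothetical, set to your task
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head, trainable by default

For transformer models the same requires_grad mechanism applies; you simply choose which submodules (e.g., individual encoder blocks) to leave unfrozen.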
Regards
  • asked a question related to Image Processing
Question
8 answers
As a generative model, a GAN is usually used for generating fake samples, not for classification.
Relevant answer
Answer
A GAN has a discriminator, which can be used for classification. I am not sure why a semi-supervised approach is needed here, Muhammad Ali.
However, the discriminator is only trained to classify between generated and real data. If this is what you want, Mohammed Abdallah Bakr Mahmoud, then this should work fine.
Normally, I would rather train a dedicated classifier if enough labeled data is available.
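For illustration, a minimal PyTorch sketch of reusing a trained discriminator's layers as a feature extractor for classification; the discriminator architecture and all sizes here are hypothetical stand-ins, not from the thread:

import torch.nn as nn

discriminator = nn.Sequential(          # stand-in for a trained GAN discriminator
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 1),                  # real/fake head
)
num_classes = 10                        # hypothetical
# drop the real/fake head and attach a class head instead
backbone = nn.Sequential(*list(discriminator.children())[:-1])
classifier = nn.Sequential(backbone, nn.Linear(128, num_classes))

The new head is then fine-tuned on the labeled data, which is the usual semi-supervised GAN recipe.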
  • asked a question related to Image Processing
Question
4 answers
Dear All,
I have performed a Digital Image Correlation test on a rectangular piece of rubber to test the validity of my method. However, I get this chart most of the time. Can anyone show me why this is happening? I am using Ncorr and Post Ncorr for image processing.
Relevant answer
Answer
Aparna Sathya Murthy Thank you very much for taking the time. I have done it. However, I guess there might be a correction about which I am unaware!
Best regards,
Farzad
  • asked a question related to Image Processing
Question
11 answers
The monkeypox virus is currently spreading very fast, which is very alarming. Awareness can help people reduce the panic it is causing all over the world.
To that end, is there any image dataset for monkeypox?
Relevant answer
Answer
Medical Datasets
Please consider the links above for medical datasets.
  • asked a question related to Image Processing
Question
11 answers
Dear Researchers.
These days, machine learning applications in cancer detection have increased through the development of new image processing and deep learning methods. In this regard, what is your idea of a new image processing and deep learning method for cancer detection?
Thank you in advance for participating in this discussion.
Relevant answer
Answer
Imagine a world wide effort to incorporate all cancer datasets into a worldwide cancer Atlas db. Where is IBM Watson on this I wonder! https://en.wikipedia.org/wiki/List_of_databases_for_oncogenomic_research
  • asked a question related to Image Processing
Question
3 answers
As a student who wants to design a chip for processing CNN algorithms, I ask my question. If we want to design a NN accelerator architecture with RISC-V for a custom ASIC or FPGA, what problems or algorithms do we aim to accelerate? Clearly we accelerate the MAC (multiply-accumulate) procedures with parallelism and other methods, but aiming at MLPs or CNNs makes a considerable difference in the architecture.
As I have read, CNNs are mostly used for image processing, so anything involving images is usually related to CNNs. Is it an acceptable idea to design an architecture to accelerate MLP networks? For MLP acceleration, which hardware should I additionally work on? Or is it better to focus on CNNs, understand them, and work on them more?
Relevant answer
Answer
As I understand from your question, you want to design the chip for your NN. There are two different worlds. One is developing a NN and converting it into an RTL description; concerning this problem, if your design is solely to be implemented on an ASIC, then you have to take care of the memories and their sizes. Also, you can use pipelining and other architectural techniques to design a robust architecture. The other is implementing it on an ASIC with a commercial library of choice; this is the job of the design engineer, who will take care of the physical implementation. Lastly, if you want to target an FPGA, then you should take care to exploit the DSPs and BRAMs in your design to get the maximum performance from the NN.
  • asked a question related to Image Processing
Question
3 answers
I have a large DICOM dataset, around 200 GB. It is stored in Google Drive. I train the ML model from the lab's GPU server, but it does not have enough storage. I'm not authorized to attach an additional hard drive to the server. Since there is no way to access Google Drive without Colab (if I'm wrong, kindly let me know), where can I store this dataset so that I will be able to access it for training from the remote server?
Relevant answer
Answer
Tareque Rahman Ornob Because all data in Access is saved in tables, tables are at the core of every database. You may be aware that tables are divided into vertical columns and horizontal rows; in Access, rows and columns are referred to as records and fields.
To gain access to data in a database, you must first connect to it. When you launch SQL*Plus, it connects to your default Oracle database using the username and password you choose.
  • asked a question related to Image Processing
Question
3 answers
Could you please tell me what the effect of electromagnetic waves on a human cell is? And how can the effect of electromagnetic waves on a human cell be modeled using image processing methods?
Relevant answer
Answer
Dear Sir
I also recommend the reference suggested above by Dr. Guynn because it provides wide detail about these kinds of issues. For the modeling, I suggest trying the FDTD (finite-difference time-domain) method, since it can model any medium by partitioning it into small cells (similar to pixels).
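To make the suggestion concrete, here is a minimal 1D FDTD sketch in Python (normalized units with a Gaussian hard source; purely illustrative, not a cell model):

import numpy as np

size, steps = 200, 400
imp0 = 377.0          # free-space impedance
ez = np.zeros(size)   # electric field
hy = np.zeros(size)   # magnetic field
for t in range(steps):
    hy[:-1] += (ez[1:] - ez[:-1]) / imp0   # magnetic-field update
    ez[1:] += (hy[1:] - hy[:-1]) * imp0    # electric-field update
    ez[0] = np.exp(-((t - 30.0) / 10.0) ** 2)  # Gaussian hard source at the left edge

A medium such as a cell would then be modeled by assigning each grid cell its own permittivity and conductivity in these update equations.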
  • asked a question related to Image Processing
Question
3 answers
I am currently working on Image Processing of Complex fringes using MATLAB. I have to do the phase wrapping of images using 2D continuous wavelet transform.
Relevant answer
Answer
  • asked a question related to Image Processing
Question
4 answers
I have a salt sample (5 grains) which undergoes hydration and dehydration for 8 cycles. I have pictures of the grains swelling and shrinking, taken every five minutes under a microscope. When I compile the images into a video, I can see that the salt is swelling and shrinking, but I need to quantify how much the size increases or decreases. Can anyone explain how I can make use of the pictures?
Relevant answer
Answer
Aastha Aastha One basic thing that hasn't been mentioned is that a 2-fold change in diameter is an 8-fold change in mass or volume. (Another way of looking at this is that a 1% change in diameter produces a 3% change in volume, or 2% in surface/projected area.) However, one imagines that the mass of the particle doesn't/cannot change, and thus the density must reduce (with incorporation of water) in the swelling process, as the volume is obviously increasing.
With imaging you're looking at a 2-D representation of a 3-D particle. All of these things need consideration. What is the increase you're trying to document: length, surface, or volume?
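As a starting point, a minimal Python/OpenCV sketch that tracks the projected area of the grains across the image sequence (Otsu thresholding and the file layout are assumptions):

import cv2
import glob

areas = []
for path in sorted(glob.glob("frames/*.png")):  # hypothetical file layout
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu picks a global threshold automatically; invert if grains are dark on a light field
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    areas.append(cv2.countNonZero(binary))  # projected area in pixels
print(areas)  # relative swelling/shrinking over time

Dividing each value by the first frame's area gives the relative change; calibrate with a known length if you need physical units.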
  • asked a question related to Image Processing
Question
3 answers
I am working on a classification task and I used the 2D-DWT as a feature extractor. I want to ask for more details on why I can concatenate 2D-DWT coefficients to make an image of features. I am thinking of concatenating these coefficients (the horizontal, vertical, and diagonal coefficients) to make an image of features and then feeding this to a CNN, but I want convincing and true evidence for this new approach.
Relevant answer
Answer
For more simplicity, you can use only the LL coefficients, which achieve the best results.
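For what it's worth, a minimal PyWavelets sketch of the idea of stacking the detail sub-bands into a multi-channel "feature image" (the wavelet and decomposition level are assumptions):

import numpy as np
import pywt

img = np.random.rand(256, 256)  # stand-in for your input image
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")  # one-level 2D DWT
# stack the three detail sub-bands as channels of a half-resolution feature image
features = np.stack([cH, cV, cD], axis=-1)  # shape (128, 128, 3)

Since the three sub-bands are spatially aligned, the stack behaves like an ordinary 3-channel image for a CNN, which is the usual justification for this kind of concatenation.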
  • asked a question related to Image Processing
Question
2 answers
Any short introductory document from image domain please.
Relevant answer
Answer
In general, linear features are easier to distinguish than nonlinear features.
  • asked a question related to Image Processing
Question
11 answers
Hello members,
I would appreciate it if someone could help me choose a topic in AI deep learning or machine learning.
I am looking for an algorithm that is used in different applications and has some issues in terms of accuracy and results, so that I can work on its improvement.
Please recommend some papers that will help me find some gaps so I can write my proposal.
Relevant answer
Answer
You can explore federated learning as a distributed DL. Please take a look at the following link.
  • asked a question related to Image Processing
Question
4 answers
I'm looking for the name of an SCI/SCIE journal with a quick review time and a high acceptance rate to publish my paper on image processing (Image Interpolation). Please make a recommendation.
Relevant answer
Answer
Computer Vision and Image Understanding ---> 6.2 weeks (submission to first decision)
Journal of Visual Communication and Image Representation ---> 6.7 weeks (submission to first decision)
The Visual Computer ---> 46 days (submission to first decision)
Signal Processing: Image Communication ---> 6 weeks (submission to first decision)
Journal of Mathematical Imaging and Vision ---> 54 days (submission to first decision)
  • asked a question related to Image Processing
Question
4 answers
Hi all,
I am looking for experts in the area of biomedical image processing.
Any recommendations?
Please share
  • asked a question related to Image Processing
Question
4 answers
As you can see, the image was taken by changing the camera angle to include the building in the scene. The problem with this is that the measurements are not accurate in the perspective view.
How can I fix this image to have the right (centered) perspective?
Thanks.
Relevant answer
Answer
Jabbar Shah Syed To correct the perspective, select Edit > Perspective Warp. When you do this, the pointer changes to a different icon. When you click on the image, it generates a grid with nine sections. Manipulate the grid's control points (on each corner) and expand the grid to encompass the whole structure.
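If you prefer a programmatic route instead of Photoshop, here is a minimal OpenCV sketch of a four-point perspective rectification; the corner coordinates are placeholders you would pick by hand or detect:

import cv2
import numpy as np

img = cv2.imread("building.jpg")  # hypothetical input
# four corners of the building facade in the photo (placeholder values)
src = np.float32([[120, 80], [880, 40], [900, 950], [100, 900]])
# where those corners should map in a fronto-parallel view
dst = np.float32([[0, 0], [800, 0], [800, 900], [0, 900]])
M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, M, (800, 900))
cv2.imwrite("rectified.jpg", rectified)

After rectification, pixel distances along the facade become proportional to real distances, which addresses the measurement problem.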
  • asked a question related to Image Processing
Question
3 answers
Red Blood Cells, White Blood Cells, Sickle Cells.
Relevant answer
Answer
Alternatively, you can check this link.
  • asked a question related to Image Processing
Question
4 answers
Suppose I use the Laplacian pyramid for an image denoising application; how would it be better than wavelets? I have read some documents related to Laplacian tools in which Laplacian pyramids are said to offer a better selection for signal decomposition than wavelets.
Relevant answer
Answer
I would recommend the Dual-Tree Complex Wavelet transform to you;
you can find papers about it in my profile.
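To make the comparison tangible, here is a minimal OpenCV sketch that builds a Laplacian pyramid (the level count is an assumption); each level holds the band-pass detail that a wavelet transform would further split into orientations:

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
levels, gaussian = 4, [img]
for _ in range(levels):
    gaussian.append(cv2.pyrDown(gaussian[-1]))       # Gaussian pyramid
laplacian = []
for i in range(levels):
    h, w = gaussian[i].shape[:2]
    up = cv2.pyrUp(gaussian[i + 1], dstsize=(w, h))  # upsample the coarser level
    laplacian.append(cv2.subtract(gaussian[i], up))  # band-pass residual
laplacian.append(gaussian[-1])                       # coarsest low-pass level

Denoising then amounts to shrinking coefficients in the detail levels before reconstructing, just as with wavelet coefficients.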
  • asked a question related to Image Processing
Question
4 answers
Dear Friends,
I would like to know the best method for parallelizing my existing sequential MATLAB code on a GPU. My code involves several custom functions and nested loops.
I tried converting it to a CUDA MEX function using MATLAB's GPU Coder, but I observed that it takes much more time (than the CPU) to run the same function.
Proper suggestions will be appreciated.
Relevant answer
Answer
MATLAB may launch a parallel pool automatically based on your preferences. To enable this option, on the Home tab, in the Environment group, select Parallel > Parallel Preferences, and then select Automatically create a parallel pool. Set your solver to parallel processing mode.
Make a row vector with values ranging from -15 to 15. Use the gpuArray function to move it to the GPU and create a gpuArray object. To work with gpuArray objects, use any MATLAB function that supports gpuArray; MATLAB automatically performs the computations on the GPU.
To start a parallel pool on a cluster, use the parpool function. When you do this, parallel features such as parfor loops and parfeval execute on the cluster workers. If you use GPU-enabled functions on gpuArrays, the operations will be executed on the GPU of the cluster worker.
  • asked a question related to Image Processing
Question
7 answers
Medical Imaging.
Relevant answer
Answer
Hi,
Have you already found the answer to your question?
  • asked a question related to Image Processing
Question
17 answers
Hello Researchers,
Can you guys tell me the problems or limitations of Computer Vision in this era, on which no one has yet paid heed or problems on which researchers and Industries are working but still didn't get success?
Thanks in Advance!
Relevant answer
Answer
Computer Vision Disadvantages
Lack of specialists - Companies need to have a team of highly trained professionals with deep knowledge of the differences between AI vs. ...
Need for regular monitoring - If a computer vision system faces a technical glitch or breaks down, this can cause immense loss to companies.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
11 answers
Dear Colleagues,
If you are researcher who is studying or already published on Industry 4.0 or digital transformation topic, what is your hottest issue in this field?
Your answers will guide us in linking the perceptions of experts with bibliometric analysis results.
Thanks in advance for your contribution.
  • asked a question related to Image Processing
Question
14 answers
Lane detection is a common use case in computer vision, and self-driving cars rely heavily on seamless lane detection. I attempted a use case inspired by road lane detection, using computer vision to detect railway track lines, and I am encountering a problem. In road lane detection, the colour difference between the road (black) and the lane lines (yellow/white) makes edge detection, and thus lane detection, fairly easy. In railway track line detection, however, no such clear threshold for edge detection exists, and the output is as in the second image, making the detection of track lines unclear, with noise from the track slab detections, etc. This question therefore seeks guidance/advice/knowledge exchange to solve this problem. Any feedback on the approach taken to attempt the problem is highly appreciated. Tech: OpenCV
Relevant answer
Answer
I agree with the answers above: try deep learning approaches, which give the best results in terms of noise removal and lane detection.
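As a classical baseline to experiment with before deep learning, a minimal OpenCV sketch using Canny edges plus a probabilistic Hough transform; all thresholds here are assumptions to tune on your rail footage:

import cv2
import numpy as np

img = cv2.imread("track.jpg")  # hypothetical frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)
# keep only long, nearly continuous segments to suppress slab/ballast noise
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=150, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("tracks_detected.jpg", img)

The minLineLength/maxLineGap constraints act as a geometric prior, which partially compensates for the missing colour contrast between rails and slab.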
  • asked a question related to Image Processing
Question
5 answers
I'm about to start some analyses of vegetation indexes using Sentinel-2 imagery through Google Earth Engine. The analyses are going to comprise a series of images from 2015/2016 until now, and some of the data won't be available in Level-2A of processing (Bottom-of-Atmosphere reflectance).
I know there are some algorithms to estimate BOA reflectance. However, I don't know how good these estimates are, and the products generated by Sen2Cor look more reliable to me. I've already applied Sen2Cor through SNAP, but now I need to do it in a batch of images. Until now, I couldn't find any useful information about how to do it in GEE (I'm using the Python API).
I'm a beginner, so all tips are going to be quite useful. Is it worth applying Sen2Cor or the other algorithms provide good estimates?
Thanks in advance!
Relevant answer
Answer
I would also suggest you use PEPS (CNES) to download the Sentinel-2 images and then apply the MAJA corrections to all the images you want. I am also working on a time series from 1984 until now, combining Landsat 5 TM and Landsat 8 OLI together. For 2015 onwards, I used Sentinel images from PEPS to validate some of the OLI images from 2015 to 2020, and I found the MAJA corrections to work well.
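On the GEE side: as far as I know, Level-2A (Sen2Cor-processed) surface reflectance is already served as a collection in Earth Engine, so for dates where it exists you may not need to run Sen2Cor yourself; coverage before late 2018 is partial, so the early part of the series would still need Sen2Cor or MAJA offline. A minimal Python API sketch, with the location and dates as placeholders:

import ee

ee.Initialize()  # assumes you have authenticated beforehand
aoi = ee.Geometry.Point([-47.9, -15.8])  # placeholder location
collection = (ee.ImageCollection("COPERNICUS/S2_SR")  # Level-2A surface reflectance
              .filterBounds(aoi)
              .filterDate("2019-01-01", "2019-12-31")
              .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))
print(collection.size().getInfo())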
  • asked a question related to Image Processing
Question
3 answers
I am publishing paper in scopus journal and got one comment as follows:
Whether the mean m_z is the mean within the patches 8x8? If the organs are overlap then how adaptive based method with patches 8x8 is separated? No such image has been taken as a evidence of the argument. Please incorporate the results of such type of images to prove the effectiveness of the proposed method. One result is given which are well separated.
Here I am working on a method which takes patches of the given image and takes their mean. This mean is used for normalizing the data.
However, I am unable to understand the meaning of the second sentence. As per my knowledge, the MRI image is kind of see-through, so how will there be any overlap of organs?
Any comments?
Relevant answer
Answer
A pixel in an MRI image is proportional to the spin density of the corresponding voxel, weighted by T1 and T2 (or T2*, depending on the pulse sequence). A single voxel can potentially contain multiple tissue types. This is even more likely in the case of your 8x8 patch.
The reviewer is wondering if your method can correctly normalize a pixel when the patch is a weighted sum of several tissue types. You could have two pixels, each representing the same tissue type with similar signal intensity, but where the 8x8 patches around these two voxels contain different distributions of tissue types. In this case, the reviewer thinks that your method would give different normalized values for these two pixels, even though in the original image they had essentially the same intensity. The reviewer is asking you to demonstrate how your method would handle this situation.
  • asked a question related to Image Processing
Question
6 answers
During preprocessing of medical image data, different techniques should be considered, such as cropping, filtering, masking, and augmentation. My query is: which techniques are frequently applied to medical image datasets during pre-processing?
Relevant answer
Answer
Image pre-processing techniques can be classified into four kinds, which are given below.
1. Pixel brightness transformations/corrections
2. Geometric transformations
3. Image filtering and segmentation
4. Fourier transform and image restoration
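A minimal Python sketch of the pre-processing steps most commonly applied to medical images (crop, resize, intensity normalization, simple augmentation); the crop margins and target size are assumptions:

import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical slice
img = img[20:-20, 20:-20]                 # crop away background/border
img = cv2.resize(img, (256, 256))         # resize to the network input size
img = img.astype(np.float32)
img = (img - img.mean()) / (img.std() + 1e-8)  # z-score intensity normalization
flipped = np.fliplr(img)                  # a simple augmentation example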
Kind Regards
Qamar Ul Islam
  • asked a question related to Image Processing
Question
6 answers
Hello
I'm looking to generate synthetic diffusion images from T1-weighted images of the brain. I read that diffusion images are a sequence of T2 images but with gradients applied, so maybe something could be built on that, although I'm not sure how to generate these gradients either. I'm trying to generate "fake" diffusion images from T1w images because of the lack of data from the subjects I'm evaluating.
Can someone please help me?
  • asked a question related to Image Processing
Question
10 answers
Hello,
I have been working on computer vision. I used datasets from Kaggle or other sites for my projects. But now I want to do lane departure warning and real-time lane detection under real conditions (illumination, road conditions, traffic, etc.). The idea of using simulators came to my mind, but there are lots of simulators online, and I'm confused about which one would be suitable for my work!
It would be very helpful if anyone could guide me in picking the best simulator for my work.
  • asked a question related to Image Processing
Question
2 answers
Is it because the imaging equation used by the color constancy model is built on RAW images? Or is it because the diagonal model can only be applied to RAW images? When we train a color constancy model using sRGB images, can we still use certain traditional color constancy models such as gamut mapping, corrected moments, or CNNs?
Relevant answer
Answer
Color constancy is a type of subjective constancy and a property of the human color perception system that ensures the perceived color of things remains largely consistent under changing lighting conditions.
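To connect back to the question: many classical algorithms do carry over to sRGB if you first linearize the image (undo the gamma), since the diagonal model assumes linear responses. A minimal gray-world sketch in Python, with the simple power-law gamma as a simplifying assumption:

import numpy as np

def gray_world(img_srgb):
    img = (img_srgb / 255.0) ** 2.2          # approximate linearization of sRGB
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    img = img * (means.mean() / means)       # scale channels toward a common gray
    return (np.clip(img, 0, 1) ** (1 / 2.2) * 255).astype(np.uint8)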
Kind Regards
Qamar Ul Islam
  • asked a question related to Image Processing
Question
8 answers
Could anyone suggest software or code (R or Python) capable of recognizing bumblebees (recognizing only, not identifying) from video recordings?
  • asked a question related to Image Processing
Question
4 answers
Dear sir/madam,
Greetings for the day,
With great privilege and pleasure, I request anyone from the image processing domain to review my Ph.D. thesis. I hope you will be kind enough to review my research work. Please reply to me at my email id: vivec53@gmail.com at your leisure.
Thanking you in advance.
Relevant answer
Answer
(Reversible Data Hiding) belongs to Electrical Science @Nawab Khan
  • asked a question related to Image Processing
Question
19 answers
Hi. I'm working on 1000 images of 256x256 dimensions. For segmentation I'm using SegNet, U-Net, and DeepLabv3 layers. When I trained my algorithms, it took nearly 10 hours. I'm using a laptop with 8 GB RAM and a 256 GB SSD, and MATLAB software for coding. Is there any possibility of speeding up training without a GPU?
  • asked a question related to Image Processing
Question
7 answers
I am researching handwriting analysis using image processing techniques.
Relevant answer
Answer
It depends on the application. In some cases, you may need to do that. Check this paper it will help you to understand this.
  • asked a question related to Image Processing
Question
3 answers
Can anybody recommend a tool that could extract (segment) the pores (lineolae) from the following image of a diatom valve?
I mean an ImageJ or Fiji plugin or any other software that can solve this task.
Relevant answer
Answer
First of all, diatoms look amazing! If you happened to have any pretty pictures you'd be happy to share I'm looking for a new desktop wallpaper :)
To answer your question, if you are looking a reasonably easy, but robust approach, you could try the Trainable Weka Segmentation plugin (https://imagej.net/plugins/tws/) which uses machine learning to extract relevant parts of your image. You use a set of training images that you annotate yourself to train the classifier, and then you can apply your classifier to a comparable set of images.
Hope this helps
  • asked a question related to Image Processing
Question
4 answers
I'd like to measure frost thickness on fins of a HEX based on GoPro frames.
I have the ImageJ software, but I don't know if there is a way to select a zone (a frosted fin) and deduce the average length in one direction.
Currently I do random measurements on the given fin and take the average; however, the random points may not be representative.
I attached two pictures of the fins and frost to illustrate my question.
In advance, thank you very much,
AP
Relevant answer
Answer
Máté Nászai, it works very well, thank you! I had some issues because I did not convert my initial picture to binary properly (when doing "make binary" on the 8-bit image, it does the opposite of the wanted result, if I understood correctly).
"Thresholding" the image works well.
Again, thank you for your time.
AP
  • asked a question related to Image Processing
Question
16 answers
Currently, I'm working on a Deep Learning based project. It's a multiclass classification problem. The dataset can be found here: https://data.mendeley.com/datasets/s8x6jn5cvr/1
I have mostly used transfer learning, but couldn't get a higher accuracy on the test set. I have used cross-entropy and focal loss as loss functions. Here, I have 164 samples in the train set, 101 samples in the test set, and 41 samples in the validation set. Yes, about 33% of samples are in the test partition (the data partition can't be changed, as instructed). I was able to get an accuracy score and F1 score of around 60%. How can I get higher performance on this dataset with this split ratio? Can anyone suggest some papers to follow, or any other suggestions or guidance on my deep-learning-based multiclass classification problem?
Relevant answer
Answer
For a smaller dataset, I suggest trying conventional ML techniques; a deeper network requires a large training dataset. Try a shallower network and see if it gives some results. You need to play with the parameters and, ultimately, with the data, e.g., increase it by using augmentation.
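Since augmentation was mentioned, here is a minimal torchvision sketch of a training-time augmentation pipeline; the specific transforms and magnitudes are assumptions to tune:

from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

With only 164 training samples, aggressive augmentation plus a frozen pretrained backbone is usually the safest combination.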
  • asked a question related to Image Processing
Question
3 answers
I am working on CTU (Coding Tree Unit) partitioning using a CNN for intra-mode HEVC. I need to prepare a database for that, and I have referred to multiple papers. In most papers, images are encoded to obtain binary labels (splitting or non-splitting) for all CU (Coding Unit) sizes, resolutions, and QPs (Quantization Parameters).
If anyone knows how to do this, please give the steps or reference material.
Reference papers
Relevant answer
  • asked a question related to Image Processing
Question
3 answers
Hi,
In my research, I have created a new way of enhancing weak edges. I wanted to try my method on an image dataset to compare it with the active contour philosophy.
So, I was looking for images with masks, as shown in the paper below.
If you can help me get this data, it would be a great help.
Thanks and Regards,
Gunjan Naik
Relevant answer
Answer
The best way to obtain the dataset is to request it from the authors.
  • asked a question related to Image Processing
Question
6 answers
I'm looking for a PhD position and opportunity at an English-speaking university in a European country (or Australia).
I majored in artificial intelligence. I work in the field of medical image segmentation, and my master's thesis was about retinal blood vessel extraction based on active contours. I am skilled in image processing, machine learning, MATLAB, and C++.
Could anybody help me find a professor and a PhD position related to my skills at an English-speaking university?
Relevant answer
Answer
If you have cleared a qualifying examination, such as NET, you can apply for a scholarship.
  • asked a question related to Image Processing
Question
3 answers
Recently I have been collecting a red blood cell dataset for classification into the 9 categories of Ninad Mehendale's research paper. Can anyone suggest a dataset for the paper "Red Blood Cell Classification Using Image Processing and CNN"?
  • asked a question related to Image Processing
Question
9 answers
Dear Researchers,
In the remote sensing of volcanic activity, where the objective is to determine temperature, which portion (more specifically, the range) of the EM spectrum can detect the electromagnetic emissions of hot volcanic surfaces (which are a function of the temperature and emissivity of the surface, and can reach temperatures as high as 1000°C)? Why?
Sincerely,
Aman Srivastava
Relevant answer
Answer
8-15 µm (thermal infrared).
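For the "why", Wien's displacement law locates the emission peak for a given surface temperature, which drives the band choice. A quick check in Python:

# Wien's displacement law: peak wavelength (um) = 2898 / T (K)
for t_celsius in (25, 500, 1000):
    t_kelvin = t_celsius + 273.15
    print(t_celsius, "degC ->", round(2898.0 / t_kelvin, 2), "um")

Ambient surfaces peak near 10 µm, inside the 8-15 µm thermal infrared window above, while surfaces approaching 1000°C peak around 2-3 µm, which is why SWIR/MIR bands are also used for the hottest lava.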
  • asked a question related to Image Processing
Question
6 answers
There are shape descriptors: circularity, convexity, compactness, eccentricity, roundness, aspect ratio, solidity, elongation.
1) What are the real formulas for determining these descriptors?
2) Does circularity = roundness? Does solidity = ellipticity?
I compared lectures (M.A. Wirth*) with the ImageJ (Fiji) user guide and am completely confused: the descriptors are almost completely different! Which source should I trust?
*Wirth, M.A. Shape Analysis and Measurement. / M.A. Wirth // Lecture 10, Image Processing Group, Computing and Information Science, University of Guelph. – Guelph, ON, Canada, 2001 – S. 29
Relevant answer
Answer
You should not take different measurements as conflicting; they just try to achieve the same result using different assumptions. For example, the difference between circularity and roundness was discussed in the RG thread:
look for the explanation given by Robert Cameron, which I think is the most relevant of the answers.
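For the formulas themselves, the definitions below are the ones most commonly used (and close to what ImageJ implements, though you should verify against its guide); a sketch under those assumptions using scikit-image:

import numpy as np
from skimage import measure

binary = np.zeros((64, 64), dtype=np.uint8)
binary[16:48, 24:40] = 1  # toy rectangular object
props = measure.regionprops(measure.label(binary))[0]
circularity = 4 * np.pi * props.area / props.perimeter ** 2        # 1 for a perfect circle
aspect_ratio = props.major_axis_length / props.minor_axis_length   # fitted-ellipse axes
roundness = 4 * props.area / (np.pi * props.major_axis_length ** 2)
solidity = props.solidity            # area / convex hull area
eccentricity = props.eccentricity    # of the fitted ellipse

Note that circularity and roundness differ: circularity penalizes perimeter irregularity, while roundness compares the area to the fitted ellipse's major axis, so they are not interchangeable.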
Regards
  • asked a question related to Image Processing
Question
5 answers
I have grayscale images obtained from SHG microscopy of human cornea collagen bundles, both as TIFF stack images and in their CZI format. I want to convert those 2D images into a 3D volume, but I could not find any method using MATLAB, Python, or any other program.
Relevant answer
Answer
If you know the physical dimensions of your images and the images in the stack are properly aligned (consecutive), you can create a 3D volume in MATLAB and then write that volume as NIfTI (normally used for neuroimaging, but it should do the trick). There are many tools that can work with NIfTI and perform 3D volume rendering, such as 3D Slicer. It is just a representation; the important thing is to have the mapping between physical and image coordinates.
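A minimal Python version of this recipe, assuming the tifffile and nibabel packages and hypothetical voxel spacings in millimetres:

import numpy as np
import tifffile
import nibabel as nib

vol = tifffile.imread("stack.tif")   # loads the stack as a (Z, Y, X) array
dx, dy, dz = 0.5, 0.5, 1.0           # hypothetical voxel size in mm
vol = np.transpose(vol, (2, 1, 0))   # reorder to (X, Y, Z) for NIfTI
affine = np.diag([dx, dy, dz, 1.0])  # physical-to-image mapping
nib.save(nib.Nifti1Image(vol.astype(np.float32), affine), "volume.nii.gz")

The resulting file opens directly in 3D Slicer for volume rendering.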
  • asked a question related to Image Processing
Question
4 answers
Hello dear researchers.
It seems that the SiamRPN algorithm is one of the very good algorithms for object tracking; its processing speed on a GPU is 150 fps. But the problem is that if your chosen object is, for example, a white phone, and you are dressed in white and move the phone towards you, the whole bounding box will mistakenly be placed on your clothes. So it has low sensitivity to color. How do you think I can optimize the algorithm to solve this problem? Of course, there are algorithms with high accuracy, such as SiamMask, but they have a very low fps. Thank you for your help.
Relevant answer
Answer
Thanks for your valuable answer
  • asked a question related to Image Processing
Question
4 answers
Hi
I'm trying to acquire raw data from a Philips MRI.
I followed the save-raw-data procedure and then obtained a .idx and a .log file.
I'm not sure if I implemented the procedure correctly.
Are .idx and .log files the file format of Philips MRI raw data?
If so, how do I open these files? Is it possible to open them in MATLAB?
Thanks
Judith
Relevant answer
Hi, medical images are in DICOM format, but you can manipulate the raw data using a viewer such as OsiriX or Horos (Apple). It still depends on what you want to look at; certain Philips workstations like ISP can process the raw data that you want to extract.
  • asked a question related to Image Processing
Question
1 answer
For 2-logistic chaotic sequence generation, we generate two y-sequences (Y1, Y2) to encrypt data.
For a 2D logistic chaotic sequence, we generate an x-sequence and a y-sequence to encrypt data.
Is the above statement correct? Kindly help with this, and kindly share a relevant paper if possible.
Relevant answer
Answer
Here is my article, which may answer your question.
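For reference, a minimal Python sketch generating two logistic chaotic keystreams from two seeds, matching the first scheme described in the question (the seeds and parameters are placeholders; for chaotic behaviour r should be near 4):

def logistic_sequence(x0, r, n):
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)  # logistic map iteration
        seq.append(x)
    return seq

y1 = logistic_sequence(0.345, 3.99, 1000)  # placeholder seed/parameter
y2 = logistic_sequence(0.671, 3.97, 1000)
keystream = [int(a * 256) ^ int(b * 256) for a, b in zip(y1, y2)]  # toy byte stream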
  • asked a question related to Image Processing
Question
5 answers
How can I determine distance and proximity, as well as depth, with image processing for object tracking? One idea that came to my mind was to detect whether the object is moving away or approaching based on its size in the image, but I do not know if there is an algorithm I can base this on.
In fact, how can I extract the x, y, z coordinates from the image taken from the webcam?
Thank you for your help
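The size-based idea in the question can be formalized with the pinhole camera model: for an object of known real width W, the distance is Z = f·W / w, where w is the object's width in pixels and f the focal length in pixels (obtainable from camera calibration). A toy sketch with placeholder numbers:

# pinhole model: Z = f * W / w (all values are placeholders)
f_pixels = 800.0      # focal length in pixels, from calibration
real_width_m = 0.07   # known object width, e.g. a phone, in metres
width_pixels = 120.0  # measured width of the object in the frame
distance_m = f_pixels * real_width_m / width_pixels
print(distance_m)     # ~0.47 m

x and y then follow from the pixel coordinates and Z; true depth without a known object size requires stereo vision or a depth sensor.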
  • asked a question related to Image Processing
Question
59 answers
Hi,
What are the main image processing journals that publish work on the collection, creation, and classification of medical imaging databases, such as the Medical Image Analysis journal?
Thank you for your support,
Relevant answer
Answer
You can check this list for more information about journals.
  • asked a question related to Image Processing
Question
5 answers
I am using transfer learning with pre-trained models in PyTorch for an image classification task.
When I modified the output layer of the pre-trained model (e.g., AlexNet) for our dataset and ran the code to inspect the modified architecture, it printed "None".
Relevant answer
I tried to replicate your code, and I don't get "None"; I just get an error when I try to do inference with the model (see Image 1). In your forward you do:
def forward(self, xb):
    xb = self.features(xb)
    xb = self.avgpool(xb)
    xb = torch.flatten(xb, 1)
    xb = self.classifier(xb)
    return xb
but features, avgpool, and classifier are attributes of self.network, so you need to do:
def forward(self, xb):
    xb = self.network.features(xb)
    xb = self.network.avgpool(xb)
    xb = torch.flatten(xb, 1)
    xb = self.network.classifier(xb)
    return xb
When I run the forward again, everything looks OK (see Image 2).
If this does not work for you, could you share your .py file? I would need to check the to_device and evaluate functions and the ImageClassificationBase class to replicate the error and identify where it is.
  • asked a question related to Image Processing
Question
4 answers
Hi everyone, I'm currently converting video into images, and I noticed that 85% of the images don't contain the object. Is there any algorithm to check whether an image contains an object or not using an objectness score?
Thanks in advance :)
Relevant answer
Answer
If it is a video and you want to detect objects coming into the field of view, you could simply use 'foreground detection' - refer to <<https://au.mathworks.com/help/vision/ref/vision.foregrounddetector-system-object.html>>.
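The OpenCV equivalent of that foreground-detection idea, as a minimal sketch (the 1% foreground threshold and file names are assumptions):

import cv2

cap = cv2.VideoCapture("input.mp4")  # hypothetical video
backsub = cv2.createBackgroundSubtractorMOG2()
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)  # foreground mask for this frame
    fg_ratio = cv2.countNonZero(mask) / mask.size
    if fg_ratio > 0.01:  # keep frames where something entered the view
        cv2.imwrite(f"kept/frame_{frame_idx:06d}.png", frame)
    frame_idx += 1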
  • asked a question related to Image Processing
Question
6 answers
Hi Everyone,
I'm currently training an object detection model which should detect a car, person, truck, etc. in both daytime and nighttime. I have started gathering data for both day and night. I'm not sure whether to train a separate model for daylight and another model for night, or to combine the data and train one model.
Can anyone suggest the data distribution for each class for day and night light? I presume it should be a uniform distribution; please correct me if I'm wrong.
E.g., for person: 700 images in daylight and another 700 images at night.
Any suggestion would be helpful.
Thanks in Advance.
Relevant answer
Answer
Ben Harper, thanks very much for your recommendation.
  • asked a question related to Image Processing
Question
5 answers
I am a resercher in Medical Image Processing.
Relevant answer
Answer
  • IEEE Transactions on Pattern Analysis and Machine Intelligence (SCI)
  • International Journal of Computer Vision (SCI)
  • Computer Vision and Image Understanding (SCI)
  • Image and Vision Computing (SCI)
  • Pattern Recognition (SCI)
  • IEEE Transactions on Image Processing (SCI)
  • asked a question related to Image Processing
Question
10 answers
Hi. I'm doing a classification problem using deep learning, so I need to train on 512x512 images, but when I trained, my algorithm showed an out-of-memory error. I want to know how much memory is needed to train on 512x512 images in MATLAB.
Relevant answer
Answer
Dear Srinivas:
For classification and regression tasks, you can train various types of neural networks using the trainNetwork function.
i.e. you can train:
-- a convolutional neural network (ConvNet, CNN) for image data.
-- a recurrent neural network (RNN) such as a long short-term memory (LSTM) or a gated recurrent unit (GRU) network for sequence and time-series data
-- a multilayer perceptron (MLP) network for numeric feature data.
You can train on either a CPU or a GPU. For image classification and image regression, you can train a single network in parallel using multiple GPUs or a local or remote parallel pool. Training on a GPU or in parallel requires Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). To specify training options, including options for the execution environment, use the trainingOptions function.
When training a neural network, you can specify the predictors and responses as a single input or in two separate inputs.
Thus, the entirety of this process depends mainly on the properties of these two hardware options (CPU or GPU).
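As a rough back-of-the-envelope check (the batch size and the overhead remark are assumptions; real usage depends on the network and solver):

# rough memory estimate for one training batch of 512x512 RGB images
batch, channels, h, w = 16, 3, 512, 512
bytes_per_float = 4
input_mb = batch * channels * h * w * bytes_per_float / 1024 ** 2
print(f"input tensor alone: {input_mb:.0f} MB")  # ~48 MB
# activations plus gradients of a deep CNN typically need tens of times more,
# so reducing the batch size is usually the quickest fix for out-of-memory errors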
I hope it will be helpful..
With my best wishes ...
  • asked a question related to Image Processing
Question
5 answers
I want to generate a Lyapunov exponents diagram for my new chaotic map using MATLAB code. I am unable to understand the concept that some of the MATLAB codes use to get the Lyapunov exponents diagram for a chaotic map. Kindly help me.
Relevant answer
Answer
You may get some ideas here. Just apply the same for your chaotic map.
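The underlying concept is short: the Lyapunov exponent is the average of log|f'(x)| along an orbit, swept over the map parameter. A minimal Python sketch for the logistic map (the transient length is an assumption), which ports directly to MATLAB:

import numpy as np

r_values = np.linspace(2.5, 4.0, 1000)
exponents = []
for r in r_values:
    x, total, kept = 0.5, 0.0, 0
    for i in range(1200):
        x = r * x * (1.0 - x)
        if i >= 200:  # skip the transient
            total += np.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
            kept += 1
    exponents.append(total / kept)
# plot r_values against exponents to obtain the Lyapunov exponent diagram

For your own map, replace the iteration and the derivative f'(x) accordingly.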
  • asked a question related to Image Processing
Question
5 answers
Please suggest a free SCI/SCIE/Scopus journal with a fast review.
Relevant answer
Answer
Fast and unpaid Scopus journals https://youtu.be/-02wlRD8OWU
  • asked a question related to Image Processing
Question
3 answers
I am looking for Imagenet dataset but I can't seem to find any public version available for it. Where can I get the whole dataset?
Relevant answer
Answer
These links might be useful, have a look:
Kind Regards
Qamar Ul Islam
  • asked a question related to Image Processing
Question
6 answers
Hi! I'm trying to train a convolutional neural network (CNN) using Keras for leaf disease classification from images. There are very few plant disease image datasets, so I need to use one of the available TensorFlow datasets for training my model; specifically, two TensorFlow datasets are suitable for this task: the 'plant_village' dataset and the 'plant_leaves' dataset.
The problem is I don't know how to explore these datasets to see the classes, features, and labels, and I don't know how to split them into training, validation, and test datasets. I've tried the code used in the TensorFlow docs to explore and manage the 'CIFAR10' or 'mnist' datasets, but it doesn't work with the plant image datasets.
Can someone suggest how to explore and manage the 'plant_village' and/or 'plant_leaves' datasets, please?
Relevant answer
Answer
Hello!
The procedure to explore a TensorFlow dataset should be the same regardless of the dataset. Do mind that some datasets are implemented as folders, so the process may vary slightly.
If you are looking for datasets for leaf disease classification, I'd suggest you go through Kaggle. Here's a link: Cassava Leaf Disease Classification | Kaggle
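For what it's worth, a minimal tensorflow_datasets sketch for 'plant_village'; it ships with a single 'train' split, so the split percentages below are an assumption you can change:

import tensorflow_datasets as tfds

(train_ds, val_ds, test_ds), info = tfds.load(
    "plant_village",
    split=["train[:80%]", "train[80%:90%]", "train[90%:]"],
    as_supervised=True,  # yields (image, label) pairs
    with_info=True,
)
print(info.features["label"].names)       # class names
print(info.splits["train"].num_examples)  # dataset size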
  • asked a question related to Image Processing
Question
4 answers
I have performed all the attacks for my image cryptography algorithm. Finally, I need to run the NIST tests on my cryptography algorithm. If anyone has the code, kindly share it. Please do the needful.
Relevant answer
Answer
Actually, the NIST test suite consists of a bundle of tests, so you need not write code for all of them to measure the randomness of the image. The code for all these tests is given on the NIST web site; you need to download it and run it using Eclipse or any other IDE. It will be simple and useful. The only thing you should know is the procedure for running the different tests.
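To give a flavour of what the suite does, here is the first NIST SP 800-22 test (the frequency/monobit test) in a few lines of Python; the bit sequence is a placeholder for your encrypted image bits:

from math import erfc, sqrt

def monobit_p_value(bits):
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)       # +1 for each 1-bit, -1 for each 0-bit
    return erfc(abs(s) / sqrt(n) / sqrt(2.0))

bits = [1, 0, 1, 1, 0, 1, 0, 0] * 128      # placeholder bit sequence
print(monobit_p_value(bits))               # the test passes if the p-value >= 0.01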
  • asked a question related to Image Processing
Question
3 answers
The NIST randomness tests are very important for any encryption algorithm. I want MATLAB code for the NIST tests.
Relevant answer
Answer
I have tested images using different test suites (not only NIST) in the following paper.
  • asked a question related to Image Processing
Question
15 answers
Dear all
What is the recent work in deep learning? How do I start with Python? Kindly suggest some work and materials to start with.
Relevant answer
Answer
Interesting recommendations
  • asked a question related to Image Processing
Question
16 answers
I am trying to delineate agricultural fields using Sentinel 2 imagery. I have been implementing different image segmentation algorithms to a time series of this data set. My best output so far has false positive errors of some non-agricultural zones (like forests). Hence, I'm looking for the best way to distinguish forests from ag-fields as a post-processing step.
Relevant answer
Answer
Hi!
Given that the crop seasons are known and Sentinel data is available, maybe instead of post-processing you can do pre-processing: select a date or dates when the crops are still not developed and classify the forests, then create a mask of the forest areas and apply that mask to the image or images used for crop classification. Depending on the complexity and types of crops, you may have to analyse more than one date, but in this way you remove the effect of forests.
Hope it helps; I solved a similar problem this way with urban and soil-with-stubble mixed signatures.
Regards,
Soledad
  • asked a question related to Image Processing
Question
3 answers
Paddy rice and millet are both transplanted during the monsoon (June - August) and harvested post-monsoon (October - December). You have cloud-free time-series Sentinel-2 A/B images of the study area for every 5 days during the whole crop cycle. Is it possible to develop rice and millet crop maps separately? If not, which satellite/sensor's images would be needed in addition? Write down the steps to follow.
Relevant answer
Answer
The leaves and stems of rice and millet look alike, and their physio-chemical properties are also similar, so I suggest separating millet and rice crops based on their fields. Rice is grown in fields with large, raised bunds at the edges, while millet fields have no bunds or lower ones. So you need higher-spatial-resolution satellite images to separate them.
  • asked a question related to Image Processing
Question
7 answers
Dear community, after using the wavelet transform to extract the important features from my EEG signals, I'm wondering how to calculate the Shannon entropy of each of my coefficient values (cD1, cD2, ..., cA6). Another question is how to use the Shannon entropy for dimension reduction.
Thank you .
Relevant answer
Answer
Hello dear friend Wassim Diai
I hope the following code to calculate the Shannon entropy of given data will be helpful in your work.
The wavelet coefficients (cD1, cD2, ..., cA6) will be the entire data.
Python 3.7 is used to implement the Shannon entropy; the pandas library is imported as pd and the entropy function comes from scipy.stats.
Good job

import pandas as pd
from scipy.stats import entropy

data = [3, 6, 7, 12, 5, 7]  # ... insert the rest of your coefficients here
counts = pd.Series(data).value_counts()  # frequency of each coefficient value
shannon_entropy = entropy(counts)        # counts are normalized to probabilities
print(shannon_entropy)
  • asked a question related to Image Processing
Question
10 answers
How can various features (including texture, color, and shape) of different components or objects in an image be extracted/selected for a multi-label learning task?
Relevant answer
  • asked a question related to Image Processing
Question
13 answers
Problem Formulation
  • Suppose I have several 1000*1000 grids, and at each grid-point, there is some value (in my case it's the number of copies of a specific gene expressed at that pixel location, note that the locations are the same for every grid). What I want is to quantify the similarity between two 2D spatial point-patterns of this kind (i.e., the spatial expression patterns of two distinct genes), and rank all pairs of genes in a "most similar" to "most dissimilar" manner. Note that it is not the spatial pattern in terms of the absolute value of expression level that I care about, rather, it's the relative pattern that I care about. As a result, I might need to utilize some correlation instead of distance metrics when comparing corresponding pixels.
  • The easiest method might be directly viewing all pixels together as a vector and calculating some correlation metric between the two vectors. However, this does not take the spatial information into account. The genes that I am most interested in have spatial patterns, i.e., clustering and autocorrelation effects in their expression pattern (though their "cluster" might take a very thin shape rather than sticking together, e.g., genes specific to skin cells), which means the image would usually have several peak local regions, while expression levels at other pixels would be extremely low (near 0).
Possible Directions
  • I am not exactly sure if I should consider applying (1) image similarity comparison algorithms from image processing that take local structure similarity into account (e.g., SURF, SIFT, SSIM, etc.), or (2) spatial similarity comparison algorithms from spatial statistics in GIS (there are some papers about this, but I am not sure if there are any algorithms dealing with simple point data rather than the normal region data with shape (they seem to call it polygon map in GIS)), or (3) statistical methods that deal with discrete 2D distributions, which I think might be a bit crude (seems to disregard the regional clustering/autocorrelation effects, ~ Tobler's First Law of Geography).
  • For direction (1), I am thinking about a simple method, that is, first find some "peak" regions in the two images respectively and regard their union as ROIs, and then compare those ROIs in the two images specifically in a simple pixel-by-pixel way (regard them together as a vector), but I am not sure if I can replace the distance metrics with correlation metrics, and am a bit worried that many methods of similarity comparison in image processing might not work well when the two images are dissimilar. For direction (2), I think this direction might be more appropriate because this problem is indeed related to spatial statistics, but I do not yet know where to start in GIS.
A possible caveat of GIS methods: The expression of certain marker genes of a specific cell type might not be clustered in a bulk, but in the shape of a thin layer or irregularly. For example, if the grid is a section of the brain, then the high-expression peak region for cortex layer-specific genes (e.g., Ctip2 for layer V) might form a thin arc curved layer in the 1000*1000 grid.
Any suggestion would be greatly appreciated!
Relevant answer
Answer
Hi, you may have already resolved this question, but if not, I'd recommend looking into methods from spatial ecology to quantify spatial point patterns. I'd recommend looking into the statistical test developed by Plotkin et al., 2000 and some of the R packages build specifically for spatial ecology: https://r-spatialecology.github.io/shar/news/index.html
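On direction (1) from the question, comparing two expression grids with SSIM (local structure) alongside a global rank correlation takes only a few lines; a sketch assuming the grids are numpy arrays on a common scale:

import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity

gene_a = np.random.rand(1000, 1000)  # stand-ins for two expression grids
gene_b = np.random.rand(1000, 1000)
ssim_score = structural_similarity(gene_a, gene_b, data_range=1.0)  # local structure
rho, _ = spearmanr(gene_a.ravel(), gene_b.ravel())  # rank correlation, scale-free
print(ssim_score, rho)

Spearman's rank correlation addresses the "relative pattern" concern, since it is invariant to monotone rescaling of expression levels.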
  • asked a question related to Image Processing
Question
10 answers
Using Fiedler vectors as features in Image processing.
Relevant answer
  • asked a question related to Image Processing
Question
12 answers
Dear sir/Madam
I'm pursuing a Ph.D. in the area of image processing. As per my academic regulations, I have to publish two papers in SCI/Scopus-indexed journals. I have already shortlisted some good journals, but their publication times are very long. I want to complete my course as early as possible, so kindly suggest some journals with rapid/fast publication in the area of image processing using deep learning. Thank you for your consideration.
  • asked a question related to Image Processing
Question
5 answers
What feature analysis techniques or new approaches can be applied to analyzing cardiac ultrasound images for the detection of a defect?
Relevant answer
Answer
I suggest the following; perhaps expanding the reach of the question to more related areas might bring more answers.
1. Comparison with certified standard ground truths of images of healthy and defective hearts
2. From 1, identify measures to distinguish defects using comparisons; one possibility is the Euclidean distance measure (EDM)
3. Shape/texture/statistical moments of higher order
4. Fiedler vector for spectral partitioning using SVD
Cheers
  • asked a question related to Image Processing
Question
3 answers
Hello,
I have a pile of powder, and I'm trying to calculate its volume with image processing.
I'm not quite sure how to determine its height at each centimeter with image processing.
The pictures might be from different angles. Can you help me, please?
Thanks
Relevant answer
Answer
One approach is to use stereo-photogrammetry. You need to take an overlapping image pair with a known scale factor, such as a bar or any known distance that appears in both images. Then you can use any photogrammetric software, such as Agisoft, PhotoModeler, or the like, to do your volume computation. In general, please follow photogrammetric theory for volume computation, which starts with 3D point computation, followed by the volume computation. Please check my profile for more information about photogrammetry.
  • asked a question related to Image Processing
Question
4 answers
I am working on dental disease prediction using image processing and deep learning. I need a dataset of camera images labeled with dental diseases. Any kind of reply will be very important to my work and will be much appreciated. Thanks in advance.
Relevant answer
Answer
Fath U Min Ullah I am planning to detect all the possible diseases that are labeled in the database, but the database should contain images like phone images, Adu Asare Baffour, so I can prepare a module for mobile apps.
Thanks, Abdelhameed Ibrahim, sure we can work together.
  • asked a question related to Image Processing
Question
9 answers
Hi all,
I have a computer vision task at hand. Although it's quite simple in my opinion, I'm very naive in this area, so I am looking for the simplest and fastest methods.
There's a laser pointer projected on a screen that keeps bouncing around. I need to capture the location and velocity of the projected dot with respect to some reference point. I would really appreciate it if someone could elaborate on the procedure in simple terms.
Regards,
Armin
Relevant answer
Answer
Hi
If you are familiar with Python programming, this problem can be solved easily. You need the OpenCV and NumPy libraries. There are many examples on GitHub regarding object detection, but I think your problem is even simpler: a simple threshold-detection code can obtain the dot position in each frame in real time, which is what you want to know. The link below can help you:
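Along those lines, a minimal OpenCV sketch that finds the brightest spot per frame and estimates velocity between frames; the blur size is an assumption, and the reference point would simply be subtracted from pos:

import cv2

cap = cv2.VideoCapture(0)  # webcam / video source
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)  # suppress single-pixel noise
    _, _, _, pos = cv2.minMaxLoc(gray)          # brightest point = laser dot
    if prev is not None:
        vx = (pos[0] - prev[0]) * fps           # pixels per second
        vy = (pos[1] - prev[1]) * fps
        print(pos, (vx, vy))
    prev = pos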
  • asked a question related to Image Processing
Question
3 answers
Hello everyone,
I hope you are doing well.
I am using a Vantage Verasonics Research Ultrasound System to do ultrafast compound Doppler imaging. I acquire the beamformed IQ data with compounding angles (na = 3) and an ensemble size of (ne = 75), transmitted at an ultrafast frame rate (PRFmax = 9 kHz, PRFflow = 3 kHz). Can I use a global SVD clutter filter to process the beamformed IQ data instead of the conventional high-pass Butterworth filter?
Your kind responses will be highly appreciated.
Thank you
Relevant answer
Answer
From one of the best groups in the field:
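Yes, in principle; the usual recipe reshapes the IQ ensemble into a Casorati matrix and removes the largest singular components (tissue clutter). A minimal numpy sketch, with the cutoff rank as an assumption that must be tuned (adaptive cutoffs are common in the literature):

import numpy as np

nz, nx, ne = 128, 128, 75  # depth, width, ensemble size
iq = np.random.randn(nz, nx, ne) + 1j * np.random.randn(nz, nx, ne)  # stand-in IQ data
casorati = iq.reshape(-1, ne)  # (pixels, time) Casorati matrix
u, s, vh = np.linalg.svd(casorati, full_matrices=False)
cutoff = 8                     # hypothetical clutter rank
s[:cutoff] = 0.0               # discard the tissue (high-energy) subspace
filtered = (u * s) @ vh        # reconstruct the blood signal
filtered = filtered.reshape(nz, nx, ne)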
  • asked a question related to Image Processing
Question
3 answers
How can I enhance my segmentation results? I calculated the maximum filtering response for my 3D volume and then performed adaptive thresholding to segment the bundles, but my segmentation results are not that good. I have tried k-means clustering-based segmentation, but it failed. I need to segment the image and then extract some features for classification purposes.
I have attached a couple of images as an example.
Relevant answer
Answer
The Canny edge detection algorithm might be useful for your problem.
  • asked a question related to Image Processing
Question
8 answers
I want to identify the darkest objects in the uploaded image. I have tried ImageJ, but there I have to apply a different threshold and analysis for each image, and some objects get excluded when the same threshold value is used for different images. I want to learn whether accurate automatic counting is possible with some image processing technique. Is it possible with OpenCV?
Relevant answer
Answer
Since your problem is to identify the darkest objects, you can compute the histogram of the image and then apply adaptive thresholding to find the threshold automatically (the threshold will be different for different images, but it will be automated).
Moreover, it seems that the objects you want to detect are hexagonal; I think you could train a HOG+SVM classifier with bounding boxes.
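A minimal OpenCV sketch of that idea, combining Otsu's automatic per-image threshold with contour counting; the area bounds are assumptions to tune per magnification:

import cv2

img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
blur = cv2.GaussianBlur(img, (5, 5), 0)
# THRESH_BINARY_INV keeps the darkest regions as foreground; Otsu picks the level per image
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
count = sum(1 for c in contours if 50 < cv2.contourArea(c) < 5000)  # hypothetical bounds
print("dark objects:", count)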