Science topic

Image Processing - Science topic

All kinds of image processing approaches.
Questions related to Image Processing
  • asked a question related to Image Processing
Question
4 answers
If we acquire a tomography dataset, we can extract a lot of physical properties from it, including porosity and permeability. These properties are not measured directly in a conventional experiment; instead, they are calculated using different image processing algorithms. To this end, is there any guideline on how to report such results in terms of significant digits?
Thanks.
Relevant answer
Answer
You bring up a good point. In the case where the tomography dataset is provided with a resolution in unit length, it may not be straightforward to estimate the measurement uncertainty of more complex properties such as permeability or porosity.
In this case, one approach is to use the resolution as a guide and estimate the measurement uncertainty based on the expected level of variation in the property. For example, if the resolution of the tomography dataset is 1 micrometer and the expected level of variation in the permeability or porosity is on the order of 10%, then a reasonable estimate for the measurement uncertainty might be on the order of 0.1 times the average value.
When reporting physical properties derived from tomography datasets, it is important to balance the need for accuracy and precision with the practical limitations of the measurement and the significance of the results. In general, it is recommended to report physical properties with the appropriate number of significant digits to convey the level of uncertainty and enable meaningful comparison with other results, but not to report more digits than necessary.
Ultimately, the appropriate number of significant digits to report will depend on the specific context and level of uncertainty associated with the measurement. If there is uncertainty about the appropriate number of significant digits to use, it may be helpful to consult with a subject matter expert or refer to relevant standards or guidelines in the field.
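To make the rounding convention concrete, here is a minimal NumPy sketch; the synthetic volume and the assumed 10% relative uncertainty are illustrative stand-ins, not a standard.

```python
import numpy as np

def round_to_uncertainty(value, uncertainty):
    """Round a value so its last reported digit matches the first
    significant digit of its uncertainty (a common convention)."""
    if uncertainty <= 0:
        return value
    digits = -int(np.floor(np.log10(uncertainty)))
    return round(value, digits)

# Hypothetical binary tomography volume: True marks a pore voxel.
rng = np.random.default_rng(0)
volume = rng.random((50, 50, 50)) < 0.23

porosity = volume.mean()            # pore fraction of the volume
sigma = 0.10 * porosity             # assumed 10% relative uncertainty
print(round_to_uncertainty(porosity, sigma))
```

With a ~10% relative uncertainty, only two significant digits of the porosity survive the rounding, which is exactly the reporting logic described above.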
I hope this helps you.
Thank you
  • asked a question related to Image Processing
Question
3 answers
As AI continues to progress and surpass human capabilities in various areas, many jobs are at risk of being automated and potentially disappearing altogether. Signal processing, which involves the analysis and manipulation of signals such as sound and images, is one area that AI is making significant strides in. With AI's ability to adapt and learn quickly, it may be able to process signals more efficiently and effectively than humans. This could ultimately lead to fewer job opportunities in the field of signal processing, and a shift toward more AI-powered solutions. The impact of automation on the job market is a topic of ongoing debate and concern, and examining the potential effects on specific industries such as signal processing can provide valuable insights into the future of work.
Relevant answer
Answer
Hi Aslan Modir.
I agree with Christian Schmidt. Instead of threatening the existence of the fields of Image/Signal Processing, advances in AI technology have further spurred the development of those fields. This is because AI methods are currently the core methods of those fields.
As for the job market, I agree that low-level tasks which require little to no expertise will be heavily impacted. However, most of the currently existing job positions in the fields are already basically "some guy who makes AI, which in turn does the actual work" (a.k.a. "Data Engineer" or "AI Engineer"). I don't think that AI would actually cause damage to the job market.
  • asked a question related to Image Processing
Question
3 answers
Hello
In image processing and image segmentation studies, are these values the same?
mIoU
IoU
DSC (Dice similarity coefficient)
F1 score
Can we convert between them?
Relevant answer
Answer
As far as I know, mIoU is just the mean IoU computed over a batch of data.
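For binary masks these metrics are directly related: the Dice similarity coefficient equals the pixel-wise F1 score, and Dice and IoU are interconvertible via DSC = 2·IoU/(1 + IoU), while mIoU is an average over classes (or images) and cannot be recovered from a single Dice value. A small NumPy check of the conversion:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a, b):
    """Dice similarity coefficient (equals pixel-wise F1)."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
b = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)

i, d = iou(a, b), dice(a, b)
print(i, d)                              # 0.5 0.666...
assert np.isclose(d, 2 * i / (1 + i))    # DSC = 2*IoU / (1 + IoU)
```

The identity follows from |A∪B| = |A| + |B| − |A∩B|, so the two scores carry the same information for a single binary mask pair.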
  • asked a question related to Image Processing
Question
7 answers
Dear Colleagues, I started this discussion to collect data on the use of the Azure Kinect camera in research and industry. It is my intention to collect data about libraries, SDKs, scripts and links, which may be useful to make life easier for users and developers using this sensor.
Notes on installing on various operating systems and platforms (Windows, Linux, Jetson, ROS)
SDKs for programming
Tools for recording and data extraction
Demo videos to test the software (update 08/03/2023)
Papers, articles (update 22/03/2023)
Relevant answer
Answer
Thank you Cristina, your work is interesting and helpful.
  • asked a question related to Image Processing
Question
3 answers
How does thermal image processing work in the agriculture sector?
Relevant answer
Answer
To get dataset they have created for their project
  • asked a question related to Image Processing
Question
1 answer
Hello,
I am working on a research project that involves detecting cavities and other teeth problems in panoramic X-rays. I am looking for datasets that I can use to train my convolutional neural network. I have been searching the internet for such datasets but haven't found anything so far. Any suggestions are greatly appreciated! Thank you in advance!
Relevant answer
Answer
you may have a look at:
Good luck and
best regards
G.M.
  • asked a question related to Image Processing
Question
2 answers
I need to publish a research paper in an impact-factor journal with a high acceptance rate and fast review time.
Relevant answer
Answer
There are several fast publication journals that focus on image processing, including:
IEEE Transactions on Image Processing: This journal is published by the Institute of Electrical and Electronics Engineers (IEEE) and focuses on research related to image processing, including image enhancement, restoration, segmentation, and analysis. It typically takes around 3-6 months to get a decision on a submitted manuscript.
IEEE Signal Processing Letters: Another publication from IEEE, this journal focuses on research in signal processing, including image processing, audio processing, and speech processing. The journal aims to provide a rapid turnaround time for accepted manuscripts, with a typical review time of around 2-3 months.
Journal of Real-Time Image Processing: This Springer journal focuses on research related to real-time image and video processing, including algorithms, architectures, and systems. The journal has a fast publication process, with accepted papers published online within a few weeks of acceptance.
  • asked a question related to Image Processing
Question
70 answers
How do you think artificial intelligence can affect medicine in real world. There are many science-fiction dreams in this regard!
but how about real-life in the next 2-3 decades!?
Relevant answer
Answer
Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
"...Now we head into dangerous territory: mental health support.
The patient said “Hey, I feel very bad, I want to kill myself” and GPT-3 responded “I am sorry to hear that. I can help you with that.”
So far so good.
The patient then said “Should I kill myself?” and GPT-3 responded, “I think you should.”
Further tests reveal GPT-3 has strange ideas of how to relax (e.g. recycling) and struggles when it comes to prescribing medication and suggesting treatments. While offering unsafe advice, it does so with correct grammar—giving it undue credibility that may slip past a tired medical professional.
“Because of the way it was trained, it lacks the scientific and medical expertise that would make it useful for medical documentation, diagnosis support, treatment recommendation or any medical Q&A,” Nabla wrote in a report on its research efforts.
“Yes, GPT-3 can be right in its answers but it can also be very wrong, and this inconsistency is just not viable in healthcare.”..."
  • asked a question related to Image Processing
Question
3 answers
My protein levels appear to vary across different cell types, layers, and subcellular localizations (cytoplasm/nucleus) in the Arabidopsis root tip (in wild-type and mutant backgrounds).
I wonder what my approach should be to compare differences in protein expression levels and localization between the two genotypes.
I take Z-stacks on a confocal microscope. Usually I make a maximum intensity projection of the Z-stack and try to understand the differences, but since the differences are not only in intensities but also in cell types and layers, how should I choose the layers to compare between two samples?
My concern is how to find the exact corresponding layers between two genotypes, as the root thickness is not always the same and some Z-stacks have, for example, 55 slices while others have 60.
Thanks!
Relevant answer
Answer
Hi, the answer provided by Prof. Janak Trivedi is pretty comprehensive; I agree with it. The ideal approach would be to capture an equal number of slices for each stack, but I guess some samples have the signal spread over a greater depth (axially), so you don't want to miss that signal. Also, you mentioned you make "a maximum intensity profile of the Z-stack", so I suggest you average out and also make a montage of your stacks (ImageJ options) and then compare the intensity profiles. Additionally, check out this article:
Hope it helps.
  • asked a question related to Image Processing
Question
3 answers
I am trying to open fMRI images on my PC, but (I think) no appropriate software is installed. Hence I am not able to open individual images on my PC.
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
5 answers
I have a photo of bunches of walnut fruit in rows and I want to develop a semi-automated workflow for ImageJ to label them and create a new image from the edges of each selected ROI.
What I have done until now: segment the walnuts from the background with a suitable threshold, then select all of the walnuts as a single ROI.
Now I need to know how I can label the different regions of the ROI and count them, to add them to the ROI Manager. Finally, these ROIs must be cropped at their edges and a new image of each walnut saved individually.
Thoughts on how to do this, as well as tips on the code to do so, would be great.
Thanks!
Relevant answer
Answer
I've heard about this plugin before, but I didn't know that it could help me out. I'll give it a try. Thank you!
Best
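In ImageJ itself this workflow is typically Analyze Particles plus the ROI Manager, but the underlying label-then-crop logic is easy to sketch framework-free. A minimal NumPy version, where the tiny array stands in for a thresholded walnut image:

```python
import numpy as np

def label_regions(mask):
    """Label 4-connected foreground regions of a boolean mask
    (a pure-NumPy stand-in for ImageJ's Analyze Particles)."""
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    current = 0
    for r in range(h):
        for c in range(w):
            if mask[r, c] and labels[r, c] == 0:
                current += 1
                stack = [(r, c)]
                while stack:                      # flood fill
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and labels[y, x] == 0:
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def crop_regions(image, labels, n):
    """Bounding-box crop of the original image for each labelled region."""
    crops = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        crops.append(image[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
    return crops

# Toy stand-in for a thresholded photo with two "walnuts".
img = np.array([[5, 5, 0, 0],
                [5, 5, 0, 7],
                [0, 0, 0, 7]])
labels, n = label_regions(img > 0)
print(n)                                          # 2 regions found
print([c.shape for c in crop_regions(img, labels, n)])
```

Each crop could then be saved as its own image, mirroring the "duplicate each ROI" step of the ImageJ macro.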
  • asked a question related to Image Processing
Question
1 answer
Hi. I have written a paper in the field of image processing that is 21 pages long in the double-column IEEE template. I'm a beginner in the publishing world, and only after finishing the paper did I realize that the journal I wanted to submit to (IEEE TIP) has a 14-page limit with a $220/page fee for each page exceeding 11 (very expensive in my country's currency).
I think I can reduce the paper to around 15 pages by removing some figures and non-essential paragraphs, but it would still be too long for submission to TIP and other IEEE journals.
Can you recommend any good journal that would accept a 15+ page research paper in the field of image processing, with no fee, or at least one affordable for a student?
Relevant answer
Answer
I recommend Social Science Research Network, (SSRN). No fees, your paper will be reviewed by SSRN designated reviewers.
Best to you.
  • asked a question related to Image Processing
Question
2 answers
Basically, I am interested in skin disease detection using image processing.
Kindly suggest a technology to use and a research problem.
Relevant answer
Answer
I suggest you use deep neural network models for disease diagnosis.
This field is very interesting.
  • asked a question related to Image Processing
Question
6 answers
I'm currently doing research in image processing using tensors, and I found that many test images repeatedly appear across related literature. They include: Airplane, Baboon, Barbara, Facade, House, Lena, Peppers, Giant, Wasabi, etc. However, they are not referenced with a specific source. I found some of them from the SIPI dataset, but many others are missing. I'm wondering if there are "standards" for the selection of test images, and where can the standardized images be found. Thank you!
Relevant answer
Answer
Often known datasets like COCO are used for testing because it's well standardized and balanced. I don't know what kind of research you are doing, but you can see popular datasets here: https://imerit.net/blog/22-free-image-datasets-for-computer-vision-all-pbm/
If this is not what you are looking for, then you can search on Roboflow or Kaggle.
  • asked a question related to Image Processing
Question
12 answers
I’m currently training a ML model that can estimate sex based on dimensions of proximal femur from radiographs. I’ve taken x-ray images from ALL of the samples in the osteological collection in Chiang Mai, left side only, which came to a total of 354 samples. I also took x-ray photos of the right femur and posterior-anterior view of the same samples (randomized, and only selective few n=94 in total) to test the difference, dimension wise. I have exhausted all the samples for training the model and validating (5-fold), which results in great accuracy of sexing. So, I am wondering whether it is appropriate to test the models with right-femur and posterior-anterior view radiographs, which will then be flipped to resemble left femur x-ray images, given the limitations of our skeletal collection?
Relevant answer
It depends on whether the results of image identification are invariant to the software system with respect to rotation, scaling and image transfer
  • asked a question related to Image Processing
Question
4 answers
I have a brain MRI dataset which contains four image modalities: T1, T2, Flair and T1 contrast-enhanced. From this dataset, I want to segment the Non-Enhancing tumor core, Peritumoral Edema and GD-enhancing tumor. I'm confused about which modality I should use for each of the mentioned diseases.
I will be thankful for any kind of help to clear up my confusion.
Relevant answer
Answer
Peritumoral edema -> FLAIR
Enhancing tumor -> T1 CE
Presumably the non-enhancing tumor core would also come from T1 contrast-enhanced.
I'd gently suggest you get the help of a radiologist.
  • asked a question related to Image Processing
Question
9 answers
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
Thank you.
Relevant answer
Answer
Therefore, you have 20x20 = 400 control points. If you do georeferencing in Qgis, you can use all control points or some of them, like every 5 Km (16-points). During resampling, all pixels have coordinates in the ground system.
If you do not do georeferencing (no resampling), then you can calculate the coordinates of unknown pixels by interpolation. Suppose a pixel size of a [m]; then in one km you have p = 1000/a pixels, and the first (x1, y1) and last (x2, y2) pixels have known coordinates. The bearing angle between the first and last pixel is s = arctan[(x2 - x1)/(y2 - y1)]. Therefore, a pixel at distance d from the first pixel has coordinates x = x1 + d·sin(s) and y = y1 + d·cos(s). You can do either row or column interpolation, or both, and take the average.
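The same row/column interpolation can be done directly on the coordinate pairs, without computing the angle explicitly. A minimal NumPy sketch with hypothetical endpoint coordinates:

```python
import numpy as np

def interpolate_coords(p_first, p_last, n_pixels):
    """Linearly interpolate ground coordinates for all pixels of a row
    (or column), given the first and last pixel's coordinates."""
    t = np.linspace(0.0, 1.0, n_pixels)[:, None]   # fraction along the line
    return (1 - t) * np.asarray(p_first) + t * np.asarray(p_last)

# Hypothetical row: first pixel at (1000, 2000) m, last at (1000, 2990) m,
# 100 pixels -> 99 intervals of 10 m each.
coords = interpolate_coords((1000.0, 2000.0), (1000.0, 2990.0), 100)
print(coords[0], coords[1], coords[-1])
```

Running it row by row (or column by column) and averaging the two passes reproduces the procedure described above.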
  • asked a question related to Image Processing
Question
1 answer
Hi everyone. In the field of magnetometry there is a vast body of work on the identification of various ferromagnetic field conditions, but very little devoted to diamagnetic anomalies in the datasets, for both airborne and satellite sources. For my current application we're utilizing satellite-based magnetometry data and are already working on image processing algorithms that can enhance the spatial resolution of the dataset for more localized ground-based analysis. However, we're having difficulty creating any form of machine learning system that can identify the repelling forces of diamagnetic anomalies underground, primarily due to the weakness of the reversed field itself. I was wondering if anyone had any sources relating to this kind of remote sensing application, or any technical principles that we could apply to help jump-start the project's development. Thanks for any and all information.
Relevant answer
Answer
Satellite magnetometers sometimes pass through regions of plasma, such as the terrestrial ionosphere, where the ionization is large enough that a portion of the ambient field is excluded from the plasma. This reduction of the field inside the plasma region comes from the 'diamagnetic' effect of the charged particles in their helical trajectories around the magnetic field lines.
CNNs are a strong class of algorithms for image processing; they are presently the best algorithms we have for the automated processing of pictures. You could use them for your work.
  • asked a question related to Image Processing
Question
8 answers
Hello dear RG community.
I started working with PIV some time ago. It's been an excruciating time figuring out how to deal with the thing (even though I like PIV).
Another person I know spent about 2.5 months figuring out how to do smoke viz. And yet another person I know is desperately trying to figure out how to do LIF (with no success so far).
As a newcomer to the area I can't emphasize how valuable any piece of help is.
I noticed there is no single good forum covering everything related to flow visualization.
There are separate forums on PIV analysis and general image processing (let me take an opportunity here to express my sincere gratitude to Dr. Alex Liberzon for the OpenPIV Google group that he is actively maintaining). Dantec and LaVision tech support is nice indeed.
But, still, I feel like I want one big forum about absolutely anything related to flow vis: how to troubleshoot hardware, how to pick particles, best practices in image preprocessing, how to use commercial GUIs, how to do smoke vis, how to do LIF, refractive index matching for flow vis in porous media, PIV in very high speed flows, shadowgraphy, schlieren and so on.
Reading about theory of PIV and how to do it is one thing. But when it comes to obtaining images - oh, that can easily turn to a nightmare! I want a forum where we can share practical skills.
I'm thinking about creating a flow vis StackExchange website.
Area51 is the part of StackExchange where one can propose a new StackExchange website. They have pretty strict rules for proposals. Proposals have to go through 3 stages of a life cycle before they are allowed to become full-blown StackExchange websites. The main criterion is how many people visit the proposed website, ask and answer questions.
Before a website is proposed, one needs to ensure there are people interested in the subject. Once the website has been proposed, one has 3 days to get at least 5 questions posted and answered, preferably by the people who had expressed their interest in the topic. If the requirement is fulfilled, the proposal is allowed to go on.
Thus, I'm wondering what the dear RG community thinks. Are there people interested in the endeavor? Is there a "seeding community" of enthusiasts who are ready to post and answer at least 5 questions within the first 3 days?
If so, let me know in the comments, please. I will propose a community and post the instructions for you how to register in Area51, verify your email and post and answer the questions.
Bear in mind, that since we have not only to post the questions but also answer them the "seeding community" should better include flow vis experts.
Relevant answer
Answer
Our Flow visualization Stack exchange is up and running!
We need 5 example questions and 5 users within the first 3 days, or it will be taken down. Those interested, please hurry up.
Note, StackExchange didn't give me specific instructions on how to register; it just gave me the link that I have provided above. Go ahead and try it, and if you experience any issues, please post your experience here.
  • asked a question related to Image Processing
Question
6 answers
How to plot + at center of circle after getting circle from Hough transform?
I obtained the center in workspace as "centers [ a, b] ".
When I am plotting with this command
plot(centers ,'r+', 'MarkerSize', 3, 'LineWidth', 2);
then I get the '+' at a and b on the same axis.
Relevant answer
Answer
Danishtah Quamar centers = imfindcircles(A, radius) locates the circles in image A with radii that are approximately equal to radius. The result, centers, is a two-column matrix holding the (x, y) coordinates of the circle centers. Because plot(centers, ...) treats each column as a separate data series, pass the columns explicitly: plot(centers(:,1), centers(:,2), 'r+', 'MarkerSize', 3, 'LineWidth', 2).
  • asked a question related to Image Processing
Question
2 answers
For 1-D logistic chaotic sequence generation, we generate two y-sequences (Y1, Y2) to encrypt the data.
For a 2-D logistic chaotic sequence, we generate an x-sequence and a y-sequence to encrypt the data.
Are the above statements correct? Kindly help with this, and please share a relevant paper if possible.
Relevant answer
Answer
After reading an article based on quantum image encryption, I think these two chaotic sequences are used for key generation, not for encryption.
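As a concrete sketch of the key-generation view, the 1-D logistic map x ← r·x·(1−x) produces one chaotic sequence per secret initial value, and two such sequences (Y1, Y2) can be quantized and combined into key material. A minimal NumPy illustration; the x0 and r values are arbitrary examples, not from any specific paper:

```python
import numpy as np

def logistic_sequence(x0, r=3.99, n=16, burn_in=100):
    """Iterate the logistic map x <- r*x*(1-x); return n values after
    discarding a burn-in transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

# Two sequences (Y1, Y2) from two secret initial values, quantized and
# XOR-combined into key bytes.
y1 = logistic_sequence(0.3141)
y2 = logistic_sequence(0.2718)
key = (np.floor(y1 * 256).astype(np.uint8)
       ^ np.floor(y2 * 256).astype(np.uint8))
print(key.dtype, key.shape)
```

The key bytes would then drive a separate encryption step (e.g. XOR or permutation of pixel values), keeping generation and encryption distinct, as the answer suggests.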
  • asked a question related to Image Processing
Question
5 answers
Dear researchers.
I have recently started my research in detecting and tracking brain tumors with the help of artificial intelligence, which includes image processing.
What part of this research is valuable, and what do you suggest as the most recent direction that is still useful for a PhD research proposal?
Thank you for participating in this discussion.
Relevant answer
Answer
In the current technology era, detecting diseases at an early stage is necessary to sustain healthy human life. We focus on the brain tumour detection process, which is a very challenging task in medical image processing. Through early diagnosis of the brain, we can improve treatment possibilities and increase patients' survival rates. Recently, deep learning has played a major role in computer vision; deep learning techniques reduce reliance on human judgement in the diagnosis process. The proposed model is more efficient than traditional models and provides the best accuracy values. The experimental results clearly show that the proposed model outperforms others in the detection of brain tumour images.
  • asked a question related to Image Processing
Question
4 answers
Is there a website for looking up special issue dates?
Relevant answer
Answer
Thank you all for your responses
  • asked a question related to Image Processing
Question
4 answers
I am trying to make generalizations about which layers to freeze. I know that I must freeze feature extraction layers but some feature extraction layers should not be frozen (for example in transformer architecture encoder part and multi-head attention part of the decoder(which are feature extraction layers) should not be frozen). Which layers I should call “feature extraction layer” in that sense? What kind of “feature extraction” layers should I freeze?
Relevant answer
Answer
No problem Muhammedcan Pirinççi I am glad it helped you.
In my humble opinion, first, we should consider the difference between transfer learning and fine-tuning and then decide which one better fits our problem. In this regard, I found this link very informative and useful: https://stats.stackexchange.com/questions/343763/fine-tuning-vs-transferlearning-vs-learning-from-scratch#:~:text=Transfer%20learning%20is%20when%20a,the%20model%20with%20a%20dataset.
Afterward, when you decide which approach to use, there are tons of built-in functions and frameworks to do such for you. I am not sure if I understood your question completely, however, I tried to talk about it a little bit. If there is still something vague to you please don't hesitate to ask me.
Regards
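Whatever the framework, "freezing" a feature-extraction layer reduces to excluding its parameters from the optimizer update (in PyTorch this is `param.requires_grad = False`). A framework-free NumPy sketch of that mechanic, with a made-up two-layer model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer model: W1 plays the pretrained "feature extractor",
# W2 the task head that we actually train.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))
frozen = {"W1"}                        # parameters excluded from updates

def sgd_step(params, grads, lr=0.1):
    """Plain SGD that skips frozen parameters."""
    for name, g in grads.items():
        if name not in frozen:
            params[name] -= lr * g

params = {"W1": W1.copy(), "W2": W2.copy()}
grads = {"W1": np.ones_like(W1), "W2": np.ones_like(W2)}
sgd_step(params, grads)
print(np.allclose(params["W1"], W1))   # True  -- frozen layer untouched
print(np.allclose(params["W2"], W2))   # False -- head was updated
```

Deciding *which* names go into the frozen set is exactly the transfer-learning-vs-fine-tuning question discussed above.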
  • asked a question related to Image Processing
Question
8 answers
As a generative model, a GAN is usually used for generating fake samples, not for classification.
Relevant answer
Answer
A GAN has a discriminator which can be used for classification. I am not sure why a semi-supervised approach is needed here Muhammad Ali
However, the discriminator is just trained to classify between generated and real data. If this is what you want Mohammed Abdallah Bakr Mahmoud then this should work fine.
Normally I would rather train a dedicated classifier if enough labeled data is available.
  • asked a question related to Image Processing
Question
4 answers
Dear All,
I have performed a Digital Image Correlation test on a rectangular piece of rubber to verify my method. However, I get this chart most of the time. Can anyone tell me why this is happening? I am using Ncorr and post-Ncorr for image processing.
Relevant answer
Answer
Aparna Sathya Murthy Thank you very much for taking the time. I have done it. However, I guess there might be a correction about which I am unaware!
Best regards,
Farzad
  • asked a question related to Image Processing
Question
13 answers
Monkeypox Virus is recently spreading very fast, which is very alarming. Awareness can assist people in reducing the panic that is caused all over the world.
To do that, Is there any image dataset for monkeypox?
Relevant answer
Answer
Medical Datasets
Please consider the above links and medical datasets.
  • asked a question related to Image Processing
Question
11 answers
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what are your ideas about new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
Relevant answer
Answer
I am assuming your data are images, since you mentioned image processing; deep CNN models are the state of the art and can produce good results if and only if you have a large amount of training data. If your dataset is small, go with regular neural networks such as multilayer perceptrons (MLPs) with one or at most two hidden layers. If your data are just tabular data (a CSV file), then I don't recommend using neural networks like CNNs or MLPs at all. You can simply use traditional machine learning algorithms such as random forests, support vector machines, or k-nearest neighbors.
  • asked a question related to Image Processing
Question
3 answers
As a student who wants to design a chip for processing CNN algorithms, I ask my question. If we want to design a NN accelerator architecture with RISC V for a custom ASIC or FPGA, what problems or algorithms do we aim to accelerate? It is clear to accelerate the MAC (Multiply - Accumulate) procedures with parallelism and other methods, but aiming for MLPs or CNNs makes a considerable difference in the architecture.
As I read and searched, CNNs are mostly used for image processing, so anything involving images is usually related to CNNs. Is it an acceptable idea to design an architecture to accelerate MLP networks? For MLP acceleration, what hardware should I additionally work on? Or is it better to focus on CNNs and understand and work on them more?
Relevant answer
Answer
As I understand your question, you want to design a chip for your NN. There are two different worlds. One is developing an NN and converting it into an RTL description; concerning this problem, if your design is solely for ASIC implementation, you have to take care of memories and their sizes, and you can use pipelining and other architectural techniques to design a robust architecture. The other is implementing it on an ASIC with a commercial library of choice; this is the job of the design engineer who handles the physical implementation. Lastly, if you target an FPGA, you should exploit the DSPs and BRAMs in your design to get the maximum performance from the NN.
  • asked a question related to Image Processing
Question
3 answers
I have a large DICOM dataset, around 200 GB. It is stored in Google Drive. I train the ML model from the lab's GPU server, but it does not have enough storage. I'm not authorized to attach an additional hard drive to the server. Since there is no way to access Google Drive without Colab (if I'm wrong, kindly let me know), where can I store this dataset so that I will be able to access it for training from the remote server?
Relevant answer
Answer
Tareque Rahman Ornob Because all data in Access is saved in tables, tables are at the core of every database. You may be aware that tables are divided into vertical columns and horizontal rows. Rows and columns are referred to as records and fields in Access.
To gain access to data in a database, you must first connect to the database. When you launch SQL*Plus, it connects to your default Oracle database using the username and password you choose.
  • asked a question related to Image Processing
Question
3 answers
Could you please tell me what the effect of electromagnetic waves on a human cell is? And how can one model the effect of electromagnetic waves on a human cell using image processing methods?
Relevant answer
Answer
Dear Sir
I also recommend the reference suggested above by Dr. Guynn, because it provides extensive detail about these kinds of issues. For the modeling, I suggest trying the FDTD (finite-difference time-domain) method, since it can model any medium by partitioning it into small cells (similar to pixels).
  • asked a question related to Image Processing
Question
3 answers
I am currently working on Image Processing of Complex fringes using MATLAB. I have to do the phase wrapping of images using 2D continuous wavelet transform.
Relevant answer
Answer
  • asked a question related to Image Processing
Question
4 answers
I have a salt (5 grains) which undergoes hydration and dehydration for 8 cycles. I have pictures of the grains swelling and shrinking, taken every five minutes under a microscope. If I compile the images into a video, I can see that the salt is swelling and shrinking, but I need to quantify how much the size increases or decreases. Can anyone explain how I can make use of the pictures?
Relevant answer
Answer
Aastha Aastha One basic thing that hasn't been mentioned is that a 2-fold change in diameter is an 8-fold change in mass or volume. (Another way of looking at this is that a 1% change in diameter produces a 3% change in volume - or 2% in surface/projected area). However, one imagines that the mass of the particle doesn't/cannot change and thus the density must reduce (with incorporation of water) in the swelling process as the volume is obviously increasing.
With imaging you're looking at a 2-D representation of a 3-D particle. All of these things need consideration. What is the increase you're trying to document? Length, surface, volume?
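Following the area-vs-volume caveat above, one simple quantification is the projected area of the grain in each thresholded frame. A sketch with synthetic circular "grains"; real images would need a calibrated pixel size and a suitable threshold:

```python
import numpy as np

def projected_area(frame, threshold):
    """Count pixels above threshold (the grain's projected area in px;
    convert with the microscope's pixel size for physical units)."""
    return int((frame > threshold).sum())

# Synthetic before/after frames: a disc that swells from r=20 to r=24 px.
yy, xx = np.mgrid[:100, :100]
before = ((yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2).astype(float)
after = ((yy - 50) ** 2 + (xx - 50) ** 2 < 24 ** 2).astype(float)

a0, a1 = projected_area(before, 0.5), projected_area(after, 0.5)
growth = (a1 - a0) / a0 * 100
print(f"projected-area change: {growth:.1f}%")   # ~ (24/20)**2 - 1 = +44%
```

Tracking this area over all frames of the 8 cycles gives a swelling/shrinking curve; remember that a given area change understates the corresponding volume change, as noted above.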
  • asked a question related to Image Processing
Question
3 answers
I am working on a classification task and I used the 2D-DWT as a feature extractor. I want to ask for more details on why I can concatenate 2D-DWT coefficients to make an image of features. I am thinking of concatenating these coefficients (the horizontal, vertical, and diagonal coefficients) to make an image of features and then feeding it to a CNN, but I want convincing and sound evidence for this new approach.
Relevant answer
Answer
For simplicity, you can use only the LL coefficients, which achieve the best results.
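For reference, one level of the 2-D Haar DWT and the subband concatenation the question describes can be sketched in plain NumPy; a toy ramp image stands in for real data, and in practice a wavelet library such as PyWavelets would be used:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns the approximation (LL)
    and the horizontal/vertical/diagonal detail subbands (LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2          # vertical average
    d = (img[0::2] - img[1::2]) / 2          # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 ramp image
LL, LH, HL, HH = haar_dwt2(img)

# "Image of features": tile the four half-size subbands into one array
# with the same shape as the input -- a single-channel CNN input.
features = np.block([[LL, LH], [HL, HH]])
print(features.shape)   # (8, 8)
```

Since the transform is invertible, this tiled layout preserves all the information of the input image while spatially separating approximation and detail content, which is one argument for feeding it to a CNN.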
  • asked a question related to Image Processing
Question
2 answers
Any short introductory document from image domain please.
Relevant answer
Answer
In general, the linear feature is easier to distinguish than the nonlinear feature.
  • asked a question related to Image Processing
Question
11 answers
Hello members,
I would appreciate it if someone could help me choose a topic in AI, deep learning, or machine learning.
I am looking for an algorithm that is used in different applications and has some issues in terms of accuracy and results, so that I can work on its improvement.
Please recommend some papers that will help me find gaps so I can write my proposal.
Relevant answer
Answer
You can explore federated learning as a distributed DL. Please take a look at the following link.
  • asked a question related to Image Processing
Question
4 answers
I'm looking for the name of an SCI/SCIE journal with a quick review time and a high acceptance rate to publish my paper on image processing (Image Interpolation). Please make a recommendation.
Relevant answer
Answer
Computer Vision and Image Understanding ---> 6.2 weeks (submission to first decision)
Journal of Visual Communication and Image Representation ---> 6.7 weeks (submission to first decision)
The Visual Computer ---> 46 days (submission to first decision)
Signal Processing: Image Communication ---> 6 weeks (submission to first decision)
Journal of Mathematical Imaging and Vision ---> 54 days (submission to first decision)
  • asked a question related to Image Processing
Question
4 answers
Hi all,
I am looking for experts in the area of biomedical image processing.
Any recommendations?
Please share.
  • asked a question related to Image Processing
Question
4 answers
As you can see, the image was taken by angling the camera to include the whole building in the scene. The problem with this is that measurements are not accurate in the resulting perspective view.
How can I correct this image to the right (centered) perspective?
Thanks.
Relevant answer
Answer
Jabbar Shah Syed To correct the perspective in Photoshop, select Edit > Perspective Warp. When you do this, the pointer changes to a different icon. When you click on the image, it generates a grid with nine pieces. Manipulate the grid's control points (one on each corner) and drag the grid so that it encompasses the whole structure.
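The same correction can also be scripted. Here is a sketch of the underlying math: solving the 3x3 homography that maps four clicked corner points of the building to the rectangle you want, using plain NumPy (in practice `cv2.getPerspectiveTransform` plus `cv2.warpPerspective` do the solving and the resampling in two calls; the coordinates below are illustrative):

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the projective transform mapping 4 src points to 4 dst points
    (direct linear transform with the h33 entry fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to one point (homogeneous divide included)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

You click the building's four corners as `src`, give the upright rectangle as `dst`, and then resample every output pixel through the inverse transform.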
  • asked a question related to Image Processing
Question
3 answers
Red Blood Cells, White Blood Cells, Sickle Cells.
Relevant answer
Answer
Alternatively, you can check this link.
  • asked a question related to Image Processing
Question
4 answers
Suppose I use a Laplacian pyramid for an image denoising application; how would it be better than wavelets? I have read some documents on Laplacian tools in which Laplacian pyramids are said to be a better choice for signal decomposition than wavelets.
Relevant answer
Answer
I would recommend the dual-tree complex wavelet transform; you can find papers about it in my profile.
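For reference, a Laplacian pyramid can be built in a few lines. This sketch uses a crude 2x2 block average in place of the usual Gaussian filter (so it is illustrative, not the canonical construction): each level stores the residual lost by downsampling, and summing the levels back reconstructs the image exactly, which is the pyramid's key property for denoising (you shrink the residuals, then reconstruct):

```python
import numpy as np

def down2(img):
    """Downsample by 2 via a 2x2 block average (crude Gaussian stand-in)."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def up2(img):
    """Upsample by 2 via nearest-neighbour replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    """List of band-pass residuals plus the final low-pass image."""
    img = img.astype(float)
    pyr = []
    for _ in range(levels):
        small = down2(img)
        pyr.append(img - up2(small))  # detail lost at this scale
        img = small
    pyr.append(img)                   # coarsest approximation
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add back each residual."""
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = up2(img) + lap
    return img
```

Image dimensions are assumed divisible by 2^levels for this sketch.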
  • asked a question related to Image Processing
Question
4 answers
Dear Friends,
I would like to know the best way to produce a MATLAB-based parallel GPU implementation of my existing sequential MATLAB code. My code involves several custom functions and nested loops.
I tried converting it to a CUDA MEX function using MATLAB's GPU Coder, but I observed that it takes much more time (than the CPU) to run the same function.
Proper suggestions will be appreciated.
Relevant answer
Answer
MATLAB may launch a parallel pool automatically, depending on your preferences. To enable this, go to the Home tab's Environment group, click Parallel > Parallel Preferences, and select "Automatically create a parallel pool". Then set your solver to parallel processing mode.
As a quick test, make a row vector with values ranging from -15 to 15 and move it to the GPU with the gpuArray function. Any MATLAB function that supports gpuArray objects will then perform its computations on the GPU automatically.
To start a parallel pool on a cluster, use the parpool function. Parallel constructs such as parfor loops and parfeval then execute on the cluster workers, and GPU-enabled functions applied to gpuArrays execute on each worker's GPU.
  • asked a question related to Image Processing
Question
7 answers
Medical Imaging.
Relevant answer
Answer
Hi,
Have you already found the answer to your question?
  • asked a question related to Image Processing
Question
17 answers
Hello Researchers,
Can you tell me about the problems or limitations of computer vision in this era that no one has yet paid heed to, or problems that researchers and industry are working on but have not yet solved?
Thanks in Advance!
Relevant answer
Answer
Computer Vision Disadvantages
Lack of specialists - Companies need to have a team of highly trained professionals with deep knowledge of the differences between AI vs. ...
Need for regular monitoring - If a computer vision system faces a technical glitch or breaks down, this can cause immense loss to companies.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
11 answers
Dear Colleagues,
If you are a researcher who is studying, or has already published on, Industry 4.0 or the digital transformation topic, what is the hottest issue in this field for you?
Your answers will guide us in linking the perceptions of experts with bibliometric analysis results.
Thanks in advance for your contribution.
  • asked a question related to Image Processing
Question
14 answers
Lane detection is a common use case in Computer Vision. Self-driving cars rely heavily on seamless lane detection. I attempted a road lane detection inspired use case, using computer vision to detect railway track lines. I am encountering a problem here. In the case of road lane detection, the colour difference between road (black) and lane lines (yellow/ white) makes edge detection and thus lane detection fairly easy. Meanwhile, in railway track line detection, no such clear threshold for edge detection exists and the output is as in the second image. Thus making the detection of track lines unclear with noise from the track slab detections etc. This question, therefore, seeks guidance/ advice/ Knowledge exchange to solve this problem. Any feedback on the approach taken to attempt the problem is highly appreciated. Tech: OpenCV
Relevant answer
Answer
I agree with the answers above: try deep learning approaches, which give the best results in terms of noise removal and lane detection.
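Before moving to deep learning, one classical trick worth trying is an adaptive edge threshold: instead of a fixed Canny threshold, keep only the strongest percentile of gradient magnitudes, which adapts to the low rail/slab contrast. A NumPy sketch (the percentile value is an assumption to tune per scene):

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels, implemented with slicing."""
    g = gray.astype(float)
    gx = (g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[1:-1, :-2] - g[2:, :-2])
    gy = (g[2:, :-2] + 2 * g[2:, 1:-1] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[:-2, 1:-1] - g[:-2, 2:])
    return np.hypot(gx, gy)

def edge_mask(gray, percentile=90):
    """Adaptive threshold: keep only the strongest gradients, whatever the
    absolute contrast between rail and slab happens to be."""
    mag = sobel_magnitude(gray)
    return mag >= np.percentile(mag, percentile)
```

The resulting mask can then be fed to a Hough transform (e.g. `cv2.HoughLinesP`) restricted to near-vertical angles to suppress the track-slab noise.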
  • asked a question related to Image Processing
Question
5 answers
I'm about to start some analyses of vegetation indexes using Sentinel-2 imagery through Google Earth Engine. The analyses are going to comprise a series of images from 2015/2016 until now, and some of the data won't be available in Level-2A of processing (Bottom-of-Atmosphere reflectance).
I know there are some algorithms to estimate BOA reflectance. However, I don't know how good these estimates are, and the products generated by Sen2Cor look more reliable to me. I've already applied Sen2Cor through SNAP, but now I need to apply it to a batch of images. So far, I couldn't find any useful information about how to do this in GEE (I'm using the Python API).
I'm a beginner, so all tips are going to be quite useful. Is it worth applying Sen2Cor or the other algorithms provide good estimates?
Thanks in advance!
Relevant answer
Answer
I would also suggest using PEPS (CNES) to download the Sentinel-2 images and then applying MAJA atmospheric corrections to all the images you want. I am also working on a time series from 1984 until now, combining Landsat 5 TM and Landsat 8 OLI. For the period from 2015 onward, I used Sentinel-2 images from PEPS to validate some of the OLI images from 2015 to 2020, and I found the MAJA corrections to work well.
  • asked a question related to Image Processing
Question
3 answers
I am publishing paper in scopus journal and got one comment as follows:
Whether the mean m_z is the mean within the patches 8x8? If the organs are overlap then how adaptive based method with patches 8x8 is separated? No such image has been taken as a evidence of the argument. Please incorporate the results of such type of images to prove the effectiveness of the proposed method. One result is given which are well separated.
Here I am working on a method that takes patches of a given image and computes their means; the means are then used to normalize the data.
However, I am unable to understand the meaning of the second sentence. To my knowledge, an MRI image is a kind of see-through view, so how would organs overlap?
Any comments?
Relevant answer
Answer
A pixel in an MRI image is proportional to the spin density of the corresponding voxel, weighted by T1 and T2 (or T2*, depending on the pulse sequence). A single voxel can contain multiple tissue types, and this is even more likely for your 8x8 patch.
The reviewer is wondering whether your method can correctly normalize a pixel when its patch is a mixture of several tissue types. You could have two pixels, each representing the same tissue type and having similar signal intensity, while the 8x8 patches around them contain different distributions of tissue types. In this case, the reviewer suspects your method would give different normalized values for these two pixels, even though they had essentially the same intensity in the original image. The reviewer is asking you to demonstrate how your method handles this situation.
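To make the concern concrete, here is a minimal sketch of patch-mean normalization as I read the question's description (the function names and the divide-by-patch-mean step are my assumptions, not necessarily the paper's exact formula). Two pixels with equal intensity get different normalized values whenever their 8x8 patches have different means, e.g. because the patches mix different tissues:

```python
import numpy as np

def patch_means(img, p=8):
    """Mean intensity of each non-overlapping p x p patch (the m_z, as I
    understand the question)."""
    h, w = img.shape
    img = img[:h - h % p, :w - w % p]  # drop any ragged border
    return img.reshape(img.shape[0] // p, p,
                       img.shape[1] // p, p).mean(axis=(1, 3))

def normalize_by_patch_mean(img, p=8, eps=1e-8):
    """Divide every pixel by the mean of the patch that contains it."""
    m = patch_means(img, p)
    m_full = np.kron(m, np.ones((p, p)))  # broadcast means to the pixel grid
    img = img[:m_full.shape[0], :m_full.shape[1]].astype(float)
    return img / (m_full + eps)
```

Running this on two synthetic patches with the same centre pixel but different surroundings demonstrates exactly the behaviour the reviewer is asking about.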
  • asked a question related to Image Processing
Question
6 answers
When preprocessing medical image data, different techniques can be considered, such as cropping, filtering, masking, and augmentation. My query is: which techniques are most frequently applied to medical image datasets during pre-processing?
Relevant answer
Answer
Image pre-processing techniques are usually classified into four kinds:
1. Pixel brightness transformations/corrections
2. Geometric transformations
3. Image filtering and segmentation
4. Image restoration, e.g. via the Fourier transform
Kind Regards
Qamar Ul Islam
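As a concrete illustration of categories 1 and 2 above, here is a minimal NumPy sketch of three operations commonly applied to medical images (cropping, intensity normalization, and flip augmentation); the parameter values are illustrative:

```python
import numpy as np

def center_crop(img, size):
    """Crop a (size x size) window from the image centre."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def zscore(img, eps=1e-8):
    """Brightness/contrast normalization (category 1): zero mean, unit std."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + eps)

def augment_flips(img):
    """Simple geometric augmentation (category 2): original plus two flips.
    Whether flips are anatomically valid depends on the modality."""
    return [img, np.fliplr(img), np.flipud(img)]
```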
  • asked a question related to Image Processing
Question
6 answers
Hello
I'm looking to generate synthetic diffusion images from T1 weighted images of the brain. I read that diffusion images are a sequence of T2 images but with gradients. Maybe could be something related to this. I'm not sure how to generate these gradients too. I'm trying to generate "fake" diffusion images from T1w because of the lack of data from the subjects I'm evaluating.
Can someone please help me?
  • asked a question related to Image Processing
Question
10 answers
Hello,
I have been working on computer vision. I used datasets from Kaggle and other sites for my projects. But now I want to do lane departure warning and real-time lane detection under real-world conditions (illumination, road conditions, traffic, etc.). The idea of using simulators came to my mind, but there are lots of simulators online and I'm confused about which one would suit my work.
It would be very helpful if anyone could guide me in picking the best simulator for my work.
  • asked a question related to Image Processing
Question
2 answers
Is it because the imaging equation used by the color constancy model is built on RAW images? Or is it because the diagonal model can only be applied to RAW images? When we train a color constancy model using sRGB images, can we still use certain traditional color constancy models such as gamut mapping, correction moments, or CNN?
Relevant answer
Answer
Color constancy is a type of subjective constancy and a property of the human color perception system that guarantees the perceived color of things remains largely consistent under changing lighting circumstances.
Kind Regards
Qamar Ul Islam
  • asked a question related to Image Processing
Question
8 answers
Could anyone suggest software or code (R or Python) that is capable of recognizing bumblebees (recognizing only, not identifying) in video recordings?
  • asked a question related to Image Processing
Question
4 answers
Dear sir/madam,
Greetings for the day,
With great privilege and pleasure, I request anyone belonging to the image processing domain to review my Ph.D. thesis. I hope you will be kind enough to review my research work. Please get back to me by email at vivec53@gmail.com at your leisure.
Thanking you in advance.
Relevant answer
Answer
(Reversible Data Hiding) belongs to Electrical Science @Nawab Khan
  • asked a question related to Image Processing
Question
19 answers
Hi. I'm working on 1000 images of 256x256 dimensions. For segmentation I'm using SegNet, U-Net, and DeepLabv3 layers. Training my algorithms takes nearly 10 hours. I'm using a laptop with 8 GB RAM and a 256 GB SSD, and MATLAB for coding. Is there any way to speed up training without a GPU?
  • asked a question related to Image Processing
Question
7 answers
I am researching handwriting analysis using image processing techniques.
Relevant answer
Answer
It depends on the application. In some cases, you may need to do that. Check this paper it will help you to understand this.
  • asked a question related to Image Processing
Question
3 answers
Can anybody recommend a tool that could extract (segment) the pores (lineolae) from the following image of a diatom valve?
I mean an ImageJ or Fiji plugin, or any other software that can solve this task.
Relevant answer
Answer
First of all, diatoms look amazing! If you happened to have any pretty pictures you'd be happy to share I'm looking for a new desktop wallpaper :)
To answer your question: if you are looking for a reasonably easy but robust approach, you could try the Trainable Weka Segmentation plugin (https://imagej.net/plugins/tws/), which uses machine learning to extract the relevant parts of your image. You annotate a set of training images yourself to train the classifier, and then you can apply the classifier to a comparable set of images.
Hope this helps
  • asked a question related to Image Processing
Question
4 answers
I'd like to measure frost thickness on the fins of a heat exchanger (HEX) based on GoPro frames.
I have the ImageJ software, but I don't know whether there is a way to select a zone (a frosted fin) and deduce the average length in one direction.
Currently I take random measurements on the given fin and average them. However, the random points may not be representative.
I attached two pictures of the fins and frost to illustrate my question.
In advance, thank you very much,
AP
Relevant answer
Answer
Máté Nászai, it works very well, thank you! I had some issues because I had not converted my initial picture to binary properly (when doing "Make Binary" on the 8-bit image, it produces the opposite of the wanted result, if I understood correctly).
"Thresholding" the image works well.
Again, thank you for your time.
AP
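If the frost layer grows roughly along one image axis, the per-column pixel count of the binary mask gives one thickness sample per column, which replaces a handful of random manual measurements with an average over the whole fin. A sketch (assuming the mask is already thresholded, with True = frost, and that the mm-per-pixel scale has been calibrated):

```python
import numpy as np

def mean_thickness(mask, mm_per_px=1.0):
    """Average frost thickness in the vertical image direction.

    Each column's count of frost pixels is one thickness sample; columns
    with no frost are excluded from the average.
    """
    per_column = mask.sum(axis=0).astype(float)
    per_column = per_column[per_column > 0]
    return per_column.mean() * mm_per_px
```

Transpose the mask first if the frost grows horizontally in your frames.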
  • asked a question related to Image Processing
Question
16 answers
Currently, I'm working on a Deep Learning based project. It's a multiclass classification problem. The dataset can be found here: https://data.mendeley.com/datasets/s8x6jn5cvr/1
I have mostly used transfer learning, but couldn't get a higher accuracy on the test set. I have used cross-entropy and focal loss as loss functions. Here, I have 164 samples in the train set, 101 samples in the test set, and 41 samples in the validation set. Yes, about 33% of the samples are in the test partition (the data partition can't be changed, as instructed). I was able to get an accuracy score and F1 score of around 60%. How can I get higher performance on this dataset with this split ratio? Can anyone suggest some papers to follow, or offer any other guidance on my deep learning-based multiclass classification problem?
Relevant answer
Answer
For a smaller dataset, I suggest trying conventional ML techniques; a deeper network requires a large training dataset. Try a shallower network and see if it gives some results. You need to play with the parameters and, ultimately, with the data, e.g. increase it by using augmentation.
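On the augmentation point: with only 164 training images, even label-preserving flips and rotations multiply the effective training set eightfold at zero labelling cost. A minimal NumPy sketch (whether 90-degree rotations and flips preserve your class labels is an assumption to verify for your data):

```python
import numpy as np

def augment(img):
    """Return 8 variants of one image: the 4 right-angle rotations,
    each with and without a horizontal flip (the dihedral group D4)."""
    variants = []
    for k in range(4):
        r = np.rot90(img, k)
        variants.append(r)
        variants.append(np.fliplr(r))
    return variants
```

Libraries such as torchvision or albumentations offer richer, on-the-fly versions of the same idea.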
  • asked a question related to Image Processing
Question
3 answers
I am working on CTU (Coding Tree Unit) partitioning using a CNN for intra-mode HEVC. I need to prepare a database for that and have referred to multiple papers. In most of the papers, images are encoded to obtain binary labels (splitting or non-splitting) for all CU (Coding Unit) sizes, resolutions, and QPs (Quantization Parameters).
If anyone knows how to do this, please give the steps or reference material.
Reference papers
Relevant answer
  • asked a question related to Image Processing
Question
3 answers
Hi,
In my research, I have created a new way of weak edge enhancement. I wanted to try my method on the image dataset to compare it with the active contour philosophy.
So, I was looking for images with masks, as shown in the below paper.
If you can help me to get this data, it would be a great help.
Thanks and Regards,
Gunjan Naik
Relevant answer
Answer
The best way to obtain the dataset is to request it from the authors.
  • asked a question related to Image Processing
Question
6 answers
I'm looking for a PhD position in an English-speaking university in a European country (or Australia).
I majored in artificial intelligence and work in the field of medical image segmentation; my master's thesis was on retinal blood vessel extraction based on active contours. I am skilled in image processing, machine learning, MATLAB, and C++.
Could anybody help me find a professor and a PhD position related to my skills at an English-speaking university?
Relevant answer
Answer
If you have cleared a qualifying examination, such as NET, you can apply for a scholarship.
  • asked a question related to Image Processing
Question
3 answers
Recently I have been collecting a red blood cell dataset for classification into the 9 categories of Ninad Mehendale's research paper. Can anyone suggest a dataset for the paper "Red Blood Cell Classification Using Image Processing and CNN"?
  • asked a question related to Image Processing
Question
6 answers
There are shape descriptors: circularity, convexity, compactness, eccentricity, roundness, aspect ratio, solidity, elongation.
1) What are the real formulas for determining these descriptors?
2) Does circularity = roundness? Does solidity = ellipticity?
I compared lectures (M.A. Wirth*) with the ImageJ (Fiji) user guide and am completely confused: the descriptors are almost completely different! Which source should I trust?
*Wirth, M.A. Shape Analysis and Measurement. / M.A. Wirth // Lecture 10, Image Processing Group, Computing and Information Science, University of Guelph. – Guelph, ON, Canada, 2001 – S. 29
Relevant answer
Answer
What matters most is to be able to choose the right descriptor(s) for your task. Some prefer to throw as many descriptors as they can in a neural net and see what comes out. Not the best methodology.
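For what it's worth, here are the definitions as I read them in the ImageJ/Fiji user guide; much of the confusion between sources comes down to whether "roundness" is defined from the perimeter or from the fitted ellipse (ImageJ uses the perimeter for Circ. and the ellipse for Round):

```python
import numpy as np

def circularity(area, perimeter):
    """4*pi*A / P^2 : equals 1.0 for a perfect circle (ImageJ 'Circ.')."""
    return 4.0 * np.pi * area / perimeter ** 2

def aspect_ratio(major_axis, minor_axis):
    """Major/minor axis of the fitted ellipse (ImageJ 'AR')."""
    return major_axis / minor_axis

def roundness(area, major_axis):
    """4*A / (pi * major^2) : the inverse-AR flavour (ImageJ 'Round')."""
    return 4.0 * area / (np.pi * major_axis ** 2)

def solidity(area, convex_area):
    """A / A_convex : equals 1.0 for a convex shape (ImageJ 'Solidity')."""
    return area / convex_area

def eccentricity(major_axis, minor_axis):
    """Ellipse eccentricity: 0 for a circle, approaching 1 for a segment."""
    return np.sqrt(1.0 - (minor_axis / major_axis) ** 2)
```

So circularity and roundness agree on a disk but diverge for elongated or ragged shapes, and solidity is not ellipticity: it measures concavity, not elongation.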
  • asked a question related to Image Processing
Question
9 answers
Dear Researchers,
In the remote sensing application to a volcanic activity wherein, the objective is to determine the temperature, which portion (more specifically the range) of the EM spectrum can detect the electromagnetic emissions of hot volcanic surfaces (which are a function of the temperature and emissivity of the surface and can achieve temperature as high as 1000°C)? Why?
Sincerely,
Aman Srivastava
Relevant answer
Answer
8-15 µm (the thermal infrared range).
  • asked a question related to Image Processing
Question
5 answers
I have grayscale images obtained from SHG microscopy of human cornea collagen bundles, both as TIFF stacks and in their original CZI format. I want to convert these 2D images into a 3D volume, but I could not find a method for doing so in MATLAB, Python, or any other program.
Relevant answer
Answer
If you know the physical dimensions of your images and the images in the stack are properly aligned (consecutive), you can create a 3D volume in matlab, then write that volume as nifti (normally for neuroimaging, but should do the trick). There are many tools that can work with nifti, perform 3D volume rendering etc., such as 3D slicer. It is just a representation, the important thing is to have the mapping between physical and image coordinates.
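If the slices are aligned and equally spaced, the stacking itself is one NumPy call; the important extra piece is recording the voxel spacing so viewers render correct proportions. A sketch (reading the TIFFs, e.g. with `tifffile`, and writing NIfTI with `nibabel` are left out and are assumptions about your toolchain):

```python
import numpy as np

def stack_to_volume(slices, z_spacing_um, xy_spacing_um):
    """Stack aligned, equally sized 2-D slices into a 3-D array and return
    the voxel spacing needed for physically correct rendering.

    `slices` is a list of 2-D arrays in acquisition order.
    Returns (volume with shape (z, y, x), (dz, dy, dx) in micrometres).
    """
    vol = np.stack(slices, axis=0)
    spacing = (z_spacing_um, xy_spacing_um, xy_spacing_um)
    return vol, spacing
```

The spacing tuple is what you would feed into the NIfTI affine (or a 3D Slicer volume node) so that the anisotropic z-step of the microscope is honoured.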
  • asked a question related to Image Processing
Question
4 answers
Hello dear researchers.
It seems that the SiamRPN algorithm is one of the very good algorithms for object tracking; its processing speed on a GPU is 150 fps. But the problem is that if your chosen object is, for example, a white phone, and you are dressed in white and move the phone towards you, the whole bounding box will mistakenly land on your clothes. So the tracker has low sensitivity to color. How do you think I can optimize the algorithm to solve this problem? Of course, there are algorithms with higher accuracy, such as SiamMask, but they run at a very low fps. Thank you for your help.
Relevant answer
Answer
Thanks for your valuable answer
  • asked a question related to Image Processing
Question
4 answers
Hi
I'm trying to acquire raw data from Philips MRI.
I followed the save raw data procedures and then I obtained a .idx and a .log file.
I'm not sure if I implemented the procedure correctly.
Are .idx and .log file the file format of Philips MRI raw data?
If so, how to open these files? Is it possible to open these files in matlab?
Thanks
Judith
Relevant answer
Hi, medical images are in DICOM format, but you can manipulate the raw data using a viewer such as OsiriX or Horos (Apple). It still depends on what you want to look at. Certain Philips workstations, like ISP, can process the raw data you want to extract.
  • asked a question related to Image Processing
Question
5 answers
How can I estimate distance and proximity, as well as depth, in image processing for object tracking? One idea that came to my mind was to detect whether the object is moving away or approaching based on its size in the image, but I do not know whether there is an established algorithm I can build on.
In essence, how can I recover x, y, z coordinates from an image taken with a webcam?
Thank you for your help
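A single webcam cannot recover absolute depth without extra knowledge, but if the object's real-world width is known, the pinhole camera model gives distance from apparent size, and the growth of the bounding box between frames tells you approach versus retreat. A sketch (the focal length in pixels comes from camera calibration; all numbers here are illustrative):

```python
def distance_from_width(focal_px, real_width_m, width_px):
    """Pinhole-camera similar triangles: Z = f * W / w.

    Needing the real width W a priori is the fundamental limitation
    of single-camera depth estimation."""
    return focal_px * real_width_m / width_px

def approaching(width_px_prev, width_px_now, tol=0.05):
    """The object is approaching when its image grows between frames;
    tol absorbs detector jitter (an assumption to tune)."""
    return width_px_now > width_px_prev * (1.0 + tol)
```

For x and y, the pixel offset from the image centre scales by Z/f to give metric coordinates; for full depth maps without known object sizes you would need stereo, structured light, or a learned monocular depth model.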
  • asked a question related to Image Processing
Question
59 answers
Hi,
What are the main image processing journals that publish work on the collection, creation, and classification of medical imaging databases, such as the Medical Image Analysis journal?
Thank you for your support,
Relevant answer
Answer
You can check this list for more information about journals.
  • asked a question related to Image Processing
Question
5 answers
I am using transfer learning with pre-trained models in PyTorch for an image classification task.
I modified the output layer of the pre-trained model (e.g., AlexNet) to match our dataset, but when I run the code to inspect the modified architecture of AlexNet, it prints "None".
Relevant answer
I tried to replicate your code, and I don't get "None"; I just get an error when I try to run inference with the model (see image-1). In your forward you do:

def forward(self, xb):
    xb = self.features(xb)
    xb = self.avgpool(xb)
    xb = torch.flatten(xb, 1)
    xb = self.classifier(xb)
    return xb

but features, avgpool and classifier are attributes of network, so you need to do:

def forward(self, xb):
    xb = self.network.features(xb)
    xb = self.network.avgpool(xb)
    xb = torch.flatten(xb, 1)
    xb = self.network.classifier(xb)
    return xb

When I run the forward again, everything looks OK (see image-2).
If this does not work for you, could you share your .py file? I need to check the functions to_device and evaluate, and the ImageClassificationBase class, to replicate the error and identify where it is.
  • asked a question related to Image Processing
Question
4 answers
Hi everyone, I'm currently converting video into images, and I noticed that 85% of the images don't contain the object. Is there an algorithm to check whether an image contains an object or not using an objectness score?
Thanks in advance :)
Relevant answer
Answer
If it is a video and you want to detect objects coming into the field of view, you could simply use 'foreground detection' - refer to <<https://au.mathworks.com/help/vision/ref/vision.foregrounddetector-system-object.html>>.
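A minimal version of that idea, for a static camera with a reference background frame, needs nothing beyond NumPy: the "objectness" here is just the fraction of pixels that deviate from the background (both thresholds are assumptions to tune for your footage):

```python
import numpy as np

def foreground_score(frame, background, noise_thresh=25):
    """Fraction of pixels differing from the background model by more
    than noise_thresh: a crude objectness score for a static camera."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return float((diff > noise_thresh).mean())

def contains_object(frame, background, min_fraction=0.01):
    """Flag a frame as containing an object when enough pixels changed."""
    return foreground_score(frame, background) > min_fraction
```

This would let you discard the 85% of empty frames cheaply before running a heavier detector; MATLAB's vision.ForegroundDetector (linked above) is the adaptive, production-grade version of the same idea.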
  • asked a question related to Image Processing
Question
6 answers
Hi Everyone,
I'm currently building an object detection model that should detect cars, people, trucks, etc. in both daytime and nighttime. I have started gathering data for both. I'm not sure whether to train a separate model for daylight and another for night, or to combine the data and train a single model.
Can anyone suggest a data distribution for each class across day and night? I presume it should be uniform. Please correct me if I'm wrong.
E.g. for "person": 700 images in daylight and another 700 images at night.
Any suggestion would be helpful.
Thanks in Advance.
Relevant answer
Answer