Images - Science topic
Explore the latest questions and answers in Images, and find Images experts.
Questions related to Images
I am trying to analyze the size of agarose beads in a cell counter chip. The image was taken using brightfield, so I am having a hard time processing it well enough that ImageJ can distinguish the individual droplets. It would be a lot easier if there were a machine learning algorithm or open-source code that could differentiate the droplets and record the size of each one. I have attached the images with the droplets that need to be processed.
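In case it helps while you look for a dedicated tool: a minimal, non-ML sketch using OpenCV's Hough circle transform, which often works on roughly circular droplets in brightfield after smoothing. The file name, radius range, and pixel-to-micron factor below are placeholders to adapt to your images.

```python
# Minimal sketch: detect roughly circular droplets in a brightfield image
# with OpenCV's Hough circle transform. File name, radius range and the
# pixel-to-micron factor are placeholders.
import cv2
import numpy as np

img = cv2.imread("droplets.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file name
blur = cv2.medianBlur(img, 5)                             # suppress noise before detection

circles = cv2.HoughCircles(
    blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,          # minDist: minimum spacing between centres (px)
    param1=100, param2=30, minRadius=10, maxRadius=80,     # tune the radius range to your beads
)

um_per_px = 0.5  # placeholder calibration factor
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"centre=({x},{y})  diameter={2 * r * um_per_px:.1f} um")
```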

I have exposed the Gafchromic film and then scanned it; now, in MATLAB, I have to write code to convert it to dose and then obtain the calibration curve of the film.
Which languages are used for image processing, and what is the difference between a digital image and digital image processing?
How can I normalize a dataset of spectrograms using Python?
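Two common options are per-spectrogram min-max scaling and dataset-wide z-scoring; a minimal sketch of both, assuming the spectrograms are stacked in a NumPy array:

```python
# Sketch of two common normalizations for a stack of spectrograms
# stored as a NumPy array of shape (n_samples, freq_bins, time_frames).
import numpy as np

specs = np.random.rand(100, 128, 64).astype(np.float32)  # placeholder data

# 1) Per-spectrogram min-max scaling to [0, 1]
mins = specs.min(axis=(1, 2), keepdims=True)
maxs = specs.max(axis=(1, 2), keepdims=True)
specs_minmax = (specs - mins) / (maxs - mins + 1e-8)

# 2) Dataset-wide z-score (zero mean, unit variance over the whole set)
specs_z = (specs - specs.mean()) / (specs.std() + 1e-8)
```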
I'm trying to create an image classification model that classifies plants from an image dataset made up of 33 classes. The total number of images is 41,808. The classes are unbalanced, but that is something my thesis team and I will work on using K-fold cross-validation; but going back to the main problem.
The VGG16 model itself is a pre-trained model from Keras.
My source code should be attached in this question (paste_1292099)
The results of a 15-epoch run are also attached.
What I have done so far is change the optimizer from SGD to Adam, but the results are generally the same.
Am I doing something wrong, or is there anything I can do to improve this model to get it to at least be in a "working" state, regardless of whether it's overfitting or the like, as that can be fixed later?
This is also the link to our dataset:
It is specifically a dataset consisting of medicinal plants and herbs in our region with their augmentations. They are not yet resized and normalized in the dataset.
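For comparison, here is a minimal transfer-learning baseline along the lines described (frozen VGG16 base plus a small head for 33 classes). It is only a sketch, not your code: the input size, learning rate, head layers, and data pipeline are placeholders, and VGG16 expects its own preprocessing.

```python
# Minimal VGG16 transfer-learning sketch for 33 plant classes.
# Input size, learning rate and the data pipeline are placeholders.
# Remember to apply tf.keras.applications.vgg16.preprocess_input to the images.
import tensorflow as tf

base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the convolutional base first; fine-tune later if needed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(33, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=15)  # plug in your tf.data pipelines
```

If a small baseline like this trains reasonably while the full setup does not, the issue is more likely in the data pipeline (preprocessing, label encoding, class imbalance) than in the architecture itself.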

Hello everybody,
I analyzed a picture in ImageJ, but I encountered a problem with the threshold. I tried to count cells with Analyze Particles. Before starting, I adjusted the picture by changing the contrast and applying a Gaussian filter. Until yesterday, this worked exactly as described, but today the program displayed the message: 'the threshold may not be correct (255-255).'
Can someone help me with this issue? Has anyone with the same problem been able to resolve it?
When I try to correlate the GPS coordinates between points obtained from the field and from satellite imagery for the same location, there is a big difference between the two. I assume that this is partly due to satellite image shift. Can anyone tell me how to correlate the two datasets, which are actually the same but differ in coordinates?
I obtained .txrm and .exm file formats from a micro-CT scan in Xradia Context. I want to do the post-processing in Dragonfly, but when I import the files, it asks about pixel and image spacing. Can anyone suggest how I can find these attributes to fill in the window? I am new to Dragonfly; I have watched the lessons, but the raw files used in them already had these attributes. I am also lost regarding which file format I should load. Any help is appreciated.
The main idea is as follows: since it is costly and time-consuming to obtain labelled image datasets, especially in medical imaging, I am trying to measure the similarity between a small target dataset (e.g. a chest X-ray image dataset) and the dataset a pretrained model was built on (e.g. ResNet50 trained on ImageNet), and compare this with the custom dataset to identify similarities.
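A rough sketch of one way to quantify that similarity: embed both image sets with an ImageNet-pretrained ResNet50 (top removed, global average pooling) and compare the mean feature vectors with cosine similarity. The arrays below are placeholders for batches of images already resized to 224x224x3.

```python
# Sketch: embed two image sets with ImageNet-pretrained ResNet50 features
# and compare them with cosine similarity. x_target / x_reference are
# placeholders for batches of images resized to 224x224x3.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

encoder = ResNet50(include_top=False, weights="imagenet", pooling="avg")  # 2048-d features

x_target = np.random.rand(32, 224, 224, 3) * 255.0      # e.g. chest X-rays (placeholder)
x_reference = np.random.rand(32, 224, 224, 3) * 255.0   # e.g. natural images (placeholder)

f_target = encoder.predict(preprocess_input(x_target.copy()), verbose=0).mean(axis=0)
f_reference = encoder.predict(preprocess_input(x_reference.copy()), verbose=0).mean(axis=0)

cos_sim = float(np.dot(f_target, f_reference) /
                (np.linalg.norm(f_target) * np.linalg.norm(f_reference)))
print(f"cosine similarity between mean embeddings: {cos_sim:.3f}")
```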
The image pixel addition could be performed by sliding a window over an unknown image, comparing many locations in the unknown image against a single known image to find the highest total pixel sums. This would locate the known object at the highest-scoring windows.
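What is described here is essentially template matching by cross-correlation, which OpenCV implements directly; a small sketch, where scene.png and template.png are placeholder file names.

```python
# Sketch: slide a known template over an unknown image and score each
# window, keeping the best match. File names are placeholders.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # unknown image
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # known object

# TM_CCORR sums pixel products per window; TM_CCOEFF_NORMED is usually more robust.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

h, w = template.shape
print(f"best match at {max_loc} (score {max_val:.3f}), box {w}x{h}")
```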
I wish to understand color image processing based on the quaternion Fourier transform. Where can I find the basics of this? Is anyone working on this topic in India?
Why do we need digital image processing, which algorithms are used for image processing, and what are the applications of digital image processing?
How many steps are there in digital image processing, and what are its types?
I want to calculate the band structure of Mn3Ge using Quantum ESPRESSO. Mn3Ge is a coplanar antiferromagnet in which the three Mn spins sit on an equilateral triangle at 120 degrees to each other (I am attaching an image file). I am now stuck on how to write the spin configuration in the SCF input file for Mn3Ge. Below is the spin-configuration input I wrote, but it gave me the wrong result. Please help me write the spin configuration for this material.
starting_magnetization(1) = 0.1
starting_magnetization(2) = 0.1
starting_magnetization(3) = 0.1
starting_magnetization(4) = 0.1
starting_magnetization(5) = 0.1
starting_magnetization(6) = 0.1
angle2(1) = 120
angle2(2) = 120
angle2(3) = 120
angle2(4) = 120
angle2(5) = 120
angle2(6) = 120
/
&ELECTRONS
diagonalization = 'davidson'
conv_thr = 1.0d-08
electron_maxstep = 80
mixing_beta = 0.4
/
ATOMIC_SPECIES
Ge 72.64 Ge_sr.upf
Mn 54.938045 Mn_sr.upf
ATOMIC_POSITIONS crystal
Mn 0.1613340000 0.3226680000 0.7500000000
Mn 0.6773320000 0.8386660000 0.7500000000
Mn 0.1613340000 0.8386660000 0.7500000000
Mn 0.8386660000 0.6773320000 0.2500000000
Mn 0.3226680000 0.1613340000 0.2500000000
Mn 0.8386660000 0.1613340000 0.2500000000
Ge 0.6666666700 0.3333333300 0.7500000000
Ge 0.3333333300 0.6666666700 0.2500000000
Area of infarct as a percentage of the total ventricle area, excluding the background.
I have done Rietveld refinement on PXRD data, but I do not know where to find the Final R indices [I > 2sigma(I)], R indices (all data), extinction coefficient, and largest diff. peak and hole.

What deep learning algorithms are used for image processing, and which CNN algorithms are used for image classification?
We did not compare our proposed method with other segmentation techniques because we present our work as a new technique and concept.
What is the most common compression algorithm, and which machine learning algorithm is best for image processing?
image processing and machine learning
Dear all,
do you know what the correct subsample size (number of pixels) is for building a confusion matrix? I am digitizing the true classes on an image to check the accuracy of the classifier. However, digitizing the whole image is too time-consuming, so, so far, I have chosen a subsample representative of the whole area/image. How large should it be relative to the original image?
Thank you very much
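One common rule of thumb comes from Cochran's formula, which gives the total number of reference pixels for a chosen confidence level and margin of error on overall accuracy (often combined with a suggested minimum of roughly 50 samples per class). A small sketch of the calculation; the expected accuracy, margin, and confidence level are assumptions to adjust to your study.

```python
# Sketch: number of reference pixels for an accuracy assessment using
# Cochran's sample-size formula. Expected accuracy, confidence level and
# margin of error are assumptions to adjust.
from math import ceil
from scipy.stats import norm

p = 0.85      # expected overall accuracy (assumption)
e = 0.05      # acceptable margin of error (+/- 5 %)
conf = 0.95   # confidence level

z = norm.ppf(1 - (1 - conf) / 2)         # ~1.96 for 95 %
n = ceil(z**2 * p * (1 - p) / e**2)      # Cochran's formula
print(f"about {n} reference pixels in total")  # then spread them across classes
```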
Hi, I’m a beginner in satellite image analysis. I want to know the lat/lon coordinates of some bursts of a Sentinel-1 image. I looked at the file names in the downloaded zip but couldn't find any promising files (attached: file structure). Can someone show me how to obtain them?
Context: my purpose is to generate a coherence image and project it in QGIS. I used SNAP, following this tutorial up to p. 12 (https://step.esa.int/docs/tutorials/S1TBX%20TOPSAR%20Interferometry%20with%20Sentinel-1%20Tutorial_v2.pdf), but the coordinates were somehow lost from the first step (importing and choosing bursts to produce a split product). I am not sure why, but it apparently happens with other satellites too (https://earthenable.wordpress.com/2016/11/21/how-to-export-sar-images-with-geocoding-in-esa-snap/). I was able to produce the coherence without coordinates, so I am thinking that if I can get the coordinates from the Sentinel file, I can just add them to the GeoTIFF myself.
I also want to ask, is this idea wrong? Are the Sentinel coordinates different from those of the coherence image, since it undergoes back-geocoding?
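Regarding reading the coordinates from the product itself: the annotation XML files inside the SAFE product's annotation/ folder contain a geolocation grid of (line, pixel, latitude, longitude) tie points per sub-swath. A rough sketch of reading them, with the file path as a placeholder:

```python
# Rough sketch: read the geolocation grid tie points (line, pixel, lat, lon)
# from a Sentinel-1 SLC annotation XML. The path is a placeholder for one of
# the XML files inside the product's annotation/ folder.
import xml.etree.ElementTree as ET

ann = "annotation/s1a-iw1-slc-vv-example.xml"  # placeholder path
root = ET.parse(ann).getroot()

for pt in root.iter("geolocationGridPoint"):
    line = int(pt.findtext("line"))
    pixel = int(pt.findtext("pixel"))
    lat = float(pt.findtext("latitude"))
    lon = float(pt.findtext("longitude"))
    print(line, pixel, lat, lon)
```

Note that these tie points describe the original SLC geometry; after split/deburst/coregistration the pixel grid changes, which is why the usual way to get a georeferenced GeoTIFF out of SNAP is to apply a terrain-correction (geocoding) step to the coherence product before export.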
I picked a single colony of Salmonella Typhimurium and tried amplification with a gene-specific primer several times, but I am not getting any results.
Please suggest how I can troubleshoot this.
Thank you

Is digital image processing machine learning, and which algorithms are used for image processing in machine learning?
Why is CNN better than SVM for image classification, and which is better for image classification: machine learning or deep learning?
Is digital image processing part of AI, and is digital image processing used in computer vision?
I am following the vignette's protocol for design II in adehabitatHS using my data. But when I try to rasterize the 14 polygons:
>pcc <- mcp(locs[, "Name"], unout = "km2")
>pcc  # it is a SpatialPolygonsDataFrame showing the 14 polygons
>image(maps)
>plot(pcc, col = rainbow(14), add = TRUE)
>hr <- do.call("data.frame", lapply(1:nrow(pcc), function(i) {over(maps, geometry(pcc[i, ]))}))
>hr[is.na(hr)] <- 0
>names(hr) <- slot(pcc, "data")[, 1]
>coordinates(hr) <- coordinates(maps)
>gridded(hr) <- TRUE
I got the following Error:
suggested tolerance minimum: 4.36539e-08
Error in points2grid(points, tolerance, round) :
dimension 2 : coordinate intervals are not constant
I would appreciate any suggestion on how to solve this problem.
I cannot figure out whether this is a problem with my rasters (4 images) or with the polygons, although I strongly believe the latter are the issue.
Has anyone applied the Fiji/ImageJ software for analysis of a CAM or RAAR assay for angiogenesis? I need some information.
Can any of the researchers in the field of image processing help me and give me some novel ideas to begin research with?
I would be so grateful ^_^
We want to plot polar diagrams of Young's modulus and Poisson's ratio for a 2D monolayer material. If you have any script or software, please suggest it. Images are attached here.
Thank You,
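In case a short script is enough: a sketch that plots E(theta) and nu(theta) on polar axes from the 2D elastic constants C11, C22, C12, C66, using the closed-form expressions commonly quoted for 2D sheets. Please check the expressions against your own reference; the Cij values below are placeholders.

```python
# Sketch: polar plots of orientation-dependent Young's modulus E(theta) and
# Poisson's ratio nu(theta) for a 2D sheet from its elastic constants.
# The closed-form expressions below are the ones commonly quoted for 2D
# materials; please verify them against your reference. Cij are placeholders (N/m).
import numpy as np
import matplotlib.pyplot as plt

C11, C22, C12, C66 = 100.0, 100.0, 20.0, 40.0  # placeholder 2D elastic constants

theta = np.linspace(0, 2 * np.pi, 721)
s2, c2 = np.sin(theta) ** 2, np.cos(theta) ** 2
d = (C11 * C22 - C12 ** 2) / C66 - 2 * C12
denom = C11 * s2 ** 2 + C22 * c2 ** 2 + d * s2 * c2

E = (C11 * C22 - C12 ** 2) / denom
nu = (C12 * (s2 ** 2 + c2 ** 2)
      - (C11 + C22 - (C11 * C22 - C12 ** 2) / C66) * s2 * c2) / denom

fig, axes = plt.subplots(1, 2, subplot_kw={"projection": "polar"}, figsize=(9, 4))
axes[0].plot(theta, E)
axes[0].set_title("E(theta)")
axes[1].plot(theta, nu)
axes[1].set_title("nu(theta)")
plt.tight_layout()
plt.show()
```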

Dear researchers, according to the attached image, is there a way to find the standard deviation based on the sample size and mean?
Kind regards
Adel

I am doing my graduate research on Through-the-wall radar imaging (TWRI) to detect and image human and animal (cat) targets. I need the dataset to test my simulation. Can someone help me with this?
Hi! I am planning to use the ImageJ Angiogenesis Analyzer as the image analyzer for my research study. The method we use is the CAM assay. I am still confused about how to use the software. Also, what image size and what stereo microscope magnification should be used? Thank you!
I have three experimental groups, with corneal imaging performed on Days 1, 3, 5, 7, 9, 11, and 14.
Which image analysis software is recommended to measure the diameter of the ocular infected area?
Thank you!
I am currently working on an image processing topic that uses artificial intelligence.
I need to know how I can relate two events that are in different time sequences on a single map using AI.
I took MRI phantom images at various concentrations of iron (100, 50, 25, and 12.5 ug/mL) for my iron oxide based contrast agent system. The images came out as expected, with more darkening at higher concentrations. I have been introduced to the image analysis software Horos, where I can analyze these images for pixel intensity, but I am wondering how to use that to calculate a relaxivity value.
Or do you suggest other software or programs for calculating relaxivity from a DICOM file? Thank you!
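For context, relaxivity is usually reported as the slope of the relaxation rate (1/T1 or 1/T2) versus contrast-agent concentration, with T1 or T2 fitted per concentration from a series of images at different TR or TE (e.g. ROI mean intensity versus echo time fitted to a mono-exponential). A minimal sketch of the final linear fit; the T2 values and concentrations below are placeholders for what you would measure.

```python
# Sketch: relaxivity r2 as the slope of 1/T2 versus concentration.
# The T2 values are placeholders; in practice you fit them per concentration
# from ROI mean intensity vs. echo time (mono-exponential decay).
import numpy as np

conc_mM = np.array([1.79, 0.90, 0.45, 0.22])   # placeholder: ug/mL converted to mM of Fe
T2_ms = np.array([20.0, 35.0, 60.0, 100.0])    # placeholder fitted T2 values (ms)

R2 = 1000.0 / T2_ms                            # relaxation rate in 1/s
r2, intercept = np.polyfit(conc_mM, R2, 1)     # slope = relaxivity in 1/(mM*s)
print(f"r2 = {r2:.1f} mM^-1 s^-1 (intercept {intercept:.2f} s^-1)")
```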
I have a signal with dimensions of 500 rows and 10,000 columns. I want to convert it to a unique image and extract features using GLCM in MATLAB. Any help regarding this?
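MATLAB has graycomatrix/graycoprops for exactly this; as a reference, here is the same idea sketched in Python with scikit-image, where the 500 x 10000 matrix is simply rescaled to 8-bit and treated as an image (a placeholder choice; a spectrogram or other 2D mapping may suit your signal better).

```python
# Sketch: treat the 500x10000 signal matrix as an 8-bit image and extract
# GLCM texture features with scikit-image. How the signal is mapped to 2D
# is a placeholder choice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

signal = np.random.randn(500, 10000)  # placeholder signal

# Rescale to 0..255 and quantize to 8-bit grey levels
img = np.uint8(255 * (signal - signal.min()) / (np.ptp(signal) + 1e-12))

glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())
```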
Who has worked on signal detection in a moving video stream? I am interested in methods for automatic recording when a given object appears in the image stream.
How do I calculate Land Surface Temperature from a Landsat 8 Level 2 image?
Do I have to convert DN (TOA) to radiance, then convert radiance to brightness temperature, then emissivity, and finally the LST calculation?
OR is it enough to rescale the thermal bands and convert kelvin to celsius degrees (since level 2 is already corrected)?
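For Collection 2 Level-2 products, the surface temperature band (ST_B10) is already a surface temperature product, so the TOA-radiance / brightness-temperature / emissivity chain is not needed: you only apply the published scale factor and offset and convert Kelvin to Celsius. A minimal sketch; the scale and offset below are the values published for Collection 2 Level-2, so double-check them against your scene's MTL metadata.

```python
# Sketch: LST in Celsius from a Landsat 8 Collection 2 Level-2 ST_B10 band.
# Scale factor and offset are the published Collection 2 L2 values; confirm
# them in your scene's MTL metadata. The file path is a placeholder.
import numpy as np
import rasterio

with rasterio.open("LC08_example_ST_B10.TIF") as src:  # placeholder path
    dn = src.read(1).astype(np.float64)

lst_kelvin = dn * 0.00341802 + 149.0                          # rescale DN to Kelvin
lst_celsius = np.where(dn == 0, np.nan, lst_kelvin - 273.15)  # mask fill value 0
print(np.nanmin(lst_celsius), np.nanmax(lst_celsius))
```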
I have calculated the euclidean distance between two coordinates given in the image using the following formula:
dist(p,q)=\sqrt{(p_{x}-q_{x})^{2}+(p_{y}-q_{y})^{2}+(p_{z}-q_{z})^{2}}
What should the unit of the distance be in this case? For example, the distance between two points is 6.23.
Should I consider nm or angstrom?

Kindly give suggestions regarding the calculation of dendritic spine density. I am in doubt whether I have to count the spine numbers directly under the microscope or from acquired images. To find the percentage of immature and mature spines, are there any formulas to be used, like a preference index? How many neurons should I use to calculate the spine numbers? I could not find the exact way.
LST = BT / (1 + w * (BT / p) * Ln(ε)) (formula 1)
What is the name of this method?
Additionally, we use the following formula for Landsat 8:
LST = (BT / (1 + (0.00115 * BT / 1.4388) * Ln(ε))) (formula 2)
What are the differences between Formula 1 and Formula 2? If we use separately Formula 1 and Formula 2 to calculate the LST of one Landsat 8 image, will the results be the same?
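One quick way to see how close they are is to plug the same brightness temperature and emissivity into both. A small sketch; the wavelength (10.895 um for band 10) and p = h*c/(Boltzmann constant), about 14388 um*K, used for formula 1 are assumptions you should match to the source you are following.

```python
# Quick numeric check: plug the same brightness temperature and emissivity
# into both formulas. The band 10 wavelength and p = h*c/k_B used for
# formula 1 are assumptions; match them to your source.
import numpy as np

BT = 300.0   # brightness temperature in Kelvin (example value)
eps = 0.97   # surface emissivity (example value)

w = 10.895   # um, Landsat 8 band 10 central wavelength (assumption)
p = 14388.0  # um*K, h*c divided by the Boltzmann constant

lst1 = BT / (1 + w * (BT / p) * np.log(eps))             # formula 1
lst2 = BT / (1 + (0.00115 * BT / 1.4388) * np.log(eps))  # formula 2 as written
print(f"formula 1: {lst1:.3f} K, formula 2: {lst2:.3f} K")
```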
Image mosaicking using Python, stitching using feature matching techniques.
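A minimal sketch using OpenCV's high-level stitcher, which runs feature detection, matching, and blending internally; the file names are placeholders. For finer control, the usual manual route is ORB/SIFT keypoints, descriptor matching, cv2.findHomography, and cv2.warpPerspective.

```python
# Minimal image-mosaic sketch with OpenCV's high-level stitcher
# (it performs feature detection, matching and blending internally).
# File names are placeholders.
import cv2

images = [cv2.imread(f) for f in ("left.jpg", "middle.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", pano)
else:
    print("stitching failed, status code:", status)
```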
Please assist by sharing articles that examined self-image as a variable; I am looking for existing instruments for my study.
I want to convert text to a special form that I want. How can I use GAN networks to build this model? I think it's similar to colorizing a grayscale image.
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what are your ideas about new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
Hello everyone,
How to create a neural network with numerical values as input and an image as output?
Can anyone give a hint/code for this scenario?
Thank you in advance,
Aleksandar Milicevic
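One common pattern is a small decoder: dense layers expand the numeric vector, a Reshape turns it into a low-resolution feature map, and Conv2DTranspose layers upsample it to the target image. A minimal Keras sketch; the number of inputs (10) and output size (64x64x1) are placeholders.

```python
# Minimal sketch: numeric vector in, image out, via a dense plus
# transposed-convolution decoder. Input dimension and output image
# size are placeholders.
import tensorflow as tf

num_inputs = 10  # placeholder: number of numeric features

model = tf.keras.Sequential([
    tf.keras.Input(shape=(num_inputs,)),
    tf.keras.layers.Dense(8 * 8 * 64, activation="relu"),
    tf.keras.layers.Reshape((8, 8, 64)),                                                     # 8x8 feature map
    tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),    # 16x16
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),    # 32x32
    tf.keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),  # 64x64x1
])

model.compile(optimizer="adam", loss="mse")  # train on pairs (numeric vector, target image)
model.summary()
```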
Additionally, what factors should be considered when selecting an appropriate image processing algorithm for lineament extraction?
Hi! I am working with MODIS images and I need to estimate EVI. To do it correctly, I first need to evaluate the quality assessment data layer and overlay it with the EVI layer to discard corrupted pixels. Any idea of a tutorial for doing this with QGIS?
I am currently conducting research on body image satisfaction and need to find appropriate instruments for measurement. I am looking for tools or questionnaires that specifically assess body image satisfaction among participants.
As I am still a final-year undergraduate student, I prefer to use instruments that are freely available and do not require any payment or subscription.
If anyone could suggest such instruments or questionnaires that have been used in previous research or are commonly available, I would greatly appreciate it.
Thank you in advance!
What is the state of the art in multimodal 3D rigid registration of medical images with Deep Learning?
I have a 3d multimodal medical image dataset and want to do rigid registration.
What is the state of the art in 3D multimodal rigid registration?
Example of the shape of the data:
The fixed image is 512*512*197 and the moving image is 512*512*497.
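Learning-based registration methods exist, but for purely rigid multimodal alignment a classical mutual-information registration (e.g. in SimpleITK) remains a strong baseline worth running first. A sketch, with file names as placeholders; the differing z-extents (197 vs 497 slices) are handled by resampling the moving image onto the fixed grid.

```python
# Sketch: classical multimodal rigid registration baseline with SimpleITK
# (Mattes mutual information + Euler3D rigid transform). File names are
# placeholders for your fixed and moving volumes.
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)    # placeholder path
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)  # placeholder path

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal-friendly metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                          moving.GetPixelID())
sitk.WriteImage(resampled, "moving_registered.nii.gz")
```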
Hi, I am trying to perform a radiometric calibration of an ASTER image in ENVI. I watched a YouTube tutorial where they use the Radiometric Calibration tool, but when I tried to do it on my computer, the Radiometric Calibration tool is not displayed. Does anyone have an idea why? I have already restarted both the program and the computer, but it doesn't seem to help.

I have performed anti-Oct-4 staining with A488 on a colony of hiPSCs. In ZEN, I have applied an intensity channel (rainbow) to the image. As you can see, the highest intensity is located in the nucleus of the cells.
Is there any way to add an intensity scale bar to the image?


As far as I have found, most CNN backbones (ResNet, ConvNeXt, Inception, EfficientNet, etc.) work in RGB. I have a dataset of greyscale images. My question is: what is the standard way to convert greyscale to RGB in order to make it work with the available RGB-based CNN models?
What I am currently doing is: `adapter = tf.keras.layers.Conv2D(3, 1)`, which essentially takes 1 channel as input and outputs 3 channels with a kernel size of 1. An alternative would be to simply copy the grey value (say 140) to RGB (140, 140, 140). Are there better approaches than the ones I already mentioned?
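Both approaches mentioned are common. Channel replication is usually done with tf.image.grayscale_to_rgb (or by stacking the channel three times), and the learned 1x1 adapter is also reasonable since it lets the network pick its own channel mixing. A short sketch of the replication route in front of a pretrained backbone; the backbone choice and number of classes are placeholders.

```python
# Sketch: feed 1-channel images to an RGB-pretrained backbone by replicating
# the grey channel three times with tf.image.grayscale_to_rgb.
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 1))
x = tf.keras.layers.Lambda(tf.image.grayscale_to_rgb)(inputs)   # (H, W, 1) -> (H, W, 3)
backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                           pooling="avg")       # placeholder backbone
x = backbone(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)    # placeholder: 10 classes
model = tf.keras.Model(inputs, outputs)
```

Another option sometimes used is to average the pretrained first-layer RGB filters into a single-channel filter, which avoids tripling the input but requires editing the backbone weights.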
Dear everyone,
what would be the most advisable course of action if I have found that one of my results (an image) was used without my permission by researchers from my previous university? They published it without acknowledging or mentioning me and, moreover, they did not know about the sample preparation protocol I used, so their interpretation is completely wrong and misleading. They cut a part out of the original image and photoshopped a fake info bar to make it look as if the two images (mine and theirs) were both measured with the same instrument and conditions.
I contacted the journal, referring to their own publishing policy and providing unambiguous proof of my authorship of this image. However, the journal, knowing all the details and acknowledging that the image does belong to me, did not seem to be invested in the scientific part and immediately forwarded the details of my inquiry to those authors, offering to "maybe acknowledge me". These authors have already plagiarised my manuscript before, and I successfully reclaimed it and got my name on it, but all of that was with the help of the editorial board, whereas in this case the editorial board is rather attacking me.
A question: what to do if the journal is protecting plagiarism?
Thank you!
Dear all,
I am working on Optical Coherence Tomography (OCT) and I need to analyse hundreds of OCT images automatically. Is there any algorithm used for OCT image analysis that does not require programming skills?
Thanks
I think it would be easier to get the image first.
I need to build a matrix, and without the image (which is really complex) I think it is nearly impossible.

As a researcher in image classification, I see that images are the primary data used in various domains such as agriculture, health, education, and technology. However, ethical considerations are a significant concern in this area. How should ethics be handled and taken into account?
Hello all
I am currently simulating wind turbines without connecting the generator (see attached image). I have adopted a steady wind speed of 12 m/s, but the system response (mechanical energy) is incorrect. Where is the problem? Who can help me?
Thanks.
Currently, I am doing research in image filtering. I came across a strange result where, with an increase in the sigma value, PSNR decreases while SSIM increases.
I will attach a document where I have discussed it more thoroughly.
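For anyone who wants to reproduce this kind of comparison, both metrics are available in scikit-image. A small sketch that sweeps the Gaussian sigma and reports PSNR and SSIM against the clean reference; the test image and noise level are placeholders.

```python
# Sketch: PSNR and SSIM of a Gaussian-filtered noisy image against the clean
# reference for several sigma values. Test image and noise level are placeholders.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())
noisy = np.clip(clean + np.random.normal(0, 0.05, clean.shape), 0, 1)

for sigma in (0.5, 1.0, 2.0, 4.0):
    filtered = gaussian(noisy, sigma=sigma)
    psnr = peak_signal_noise_ratio(clean, filtered, data_range=1.0)
    ssim = structural_similarity(clean, filtered, data_range=1.0)
    print(f"sigma={sigma}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```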

Bio-medical Image Processing
I'm trying to predict a disease based on both facial features and questionnaires, but I have two different datasets: one contains the questions and answers with labels, and the other contains images with labels. The datasets are different, but they predict the same thing. How can I merge and use these datasets together for prediction using deep learning?
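If the two datasets cover the same patients, the usual design is a two-branch network: a CNN branch for the face image and a dense branch for the questionnaire answers, concatenated before the classifier. If they are different patients, the datasets cannot be merged sample-wise; you can only train two separate models and combine their predictions (e.g. average the probabilities). A minimal two-branch Keras sketch, with input shapes as placeholders; it assumes each sample has both an image and a questionnaire.

```python
# Minimal two-branch sketch: image branch + questionnaire branch, fused
# before the classifier. Input shapes and layer sizes are placeholders.
# Assumes each training sample has BOTH an image and a questionnaire.
import tensorflow as tf

img_in = tf.keras.Input(shape=(128, 128, 3), name="face_image")
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(img_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

q_in = tf.keras.Input(shape=(20,), name="questionnaire")      # 20 answers (placeholder)
y = tf.keras.layers.Dense(32, activation="relu")(q_in)

merged = tf.keras.layers.Concatenate()([x, y])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)  # binary disease prediction

model = tf.keras.Model([img_in, q_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```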
The JACoP plugin is widely used for co-localization analysis.
How do I set the scale bar in an image using ImageJ? What should be entered for "known distance" in the Set Scale menu?
Hi 👋
I use ImageJ to analyze my western blot bands. Previously, I selected a band and then analyzed it by plotting the curve and calculating the area under the curve. Recently, when I analyze a band, it appears with no curve drawn. What may be the reason? I am a new user of this application.
