Images - Science topic
Explore the latest questions and answers in Images, and find Images experts.
Questions related to Images
Hello! I'm new to using AFM to analyze nanoparticles in a solution, and I have a question. I see a "line fit 2.79 nm" bar on the right side of my image. Does this refer to the height of the features in the image? For example, does the white area correspond to an approximate height of 2.79 nm? I noticed in another sample, which doesn't have nanoparticles, that it shows "line fit 700 pm." From this, I’m guessing that this value might represent the maximum height. Could you confirm this? Thank you!
This is a problem that arises when using FTK to image a drive in computer forensics.
I am a biomedical engineer with experience in deep learning applications for classifying and predicting various diseases. Recently, I have been working on projects involving disease classification and prediction through deep learning, with a particular focus on CNNs and advanced image processing techniques. Currently, I am preparing for a master’s degree in data science to deepen my expertise in this field. I am open to collaborating on research articles and can assist as an editor or peer reviewer for relevant journals. Additionally, I am available to work as a volunteer researcher at universities, contributing my skills in biomedical engineering, bioprinting, and microbiology.
If you are interested in collaboration or if I can assist with your research, please feel free to contact me at biomedical.emr@gmail.com.
Dear all,
I am looking for 2 or more brain slice images with a scale (within or outside the image) and a minimum resolution of 300x400. The image could capture the whole slice or a specific area: half the slice, a quarter, and so on. The important thing is that the image should capture at least area/region fields, not a super-detailed 20/40x zoom like those used for neuron recognition/labeling, but something in the range of 1/5/10x zoom (mid to wide field). Fluorescence images would be great. These images are only for validation purposes for a software tool aimed at reconstructing a whole 3D brain starting from slice images and performing fluorescence recognition and feature extraction on each slice. Your images won't be used for artificial network training or published anywhere; they are only for the final stage of validation. I have already used my own slices and the Allen Brain Atlas repository. Any other suggestion will be more than welcome!
Image Processing Algorithms, Quantum Computing.
How can I load a Level-2 Landsat 8 image in the ENVI software?
Dear researchers, I tried using the IHC Profiler plugin in ImageJ to quantify nuclear DAB staining. I followed the instructions in the original article: "Varghese F, Bukhari AB, Malhotra R, De A (2014) IHC Profiler: An Open Source Plugin for the Quantitative Evaluation and Automated Scoring of Immunohistochemistry Images of Human Tissue Samples. PLoS ONE 9(5): e96801. doi:10.1371/journal.pone.0096801".
I find it difficult to identify nuclear positivity when working with the "threshold" tool: very grainy areas appear that do not represent the nuclei as nuclei; instead, too many dots are seen within each nucleus.
I use images captured at 40x magnification from human oral squamous cell carcinoma tissues. Esteemed researchers who have used this method of quantification, please share your experiences, and kindly be moderately descriptive in your responses. Thanking you in advance.
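In case it helps with the grainy threshold problem, here is a minimal scikit-image sketch (not the IHC Profiler pipeline itself; the file name, smoothing sigma, and Otsu threshold are assumptions to tune on your own 40x images): colour deconvolution isolates the DAB channel before thresholding, and a light Gaussian blur suppresses the dotted texture inside nuclei.

```python
import numpy as np
from skimage import io, color
from skimage.filters import gaussian, threshold_otsu

rgb = io.imread("ihc_40x.tif")[..., :3]        # drop an alpha channel if present
hed = color.rgb2hed(rgb)                       # haematoxylin / eosin / DAB separation
dab = hed[..., 2]                              # DAB channel only

# Light smoothing suppresses the grainy dots inside nuclei before thresholding.
dab_smooth = gaussian(dab, sigma=2)
mask = dab_smooth > threshold_otsu(dab_smooth)
print(f"DAB-positive area fraction: {mask.mean():.3f}")
```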
particularly models that achieve similar performance levels.
Hi, can anyone please help me with the MDA-MB-231 cell culture protocol? I have been growing the MDA-MB-231 cell line, but I am new to cell culture, so any help in this regard would be much appreciated.
The problem is that the cells grow very slowly at P3 or P4: seeding ~0.3 million cells in a T25 flask takes almost 4-5 days to reach 80% confluence. Initially, at P1, I was using DMEM + 10% FBS + 5% pen-strep and the cells grew really slowly, so I changed the medium to 20% FBS. Switching to 20% FBS helped at P1/P2, but at later passages like P4/P5 the cells again grow very slowly, even with 20% FBS.
I also see something odd in the phase-contrast image (attached): there appear to be a lot of vesicles inside the cells. Is this normal, or do I have contamination? For other cell lines the phase-contrast image does not look this contrasty, so why do these cells show so much contrast inside the cell?
Also, any suggestions for a better culture medium for MDA-MB-231?
Thanks
SAYAN
Hi, does anyone know how to edit an IHC image, and what software to use? I want to remove a bubble that appears in the background and change the background color.
I am doing some FIB cuts and analyzing my layer stack. I always see dots in the InP and InGaAsP layers, but I don't know their origin. These dots do not appear when I take an SEM image of a cleaved sample, so I assume their origin is related to the ion beam process. I have attached an SEM image taken after a FIB cut to make clear which dots I am referring to. Any help or reference to resolve this question is highly appreciated!
Instead of directly using slice-wise matrix completion methods?
What do you know about color memory?
There is no red color in this image.
Your brain fills in the red color.
The image is made entirely of light blue, black, and white.
Zoom in on the image and you'll see.
Hello,
I obtain western blot bands in the desired region of the membrane, but there is no space between the bands (image attached for reference). May I know what could be the reason?
TIA
How do we typically choose between convolutional networks and vision-language models for supervised learning tasks on images and videos?
What design considerations do we need to make?
We can use RF data for deep learning model training. Our main target is converting RF images to B-mode images. How can we solve this using a GAN? If anyone has good code experience, please share it with me.
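For what it's worth, if your RF frames and the corresponding B-mode frames come from the same acquisitions (i.e., they are paired), a pix2pix-style conditional GAN is the usual starting point. Below is a minimal PyTorch sketch of that training step; the tiny networks, the (N, 1, H, W) tensor shapes, and the loss weights are placeholders, not a tested ultrasound pipeline.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):                       # tiny encoder-decoder stand-in for a U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):                   # PatchGAN on the (RF, B-mode) pair
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, 1, 1),            # per-patch real/fake logits
        )
    def forward(self, rf, bmode):
        return self.net(torch.cat([rf, bmode], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(rf, bmode, l1_weight=100.0):
    fake = G(rf)
    # --- discriminator: real pair vs generated pair ---
    d_real, d_fake = D(rf, bmode), D(rf, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- generator: fool D while staying close to the true B-mode (L1) ---
    d_fake = D(rf, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, bmode)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```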
How do I reference/attribute an image from a scientific paper used in an educational YouTube video?
I'm making a video discussing various brain regions, but I am unsure where to find source images that are permissible to use. Can I take them from papers and reference them? What is the protocol?
I made two membrane samples, of 91% CuO and 93% CuO, using the sol-gel method. The SEM image shows evenly distributed grains, but the surface is slightly rough.
Hi there. I am new to the field of image corrections, and I was wondering: given a dark-field and a flat-field image from a test, is it possible to obtain a formula with offset and gain values that I can apply to any image coming from the sensor, in order to perform a relative radiometric correction?
From what I've read, the dark image gives me the offset value, right? And the relative gain is (flat - dark)/mean(flat - dark). Is this right?
Considering that these are test images, if I want to apply this formula to real images from the sensor, then I'm guessing that I have to obtain default values from the gain and offset matrices, maybe by taking their means?
I'm not sure if this is the way to go. I've also seen that for the dark field I could make a histogram and take the value at the highest peak as my offset, but I'm not sure how that would work for the flat image.
Any help is appreciated, as I am a little bit lost as to the best steps here.
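As a starting point, here is a minimal NumPy sketch of the standard two-point (offset/gain) correction under the assumptions above: `dark` and `flat` are per-pixel averages of several dark-field and flat-field frames taken at the same settings as the images to correct, and the file names are placeholders. Keeping the full per-pixel matrices is usually better than collapsing them to single mean values.

```python
import numpy as np

dark = np.load("dark_mean.npy").astype(np.float64)   # per-pixel offset
flat = np.load("flat_mean.npy").astype(np.float64)

signal = flat - dark                     # note the order: flat minus dark
gain = signal / signal.mean()            # per-pixel relative gain, average 1.0
gain[gain <= 0] = 1.0                    # guard against dead/hot pixels

def correct(raw):
    """Subtract the offset, then divide out the per-pixel relative gain."""
    return (raw.astype(np.float64) - dark) / gain
```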
Nowadays, scientific illustration has become an integral part of publications. Apart from original research images, a graphical abstract is imperative for journal submission; it often receives more attention than the main article itself. As a person with mediocre drawing skills, I have always been fascinated by published graphical abstracts. In recent times, many illustration-making software tools have come to the rescue. Many provide free or paid services such as icon libraries, flow charts, etc., which form the basis for creating illustrations according to one's requirements. Some provide templates that can be modified by paying a subscription. What perplexes me is: are these templates making us copycats?
For example, a simple Google image search returned exact matches in more than 50 publications, where an image template was only slightly modified. As someone who looks at the images first, this makes it really confusing to identify a publication by its graphical abstract. In essence, these templates fail to create the hype/curiosity among the audience that they were originally aimed at, making all our work look alike, like "Agent Smith". Is this acceptable?
Hi, I'm Prithiviraja. I'm currently building a deep learning model to colorize SAR images. I have come across a lot of resources that use only ASPP for feature extraction from SAR images. I'm planning to use both an FPN and ASPP for that process, even though FPN is mostly used for object detection. Kindly tell me your suggestions.
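In case it is useful, here is a minimal PyTorch sketch of an ASPP block, roughly following the DeepLab design; the dilation rates and channel counts are placeholders. An FPN could consume its output as the deepest pyramid level, which is one way to combine the two.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel dilated convolutions at several rates, then a 1x1 projection."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.project(torch.cat(feats, dim=1))

# e.g. aspp = ASPP(2048, 256) on a ResNet backbone's last feature map
```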
Is it correct to convert a .raw file to the .jpg extension and then calculate the intensity profile?
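One note: JPEG is 8-bit and lossy, so the bit-depth reduction and compression artifacts will distort the profile. Here is a minimal NumPy sketch of reading the .raw directly instead; the width, height, and dtype are assumptions that must match how your detector actually writes the file.

```python
import numpy as np

width, height, dtype = 2048, 2048, np.uint16      # assumed sensor format
img = np.fromfile("frame.raw", dtype=dtype).reshape(height, width)

row_profile = img[height // 2, :]                 # intensity profile along the middle row
print(row_profile.min(), row_profile.max())
```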
Dear All:
I want to quantify the fluorescence intensity of images acquired with a fluorescence microscope.
I would really appreciate it if someone could tell me how to use ImageJ correctly.
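While you wait for ImageJ-specific advice, here is a minimal scikit-image sketch of the measurement ImageJ's Analyze > Measure typically gives per object (area, integrated intensity, and a background-corrected total, often called CTCF). The file name and the crude Otsu segmentation are assumptions to adapt to your images.

```python
import numpy as np
from skimage import io, filters, measure

img = io.imread("fluorescence.tif").astype(np.float64)
mask = img > filters.threshold_otsu(img)          # crude cell/background split

background = img[~mask].mean()                    # mean background intensity
labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=img):
    integrated = region.intensity_image[region.image].sum()
    ctcf = integrated - region.area * background  # corrected total cell fluorescence
    print(region.label, region.area, ctcf)
```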
I want to check whether there is any pathology in my mice regarding their valve, but I am not sure if it's the one on the right side of this image. I sectioned a control one and saw it much more clearly.
Following laser and source replacement of one of our Bruker Tensor 27 units, the calibration peak from the laser path through the empty chamber is registering a good amplitude, but it is far out of the normal range. I'm seeing a peak position at ~65000, when it should be between 58000 and 62000.
Is there any way to fix this? So far, tweaking the interferometer position only decreases the signal amplitude and does not alter the peak position coming in. The laser position in the holder seems fine. Any help would be appreciated.
A failed OQ and an image of the display are attached. Thanks!
Why is the image brightness inconsistent at lower magnifications in an SEM, showing a gradient where one part of the image is brighter than the other, regardless of the location? I notice that the left side is consistently brighter at low magnifications, but this effect is not visible at higher magnifications.
Corporate Social Responsibility (CSR) has emerged as a critical factor in shaping brand image perception among various stakeholder groups. Research indicates that CSR initiatives can significantly enhance a company's reputation and brand image when implemented authentically and strategically. For instance, a study by Martínez et al. (2014) found a strong positive correlation between consumers' perception of CSR and both functional and affective aspects of brand image[1]. This suggests that companies engaging in genuine CSR efforts can effectively improve their overall brand perception among customers.
The impact of CSR extends beyond customers to other key stakeholder groups, including employees, shareholders, and broader community stakeholders. Employees, in particular, play a crucial role in this dynamic. When companies involve employees in CSR activities and communicate their efforts effectively, it can lead to increased job satisfaction, loyalty, and pride in the organization. This, in turn, can transform employees into brand ambassadors, further enhancing the company's image externally[1]. For shareholders, CSR initiatives can signal long-term value creation and risk mitigation, potentially improving their perception of the brand and its future prospects.
However, it is essential to note that the positive influence of CSR on brand image is contingent upon avoiding greenwashing or superficial green labeling. Authenticity and transparency in CSR efforts are crucial for building trust and credibility among all stakeholder groups. As Bianchi et al. (2019) highlight, a company's reputation and international presence play a significant role in shaping consumer perceptions and attitudes toward the brand[1]. Therefore, to maximize the positive impact of CSR on brand image, companies must ensure their initiatives are genuine, aligned with their core business values, and effectively communicated to all stakeholder groups. This approach can lead to a more robust and positive brand image across customers, employees, shareholders, and the broader community.
I am trying to run a PCR to verify insertion of my construct into the AAVS1 locus in iPSC using CRISPR.
I designed three primer pairs to amplify the left and right insertion regions (one binding inside the insert, one binding outside; see image), and one primer pair to amplify a region inside the insert. The insert contains a fluorescent protein, which I can see expressed in the cells under the microscope, so I am pretty sure that the insertion has worked correctly; however, I cannot get any specific PCR product for sequencing. (Even if it has been inserted unspecifically somewhere in the genome, since I am seeing the fluorescent reporter in the cells, I would expect at least to get a positive result for the "internal" region.)
I used DNAzol to purify gDNA from the cells, and when I checked it on a gel I noticed two additional bands, which I thought might be rRNA (see image); when I treated the samples with RNase, the bands disappeared, so I continued happily with the PCR.
For the "internal" region, I am able to use the plasmid as a control, and here I can see a specific PCR product with the expected size, however, for all other plasmid / gDNA template combinations, I get a huge amount of large-sized unspecific PCR products (see second image). Which is why I am currently suspecting that something is not right with the gDNA? But it looks pretty good on a gel.
I am using KOD1 polymerase (KOD1 master mix) according to the manufacturer's instructions for amplification from gDNA:
PCR: Total 25 µl per reaction
1.25 µl DMSO
1 µl primer fwd [10 µM]
1 µl primer rev [10 µM]
8.25 µl H2O
12.5 µl KOD master mix
0.5 µl DNA (= 25ng)
Init. Denat. 94°C 1.5 min
Denat. 94°C 5 sec
Anneal 58°C 5 sec
Extension 68°C 1 sec
and also tried
Init. Denat. 94°C 3 min
Denat. 94°C 45 sec
Anneal 58°C 45 sec
Extension 68°C 1 min
I have triple-checked the specificity of the primers and compared them with other primers used in the literature for the same purpose (AAVS1 locus). I have re-designed new primers that bind in slightly different places. I have tried different elongation/annealing times and temperatures... It always looks the same (large unspecific products).
In a last-ditch effort, I cut out pieces of gel from the "unspecific" results around the size where I would expect the PCR product and repeated the PCR with those as a template. I got some promising-looking results on a gel, but when I sent them for sequencing, it was all unspecific.
I am currently at my wit's end and hope someone else has seen something similar and was able to solve it in the end!!
(Plan B will be to re-do the DNA extraction and try again from the beginning I guess...)
I am trying to use the Morlet wavelet on the BCIC-IV-2a dataset to create a 2D image. Since the data is 1000 samples long with 22 channels, after applying the transform the shape becomes (22, 127, 1000), where 127 is of course the number of wavelet scales (coefficients).
My aim is to apply a pre-trained ResNet model to this image; for that, I resized it from (127, 1000) to (224, 224).
Can you suggest further steps for proper training? Or have I missed some necessary processing of the wavelet image here?
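One thing worth trying: instead of saving (127, 1000) pictures and resizing them, keep the data as a 22-channel tensor, resize with interpolation, and swap the ResNet stem for a 22-channel one. A minimal PyTorch sketch under those assumptions (ResNet-18, the 4 motor-imagery classes of BCIC IV-2a; the stem-initialisation trick is a common heuristic, not part of any dataset spec):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

x = torch.randn(8, 22, 127, 1000)                 # batch of per-trial scalograms
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# Replace the 3-channel stem with a 22-channel one, initialised from the
# pretrained RGB weights (channel-averaged, rescaled to keep activation scale).
old = net.conv1
net.conv1 = nn.Conv2d(22, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    w = old.weight.mean(dim=1, keepdim=True).repeat(1, 22, 1, 1) * (3.0 / 22.0)
    net.conv1.weight.copy_(w)
net.fc = nn.Linear(net.fc.in_features, 4)         # 4 motor-imagery classes
logits = net(x)
```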
How can I check the randomness of image data in a Python program?
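A minimal sketch of two common indicators, assuming 8-bit image data and a placeholder file name: Shannon entropy of the pixel histogram (close to 8 bits/pixel for random data) and a chi-square test against a uniform histogram. These are necessary-but-not-sufficient checks, often used in image-encryption evaluations.

```python
import numpy as np
from scipy.stats import chisquare
from skimage import io

img = io.imread("image.png")                      # file name is a placeholder
pixels = img.ravel()                              # assumes 8-bit values (0..255)

hist = np.bincount(pixels, minlength=256)
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # ideal random 8-bit data -> ~8.0
stat, pval = chisquare(hist)                      # observed vs uniform histogram

print(f"entropy = {entropy:.3f} bits/pixel, chi-square p = {pval:.3g}")
```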
1. What is the reason for the importance of synthetic CT image generation (given the low radiation dose of new devices)?
2. What are the pros of using the U-Net model compared to other deep learning models (given the need for data augmentation to reduce the error)?
3. MR and CT images are complementary; therefore, one cannot expect to produce a flawless synthetic image, and it is not medically safe.
My question pertains to DAB staining in the cytoplasm of human oral squamous cell carcinoma tissue.
When quantifying the epithelial cancer cells, do we have to crop out the stromal tissue? Stromal positivity is also seen in the sections.
Is there a separate plugin in ImageJ, or any other open-source software, to quantify stromal positivity?
Kindly share your valuable suggestions.
After immunohistochemistry of previously fixed in PFA and EtOH and then frozen 20 μm sections of zebrafish brain, DAPI staining is very weak (right) compared to the same sections stained without preceding IHC (left). Please also take into account that the exposure time for the right image is ten times longer than for the left one – so in reality, it is much dimmer than in the picture. The concentration of stock DAPI (Invitrogen™ D3571) dilactate solution is 5 mg/mL DAPI (10.9 mM), and the final working solution is 300 nM (i.e., 1:36,333.33 from the stock solution, as per the manufacturer's recommendations). Incubation time is 5 minutes, followed by 3 washes in PBS for 1 minute each and mounting with VECTASHIELD® Vibrance™ Antifade Mounting Medium. Both stock solution aliquots and the working solution are freshly prepared. No antigen retrieval was used for this IHC, and for blocking, 0.1% Triton + 5% NGS in PBS was used. Please recommend if you have any ideas on how to improve staining. Thank you!
Hello,
I have a question about contact maps in molecular dynamics simulations.
I obtained the contact map of the protein-ligand complex using this command:
gmx distance -f md.xtc -s md.tpr -n index.ndx -oall contact.xvg
The output is an .xvg file, and I don't know how to convert this file into a contact map image.
I need this analysis to revise the article.
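Two options, for what it's worth. GROMACS itself has gmx mdmat, which writes a (mean) distance/contact matrix as .xpm that gmx xpm2ps can turn into an image. For the contact.xvg you already have (time in the first column, one distance per index pair in the rest), here is a minimal Python sketch with NumPy and matplotlib; the 0.45 nm contact cutoff is a common choice, not a GROMACS default.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("contact.xvg", comments=("#", "@"))   # skip xvg header lines
time, dists = data[:, 0], data[:, 1:]

contacts = (dists < 0.45).T                       # pairs x frames, True = in contact
plt.imshow(contacts, aspect="auto", origin="lower", cmap="Greys",
           extent=[time[0], time[-1], 0, contacts.shape[0]])
plt.xlabel("time (ps)")
plt.ylabel("index pair")
plt.savefig("contact_map.png", dpi=300)
```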
Thank you
Suppose, for example, I have two domains of data, X and Y, and obviously they are not paired. I want to train a model for an image translation task, so that at inference it translates an image from domain X to domain Y.
I know GANs are heavily used for such tasks, but can other core methods or algorithms do that?
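Yes: cycle consistency is the core constraint that makes unpaired translation learnable, and it also appears outside the classic GAN setup (e.g., in diffusion- and flow-based unpaired translation work). A minimal PyTorch sketch of just that constraint, with single conv layers standing in for the real X→Y and Y→X networks:

```python
import torch
import torch.nn as nn

G_xy = nn.Conv2d(3, 3, 3, padding=1)              # stand-in for the X -> Y network
G_yx = nn.Conv2d(3, 3, 3, padding=1)              # stand-in for the Y -> X network
l1 = nn.L1Loss()

def cycle_loss(x, y, lam=10.0):
    # X -> Y -> X and Y -> X -> Y should each reproduce the input.
    return lam * (l1(G_yx(G_xy(x)), x) + l1(G_xy(G_yx(y)), y))

x = torch.randn(4, 3, 64, 64)                     # unpaired batches from each domain
y = torch.randn(4, 3, 64, 64)
print(cycle_loss(x, y).item())
```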
Can the FE-SEM TESCAN MIRA4 reach a resolution of 1 or 2 nm? Is that possible?
May I know if someone has already tried this? If so, could you share the procedure for acquiring a 1 or 2 nm image?
Thank you.
I have collected a signal with a vector network analyzer by placing a food product between a pair of antennas, where one antenna acts as the transmitter and the other as the receiver. I want to convert the collected signal into a tomographic image of the food product placed between the antennas. I don't understand the process of converting an S21 signal (in real and imaginary format) into a tomographic image. Is it even possible to reconstruct a tomographic image of the product from a single signal? I want to reconstruct the tomographic image of the food product placed between the two antennas in MATLAB using a reconstruction algorithm. Please help me with my project.
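Not a microwave-imaging pipeline, but a minimal scikit-image sketch that illustrates the core issue: tomographic reconstruction (here filtered back-projection) needs projections from many angles around the object, so a single fixed transmitter-receiver S21 trace is generally not enough. You would rotate the object or use an antenna array and repeat the measurement at many angles.

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(phantom, theta=angles)           # one column per measurement angle
recon = iradon(sinogram, theta=angles, filter_name="ramp")

plt.imshow(recon, cmap="gray")
plt.title("FBP from 60 projections")
plt.savefig("fbp_demo.png", dpi=150)
```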
I fabricated Ti3C2Tx using concentrated HF (40%), and I have plotted the XRD pattern (attached image below). Please let me know whether I obtained it or not.
I ask the question in the form of an image entitled "A Modernist Pictorial Tableau", which anticipates the future development of image-generation software such as Midjourney, as it challenges the current progress not just of AI but also of any attempt by cognitive neuroscientists to make sense of art and aesthetics via computing.
I have been running native PAGE with a FAM-labelled DNA substrate (fluorescent samples) for a protein-DNA binding reaction. Binding is there, but towards the end of the lane I am losing the signal drastically.
I have checked my protein for nuclease contamination, and it is nuclease-free.
I am unable to figure it out; at times the signals are lost in the middle lanes as well.
I am giving extra attention to pipetting, so I know that is not the problem.
I have attached the image.
There is an uploaded image of a correlation matrix. Please help: should this dataset be treated as labeled or unlabeled?
The above are manually labeled extrinsic matrices based on the first image.
It can be seen that the projection error at the edge is large, while the error at the center is small.
What could be the reason, and how can I solve it?
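That error pattern (small at the center, growing toward the edges) is the classic signature of unmodelled lens distortion. A minimal, self-contained OpenCV sketch: it fabricates checkerboard correspondences from a known camera so it runs as-is, but with real data objpoints/imgpoints would come from cv2.findChessboardCorners, and the recovered distortion coefficients would be used to undistort points before applying the hand-labelled extrinsics.

```python
import numpy as np
import cv2

# Synthetic 9x6 board projected with a known camera so the sketch is self-contained.
board = np.zeros((9 * 6, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

K_true = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
dist_true = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # strong radial distortion

objpoints, imgpoints = [], []
views = [([0.2, 0.3, 0.1], [-4.0, -3.0, 12.0]),
         ([-0.3, 0.1, 0.0], [-4.5, -2.5, 11.0]),
         ([0.1, -0.3, 0.2], [-3.5, -3.5, 13.0])]
for rvec, tvec in views:
    pts, _ = cv2.projectPoints(board, np.array(rvec), np.array(tvec), K_true, dist_true)
    objpoints.append(board)
    imgpoints.append(pts.astype(np.float32))

rms, K, dist, _, _ = cv2.calibrateCamera(objpoints, imgpoints, (640, 480), None, None)
print("RMS reprojection error:", rms)
print("recovered distortion (k1 k2 p1 p2 k3):", dist.ravel())

# Undistort observed points before using the manually labelled extrinsics.
undistorted = cv2.undistortPoints(imgpoints[0], K, dist, P=K)
```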
Thanks.
Anomaly detection in scanned image data set
Hi, I have brightfield images of stem cells and need to count the total number of cells per image. Some cells are individual and some are in colonies. Using the average cross-sectional area of an individual cell, how can I estimate the number of cells in a colony, and then the total number of cells in the image?
I have tried the watershed feature, but unfortunately it does not provide accurate results.
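Since the watershed is failing, the simpler area-based estimate you describe may serve: segment, then for each connected object estimate its cell count as round(object area / mean single-cell area). A minimal scikit-image sketch; the threshold direction, minimum object size, and the mean cell area are placeholders to calibrate on your own images.

```python
import numpy as np
from skimage import io, filters, measure, morphology

img = io.imread("brightfield.tif", as_gray=True)
mask = img < filters.threshold_otsu(img)          # assumes cells darker than background
mask = morphology.remove_small_objects(mask, min_size=50)

mean_cell_area = 450.0                            # px^2, from measured single cells
labels = measure.label(mask)
counts = [max(1, round(r.area / mean_cell_area)) for r in measure.regionprops(labels)]
print("objects:", len(counts), "estimated total cells:", sum(counts))
```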
Thank you in advance!
I'm a PhD student researching seismic amplification in a New Zealand sedimentary basin. I hope that somebody will be able to help me with my interpretation of the 2D seismic lines that traverse the basin. As per the attached image, I am somewhat baffled by the amount of confusion in the seismic signals below the surface (shown within the red rectangles), which appears where I would expect a regular stratigraphic sequence. I have attempted to delineate fault lines, but I don't know how successful I have been in this respect. I point out that the middle formation (250-1250 m) is consolidated marine sediment; above this are mostly sediments from a volcanic source (silts, sands, and pebbles, mostly unconsolidated).
While the region has historically been understood as passive in terms of seismic activity, this may not be the case, as this research may show.
One suggestion for the signal complexity is that the surface region has been so fractured by seismic activity that the resulting unsorted and unstratified volcanic sediments impair the clarity of the signal penetrating below.
I hope that someone with experience may have an answer.
I want to check my protein expression, but the sample (target protein) on my SDS-PAGE shows only a small difference compared to the control. I measured the band density with ImageJ, but the band that appears darker shows a lower mean value than the control.
Is there better software, or am I using it wrong?
Steps that I follow:
1. Upload the image and convert it to greyscale.
2. Adjust brightness and contrast if needed.
3. Invert the image if the bands are too dark; the bands will turn white and the background black.
4. Make rectangular selections, then analyze and measure each band one by one.
5. The higher the mean value, the higher the band density (but see the integrated-density sketch below).
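For comparison, here is a minimal NumPy version of the same densitometry that uses background-subtracted integrated density rather than the raw mean; the mean is sensitive to how much empty background the selection includes, which is a common reason a visibly darker band scores lower. Box coordinates and file name are placeholders.

```python
import numpy as np
from skimage import io

gel = io.imread("gel.tif", as_gray=True).astype(np.float64)
inv = gel.max() - gel                             # dark bands become high values

def band_density(y0, y1, x0, x1):
    box = inv[y0:y1, x0:x1]
    # Estimate background from an equal-sized box just right of the band.
    background = np.median(inv[y0:y1, x1:x1 + (x1 - x0)])
    return (box - background).clip(min=0).sum()   # background-subtracted integrated density

print(band_density(100, 140, 50, 120))
```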
Dear Researchers
I'm facing problems during Rietveld refinement.
As you can see in the image, some peak positions are not matching, while all the others match perfectly. The intensities of the two main peaks also do not match.
What are the possible reasons, and how can I overcome this?
I did MICOM and obtained partial invariance for my demographic variable, teaching experience.
I split teaching experience into three groups to perform a multigroup analysis.
However, I am not able to confirm or reject my hypothesis that teaching experience moderates the relationships of all the constructs with the endogenous variable, due to differing significance across the groups.
The question is:
1. If there is significance for the third teaching-experience group in SI and none for the others, do I conclude that the moderation is generally significant or not significant? (Refer to the image.)
2. Should I split the data for teaching experience into two groups instead?
I have performed FTIR on human hemoglobin from healthy subjects (blue) and diseased subjects (red and orange) (image attached). Please guide me on whether I have done this correctly. How can I interpret the alpha-helix and beta-sheet content from this spectrum?
The attached image is at 20X. The media were cultured to check for the presence of bacteria and yeast, and both tests were negative.
Are there any upcoming calls for papers in journals (SCI/SCIE indexed) related to image processing or computer vision?
I am looking for special issues related to my area whose submission deadlines are within the next few months.
Recently, I discovered that the dimensions of a SOM network turn out to determine the number of clusters for data clustering, or the number of segments when the SOM is used for image segmentation.
For example, if the dimension of the SOM is 7 x 7, then the number of clusters (segments) is 49; if the dimension is 2 x 1, then the number of clusters (segments) is 2.
1. Are there techniques for determining the dimension?
2. What should be the basis/yardstick for picking the dimension?
3. If knowledge of the data is the basis/yardstick for picking the dimension, is that not just a version of K-means? (A small example is sketched below.)
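A small example of the usual workaround, for what it's worth: treat the grid size as a hyperparameter, start from a heuristic such as Vesanto's ~5*sqrt(N) neurons, and compare quantization error across grid sizes (optionally merging similar prototypes afterwards to reach the final cluster count). This minimal sketch uses the third-party minisom package (pip install minisom) and random stand-in data.

```python
import numpy as np
from minisom import MiniSom

data = np.random.rand(500, 3)                     # e.g. RGB pixels for segmentation
n_neurons = int(5 * np.sqrt(len(data)))           # ~111 -> round to an 11x10 grid
som = MiniSom(11, 10, 3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(data)
som.train_random(data, 5000)

qe = som.quantization_error(data)                 # compare this across grid sizes
print(f"grid 11x10 ({11 * 10} neurons), quantization error = {qe:.4f}")
```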
I am trying to refine my data in FullProf, and my chi² is 0.834E11. Every time I try to refine the data, the plot deviates further from the actual data and the chi² doesn't change. The first image is with the parameters found in the literature, and the second is after refinement; in some cases the peaks from the refinement disappear.