
Images - Science topic

Explore the latest questions and answers in Images, and find Images experts.
Questions related to Images
  • asked a question related to Images
Question
6 answers
Hello! I'm new to using AFM to analyze nanoparticles in a solution, and I have a question. I see a "line fit 2.79 nm" bar on the right side of my image. Does this refer to the height of the features in the image? For example, does the white area correspond to an approximate height of 2.79 nm? I noticed in another sample, which doesn't have nanoparticles, that it shows "line fit 700 pm." From this, I’m guessing that this value might represent the maximum height. Could you confirm this? Thank you!
Relevant answer
Answer
Andreea Iosageanu The particles I am using are microbeads, which need some liquid to maintain their shape, but they keep swimming in that liquid droplet.
  • asked a question related to Images
Question
1 answer
This is a problem that occurs when using FTK to image a drive in computer forensics.
Relevant answer
Answer
Can you clarify the description of the problem? Your question was asked in a very general way.
  • asked a question related to Images
Question
2 answers
I am a biomedical engineer with experience in deep learning applications for classifying and predicting various diseases. Recently, I have been working on projects involving disease classification and prediction through deep learning, with a particular focus on CNNs and advanced image processing techniques. Currently, I am preparing for a master’s degree in data science to deepen my expertise in this field. I am open to collaborating on research articles and can assist as an editor or peer reviewer for relevant journals. Additionally, I am available to work as a volunteer researcher at universities, contributing my skills in biomedical engineering, bioprinting, and microbiology.
If you are interested in collaboration or if I can assist with your research, please feel free to contact me at biomedical.emr@gmail.com.
Relevant answer
Answer
Dear Emir Öncü,
I am looking for collaboration with microbiologists globally.
If you are interested, we can work together.
With regards.
  • asked a question related to Images
Question
1 answer
Dear all,
I am looking for 2 or more brain slice images with a scale (within or outside the image) and a minimum resolution of 300x400. The image could capture the whole slice or a specific area: half the slice, a quarter, and so on. The important thing is that the image should capture at least area/region fields, not a super detailed 20/40x zoom like those used for neuron recognition/labeling, but something in the 1/5/10x range (mid to wide field). Fluorescence images would be great. These images are only for validation purposes for a software tool aimed at reconstructing a whole 3D brain starting from slice images and performing fluorescence recognition and feature extraction on each slice. Your images won't be used for artificial network training or published anywhere; it's only for the final stage of validation. I have already used my own slices and the Allen Brain repository. Any other suggestion will be more than welcome!
Relevant answer
Answer
We can provide the kind of images you want.
Please contact me by email.
  • asked a question related to Images
Question
3 answers
Image Processing Algorithms, Quantum Computing.
Relevant answer
Answer
Dear Doctor
Go To
Quantum Computing in Image Processing
  • January 2023
  • DOI: 10.3233/ATDE221232
  • In book: Recent Developments in Electronics and Communication Systems
  • License CC BY-NC 4.0
  • Kumud Sachdeva, Rajan Sachdeva, Himanshu Gupta
[Abstract
If you read that quantum machine learning applications solve some traditional machine learning problems at an amazing speed, be sure to check if they return quantum results. Quantum outputs, such as magnitude-encoded vectors, limit the use of applications and require additional specifications on how to extract practical and useful results. In satellite images, degradation of image contrast and color quality is a common problem, which brings great difficulties for information extraction and visibility of image objects. This is due to atmospheric conditions, which can cause a loss of color and contrast in the image. Regardless of the band, image enhancement plays a vital role in presenting details more clearly. The research includes new computational algorithms that accelerate multispectral satellite imagery to improve land cover mapping and study feature extraction. The first method is a quantum Fourier transform algorithm based on quantum computing, and the second method is a parallel modified data algorithm based on parallel computing. These calculation algorithms are applied to multispectral images for improvement. The quantum-processed image is compared with the original image, and better results are obtained in terms of visual interpretation of the extracted information. Quantitatively evaluate the performance of the improved method and evaluate the applicability of the improved technology to different land types.]
  • asked a question related to Images
Question
2 answers
Dear researchers, I tried using the IHC Profiler in ImageJ to quantify nuclear DAB staining. I followed the instructions in the original article: "Varghese F, Bukhari AB, Malhotra R, De A (2014) IHC Profiler: An Open Source Plugin for the Quantitative Evaluation and Automated Scoring of Immunohistochemistry Images of Human Tissue Samples. PLoS ONE 9(5): e96801. doi:10.1371/journal.pone.0096801".
I find it difficult to identify nuclear positivity when working with the "threshold" tool: very grainy areas appear that do not represent the nuclei as nuclei; instead, too many dots are seen in each nucleus.
I use images captured at 40x magnification from human oral squamous cell carcinoma tissues. Esteemed researchers who have used this method of quantification, please share your experiences, and kindly be moderately descriptive in your responses. Thanking you in advance.
Relevant answer
Answer
The deconvolution plugin in IHC Profiler separates the DAB-stained image from the hematoxylin. As you mentioned, I use the threshold function to quantify nuclear DAB. The final scores range from 1 to 4, with 1 being negative and 4 strongly positive, based on percentage positivity and intensity. You do not need to correct for hematoxylin, as the image is already deconvoluted for nuclear DAB.
  • asked a question related to Images
Question
2 answers
particularly models that achieve similar performance levels.
Relevant answer
Answer
LSTM (Long Short-Term Memory) is commonly used for sequential data, including time-series datasets like EEG signals, as well as in Natural Language Processing (NLP) and speech analysis, because these types of data are treated as consecutive sequences. For image classification, deep learning models are among the most widely used techniques. Pre-trained models such as VGG19 or ResNet can be applied directly or combined to create ensemble models, which use multiple pre-trained models together. Integrating an attention mechanism with a deep learning model can further improve the feature extraction process.
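As a rough illustration of the transfer-learning route mentioned above, here is a minimal PyTorch/torchvision sketch; the class count, dummy batch, and head-only freezing strategy are placeholder assumptions, not part of the original answer:
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet and replace its classifier head
# (num_classes is a placeholder for your own dataset; weights API needs torchvision >= 0.13)
num_classes = 4
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# If data is scarce, freeze the backbone and fine-tune only the new head first
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

x = torch.randn(8, 3, 224, 224)  # dummy batch of images
logits = model(x)                # shape: (8, num_classes)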
  • asked a question related to Images
Question
4 answers
Hi, can anyone please help me with the MDA-MB-231 cell culture protocol? I am growing the MDA-MB-231 cell line. I am new to cell culture, so any help in this regard would be greatly appreciated.
The problem is that the cells grow very slowly at P3 or P4: seeding ~0.3 million cells in a T25 flask takes almost 4-5 days to reach 80% confluence. Initially, at P1, I was using DMEM + 10% FBS + 5% pen-strep and growth was really slow, so I changed the medium to 20% FBS. The 20% FBS helped at P1/P2, but at later passages like P4/P5 the cells again grow very slowly.
I also see something odd in the phase contrast image (attached): why do there seem to be a lot of vesicles inside the cells? Is this normal, or do I have contamination? For other cell lines, the phase contrast image does not look so contrasted; why does this cell line show so much contrast inside the cells?
Also, any suggestions for a better culture medium for MDA-MB-231?
Thanks
SAYAN
Relevant answer
Answer
5% pen-strep is too much for most cell lines. It might cause toxicity due to cell stress and affect growth. You may want to consider changing it to 1%.
  • asked a question related to Images
Question
1 answer
Hi, does anyone know how to edit IHC images? What software should I use? I want to remove a bubble that appears in the background and change the background color.
Relevant answer
Answer
To edit Immunohistochemistry (IHC) images, you can use several software options that are suitable for scientific image processing, among the following.
(1.) ImageJ/Fiji (https://imagej.net/software/fiji/downloads): This is a powerful, open-source image processing program that is widely used in scientific research. You can use plugins to enhance images, remove artifacts like bubbles, and change background colors. The “Color Deconvolution” plugin can help you separate stains, and you can adjust the background using the “Enhance Contrast” feature.
(2.) Photoshop (https://www.adobe.com/products/photoshop/free-trial-download.html): Adobe Photoshop is a professional image editing tool that provides advanced features for editing IHC images. You can easily remove unwanted bubbles using the clone stamp or healing brush tools, and you can change the background color using layer adjustments.
(3.) GIMP (https://www.gimp.org/downloads/): Similar to Photoshop but free and open-source, GIMP offers a range of editing tools. You can use the healing tool or clone tool to remove bubbles and adjust colors using the “Colors” menu for background modifications.
(4.) BioImage Suite (https://www.slicevault.com/): This software is tailored for biological image analysis and provides tools specifically for processing and analyzing IHC images.
Note: Make sure to keep a copy of your original image in case you want to revert to it later.
Hope this helps,
Shafik
  • asked a question related to Images
Question
6 answers
I am doing some FIB cuts and analyzing my layer stack. I always see dots in the InP and InGaAsP layers, but I don't know their origin. These dots do not appear when I do SEM on a cleaved sample, so I assume their origin is related to the ion beam process. I attached an SEM image taken after FIB to make clear which dots I am referring to. Any help or reference to resolve this question is highly appreciated!
Relevant answer
Answer
In my experience, all III-V compounds produce this type of contrast when doing a FIB cross-section. The contrast you see is produced by droplets of Ga for GaAs or In for InP. The only way to get rid of them is to reduce the dose, and therefore the current and time used for the final polishing of the area. On the other hand, it helps to identify different combinations of III-V layers, because every material has a slightly different contrast. Here are 2 papers that explain the process:
  • asked a question related to Images
Question
1 answer
Instead of directly using slice-wise matrix completion methods?
Relevant answer
Answer
To simplify the challenge of colour image completion, we unfold the tensor into a matrix and apply well-established matrix completion techniques. Matrix completion facilitates the application of optimisation techniques and cross-channel reconstruction of missing pixels; it is computationally more efficient and takes advantage of the low-rank structure of images.
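For intuition, here is a minimal NumPy sketch of the unfold-complete-fold idea using singular-value thresholding; the shrinkage parameter tau, the iteration count, and the image sizes are illustrative assumptions, not a prescribed algorithm:
import numpy as np

def complete_lowrank(M, mask, tau=5.0, n_iter=100):
    # Singular-value thresholding: estimate missing entries (mask == False)
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        X[mask] = M[mask]                        # keep observed pixels fixed
    return X

# Unfold an H x W x 3 colour image into an H x (W*3) matrix, complete, fold back
img = np.random.rand(64, 64, 3)         # placeholder image
mask = np.random.rand(64, 64, 3) > 0.4  # True where pixels are observed
completed = complete_lowrank(img.reshape(64, -1), mask.reshape(64, -1))
restored = completed.reshape(64, 64, 3)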
  • asked a question related to Images
Question
5 answers
Relevant answer
Answer
Upgrade your Ubuntu to 24.10 via Software Updater, or install a lower version of Ubuntu. This error happens only on 24.04.
  • asked a question related to Images
Question
5 answers
What do you know about color memory?
There is no red color in this image.
Your brain fills in the red color.
The image is made entirely of light blue, black, and white.
Zoom in on the image and you'll see.
Relevant answer
Answer
Tom A Campbell I was suspicious because cyan was used in the image. Cyan is the complementary color of red. It made me think of an afterimage resulting from staring at colors.
(See attached image:)
When you stare at the center black dot of the left image for a while, and then switch to the white image on the right, you will see the complementary colors.
You can find a better Coca-Cola image and some more examples on the following Japanese site. The page loads a bit slowly, but it is worth the effort to take a look:
  • asked a question related to Images
Question
4 answers
Hello,
I obtain western blot bands at the desired region on the membrane, but there is no space between the bands (image attached for reference). May I know what could be the reason?
TIA
Relevant answer
Answer
The bands on the gel are not continuous. You can still see the boundaries between the 6 samples; they have just migrated closely together. This is a normal effect of SDS-PAGE: the differences in salt concentration and pH between the gel and the samples lead to a broadening of the samples (the electric current pulls the proteins not only downwards but also sideways at the edges), which usually stops when the sample from the neighboring well interferes with the sideways movement. If you need free space between the samples, leave one lane empty (or better, fill it with blank loading buffer) between the samples.
  • asked a question related to Images
Question
4 answers
How do we typically choose between convolutional networks and visual language models for supervised learning tasks on images and videos?
What design considerations do we need to make?
Relevant answer
Answer
Dear Titas De ,
CNNs can be trained and deployed more efficiently than VLMs, especially for large-scale datasets. This is due to their simpler architecture and the ability to leverage parallel processing hardware for faster computations.
Regards,
Shafagat
  • asked a question related to Images
Question
1 answer
I want to work on it.
Relevant answer
Answer
You can assess apple yield using satellite imagery by employing UAVs with deep learning for fruit detection and analyzing multispectral images with machine learning models. Additionally, monitoring growth patterns through time-series data can enhance yield predictions.
  • asked a question related to Images
Question
2 answers
We can use the RF data for deep learning model training. Our main target is RF image to B-mode image translation. How can we solve this using a GAN? If anyone has good code experience, please share it with me.
Relevant answer
Answer
Dear Imrus Salehin,
For image reconstruction, particularly in medical imaging, GANs can be effective, with several variants such as CycleGAN, Pix2Pix, SRGAN, and WGAN-GP showing good performance for reconstruction tasks. However, newer models like Diffusion Models, Transformers for Image Generation with Diffusion, and Vision Transformers (ViT) are often better choices for achieving high-quality and stable reconstructions.
Here are some references you may find helpful:
For GANs:
  1. Super-Resolution Image Reconstruction Based on Self-Calibrated Convolutional GAN. https://arxiv.org/pdf/2106.05545
  2. CycleGAN: https://junyanz.github.io/CycleGAN/
  3. Pix2Pix GAN: https://www.tensorflow.org/tutorials/generative/pix2pix
For Diffusion Models:
  1. https://github.com/hocop/Image-Reconstruction-using-Diffusion-Model
  2. https://academic.oup.com/bjrai/article/1/1/ubae013/7745314
For Vision Transformers (ViT):
Kind Regards, Md Foysal Ahmed
  • asked a question related to Images
Question
1 answer
How do I reference/attribute an image from a scientific paper used in an educational YouTube video?
I'm making a video discussing various brain regions, but I am unsure where to find source images that are permissible to use. Can I take them from papers and reference them? What is the protocol?
Relevant answer
Answer
To use an image from a scientific paper in your educational YouTube video, you need to ensure you have permission. First, check if the image is under a Creative Commons license, which allows for reuse with proper attribution. If not, you should contact the publisher or author for permission. When you use the image, include a citation in your video description and on the image itself, mentioning the source, author, and publication. This way, you respect copyright laws and give proper credit to the original creators.
  • asked a question related to Images
Question
51 answers
Relevant answer
Answer
With “modern” and its cognates (pre-modern, early modern, late modern, modernity, modernism, postmodern, etc. etc.) being bandied about so much these days, and with no general agreement as to the time periods or ideological scope falling under such designations, a few words to the unwise in the subtext to the question would have been helpful. Some would argue that the Third Reich was an antimodernist reaction to modernity, much as Islamism today. Others would claim that these reactions constitute a reactionary modernism, insofar as it adopts the technologies of modernity without giving up premodern conservatism. As they say, it's complicated.
  • asked a question related to Images
Question
4 answers
I made 2 membrane samples of 91% CuO and 93% CuO using the sol-gel method. The SEM image shows evenly distributed grains, but the surface is slightly rough.
Relevant answer
Answer
Sarada K Gopinathan, thanks, I will send it to you.
  • asked a question related to Images
Question
3 answers
Hi there. I am new to the field of corrections, and I was wondering: given a dark-field and a flat-field image from a test, is it possible to obtain a formula with the offset and gain values that I can apply to any image coming from the sensor, in order to apply a relative radiometric correction?
From what I've read, the dark image gives me the offset value, right? And the relative gain value is (dark - flat)/mean(dark - flat). Is this right?
Considering that these are test images, if I want to apply this formula to real images from the sensor, then I'm guessing that I have to obtain a default value from the gain and offset matrices. Maybe by taking their mean?
Not sure if this is the way to go. I've also seen that for the dark field I could make a histogram, find the value with the highest peak, and maybe choose that as my offset value? But I'm not sure how that would work for the flat image.
Any help is appreciated, as I am a little bit lost on what the best steps are here.
Relevant answer
Answer
First of all, we have to consider the possibility of averaging a stack of several captures (>10) to eliminate temporal noise. So when I say "image", I mean the average of the stack. In this way, we can minimize the problem of spatial non-uniformity. What we have to do is:
1) to obtain an image in darkness, I'll call it DSNU;
2) to obtain an image in front of a Flat Field at the end of the scale in the dynamic range of the instrument, but making sure that there is no pixel in the image that enters into saturation; I'll call this FF;
3) to obtain PRNU = FF - DSNU;
4) to obtain the average value of PRNU, I'll call it PRNUm;
5) and considering that we already have the reference image, a raw image, which I'll call RAW;
6) finally you can obtain your corrected flat field image (Icorr), assuming a linear behavior throughout the entire dynamic range of the sensor, as follows: Icorr = (RAW-DSNU) x PRNUm / (FF - DSNU).
Regards!
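If it helps, the recipe above can be written as a short NumPy sketch; the array names, stack shapes, and the small epsilon guard are my own assumptions:
import numpy as np

def radiometric_correct(raw, dark_stack, flat_stack):
    # Stacks have shape (n_frames, H, W); averaging >10 frames removes temporal noise
    dsnu = dark_stack.mean(axis=0)   # step 1: dark image (offset)
    ff = flat_stack.mean(axis=0)     # step 2: flat-field image
    prnu = ff - dsnu                 # step 3: PRNU
    prnu_m = prnu.mean()             # step 4: mean PRNU
    eps = 1e-9                       # my addition: avoid division by zero
    return (raw - dsnu) * prnu_m / (prnu + eps)  # step 6: Icorr

# Example with placeholder data
dark = np.random.rand(12, 480, 640) * 0.05
flat = 0.8 + np.random.rand(12, 480, 640) * 0.05
raw = np.random.rand(480, 640)
corrected = radiometric_correct(raw, dark, flat)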
  • asked a question related to Images
Question
2 answers
Nowadays, scientific illustration has become an integral part of publications. Apart from original research images, a graphical abstract is imperative for journal submission; it often receives more attention than the main article itself. As a person with mediocre drawing skills, I have always been fascinated by published graphical abstracts. In recent times, many illustration-making software packages have come to the rescue. Many provide free or paid services such as icon libraries, flow charts, etc., which form the basis for creating illustrations according to one's requirements. Some provide templates that can be modified by paying for a subscription. What perplexes me is: are these templates making us copycats?
For example, a simple Google image search returned exact matches in more than 50 publications, where an image template was only slightly modified. As a person who looks at the images first, this makes it really confusing to identify a publication based on its graphical abstract. In essence, these templates fail to create the hype/curiosity among the audience that they were initially aimed at, making all our work as similar as "Agent Smith". Is this acceptable?
Relevant answer
Answer
Dear Doctor
Go To
Lessons from industry for science cyberinfrastructure: Simplicity, scale, and sustainability via SaaS/PaaS
Ian Foster
A version of this paper was published in the Proceedings of SCREAM’15: The Science of Cyberinfrastructure: Research, Experience, Applications and Models, June 16, 2015, Portland, Oregon, USA.
[Abstract Commercial information technology has changed dramatically over the past decade, with profound consequences for both software developers and software consumers. Software-as-a-service (SaaS) enables remote use of powerful capabilities, from accounting and payroll to weather alerts and transportation logistics, that used to require expensive in-house facilities and expertise. Platform-as-a-service (PaaS) offerings from cloud providers simplify the development and operation of SaaS software. These developments have slashed costs, reduced barriers to access and entry, and spurred innovation. Science cyberinfrastructure, in contrast, seems stuck in the 20th Century.
Summary With effort and vision, we can ensure that a broad spectrum of software is accessible in this way, greatly simplified due to outsourcing of functionality and sustained by a broad community of subscribers. We will thus promote values we hold dear, such as accessibility, reproducibility, and more money for research.]
  • asked a question related to Images
Question
3 answers
Hi, I'm Prithiviraja. I'm currently building a deep learning model to colorize SAR images. I have come across a lot of resources that use only ASPP for feature extraction from SAR images. I'm planning to use both FPN and ASPP for that process, although FPN is mostly used for object detection. Kindly share your suggestions.
Relevant answer
Answer
Yes, combining a Feature Pyramid Network (FPN) and Atrous Spatial Pyramid Pooling (ASPP) in a single network can significantly enhance feature extraction, especially for tasks like object detection and semantic segmentation. FPN excels at capturing multi-scale features by creating a pyramid of feature maps that incorporate both high-level semantics and low-level spatial details, allowing the model to handle objects at different scales more effectively. On the other hand, ASPP leverages dilated convolutions at multiple rates to capture contextual information over various spatial scales without reducing resolution. By applying ASPP on top of FPN’s multi-level feature maps, the model can benefit from both rich local detail and broad contextual awareness, leading to improved performance in recognizing objects or patterns across diverse scales and contexts. This combination strengthens the network's ability to detect fine details while also capturing long-range dependencies, making it a powerful architecture for complex vision tasks that require both precise spatial resolution and comprehensive global context.
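To make the combination concrete, here is a minimal PyTorch sketch of an ASPP block applied to each FPN level; the channel sizes, dilation rates, and feature-map shapes are illustrative assumptions, not a prescribed design:
import torch
import torch.nn as nn

class ASPP(nn.Module):
    # Parallel dilated convolutions capture context at several receptive fields
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Apply one shared ASPP head to each FPN level (fpn_feats stands in for the
# multi-scale feature maps produced by your FPN backbone)
aspp = ASPP(256, 128)
fpn_feats = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
context_feats = [aspp(f) for f in fpn_feats]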
  • asked a question related to Images
Question
3 answers
Is it correct to convert a .raw file to .jpg and then calculate the intensity profile?
Relevant answer
Answer
Note that JPEG compression is lossy and can alter pixel values, so it is safer to compute the intensity profile directly from the .raw data, e.g. in MATLAB:
% Define image parameters (you should know these in advance)
width = 512; % Image width (in pixels)
height = 512; % Image height (in pixels)
bitDepth = 8; % Image bit depth (e.g., 8, 16)
% Open the .raw file
fileID = fopen('image.raw', 'rb');
% Read the image data
rawData = fread(fileID, width * height, 'uint8'); % 'uint8' for 8-bit images
fclose(fileID);
% Reshape the raw data into a matrix; transpose because .raw files are
% usually stored row by row, while MATLAB fills matrices column by column
imageData = reshape(rawData, [width, height])';
% Display the image so a line can be drawn on it
imshow(imageData, []);
% Use improfile to interactively extract an intensity profile along a line
intensityProfile = improfile;
% Plot the intensity profile
figure;
plot(intensityProfile);
title('Intensity Profile');
xlabel('Position along the line');
ylabel('Intensity');
  • asked a question related to Images
Question
1 answer
Dear All:
I want to quantify the fluorescence intensity of images acquired with a fluorescence microscope.
I would really appreciate it if someone could tell me how to use ImageJ correctly.
Relevant answer
Answer
To quantify fluorescence intensity from DCFH-DA staining (or any fluorescence-based assay) using ImageJ, follow these steps:
Step-by-Step Guide:
  1. Open ImageJ: Download and install ImageJ (or FIJI, an enhanced version of ImageJ) if you haven't already.
  2. Open the Fluorescence Image: Go to File > Open and select your fluorescence image (typically in formats like TIFF, PNG, or JPEG).
  3. Convert to Grayscale (if necessary): If your image is in color, convert it to grayscale. This step is important because intensity measurements are typically based on grayscale values. Go to Image > Type > 8-bit to convert the image to grayscale.
  4. Set the Scale (optional, if you need physical units): If the image includes scale information, go to Analyze > Set Scale. Input the scale information (e.g., pixels per micron), if available.
  5. Subtract Background: To improve the accuracy of the intensity measurement, subtract any background fluorescence. Go to Process > Subtract Background and set an appropriate rolling ball radius (e.g., 50-100 depending on your image).
  6. Select the Region of Interest (ROI): Use the Rectangle, Oval, or Polygon selection tool to outline the area of interest (the region where you want to quantify fluorescence). You can skip the selection step to measure the entire image.
  7. Measure Fluorescence Intensity: Once the ROI is selected, go to Analyze > Measure or simply press M. A results window will appear showing several metrics, including the Mean Gray Value (this is the average intensity of the selected area). You can repeat this for different regions of the image if needed.
  8. Quantify Intensity for Multiple ROIs (if required): You can measure fluorescence intensity in multiple regions by selecting new areas and repeating the measurement process. Alternatively, use Analyze > Analyze Particles to automatically select multiple regions and measure their intensities.
  9. Export and Analyze Data: The results (including fluorescence intensity) can be copied from the results window or saved as a .csv file by going to File > Save As.
  10. Optional: Intensity Calibration: If needed, perform intensity calibration to convert pixel intensities to actual concentration units. This requires creating a standard curve from known concentrations of fluorescent molecules.
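If you prefer a scripted route over the manual ImageJ steps above, here is a minimal Python sketch with scikit-image that mirrors steps 3-7; the file name, channel choice, Gaussian sigma, and min_size are placeholder assumptions:
import numpy as np
from skimage import io, filters, morphology, measure

img = io.imread('cells.tif').astype(float)  # placeholder file name
if img.ndim == 3:
    img = img[..., 1]                       # e.g. keep the green channel

background = filters.gaussian(img, sigma=50)          # coarse background estimate
corrected = np.clip(img - background, 0, None)        # background subtraction

mask = corrected > filters.threshold_otsu(corrected)  # segment fluorescent regions
mask = morphology.remove_small_objects(mask, min_size=20)

labels = measure.label(mask)                          # one label per ROI
for region in measure.regionprops(labels, intensity_image=corrected):
    print(region.label, region.mean_intensity)        # mean gray value per ROI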
  • asked a question related to Images
Question
1 answer
I want to check whether there is any pathology in the valves of my mice, but I am not sure if the valve is the structure on the right side of this image. I sectioned a control one and saw it much more clearly.
Relevant answer
Answer
here they are, outlined in blue
  • asked a question related to Images
Question
2 answers
Following laser and source replacement of one of our Bruker Tensor 27 units, the calibration peak from the laser path through the empty chamber is registering a good amplitude, but it is far out of the normal range. I'm seeing a peak position at ~65000, when it should be between 58000 and 62000.
Is there any way to fix this? So far, tweaking the interferometer position only decreases the signal amplitude and does not alter the peak position coming in. The laser position in the holder seems fine. Any help would be appreciated.
The failed OQ report and an image of the display are attached. Thanks!
Relevant answer
Answer
Removing the interferometer block was an unnecessary and dangerous way to resolve this issue. The solution was quite simple.
  • asked a question related to Images
Question
5 answers
Why is the image brightness inconsistent at lower magnifications in an SEM, showing a gradient where one part of the image is brighter than the other, regardless of the location? I notice that the left side is consistently brighter at low magnifications, but this effect is not visible at higher magnifications.
Relevant answer
Answer
Gun Alignment
  • asked a question related to Images
Question
4 answers
Corporate Social Responsibility (CSR) has emerged as a critical factor in shaping brand image perception among various stakeholder groups. Research indicates that CSR initiatives can significantly enhance a company's reputation and brand image when implemented authentically and strategically. For instance, a study by Martínez et al. (2014) found a strong positive correlation between consumers' perception of CSR and both functional and affective aspects of brand image[1]. This suggests that companies engaging in genuine CSR efforts can effectively improve their overall brand perception among customers.
The impact of CSR extends beyond customers to other key stakeholder groups, including employees, shareholders, and broader community stakeholders. Employees, in particular, play a crucial role in this dynamic. When companies involve employees in CSR activities and communicate their efforts effectively, it can lead to increased job satisfaction, loyalty, and pride in the organization. This, in turn, can transform employees into brand ambassadors, further enhancing the company's image externally[1]. For shareholders, CSR initiatives can signal long-term value creation and risk mitigation, potentially improving their perception of the brand and its future prospects.
However, it is essential to note that the positive influence of CSR on brand image is contingent upon avoiding greenwashing or superficial green labeling. Authenticity and transparency in CSR efforts are crucial for building trust and credibility among all stakeholder groups. As Bianchi et al. (2019) highlight, a company's reputation and international presence play a significant role in shaping consumer perceptions and attitudes toward the brand[1]. Therefore, to maximize the positive impact of CSR on brand image, companies must ensure their initiatives are genuine, aligned with their core business values, and effectively communicated to all stakeholder groups. This approach can lead to a more robust and positive brand image across customers, employees, shareholders, and the broader community.
Relevant answer
Answer
Yes, it can, as it can for a range of primary and tertiary stakeholders. To a certain extent, it depends on how involving and consultative management are with their stakeholders, which determines the level of brand strengthening and authentic value created for the brand.
  • asked a question related to Images
Question
1 answer
I am trying to run a PCR to verify insertion of my construct into the AAVS1 locus in iPSC using CRISPR.
I designed three primer pairs to amplify the left and right insertion regions (one binding inside the insert, one binding outside; see image), and one primer pair to amplify a region inside the insert. The insert contains a fluorescent protein, which I can see expressed in the cells under the microscope, so I am pretty sure that the insertion has worked correctly; however, I cannot get any specific PCR product for sequencing. (Even if it has been inserted unspecifically somewhere in the genome, since I am seeing the fluorescent reporter in the cells, I would expect at least to get a positive result for the "internal" region.)
I used DNAzol to purify gDNA from the cells, and when I checked it on a gel I noticed two additional bands, which I thought might be rRNA (see image). When I treated the samples with RNase, the bands disappeared, so I continued happily with the PCR.
For the "internal" region, I am able to use the plasmid as a control, and here I can see a specific PCR product with the expected size, however, for all other plasmid / gDNA template combinations, I get a huge amount of large-sized unspecific PCR products (see second image). Which is why I am currently suspecting that something is not right with the gDNA? But it looks pretty good on a gel.
I am using KOD1 polymerase (KOD1 master mix) according to manufacturer instructions when it comes to amplification from gDNA:
PCR: Total 25 µl per reaction
1.25 µl DMSO
1 µl primer fwd [10 µM]
1 µl primer rev [10 µM]
8.25 µl H2O
12.5 µl KOD master mix
0.5 µl DNA (= 25ng)
Init. Denat. 94°C 1.5 min
Denat. 94°C 5 sec
Anneal 58°C 5 sec
Extension 68°C 1 sec
and also tried
Init. Denat. 94°C 3 min
Denat. 94°C 45 sec
Anneal 58°C 45 sec
Extension 68°C 1 min
I have triple-checked the specificity of the primers and compared them with other primers used in the literature for the same purpose (AAVS1 locus). I have re-designed new primers that bind in slightly different places. I have tried different elongation/annealing times and temperatures... It always looks the same (large unspecific products).
In a last-ditch effort, I cut out pieces of gel from the "unspecific" results around the size where I would expect the PCR product and repeated the PCR with those as a template. I got some promising-looking results on a gel, but when I sent them for sequencing, it was all unspecific.
I am currently at my wit's end and hope someone else has seen something similar and was able to solve it in the end!!
(Plan B will be to re-do the DNA extraction and try again from the beginning I guess...)
Relevant answer
Answer
Dear Anja,
If I understand your protocol in image 2 (PCRproduct.png) correctly, you are using 25 ng of gDNA in your PCR. That is way too low. You should think about copy numbers: 25 ng of plasmid contains millions of copies, while your AAVS1 knock-in might exist only once per genome. Please try to use 300-500 ng of gDNA along with your low plasmid DNA concentration.
Then you should optimize your PCR conditions accordingly. You should not really see any products in images 1 and 2, lanes 1-3.
I have done comparable work with TALENs in HEK293 cells. If you are planning to use single-cell clones, you can use both outside primers to detect whether your cells are homozygous or heterozygous.
Best wishes
Soenke
  • asked a question related to Images
Question
2 answers
I am trying to use the Morlet wavelet on the BCIC-IV-2a dataset to create a 2D image. Since the dataset is 1000 samples long with 22 channels, after applying the transform the shape becomes (22, 127, 1000), where 127 is of course the number of wavelet coefficients.
My aim is to apply a pre-trained ResNet model to this image; for that, I resized the image from (127, 1000) to (224, 224).
Can you suggest further steps for proper training? Or have I missed any necessary processing of the wavelet image?
Relevant answer
Answer
Dear Dibbo Dey ,
The processing of brain signals for Motor imagery (MI) classification to have better accuracy is a key issue in the Brain-Computer Interface (BCI). While conventional methods like Artificial neural network (ANN), Linear discernment analysis (LDA), K-Nearest Neighbor (KNN), Support vector machine (SVM), etc. have made significant progress in terms of classification accuracy, deep transfer learning-based systems have shown the potential to outperform them. BCI can play a vital role in enabling communication with the external world for persons with motor disabilities.
Regards,
Shafagat
  • asked a question related to Images
Question
2 answers
What are the recent advances in DICOM image security?
Relevant answer
Answer
DICOM (Digital Imaging and Communications in Medicine) images are widely used in the medical field for storing and transmitting medical imaging data. However, the security of DICOM images is a significant concern due to the sensitive nature of the data they contain. Here are some key aspects related to the security of DICOM images:
1. Inherent Security Risks
  • Lack of Encryption: Traditional DICOM files do not include encryption by default, meaning that the images and associated metadata are stored in a readable format. This can be a risk if the files are intercepted or accessed by unauthorized individuals.
  • Metadata Exposure: DICOM images contain not just the image data but also metadata, which can include patient information, physician details, and other sensitive data. If not properly protected, this information can be exposed.
2. Transmission Security
  • Network Vulnerabilities: When DICOM images are transmitted over a network, especially an unsecured one, they are vulnerable to interception, eavesdropping, and man-in-the-middle attacks.
  • TLS/SSL: To secure the transmission of DICOM images, many systems now use Transport Layer Security (TLS) or Secure Sockets Layer (SSL) to encrypt the data as it is transmitted over the network.
3. Storage Security
  • Access Control: Ensuring that only authorized personnel can access DICOM images is critical. This involves implementing strong authentication and authorization mechanisms, such as role-based access control (RBAC).
  • Audit Trails: Maintaining audit logs of who accessed, modified, or transmitted DICOM images can help in detecting and responding to unauthorized access.
  • Data Integrity: It’s important to ensure the integrity of DICOM files to prevent unauthorized alterations. Cryptographic hash functions can be used to verify the integrity of the files.
4. Regulatory Compliance
  • HIPAA and GDPR: In regions like the United States and Europe, healthcare organizations are required to comply with regulations such as HIPAA (Health Insurance Portability and Accountability Act) and GDPR (General Data Protection Regulation), which mandate strict security measures for the handling of medical data, including DICOM images.
5. Advanced Security Measures
  • End-to-End Encryption: Some systems now offer end-to-end encryption for DICOM images, ensuring that the data remains encrypted from the point of capture until it reaches the intended recipient.
  • DICOM File Encryption: Newer versions of the DICOM standard include the ability to encrypt DICOM files, although this is not yet universally implemented.
  • Anonymization: Removing or obfuscating patient-identifiable information from DICOM files (anonymization) can help in reducing the risk of privacy breaches.
6. Emerging Threats
  • Malware and Ransomware: DICOM files can be susceptible to malware and ransomware attacks, where malicious actors encrypt or corrupt the files, rendering them inaccessible.
  • Exploiting DICOM File Structure: Researchers have identified ways to embed malware within DICOM files by exploiting the DICOM file structure, potentially allowing the malware to evade detection.
Conclusion
DICOM images are not inherently secure, but with proper implementation of encryption, access control, and compliance with regulatory standards, the security of DICOM files can be significantly enhanced. The ongoing evolution of threats and security practices means that continuous vigilance and adaptation of security measures are necessary to protect these critical medical data assets.
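As a small illustration of the anonymization point above, here is a minimal pydicom sketch; the file paths and the handful of tags chosen are illustrative, and a production de-identification profile covers many more elements:
import pydicom

# Strip a few patient-identifying fields before sharing a DICOM file
ds = pydicom.dcmread('scan.dcm')  # 'scan.dcm' is a placeholder path
for tag in ('PatientName', 'PatientID', 'PatientBirthDate', 'PatientAddress'):
    if tag in ds:
        setattr(ds, tag, '')      # blank out the identifying value
ds.remove_private_tags()          # drop vendor-specific private elements
ds.save_as('scan_anon.dcm')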
  • asked a question related to Images
Question
6 answers
Line mapping
Relevant answer
Answer
Line scan is mostly taken across interfaces. Matrix-precipitate, Matrix-grain boundary phase etc.
Prior to that, you must have an idea of the elements present in both matrix and precipitate. In order to get an idea, take point scans on precipitate and adjacent matrix (3 points each would be good, statistically).
Analyze the elements and take line scans with the desirable elements.
PS: For line scans, use dark colours.
  • asked a question related to Images
Question
1 answer
How can I check the randomness of image data in a Python program?
Relevant answer
Answer
1. Generate the Chaotic Sequence:
  • Use a chaotic map (e.g., Logistic map, Tent map, etc.) to generate the chaotic sequence. Chaotic sequences are sensitive to initial conditions and parameters, so choose your initial values and parameters carefully.
  • For instance, the Logistic map is defined by the equation x_{n+1} = r · x_n · (1 − x_n), where r is the control parameter and x_n is the sequence value at iteration n.
2. Normalize the Sequence:
  • The generated chaotic sequence usually takes values in the range [0, 1] or [-1, 1]. Normalize the sequence to fit within the range of 0 to 1 if needed.
3. Convert the Sequence to a Bit Stream:
  • Convert the chaotic sequence into a binary bit stream. If you need 10-bit values, you can map the normalized chaotic sequence to 10-bit integers (0 to 1023).
  • For each value x_n in the sequence, compute b_n = int(x_n × 1024), where b_n is the 10-bit representation of the chaotic sequence value.
4. Concatenate the Bit Stream:
  • Concatenate the binary representation of each b_n to form a bit stream.
  • Ensure that each sequence is of length 1,000,000 bits. You might need to repeat or truncate the sequence accordingly.
5. Repeat for Multiple Sequences:
  • Repeat the above steps to generate multiple (e.g., 10) bit streams, each of length 1,000,000 bits, for testing.
Example in Python:
def logistic_map(x, r, n):
    sequence = []
    for i in range(n):
        x = r * x * (1 - x)
        sequence.append(x)
    return sequence

def chaotic_bit_stream(x0, r, n, bit_length):
    sequence = logistic_map(x0, r, n)
    bit_stream = ''
    for x in sequence:
        # Map to a 10-bit integer
        b = int(x * (2 ** bit_length))
        # Convert to binary and pad to 10 bits
        b_bin = format(b, f'0{bit_length}b')
        bit_stream += b_bin
        # Stop if we reach the desired length
        if len(bit_stream) >= n:
            break
    return bit_stream[:n]

# Parameters
x0 = 0.7         # Initial condition
r = 3.7          # Control parameter
n = 1000000      # Length of each bit stream (in bits)
bit_length = 10  # Length of the bit representation per value

# Generate the bit stream
bit_stream = chaotic_bit_stream(x0, r, n, bit_length)
print(f"Bit Stream Length: {len(bit_stream)}")

# Repeat for multiple streams
bit_streams = [chaotic_bit_stream(x0 + i * 0.01, r, n, bit_length) for i in range(10)]
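To actually check randomness, streams like these are usually fed to statistical tests such as the NIST SP 800-22 suite; here is a minimal sketch of its frequency (monobit) test, reusing the bit_stream generated above:
import math

def monobit_test(bits):
    # NIST SP 800-22 frequency (monobit) test; p >= 0.01 is consistent with randomness
    n = len(bits)
    s = sum(1 if b == '1' else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

print(f"Monobit p-value: {monobit_test(bit_stream):.4f}")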
  • asked a question related to Images
Question
1 answer
Can anyone help me, please?
Relevant answer
Answer
You can measure size etc. with ImageJ, yes. Here is the procedure: https://youtu.be/zrGm-gBhsUQ
  • asked a question related to Images
Question
6 answers
1. Why is synthetic CT image generation important (given the low radiation dose in new devices)? 2. What are the pros of using the UNet model compared to other deep learning models (given the need for data augmentation to reduce the error)? 3. MR and CT images are complementary; therefore, one cannot expect to produce a flawless image, and this is not medically safe.
Relevant answer
Answer
Vahid Abbasian Thank you for helping me
  • asked a question related to Images
Question
6 answers
My question pertains to DAB staining in the cytoplasm of human oral squamous cell carcinoma tissue.
When quantifying the epithelial cancer cells, do we have to crop out the stromal tissue? Stromal positivity is also seen in the sections.
Is there a separate plugin in ImageJ, or any other open-source software, to quantify stromal positivity?
Kindly share your valuable suggestions.
Relevant answer
Answer
Dear Petro V. Kuzyk, please find attached the video showing how I cropped out the stroma, inverted the background color from black to white, and then quantified the IHC DAB staining.
Could you please send me a demo video of the approaches you suggested above, particularly the ROI manager method?
  • asked a question related to Images
Question
7 answers
After immunohistochemistry of previously fixed in PFA and EtOH and then frozen 20 μm sections of zebrafish brain, DAPI staining is very weak (right) compared to the same sections stained without preceding IHC (left). Please also take into account that the exposure time for the right image is ten times longer than for the left one – so in reality, it is much dimmer than in the picture. The concentration of stock DAPI (Invitrogen™ D3571) dilactate solution is 5 mg/mL DAPI (10.9 mM), and the final working solution is 300 nM (i.e., 1:36,333.33 from the stock solution, as per the manufacturer's recommendations). Incubation time is 5 minutes, followed by 3 washes in PBS for 1 minute each and mounting with VECTASHIELD® Vibrance™ Antifade Mounting Medium. Both stock solution aliquots and the working solution are freshly prepared. No antigen retrieval was used for this IHC, and for blocking, 0.1% Triton + 5% NGS in PBS was used. Please recommend if you have any ideas on how to improve staining. Thank you!
Relevant answer
Answer
Update: DAPI concentrations of 1:3K and 1:6K for 15 minutes were much less effective compared to 1:1K, so I decided to stick with the 1:1K concentration. Adding antibodies did not interfere with the perfect staining achieved at 1:1K. Thanks again to everyone who provided feedback!
  • asked a question related to Images
Question
2 answers
Hello,
About contact maps in molecular dynamics simulations.
I obtained the contact map of the protein-ligand complex using this command:
gmx distance -f md.xtc -s md.tpr -n index.ndx -oall contact.xvg
The output is an .xvg file. I don't know how to convert this file into a contact map image.
I need this analysis to revise the article.
Thank you
Relevant answer
Answer
Vahid Abbasian Thank you for your answer.
I recently made the contact map diagram using one of the charting tools in Discovery Studio.
Regards
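For readers who prefer a scripted route, here is a minimal Python sketch that turns the gmx distance .xvg output into a contact-map image with matplotlib; the 0.5 nm cutoff and the n_res x n_lig grid shape are assumptions that depend entirely on how index.ndx was built:
import numpy as np
import matplotlib.pyplot as plt

# Load the gmx distance output, skipping xvg header lines (# and @)
data = np.loadtxt('contact.xvg', comments=('#', '@'))
distances = data[:, 1:]                  # first column is time, one column per pair

# Fraction of frames in which each pair is within a 0.5 nm cutoff
contact_freq = (distances < 0.5).mean(axis=0)

# If the index groups enumerate an n_res x n_lig grid of pairs, fold the
# frequencies back into a 2D map (n_res and n_lig are placeholders)
n_res, n_lig = 50, 4
cmap = contact_freq.reshape(n_res, n_lig)
plt.imshow(cmap, aspect='auto', origin='lower')
plt.colorbar(label='contact frequency')
plt.xlabel('ligand atom'); plt.ylabel('residue')
plt.savefig('contact_map.png', dpi=300)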
  • asked a question related to Images
Question
3 answers
Suppose, for example, I have two domains of dataset X and Y, and obviously, they are not paired. I want to train it for an image translation task to get a model to infer an image and translate it from domain X to Y.
I know GANs are heavily used for such tasks, but can other core methods or algorithms do that?
Relevant answer
Answer
I searched for your question and got this result:
Yes, there are several methods for unpaired image-to-image translation beyond Generative Adversarial Networks (GANs). While GANs are popular for this task, other approaches can be effective as well. Here are some alternatives:
CycleGAN: While it is based on GAN architecture, the CycleGAN specifically addresses unpaired image-to-image translation by introducing cycle consistency loss, ensuring that images can be transformed back and forth between domains.
Attention Mechanisms: Some models utilize attention mechanisms to focus on relevant parts of the image during translation. These can be combined with traditional neural networks rather than GANs.
Variational Autoencoders (VAEs): VAEs can be applied in unpaired scenarios by learning a latent space that can represent images from both domains. Variational methods can facilitate smooth transitions between the domains.
Neural Style Transfer: Techniques based on neural style transfer can transform images by applying the style of one image to the content of another. While not a direct method for image-to-image translation, it can be adapted for unpaired tasks.
Domain Adaptation Techniques: Some approaches, such as Domain Adversarial Neural Networks (DANN), use adversarial training to align the features of the source and target domains, allowing for translation without paired examples.
Self-Supervised Learning: Self-supervised methods can leverage unlabeled data from both domains to learn suitable mappings without requiring pairs.
Feature Matching Loss: Instead of adversarial losses, this approach compares the feature representations of images in both domains, guiding the translation process based on the closeness of the learned features.
Graph-Based Approaches: Some methods utilize graph-based techniques where images are treated as nodes in a graph, allowing for relationships and translations to be learned more flexibly.
Diffusion Models: Recent advances in diffusion models have shown promise in generating images and can be explored for unpaired image translation.
Each of these methods has its strengths and trade-offs, and the best choice often depends on the specific requirements of the task at hand and the nature of the datasets involved.
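To make the CycleGAN item above concrete, here is a minimal PyTorch sketch of the cycle-consistency loss that ties two unpaired domains together; the networks, image sizes, and weight lam are placeholders (identity mappings are used here only to show the shapes involved):
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    # x translated to domain Y and back should reproduce x (and vice versa)
    loss_x = l1(F(G(x)), x)
    loss_y = l1(G(F(y)), y)
    return lam * (loss_x + loss_y)

G = F = nn.Identity()             # stand-ins for the two translator networks
x = torch.randn(1, 3, 128, 128)   # sample from domain X
y = torch.randn(1, 3, 128, 128)   # sample from domain Y
print(cycle_consistency_loss(G, F, x, y))  # 0 for identity mappings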
  • asked a question related to Images
Question
2 answers
After immunohistochemistry of previously fixed in PFA and EtOH and then frozen 20 μm sections of zebrafish brain, DAPI staining is very weak (right) compared to the same sections stained without preceding IHC (left). Please also take into account that the exposure time for the right image is ten times longer than for the left one – so in reality, it is much dimmer than in the picture. The concentration of stock DAPI (Invitrogen™ D3571) dilactate solution is 5 mg/mL DAPI (10.9 mM), and the final working solution is 300 nM (i.e., 1:36,333.33 from the stock solution, as per the manufacturer's recommendations). Incubation time is 5 minutes, followed by 3 washes in PBS for 1 minute each and mounting with VECTASHIELD® Vibrance™ Antifade Mounting Medium. Both stock solution aliquots and the working solution are freshly prepared. No antigen retrieval was used for this IHC, and for blocking, 0.1% Triton + 5% NGS in PBS was used. Please recommend if you have any ideas on how to improve staining. Thank you!
Relevant answer
Answer
Hello Howard,
Thanks a lot for your reply! I used VECTASHIELD® Vibrance™ Antifade Mounting Medium in both cases, so it probably isn't affecting the staining. I agree that after IHC, it looks more like autofluorescence. More importantly, your advice to increase the concentration of DAPI and the duration of incubation worked perfectly. I tried a 1:1000 dilution for 15 minutes and need to titrate it down to see if I can use a lower concentration. The full set of additional checks I performed is described in this discussion (https://www.researchgate.net/post/Weak_DAPI_staining_after_immunohistochemistry#view=66b38572b7e618ee9e041db8/1), and here I’m attaching the final images as of now. Thank you so much for your help, I really appreciate it!
  • asked a question related to Images
Question
1 answer
Can the FE-SEM TESCAN MIRA4 reach a resolution of 1 or 2 nm? Is it possible?
Has anyone tried it already? If so, may I have the procedure for getting a 1 or 2 nm image?
Thank you.
Relevant answer
Answer
1) The resolution of an SEM is specified by the vendor for certain conditions (e.g. 30 kV) on a test sample (e.g. Au on carbon). If you use an SEM, the resolution was demonstrated during installation and after maintenance by the service engineer. For lower beam voltages the resolution values may be worse (a few nm). You should find the values in the technical data sheet. It's challenging to reach the specified values, but it's good training (a standard sample is required).
2) You can't expect to get the best resolution values on every sample! Limiting factors are electron scattering, roughness, composition, layer thickness, detector, and so on.
  • asked a question related to Images
Question
2 answers
I have collected a signal with a vector network analyzer by placing a food product between a pair of antennas, where one antenna acts as the transmitter and the other as the receiver. I want to convert the collected signal into a tomographic image of the food product placed between the antennas. I don't understand how to convert an S21 signal (in real and imaginary format) into a tomographic image. Is it possible to reconstruct the tomographic image of the product from a single signal? I want to reconstruct the tomographic image of the food product placed between the 2 antennas in MATLAB using a reconstruction algorithm. Please help me with my project.
Relevant answer
Answer
Malcolm White Could you please show me a detailed example? I have attached one sample of my datasets here.
My experiment was performed to collect data from 0 to 360 degrees in 10-degree increments.
  • asked a question related to Images
Question
5 answers
I have collected a signal with a vector network analyzer by placing a food product between a pair of antennas, where one antenna acts as the transmitter and the other as the receiver. I want to convert the collected signal into a tomographic image of the food product placed between the antennas. I don't understand how to convert an S21 signal (in real and imaginary format) into a tomographic image. Is it possible to reconstruct the tomographic image of the product from a single signal? I want to reconstruct the tomographic image of the food product placed between the 2 antennas in MATLAB using a reconstruction algorithm. Please help me with my project.
Relevant answer
Answer
Hussein A. Jasim Thank you so much for your recommendation. I am stuck on how to use my collected data to reconstruct the image, because the VNA collected data from 0.4 GHz to 4 GHz; on top of that, I rotated 360 degrees around the object, so I have about 37 datasets in total. I have attached one of my sample datasets below. Please help me.
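As a heavily simplified starting point for this kind of data (not a validated microwave-tomography method), one common first attempt is to IFFT each frequency sweep into a range profile, stack the 37 angles into a sinogram-like array, and apply filtered back-projection; all array sizes below are placeholders, and real microwave tomography usually needs diffraction-aware inverse solvers:
import numpy as np
from skimage.transform import iradon

# s21[angle, freq] holds complex S21 sweeps for 37 angles (0..360 in 10-degree steps)
angles = np.arange(0, 370, 10)                                   # 37 angles
s21 = np.random.randn(37, 801) + 1j * np.random.randn(37, 801)   # placeholder data

profiles = np.abs(np.fft.ifft(s21, axis=1))  # time-domain (range) profile per angle
sinogram = profiles[:, :200].T               # shape: (range bins, angles)

# Treat each range profile as a parallel-ray projection (a strong simplification)
image = iradon(sinogram, theta=angles, filter_name='ramp')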
  • asked a question related to Images
Question
4 answers
Hi everyone
I need a file with a dirty and clean potato image
Relevant answer
Thank you, dear Jokin Ezenarro.
The images are intended for training purposes, not for research.
  • asked a question related to Images
Question
4 answers
I fabricated Ti3C2Tx using concentrated HF (40%), and I have plotted the XRD pattern in the attached image below. Please let me know whether I obtained it or not.
Relevant answer
Answer
Dear Wedad,
you can find multilayered Ti3C2Tx MXenes XRD patterns in these papers:
Your XRD pattern has some similarities to MXenes; however, the MAX phase quality and pretreatment can significantly influence your synthesis products. I suggest using Raman spectroscopy and scanning electron microscopy to investigate the synthesis products and ensure MXenes' quality.
  • asked a question related to Images
Question
6 answers
I ask the question in the form of an image entitled A Modernist Pictorial Tableau, which anticipates the future development of image generation software such as Midjourney, as it challenges the current progress not just of AI but also any attempt by cognitive neuroscientists to make sense of art and aesthetics via computing.
Relevant answer
Answer
The ability for AI to not only generate visuals but also impact perception offers a huge leap in both technology and our understanding of cognition. This might transform industries like art, neuroscience, and even human-computer interaction, pushing the boundaries of how humans create and perceive visual information.
  • asked a question related to Images
Question
3 answers
Sampling (image)
Relevant answer
Kamel Ghanem Ghalem If you mean reducing the size of the image without using the imresize command:
I = imread('lena.png'); % Replace 'lena.png' with the path to your image file
% Convert to grayscale if it's a color image
if size(I, 3) == 3
    I = rgb2gray(I);
end
% Downsample the image by a factor of 2 in each dimension
samp = I(1:2:end, 1:2:end);
% Display the original and downsampled images
subplot(2, 1, 1);
imshow(I);
title('Original Image');
subplot(2, 1, 2);
imshow(samp);
title('Downsampled Image');
  • asked a question related to Images
Question
4 answers
I have been running native PAGE with a FAM-labeled DNA substrate (fluorescent samples) for protein-DNA binding reactions. Binding is there, but towards the end of the lane I am losing signal drastically.
I have checked my protein for nuclease contamination, and it is nuclease-free.
I am unable to figure it out; at times the signals are also lost in the middle lanes.
I am paying extra attention to pipetting, so I know that is not the problem.
I have attached the image.
Relevant answer
Answer
Fatemeh Yaghoobiadeh No, I have never added protease inhibitor.
  • asked a question related to Images
Question
1 answer
There is an uploaded image of a correlation matrix. Please help: should this type of dataset be treated as labeled or unlabeled?
Relevant answer
Answer
Previously, we described how to perform a correlation test between two variables. In this article, you'll learn how to compute a correlation matrix, which is used to investigate the dependence between multiple variables at the same time. The result is a table containing the correlation coefficients between each variable and the others.
Regards,
Shafagat
  • asked a question related to Images
Question
3 answers
The above are manually labeled extrinsic matrices based on the first image.
It can be seen that the projection error at the edges is large, while the error at the center is small.
What could be the reason, and how can I solve it?
Thanks.
Relevant answer
Answer
Differing projections of the sensor optics. You can correct this with a positional warp, by matching associated points and applying a polynomial offset in X and Y. OpenCV also contains homography methods which can align different sensors.
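As a minimal sketch of the OpenCV route (Python; the file names are placeholders, and ORB with RANSAC is one common choice rather than the only one):
import cv2
import numpy as np
# Load the two sensor images to align; file names are placeholders
ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)
mov = cv2.imread('moving.png', cv2.IMREAD_GRAYSCALE)
# Detect and describe keypoints in both images with ORB
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(mov, None)
# Match descriptors and keep the strongest matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
# Estimate a homography with RANSAC from the matched point pairs
src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
# Warp the moving image into the reference frame
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
If large residuals remain at the image edges after a homography, that points to lens distortion rather than projection alone, and a calibrated distortion correction (e.g. cv2.undistort) or the polynomial warp mentioned above is the better fit.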
  • asked a question related to Images
Question
3 answers
Anomaly detection in scanned image data set
Relevant answer
Answer
The choice of approach depends entirely on the specific dataset, the available resources, the research objectives, and, most importantly, the desired level of accuracy. You can experiment with different methods to find the one that best suits your needs.
There are several approaches you can explore to detect anomalies in image datasets. You can try machine and deep learning algorithms such as Autoencoders, Isolation Forests, Support Vector Machines, Convolutional Neural Networks, and Contrastive Learning. You can also explore hybrid approaches that combine machine learning, computer vision, and deep learning techniques to extract relevant features from images for anomaly detection.
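As one minimal sketch of the classical route (Python with scikit-learn; random data stands in for real image features):
import numpy as np
from sklearn.ensemble import IsolationForest
# X holds one row per scanned image, e.g. flattened pixels or extracted
# features; random data is used here purely as a placeholder
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
# Fit an Isolation Forest; 'contamination' is the expected anomaly fraction
clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
# predict() returns +1 for inliers and -1 for anomalies
labels = clf.predict(X)
print('anomalies found:', int((labels == -1).sum()))
For raw scans, features taken from a pretrained CNN usually separate anomalies far better than flattened pixels do.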
  • asked a question related to Images
Question
1 answer
Electronic reference
Manuel Gárate, « There will be blood (Petróleo Sangriento) », Nuevo Mundo Mundos Nuevos [Online], Images en mouvement, published online 28 February 2008, accessed 21 July 2024. URL: http://journals.openedition.org/nuevomundo/25712 ; DOI: https://doi.org/10.4000/nuevomundo.25712
Relevant answer
Answer
What do you think?
  • asked a question related to Images
Question
1 answer
Hi. I have brightfield images of stem cells, and I need to count the total number of cells per image. Some cells are individual and some are in colonies. Using the average cross-sectional area of an individual cell, how can I calculate the number of cells in a colony, and then the total number of cells in the image?
I have tried the watershed feature, but unfortunately it does not give accurate results.
Thank you in advance!
Relevant answer
Answer
If you include a representative image, this might be an easier question to answer. Based on your description, I would hazard to guess that your cells are too dense and possibly overlapping for thresholding and watershedding to work well. There are a couple of other things that you can try.
If you are using multiple colors, you can try isolating the channels and then trying to threshold and watershed. If your cells are dense then you should take overlap into account. You can look for this by using the Image Calculator “AND” function.
Depending on their morphology, the Binary Feature Extractor may also work.
There are more options out there if these don't work, but these may get you started.
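Following the area-based idea in the question, here is a minimal Python sketch with scikit-image (the file name, the debris cutoff, and the average single-cell area are placeholders to calibrate on your own images):
import numpy as np
from skimage import io, filters, measure
# Load a grayscale brightfield image; the file name is a placeholder
img = io.imread('cells.png', as_gray=True)
# Threshold: brightfield cells are assumed darker than the background
mask = img < filters.threshold_otsu(img)
# Label connected regions (single cells and colonies alike)
labels = measure.label(mask)
# Average cross-sectional area of one cell in pixels, measured beforehand
# from isolated cells (placeholder value)
avg_cell_area = 350.0
total = 0
for region in measure.regionprops(labels):
    if region.area < 0.5 * avg_cell_area:
        continue  # skip debris much smaller than a single cell
    # Estimate cells per region from its area; colonies give counts > 1
    total += max(1, round(region.area / avg_cell_area))
print('estimated cell count:', total)
This sidesteps watershed entirely; the trade-off is that counts inside dense colonies are only as accurate as the average-area estimate, so treat the result as an estimate.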
Hope it works!
- Melissa
  • asked a question related to Images
Question
3 answers
I'm a PhD student researching seismic amplification in a NZ sedimentary basin, and I hope somebody can help me with my interpretation of the 2D seismic lines that traverse the basin. As per the attached image, I am somewhat baffled by the amount of confusion in the seismic signals below the surface (shown within red rectangles), which appear where I would expect a regular stratigraphic sequence. I have attempted to delineate fault lines, but I don't know how successful I have been. I should point out that the mid formation (250-1250 m) is consolidated marine sediment; above this lie mostly sediments from a volcanic source (silts, sands and pebbles, mostly unconsolidated).
While the region has historically been understood as passive in terms of seismic activity, this research may show that this is not the case.
One suggestion for the signal complexity is that the surface region has been so fractured by seismic activity that the resulting unsorted and unstratified volcanic sediments impair the clarity of the signal penetrating below.
I hope that someone with experience may have an answer.
Relevant answer
Answer
Dave,
Saw your 2D line by chance; nicely complex!
I concur with Jim that post-stack cleaning up of the lines would be helpful. Different interpretation platforms provide different tools. I would recommend filtering in the FK domain. Are the lines migrated?
A comment: the shallow overburden does not show strong reflectivity, supporting an acoustically relatively homogeneous interval. Note that you have limited fold at shallow times, i.e. the stack there comes from a limited number of traces.
The seismic contains reflective and non-reflective sequences; if possible, have an acoustic impedance log and a synthetic made from a well. As a rule of thumb, reflective packages relate to sequences with contrasted lithologies, i.e. carbonates interbedded with shales, or volcanics with clastics. Transparent units are likely to be more homogeneous.
Keep fault geometries simple.
Try to interpret some horizons across the sections and flatten/unflatten the section along those horizons; it may help you figure out what the seismic contains.
Shallow heterogeneities may lead to a cascade of poor imaging, as do fault zones. Faults are also challenging to image on 2D seismic lines that run oblique to them.
Good luck with your project.
Thomas.
  • asked a question related to Images
Question
3 answers
I want to check my protein expression, but the sample (target protein) on my SDS-PAGE shows little difference compared to the control. I measured the band density with ImageJ, but the band that appears darker shows a lower mean value than the control.
Is there better software, or am I using it wrong?
Steps that I follow:
1. Upload the image and convert it to greyscale.
2. Adjust brightness and contrast if needed.
3. Invert the image if the bands are too dark; the bands will turn white and the background black.
4. Make rectangular selections, then analyze and measure each band one by one.
5. The higher the mean value, the higher the band density.
Relevant answer
Answer
I think the problem could be related to the fact that you invert the colours in step 3. Try repeating the analysis without this step.
In the following link you can find an example of my analysis approach.
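For illustration, a minimal Python sketch of ROI densitometry (the file name and ROI coordinates are placeholders; note that the integrated density, i.e. the background-subtracted sum over the band, is usually more meaningful than the raw mean when band sizes differ):
import numpy as np
from skimage import io
# Load the gel image as grayscale; the file name is a placeholder
gel = io.imread('gel.png', as_gray=True)
# Rectangular ROIs as (row_start, row_stop, col_start, col_stop); placeholders
band_roi = (100, 140, 50, 120)
bg_roi = (150, 190, 50, 120)  # an empty lane region used as background
def integrated_density(img, roi, background):
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1]
    # Dark bands on a light background: signal = background - pixel value
    signal = np.clip(background - patch, 0, None)
    return signal.sum()
bg = gel[bg_roi[0]:bg_roi[1], bg_roi[2]:bg_roi[3]].mean()
print('band integrated density:', integrated_density(gel, band_roi, bg))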
best
Manuele
  • asked a question related to Images
Question
1 answer
Dear Researchers
I'm facing problems during Rietveld refinement.
As you can see in the image, some peak positions are not matching, although all the others match perfectly. The intensities of the two main peaks also do not match.
What are the possible reasons, and how can I overcome this?
Relevant answer
Answer
The main phase matches, but there are additional minor phases. You may check the XRD pattern of another phase with a similar composition.
  • asked a question related to Images
Question
4 answers
I did MICOM and obtained partial invariance for both my demographic variables, including teaching experience.
I split teaching experience into three groups to run a multigroup analysis.
However, I am not able to confirm or reject my hypothesis that teaching experience moderates the relationships of all the constructs with the endogenous variable, because significance differs within the groups.
The questions are:
1. If there is significance for the third teaching-experience group in SI and none for the others, do I conclude that the moderation is generally significant or not significant? (Refer to the image.)
2. Should I split the teaching-experience data into two groups instead?
Relevant answer
Answer
  • asked a question related to Images
Question
1 answer
I have done FTIR of human hemoglobin in healthy subjects (blue) and diseased subjects (red and orange) (image attached). Please guide me on whether I have done this correctly. How can I interpret the alpha-helix and beta-sheet content from this spectrum?
Relevant answer
Answer
From my limited knowledge of hemoglobin spectra in the visible range, the peaks of every sample correspond nearly exactly; the differences seen appear to be due to differences in concentration only. Furthermore, you have not mentioned the abnormal condition of each individual. Please give some more details of the protocol applied and the working hypothesis of your study, with relevant references. I have no personal experience of this research methodology.
  • asked a question related to Images
Question
4 answers
The attached image is at 20X. The media were cultured to check for the presence of bacteria and yeast, and both tests were negative.
Relevant answer
Answer
They look like erythrocytes.
  • asked a question related to Images
Question
1 answer
Are there any upcoming calls for papers in journals (SCI/SCIE indexed) related to image processing or computer vision?
I am looking for special issues in my area whose submission deadlines are within the next few months.
Relevant answer
Answer
  • asked a question related to Images
Question
2 answers
kindly respond
Relevant answer
Answer
Have you already solved the problem of cardiac interval extraction? Try to understand how this is done in open source software. For example
  • asked a question related to Images
Question
9 answers
Recently, I discovered that the dimensions of the SOM network turn out to set the number of clusters for data clustering, or the number of segments when used for image segmentation.
For example, if the dimension of the SOM is 7 x 7, then the number of clusters (segments) would be 49; if the dimension is 2 x 1, then the number of clusters (segments) would be 2.
1. Are there techniques for determining the dimension?
2. What should be the yardstick for picking the dimension?
3. If knowledge of the data is the yardstick for picking the dimension, is that not a version of K-means?
Relevant answer
Answer
Yes, there are techniques for determining or choosing the dimension of a Self-Organizing Map (SOM):
  1. Grid Search: Iteratively testing different grid dimensions (e.g., varying the number of rows and columns) and evaluating SOM performance metrics such as quantization error or topographic error.
  2. Data-Driven Approach: Using characteristics of the dataset such as the number of features or the complexity of the data to determine an appropriate SOM grid size.
  3. Rule of Thumb: Applying general guidelines based on the size of the dataset or domain knowledge to select a suitable SOM dimension.
  4. Visualization: Inspecting visualizations of the SOM results (e.g., U-matrix or component planes) for different grid sizes to assess the clarity and meaningfulness of the resulting map.
Choosing the right dimension ensures that the SOM effectively captures the underlying structure and patterns in the data without being overly complex or sparse.
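As a minimal sketch of option 1 (Python, assuming the third-party minisom package; the data and candidate grid sizes are placeholders):
import numpy as np
from minisom import MiniSom
# Stand-in data: 1000 samples with 8 features, scaled to [0, 1]
rng = np.random.default_rng(0)
data = rng.random((1000, 8))
# Grid search over candidate map dimensions, scored by quantization error
best = None
for rows, cols in [(2, 2), (3, 3), (5, 5), (7, 7)]:
    som = MiniSom(rows, cols, data.shape[1], sigma=1.0, learning_rate=0.5,
                  random_seed=0)
    som.train_random(data, 5000)
    qe = som.quantization_error(data)
    print(rows, 'x', cols, ': quantization error =', round(qe, 4))
    if best is None or qe < best[0]:
        best = (qe, rows, cols)
print('best grid:', best[1], 'x', best[2])
Quantization error tends to fall as the map grows, so balance it against topographic error, or use a rule of thumb (a commonly cited heuristic suggests roughly 5*sqrt(N) neurons for N samples), rather than simply picking the largest grid.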
  • asked a question related to Images
Question
5 answers
I am trying to refine my data in FullProf; my chi2 is 0.834E11, and every time I try to refine the data the plot deviates further from the actual data while the chi2 doesn't change. The first image shows the fit with the parameters found in the literature and the second the fit after refinement; in some cases the peaks from the refinement disappear.
Relevant answer
Answer
When you mention that "the plot deviates further from the actual data", clarify whether, for example, the maximum at 2θ = 5° is not displayed, is not included in the FullProf refinement, or both.
In principle, the limits of the diffraction pattern that are "refined" and "displayed" in the Rietveld refinement with FullProf (shown in the image) should be changed. These are two different settings; the first is the more important because it directly affects the refinement, while the second is only an aesthetic matter of visualising the fit.
In the first case, if the refinement does not take important regions of the pattern into account, this can be solved by changing the 2θ limits of the region included in the refinement.