Science topic

Proteomics - Science topic

The systematic study of the complete complement of proteins (PROTEOME) of organisms.
Questions related to Proteomics
  • asked a question related to Proteomics
Question
3 answers
Hello, I am a proteomics scientist, and I would like to ask for your advice regarding sample stability in a quantitative proteomics experiment.
I performed protein digestion for six samples intended for quantitative analysis. After the digestion step, I dried the samples under vacuum on Friday. Four of the six samples were completely dried, but the remaining two were not fully dried by the end of the day.
To preserve them, I stored the two partially dried samples at 4 °C over the weekend. On Monday, I resumed vacuum drying, and the two samples were completely dried.
My question is: Could this affect peptide stability or significantly impact quantification results in LC-MS analysis? If the impact is negligible, I would like to proceed with LC-MS as planned.
I appreciate your guidance.
Relevant answer
Answer
Brief storage of partially dried peptide samples at 4 °C over a weekend is unlikely to significantly affect peptide stability or LC-MS quantification. Proceeding with analysis as planned should be fine, but complete drying and colder storage are best for long-term stability.
  • asked a question related to Proteomics
Question
1 answer
I need to combine 2 matrices via the "Matching Rows by Name" tool (the settings I use are in the attached photo 2).
Because the "other matrix" has rows that the base matrix does not have, I use the Join style "Outer".
That works perfectly, except that it only adds the values of the text column that I matched, namely Protein.Group. The other values for protein names, genes and first.protein.description are then empty in the combined matrix (see photo), but only for the additional rows that the base matrix does not have.
I played around with all the settings but I can't find a solution for this. It either leaves those columns blank or adds them again, so that I end up with a bunch of duplicated columns next to each other.
Relevant answer
Answer
Hi, did you figure out the problem? I have also been having the same problem.
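In case it helps anyone hitting the same wall: one workaround is to do the outer join outside Perseus, for example in pandas, and then re-import the combined table. This is only a minimal sketch; the file names are placeholders, and the annotation column names (Protein.Names, Genes, First.Protein.Description) are assumed to follow the DIA-NN-style columns mentioned in the question.

```python
import pandas as pd

# Placeholder file names: tab-separated exports of the base and "other" matrices.
base = pd.read_csv("base_matrix.txt", sep="\t")
other = pd.read_csv("other_matrix.txt", sep="\t")

# Outer join on the key column keeps rows that exist in only one of the matrices.
combined = base.merge(other, on="Protein.Group", how="outer", suffixes=("", "_other"))

# For rows that only exist in the "other" matrix, copy its annotation columns
# into the base annotation columns instead of leaving them empty.
for col in ["Protein.Names", "Genes", "First.Protein.Description"]:
    if col + "_other" in combined.columns:
        combined[col] = combined[col].fillna(combined[col + "_other"])
        combined = combined.drop(columns=col + "_other")

combined.to_csv("combined_matrix.txt", sep="\t", index=False)
```

The re-imported table can then be used in Perseus as the combined matrix.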
  • asked a question related to Proteomics
Question
3 answers
We have isolated extracellular vesicles (EVs) from plasma samples and are preparing them for LC-MS/MS-based proteomics at a core facility. However, I am uncertain whether EV lysis is necessary before trypsin digestion. The literature is inconsistent: some studies report lysis with SDS, while others do not mention lysis at all. What is the general consensus on this? If lysis is required, what detergent would be suitable without interfering with mass spectrometry?
Relevant answer
Answer
Detergent-based lysis is certainly necessary. Choose the SDS concentration based on the MS tolerance level; ask the company's technical support what the maximum amount of SDS is that you can use. Alternatively, you can rupture your EVs by sonication.
Good luck.
  • asked a question related to Proteomics
Question
6 answers
Hello everyone,
After conducting a proteomics experiment, I identified some targets I want to validate. So, I embarked on a challenging journey to find reliable ELISA kits—and it turned out to be quite frustrating. How can I choose trustworthy companies that sell high-quality, properly validated ELISA kits?
The most common companies that show up in my searches are:
  • Antibodies-online
  • Abbexa
  • Biomatik
  • MyBioSource
  • Novus Biologicals
  • Proteintech
  • Biorbyt
I've noticed that these companies often provide identical information regarding validation and kit specifications (e.g., the same detection range), which strongly suggests they are reselling kits rather than manufacturing them. Unfortunately, they usually don't disclose the original manufacturer.
Whenever Thermo Fisher Scientific or Abcam offers a kit for my targets, it feels like a victory.
Can anyone help me? :)
Relevant answer
Answer
I would recommend R&D Systems and BioLegend.
  • asked a question related to Proteomics
Question
4 answers
I often perform post-acquisition recalibration of mass spectra to increase mass accuracy for untargeted lipidomics or top-down proteomics studies. The procedure has worked for me with data from Waters, Sciex, and Bruker mass spectrometers. I have recently been working on data from a Thermo Orbitrap mass spectrometer, but I could not find a way to do post-acquisition mass calibration. Does anyone know how to do it? Thank you!
Relevant answer
Answer
Abdelhak Maghchiche I tried EASY-IC for real-time internal calibration. It worked great for positive ion mode, but mass error was still significant for negative ion mode. Have you had any issues with negative ion mode internal calibration? Thanks!
  • asked a question related to Proteomics
Question
2 answers
I am looking for pipette tips for proteomics work, with low protein retention and no release of plastic residues, to analyze samples on a very high-resolution mass spectrometer, since the Eppendorf tips I used have been discontinued until 2026, according to the sales representative.
Relevant answer
Answer
Corning® DeckWorks™ low binding tips
  • asked a question related to Proteomics
Question
3 answers
Hi all!
I've run both LFQ and TMT 18-plex proteomics on the same protein extracts.
My experiment consists of two study conditions, and 8 biological replicates.
After digesting my protein extractions, I ran half of the peptide preparation using DDA with four technical replicates; the other half I TMT-tagged (18-plex, two reference channels, one mixture), fractionated, and ran using an SPS-MS3 method on the Fusion Lumos.
I've done the searches in PD2.4, and summarised the results with `MSstats` and `MSstatsTMT`.
I'm currently working on how to deal with two different datasets of the same experiment, the original plan was to use the LFQ dataset for the improved coverage, and the TMT dataset for improved quantification.
One thing I've noticed is that while the TMT dataset has significantly better adjusted p-values, the fold changes are less pronounced than in the LFQ dataset, meaning that quite a few proteins fail the biological significance thresholds. See the attached volcano plots (vertical dotted lines represent 0.58 log2 FC, horizontal 0.05 adjusted p-value). The scales are not consistent between the plots, sorry!
I'm aware that MS2 TMT methods have an issue with reporter ion compression blunting fold change values, and was hoping that it would be less of an issue with my MS3 method. Is there a correction for this, or does this reflect a lack of dramatic fold-change in my biology?
Any other tips for integrating LFQ and TMT data would also be appreciated!
Sam
Relevant answer
Answer
Thank you for your answer Nikhil Dev Narendradev ,
I had a suspicion that this would be the case. I have looked at the correlation of TMT and LFQ fold changes, and it's not great unfortunately (plot attached). However, proteins with increased abundance in one technique do tend to show similar changes in the other technique.
I have decided to focus my analysis on the 25 proteins showing significance in both methods - this is a manageable list! I hadn't thought to focus on proteins following a linear relationship, however; that's an interesting idea.
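For anyone wanting to reproduce this kind of comparison, here is a minimal sketch of merging the two differential-expression tables and pulling out the overlap. It assumes both tables were exported with the usual MSstats/MSstatsTMT groupComparison columns (Protein, log2FC, adj.pvalue); the file names and thresholds are placeholders.

```python
import pandas as pd
from scipy.stats import pearsonr

# Assumed exports of the MSstats / MSstatsTMT groupComparison results.
lfq = pd.read_csv("lfq_results.csv")   # columns: Protein, log2FC, adj.pvalue, ...
tmt = pd.read_csv("tmt_results.csv")

merged = lfq.merge(tmt, on="Protein", suffixes=("_lfq", "_tmt")).dropna(
    subset=["log2FC_lfq", "log2FC_tmt"])

# Correlation of fold changes between the two quantification strategies.
r, p = pearsonr(merged["log2FC_lfq"], merged["log2FC_tmt"])
print(f"Pearson r = {r:.2f} (p = {p:.2g})")

# Proteins passing both the statistical and biological thresholds in BOTH datasets.
hits = merged[
    (merged["adj.pvalue_lfq"] < 0.05) & (merged["log2FC_lfq"].abs() > 0.58) &
    (merged["adj.pvalue_tmt"] < 0.05) & (merged["log2FC_tmt"].abs() > 0.58)
]
print(hits["Protein"].tolist())
```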
  • asked a question related to Proteomics
Question
2 answers
I am working on proteomics. Due to the non-availability of nano-LC-MS, I have conducted an ESI-MS analysis of a test sample. I have identified the proteins using XTandem. I am planning to quantify the relative abundance of peptides/proteins on the basis of m/z values. It would be a great favor if you could point me to a reference for this.
Best Regards
Relevant answer
Answer
Hi Tehzeeb,
Using XTandem, are you able to match any IDs in the test sample?
The only quantifiable entity in your results is the intensity of a particular species at a particular m/z.
You can use Xcalibur, or the analysis software for your mass spectrometer, to integrate the area under the curve of a particular species at a particular m/z in your mixture.
Good luck,
Hediye
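To make the area-under-the-curve idea concrete, below is a minimal sketch of extracting and integrating an extracted ion chromatogram (XIC) programmatically with the pyteomics library, as an alternative to doing it in the vendor software. It assumes the raw data have first been converted to mzML; the target m/z, tolerance, and file name are placeholders, not values from the question.

```python
import numpy as np
from pyteomics import mzml  # pip install pyteomics

target_mz = 785.84   # placeholder peptide ion m/z
tol = 0.01           # +/- m/z tolerance window

times, xic = [], []
for spec in mzml.read("sample.mzML"):          # assumed mzML conversion of the raw data
    if spec.get("ms level") != 1:              # use MS1 scans only
        continue
    mz = spec["m/z array"]
    inten = spec["intensity array"]
    sel = (mz >= target_mz - tol) & (mz <= target_mz + tol)
    times.append(spec["scanList"]["scan"][0]["scan start time"])
    xic.append(inten[sel].sum())

times, xic = np.asarray(times, dtype=float), np.asarray(xic)
# Trapezoidal area under the XIC as a relative-abundance proxy for that species.
auc = np.sum((xic[1:] + xic[:-1]) / 2 * np.diff(times))
print("XIC area:", auc)
```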
  • asked a question related to Proteomics
Question
2 answers
Their roles and examples.
Relevant answer
Answer
Genomics can improve diagnostic accuracy, predict which drugs are likely to be effective in patients, and contribute to the monitoring, treatment and control of infectious disease in individuals and in populations. One may explore the genomic structure of infectious agents; the implications of the acquisition or loss of nucleotides, genes and plasmids for pathogenicity; how sequencing of the genome of infective organisms can be used for diagnosis, sub-classification and strain identification; and the sensitivity of a pathogen to drug treatment.
A few examples of genomics in the clinical diagnosis of infectious diseases:
A. The application of nucleic acid sequencing and sequence-dependent detection methods may be used for the diagnosis and management of viral infections such as the blood-borne viruses human immunodeficiency virus (HIV) and hepatitis C virus (HCV).
B. Integration of genomic technologies into routine antimicrobial resistance (AMR) surveillance in health-care facilities has the potential to generate rapid, actionable information for patient management and inform infection prevention and control measures. Detection of drug resistance in plasma-borne viruses using DNA sequencing has been a mainstay for the clinical management of HIV-infected patients. This method is reliable and fast compared to the alternative, gold standard, cell-based phenotyping in culture and is also used to manage other viral infections that are treated with antivirals including HCV, hepatitis B virus, influenza and herpesviruses.
Proteomic expression profiling provides biomarker identification by comparing the profile of protein expression between normal samples and those affected by disease. When protein expression changes in biological pathways during disease conditions, monitoring these altered proteins in tissue, blood, urine, or other biological samples can provide indicators of the disease.
Disease-specific biomarkers can be categorized into diagnostic biomarkers for early detection, prognostic biomarkers to predict disease recurrence, and treatment-predictive biomarkers. Predictive biomarkers classify patients into categories of responders and non-responders; this classification is also important in drug design applications. So these biomarkers, in general, can reflect how patients feel and survive.
A few examples of proteomics in the clinical diagnosis of infectious diseases:
A. Proteomic methods can be applied to the diagnosis of tuberculosis. They enable the identification of proteins secreted by clinical isolates in vitro. Among these, rRv0566c, rRv3874, and rRv3369 have shown potential as sero-diagnostic antigens, with sensitivities of 43%, 74%, and 60%, and specificities of 84%, 97%, and 96%, respectively. Therefore, kit-based serum screening tests can be performed using such groups of proteins.
B. Proteomics has also played a significant role in the analysis of serum proteins during SARS coronavirus outbreaks. Severe acute respiratory syndrome is a viral infectious disease that has caused a number of deaths, and a precise diagnostic approach is important for its treatment and control. MS-based proteomic techniques can be used to detect SARS-CoV-2 viral proteins. Serum proteomic studies of SARS patients have revealed possible protein markers, such as truncated forms of α1-antitrypsin (TF-α1-AT), that are consistently detected at higher concentrations in SARS patients than in healthy individuals. These markers may prove useful as therapeutic targets, vaccine targets, and diagnostic tools for SARS patients.
The article attached below will be helpful!
Best.
  • asked a question related to Proteomics
Question
3 answers
Dear colleagues,
I would like to seek your kind advice on click chemistry enrichment of azidohomoalanine (AHA)-tagged proteins.
From the literature, I learned that there are 2 main avenues: either (1) a copper-free, DBCO-agarose bead pull-down method, or (2) a Click-iT enrichment kit (e.g. Thermo C10416) based on copper catalysis and covalent capture on alkyne beads.
From your experience, which method would provide the greater enrichment efficiency without intensive optimization?
As a newbie, it may feel safer to consider using an enrichment kit (e.g. the Thermo Click-iT kit), but the manufacturer's protocol recommends an input of 5–20 mg of protein. Does anyone have experience with scaling it down (e.g. using 100-200 µg protein input)?
Any feedback would be much appreciated.
Thank you,
Kay
Relevant answer
Answer
Hi Kay, I'm having the same problem. I used the Click-iT Enrichment Kit (e.g., Thermo C10416), but the enrichment results show significant non-specific protein binding. Did you find a solution? Thank you.
  • asked a question related to Proteomics
Question
3 answers
I have some files from MS/MS analysis with the .D extension, but I don't know whether software like MaxQuant or FragPipe can analyse this kind of file. I couldn't find a way to analyse them.
Thank you in advance.
Relevant answer
Answer
Dear Enrique,
this file extension is specific for all Agilent platforms (HPLC, GC-MS, LC-MS, etc.). You will probably need OpenLab ChemStation or Agilent MassHunter to read and analyse the data contained in the files.
Best
Michael
  • asked a question related to Proteomics
Question
2 answers
I was just wondering as we can (quite easily?) isolate both RNA and proteins from the same sample, why can't I find much info about sequencing transcriptome and proteome "at the same time"?
It seems that single-cell multiomics is in trend now, but looking at transcripts and proteins from the same samples looks like a simplified multi-omics from my perspective. What are the limits to that? Even companies don't seem to provide such services. Why is that?
Relevant answer
Answer
Simultaneous RNA-Seq and proteomics are uncommon due to several technical challenges. First, RNA and proteins require different extraction protocols, and attempting to isolate both from the same sample can compromise yield and quality. RNA and proteins also differ in stability, turnover rates, and abundance, complicating simultaneous extraction. Proteomics is more complex, as proteins undergo post-translational modifications (PTMs) that require specialized mass spectrometry techniques, unlike RNA-Seq. Additionally, quantifying transcripts and proteins involves different methods, making it difficult to correlate their levels. Limited sample availability can further reduce material for both analyses, and the lack of standardized protocols for integrating RNA-Seq and proteomics data adds to the difficulty of adopting this approach.
  • asked a question related to Proteomics
Question
2 answers
Proteomics
Relevant answer
Answer
iProX: a proteomics database in China
  • asked a question related to Proteomics
Question
3 answers
Dear all,
After protein extraction with a RIPA buffer (50 mM Tris pH 7.5, 150 mM NaCl, 1% NP-40, 1% Na-deoxycholate, 0.1% SDS, 1 mM EDTA + PIC), I wanted to quantify the yield by Bradford assay. The RIPA I used was transparent and no precipitates were visible.
When I added the RIPA to the Bradford reagent (for the blank), a strange blue precipitate formed in the tube; its color and appearance make me think it is not protein.
Do you know what could have precipitated? I have used this RIPA once already and didn't have this problem.
Do you think I could still use the extracted proteins for mass spectrometry?
Relevant answer
Answer
Hello Giulia,
I too had thought the same. Maybe you are right. The shelf life of RIPA lysis buffer is 1 month when stored at 4 deg C.
Best.
  • asked a question related to Proteomics
Question
7 answers
Hello,
So, I am analyzing serum proteomics data with MS from autism mouse models and trying to compare that to human serum data. So it's:
differentially expressed proteins in human serum vs. differentially expressed proteins in mouse model serum.
I already have the data; the experiment is done and there is no time to redo anything. Is there a way, or a tool, to translate the mouse protein data into human data, taking the analogous (orthologous) proteins into account? I'm working with UniProt accession numbers...
If there is a tool where you can drop in the UniProt accession numbers and it converts each one into its human counterpart protein, or a paper that describes such a method, it would be of great help! Almost out of my depth here...
Thanks,
Andrew
Relevant answer
Answer
Isoforms most likely. You also see it when you go from protein to gene or vice versa. You also have the complication of different species.
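One practical route is to map the mouse accessions to gene symbols and then match those to the human data by symbol, which is only a rough proxy for one-to-one orthologs (a dedicated ortholog resource such as Ensembl BioMart or HGNC HCOP is more rigorous). The sketch below uses the UniProt ID-mapping REST service; the endpoint, database names, and payload fields are written from memory and should be checked against the current UniProt API documentation, and the accession list is a placeholder.

```python
import time
import requests

mouse_accessions = ["P10637", "Q9R0P9"]  # placeholder mouse UniProt accessions

# Submit an ID-mapping job (UniProtKB accession -> gene name); verify field names
# against the current UniProt REST documentation before relying on this.
job = requests.post(
    "https://rest.uniprot.org/idmapping/run",
    data={"from": "UniProtKB_AC-ID", "to": "Gene_Name",
          "ids": ",".join(mouse_accessions)},
).json()

# Poll the job status, then fetch the mapping results.
status_url = f"https://rest.uniprot.org/idmapping/status/{job['jobId']}"
while requests.get(status_url).json().get("jobStatus") == "RUNNING":
    time.sleep(2)
res = requests.get(
    f"https://rest.uniprot.org/idmapping/results/{job['jobId']}").json()

mouse_to_gene = {r["from"]: r["to"] for r in res["results"]}
# Crude orthology proxy: human orthologs often share the upper-case gene symbol;
# use a dedicated ortholog database for anything publication-grade.
human_symbols = {acc: gene.upper() for acc, gene in mouse_to_gene.items()}
print(human_symbols)
```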
  • asked a question related to Proteomics
Question
2 answers
  • asked a question related to Proteomics
Question
1 answer
I measured cathepsin B activity using the Magic Red kit in cells, and it increased after treatment. However, when I performed proteomics, the results showed a downregulation of cathepsin B. Does anyone have an explanation for this discrepancy or know of any articles that could provide insights? Thank you.
Relevant answer
Answer
Hi Aura,
Expression level and activity are two different things. It might be that your treatment increases the activity of cathepsin B (e.g., via PTMs) and that its expression is therefore compensatorily downregulated to bring activity back towards a normal level.
Best,
Murat
  • asked a question related to Proteomics
Question
3 answers
Hi everyone,
I want to do proteomics on monkey serum in two states, healthy and diseased. How many monkeys do I need in each group to run proteomics and validate my data?
Should I deplete albumin and IgG from the serum before running proteomics, or are any other purification steps needed? Or can I run serum without any extra purification?
Thanks for answering.
Relevant answer
Answer
I would also recommend the albumin and IgG depletion, it will help with the identification of lower abundance proteins.
  • asked a question related to Proteomics
Question
4 answers
I have a proteomics dataset with missing values. I tried some strategies, but the problem is that some columns consist entirely of missing values.
The last strategy was to apply MissForest in Python, but it does not handle columns whose values are all missing.
Any ideas on how to deal with this?
Thanks in advance.
Relevant answer
Answer
Many thanks, Hussain Nizam!
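For what it's worth, a common pragmatic approach is to treat columns with no observed values at all separately, either dropping them or giving them a left-censored ("below detection limit") fill before running MissForest or another imputer on the remaining, partially missing entries. A minimal sketch, assuming a proteins-by-samples intensity table (the file name, layout, and the Perseus-style downshift parameters are assumptions):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("intensities.csv", index_col=0)   # assumed proteins x samples layout
df = df.replace(0, np.nan)                          # treat zeros as missing, if applicable
log_df = np.log2(df)

all_missing = log_df.columns[log_df.isna().all()]

# Option 1: drop columns with no observed values at all (nothing to learn from them).
cleaned = log_df.drop(columns=all_missing)

# Option 2: left-censored fill for completely missing columns, assuming
# "missing = below detection limit" (Perseus-style: downshift 1.8 SD, width 0.3 SD).
filled = log_df.copy()
global_mean, global_sd = np.nanmean(log_df.values), np.nanstd(log_df.values)
rng = np.random.default_rng(0)
for col in all_missing:
    filled[col] = rng.normal(global_mean - 1.8 * global_sd,
                             0.3 * global_sd, size=len(filled))

# The remaining, partially missing values can then go to MissForest or
# scikit-learn's IterativeImputer.
```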
  • asked a question related to Proteomics
Question
2 answers
Hello,
In the literature, there are some MS/MS results that include hypothetical proteins, which can be shorter than 40 amino acids. I can also find these when I search for an organism in the protein section of NCBI. My question is: would it be absurd to synthesize these peptides annotated as hypothetical proteins and test them as drug candidates in certain disease models? Or are studies like this feasible and already being conducted? If so, what procedure should I follow? For example, when I find a hypothetical protein, should I first perform a BLAST search and then synthesize and use it if it meets certain conditions?
Is there any chance you could share some references with me that have been done in this manner?
I hope I have been able to convey what I want to ask.
Thank you for your answers.
Relevant answer
Answer
I cannot really answer your questions, but wondered if this might be of some help. I have seen a review article about how lactobacilli degrade milk protein, and the resulting short peptides have medicinal properties. For instance: "Many studies focused on ACE inhibitor peptides, probably due to the ease of use of in vitro anti-ACE assays. The well-known Val-Pro-Pro and Ile-Pro-Pro peptides are produced during milk fermentation by some Lb. helveticus strains. ... An additional ACE inhibitory peptide sequence (Ala-Ile-Pro-Pro-Lys-Lys-Asn-Gln-Asp) was also identified in milk fermented by Lb. helveticus."
Raveschot, C., Cudennec, B., Coutte, F., Flahaut, C., Fremont, M., Drider, D., & Dhulster, P. (2018). Production of bioactive peptides by Lactobacillus species: from gene to application. Frontiers in Microbiology, 9, 409606.
  • asked a question related to Proteomics
Question
4 answers
I am currently working on a project that requires the isolation of the light chain from reduced IgG for bottom-up proteomics mass spectrometry. Kindly provide insights or recommendations. Thanks!
Relevant answer
Answer
As mentioned above by Dr. Albert Lee, you can use protein A or protein G to remove the heavy chain. You can also use cation exchange chromatography to separate the light chain from the heavy chain, especially when the amount of light chain you need is small, since you can use a high-resolution analytical cation exchange column.
  • asked a question related to Proteomics
Question
1 answer
We want to find interacting protein partners for our protein of interest.
Relevant answer
Answer
Hi, you can check with Phenoswitch Bioscience, now Allumiq.
Here is their website: https://allumiqs.com/
  • asked a question related to Proteomics
Question
2 answers
Due to factors outside of my control, peptides in my ESI-MS data have been ionised normally by protons ([M+2H], [M+3H]...) but also by sodium ([M+2Na], [M+3Na]...).
Is there a way to configure MaxQuant's andromeda search engine to look for the sodium-ionised peptides as well?
Thanks!
Relevant answer
Answer
James M Fulcher Thanks for your answer! I agree it is probably not trivial, because you cannot assume that *all* ionisations are with sodium rather than a proton, so it sort of explodes combinatorically at the MS2 level.
Unfortunately, I cannot rerun the samples, because I am trying to apply a new analysis pipeline to existing data on PRIDE, and I have found this phenomenon in other researchers' published data.
I have tried setting up sodiation as a PTM in MaxQuant, and I think it could work, but it increases the analysis time from minutes per sample to days per sample, so it might be useful for a proof of concept. If anyone has suggestions for a better solution – or knows of a way to do this in MaxQuant already – I would love to hear them!
  • asked a question related to Proteomics
Question
1 answer
How many µg (or particles) of EVs are needed for proteomics?
We perform many purification steps to isolate EVs from human plasma, and at the end of the process we get a concentration of only 1/2 µg/ml of EVs, which is about 2.17e+10 / 8.18e+09 particles/ml. What is the minimal volume or amount (in µg) needed for proteomics?
Thank you!
Linoy.
Relevant answer
Answer
  • asked a question related to Proteomics
Question
3 answers
serum proteomics
Relevant answer
Answer
İsmail Emir Akyildiz Meryem wrote they add 15% TFA, so that's probably not a final concentration.
  • asked a question related to Proteomics
Question
2 answers
Hi all,
I have limited experience in proteomics and would like your opinion.
I ran a proteomics analysis on three groups. A: study group; B: first control, a disease that overlaps with the study group; C: normal control.
The data analyzed by the proteomics team are the differential expression of A normalized to C, which is not what I want, as some of those proteins are also present in the relative quantification of B normalized to C.
Do you know how to control for sample B, so that I can identify only the proteins that are specific to A, excluding the ones shared with B?
Thank you in advance
Clemence
Relevant answer
Answer
If you have A normalized to C, and B normalized to C, then you can compare A and B and set a cut-off differentiating the two. This will work if it is a quantitative setting.
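A minimal sketch of that comparison in pandas, assuming both differential-expression tables have Protein, log2FC, and adj.pvalue columns (file names and thresholds are placeholders):

```python
import pandas as pd

# Assumed exports of the two comparisons against the normal control.
a_vs_c = pd.read_csv("A_vs_C.csv")
b_vs_c = pd.read_csv("B_vs_C.csv")

m = a_vs_c.merge(b_vs_c, on="Protein", suffixes=("_A", "_B"), how="left")

fc, alpha = 0.58, 0.05  # example |log2FC| and adjusted p-value thresholds
sig_in_A = (m["adj.pvalue_A"] < alpha) & (m["log2FC_A"].abs() > fc)
sig_in_B = (m["adj.pvalue_B"] < alpha) & (m["log2FC_B"].abs() > fc)

# Proteins changed in the study group (A vs C) but NOT in the disease control (B vs C).
a_specific = m.loc[sig_in_A & ~sig_in_B, "Protein"]
print(a_specific.tolist())
```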
  • asked a question related to Proteomics
Question
3 answers
Could this be due to an error in Mass Spec calibration or data analysis? I have 2 technical repeats that are fine, but the 3rd repeat is far away in the PCA plot and clusters with replicates of a different sample.
Relevant answer
Answer
It is always better to include an internal standard (deuterated isotopes or an analogous compound) to uncover any system- or operator-related bias.
If you don't have any IS in your experiment, an alternative is to track the intensity (and RT) of an inherently present compound (think of this approach as analogous to using housekeeping proteins as normalization targets in western blot analysis).
If the system is an Orbitrap, the EASY-IC calibrant performance may also give some hints. On Waters systems, the baffled ESI fluidics setup infusing leucine enkephalin (lock mass) may also be useful.
A shifted position of a sample in PCA is more common for biological replicates than for technical replicates. If the instrument performance was stable during the analysis, it is probably caused by either sample preparation or an autosampler operation failure.
  • asked a question related to Proteomics
Question
1 answer
What is a correct way to estimate s0 parameter for Volcano plot visualization in Perseus?
The documentation says: "Artificial within groups variance (default: 0). It controls the relative importance of t-test p-value and difference between means. At s0=0 only the p-value matters, while at nonzero s0 also the difference of means plays a role. See (Tusher, Tibshirani, and Chu 2001) for details." Now the article states: "To ensure that the variance of d(i) is independent of gene expression, we added a small positive constant s0 to the denominator of Eq. 1 (i. e. d(i) = (avg-state1(i)- avg-state2(i))/(gene_specific_scatter(i) + s0)). The coefficient of variation of d(i) was computed as a function of s(i) in moving windows across the data. The value for s0 was chosen to minimize the coefficient of variation. For the data in this paper, this computation yielded s0 = 3.3."
Now should I calculate the CV for my data and then estimate the s0 or am I missing something?
Relevant answer
Answer
My best way to answer this:
following an article by Gianetto (2016, "Uses and misuses of the fudge factor...", DOI 10.1002/pmic.201600132), I downloaded the siggenes package for R and performed the analysis on my dataset. I guess this is the only rigorous way of doing it.
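For illustration, the statistic Perseus modifies is essentially the SAM d from Tusher et al. (2001); a minimal sketch (assuming a proteins-by-replicates layout) shows where s0 enters and why a larger s0 penalises proteins whose significance rests only on a very small within-group variance:

```python
import numpy as np

def sam_d(x1, x2, s0=0.0):
    """SAM-style statistic d = (mean1 - mean2) / (s + s0), after Tusher et al. 2001."""
    n1, n2 = x1.shape[1], x2.shape[1]
    diff = x1.mean(axis=1) - x2.mean(axis=1)
    pooled_var = ((n1 - 1) * x1.var(axis=1, ddof=1) +
                  (n2 - 1) * x2.var(axis=1, ddof=1)) / (n1 + n2 - 2)
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    return diff / (s + s0)

# With s0 = 0 this is an ordinary two-sample t statistic; a nonzero s0 damps
# proteins whose large t values come only from tiny within-group variances.
```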
  • asked a question related to Proteomics
Question
5 answers
Recently I started a proteomics analysis of blood plasma (100 MS/MS .raw files) against a fungal FASTA (from UniProt) using MaxQuant on Linux. I started with 6 .raw files, then 30, and finally all 100 files. With 6 files I was able to detect about 97 proteins, with 30 it was 127, but with 100 files, where it should be much higher, the count was reduced to 67, and most of the detected entries were contaminants (100 of 167). Also, the null-intensity count is over 50%. So I am stuck at this point. Besides this, I also processed all 100 files in batches of 10; in this case, 666 proteins in total were detected. I don't know whether I can trust this approach or not, but why am I not able to get this in one single run? Below are the parameters and system specifications I used for the analysis.
Parameters
Fixed modifications: Carbamidomethyl (C)
enzyme: Trypsin/P
Variable modifications: Oxidation(M), Acetyl (Protein N-term)
3 groups, LFQ
Peptide, protein and site FDR: 0.05, 0.05 and 0.05
System
RAM: 128 GB
CPU: Intel i9, 11th generation, 16 cores
Storage: 2 TB SSD
GPU: NVIDIA GeForce RTX 3080, 16 GB
Please do the needful.
Relevant answer
Answer
Hi Rohit,
I cannot give a specific match time and alignment time window, since this depends on your LC-MS performance and the possible drift of retention times from the first to the last sample. Maybe you can try increasing the retention time window in steps of +30 seconds and see whether you get more identifications.
Regarding your FASTA, I would suggest using a combined FASTA of all available fungal species taxonomically close to your species of interest, to enlarge the database with entries of highly homologous sequences from very similar fungal species.
Best,
Murat.
  • asked a question related to Proteomics
Question
3 answers
I am doing western blotting on samples after PFA cross-linking. And indeed, I see something that looks like a cross-linked band after PFA, but its MW is ~30 kDa smaller than expected.
I would like to understand: is this common and to be expected?
I would assume that electrophoretic mobility should be affected by cross-linking, because cross-linking will prevent SDS denaturation. But it would be nice to have some examples of similar cases.
Thanks!
Relevant answer
Answer
Post cross-linking, the sum of the proteins linked by the cross-linker is what you should expect to see. In practice, however, this is more complicated, since cross-linking is not a clean experiment and there are intra-protein linkages (within the protein) as well as many mono-linkages formed as reaction products.
So, if you are observing a cross-linked product smaller than expected, is that because you already know what it is linking with?
Hediye.
  • asked a question related to Proteomics
Question
5 answers
I'm running MQ on some peptide-level enrichment TurboID samples, so I'm interested in quantifying individual peptides. In past versions of Maxquant (v1.6.14 etc), the peptides.txt file contained LFQ intensity columns for each peptide. However, I just ran my data in Maxquant v 2.4.12, and the LFQ intensity columns are missing in the peptides.txt file. I ran it twice to make sure I had selected all the correct parameters. Does anyone know why these columns are missing, and how to recover them?
Thanks!
Relevant answer
Answer
Hi Tarabryn,
to be honest, I have never used LFQ quantification at the single-peptide level. However, there are a multitude of settings that define the LFQ calculation.
Maybe most important is to set the minimum peptide ratio count for LFQ from 2 (default) to 1.
It also makes sense to disable the fast LFQ function.
Then set the type of peptides that should be considered for LFQ (i.e., unique, razor, etc., and modified or unmodified only, depending on your experimental question).
I hope you can find the way to recover the LFQ values with the newer MQ versions.
Good luck!
Murat
  • asked a question related to Proteomics
Question
3 answers
Hey all! I have a question about my proteomics data evaluated in Proteome Discoverer. I have three volcano plots of three biological groups, each as a ratio with the control group. In one group I see a strange pattern, while the other two look normal. The log2 ratio is somehow 100% correlated with the p-values, without exception (please see the graphs). Obviously it is not an issue with the plotting itself, but with the calculation of the ratio or the p-value. Quantification was done using a non-nested design, label-free quantification, pairwise ratio calculation, a t-test, and normalization to total peptide amount.
Does anyone know the reason for this pattern?
Thanks a lot!
Relevant answer
Answer
I've also had some funny looking volcano plots from PD data. Discussion here: https://www.researchgate.net/post/Artefact_in_volcano_plot
I never got to the root cause, but the funny lines of points lining up were an artefact of normalisation - mind you, the underlying table of PSMs from PD was the original source.
Unfortunately I never got around to identifying why PD gave odd output.
Sorry I can't be more helpful!
Sam
  • asked a question related to Proteomics
Question
3 answers
At a fixed voltage of 260 V, protein electrophoresis was faster with our previous batch of 1x SDS running buffer. Recently, however, the electrophoresis has been much slower, with a much lower current (less than half of the previous one). The same issue occurs even with a fresh dilution of newly prepared 10x buffer to 1x. What could be the possible reasons for this issue?
Relevant answer
Answer
Check the electrodes. If you see them covered with whitish stuff, remove it with wet tissue or brush until the metal surface is exposed.
  • asked a question related to Proteomics
Question
3 answers
Kindly support me with articles and ideas. Regards
Relevant answer
Answer
"Kindly" take the time to perform a keyword search on the web (e.g. Google, Bing) to find related articles and documents on this topic. *Learning how to research the answer to a question is one of the most important skills you can learn as person and student.
  • asked a question related to Proteomics
Question
6 answers
Hello, I am a proteomics researcher.
We are stuck on a problem detecting immunopeptidome (HLA class) peptides.
After enriching the peptides, we measure a peptide concentration of 150 ng/µl using a NanoDrop (Protein A280 mode),
and about 750 ng of peptides were injected into the mass spectrometer. (Our mass spectrometer is a timsTOF; if we inject 200 ng of HeLa peptides, about 40,000 peptides can be detected.)
However, only about 100 peptides were detected in our HLA sample. Furthermore, the intensity of the peptide signals is low...
Why does the NanoDrop report a large amount of peptide, although there seems to be only a small amount of peptide in the mass spectrometry data?
I heard that the A280 mode of the NanoDrop estimates peptide concentration by measuring tryptophan and tyrosine.
Is it possible that there is a lot of free tryptophan and tyrosine in the sample, which makes the NanoDrop concentration high but is not detected in the mass spectrometer?
If you have any ideas, please let me know.
Thank you very much.
Relevant answer
Answer
Thank you for your kind reply.
I will try searching HLA peptide atlas.
Best regards
Jaekwan.
  • asked a question related to Proteomics
Question
3 answers
Hello. We understand that a volcano plot is a graphical representation of differential values (proteins or genes), and that it requires two parameters: fold change and p-value. However, for IP-MS (immunoprecipitation-mass spectrometry) data, there are many proteins identified with intensities in the IP group that are not detected at all in the IgG control group (the data are blank). This means we cannot calculate a p-value and fold change for these "present (IP) / absent (IgG)" proteins, and therefore cannot plot them on a volcano plot. However, in many articles, we see that such proteins are successfully plotted on volcano plots. How did they accomplish this? Are there data-fitting methods available to assist with the plotting? Is imputation needed, and would it reflect the real degree of interaction?
Relevant answer
Answer
Albert Lee : the issue with doing this is it makes the fold changes entirely arbitrary. Imagine I have a protein I detect in my test samples at "arbitrary value 10" but do not detect in my control samples at all.
If I call the ctrl value 0.5, then 0.5 vs 10.5 = 20 fold increase.
If I call the ctrl value 0.1, then 0.1 vs 10.1 = 100 fold increase.
If I call the ctrl value 0.0001, then 0.0001 vs 10.0001 = 100,000 fold increase.
In reality, the increase is effectively "infinite fold", but what this is really highlighting is that fold changes are not an appropriate metric here.
A lot (most) of statistical analysis is predicated on the measurement of change in values, not "present/absent" scenarios.
For disease biomarkers, for example, something that is present/absent is of use as a diagnostic biomarker, but not as a monitoring biomarker: you can say "if you see this marker at all, you have the disease", but you cannot really use it to track therapeutic efficacy, because all values of this marker other than "N/A" are indicative of disease.
For monitoring biomarkers you really want "healthy" and "diseased" values such that you can track the shift from one to the other.
David Genisys: I agree with Jochen Wilhelm , and would not plot my data in this manner.
A lot will depend on the kind of reviewers you get, and the type of paper you're trying to produce, but it would be more appropriate to note that these markers are entirely absent in one group, and then to comment on the robustness of their detection in the other. You wouldn't run stats necessarily, because as noted, stats are horrible for yes/no markers, but you could use the combination of presence/absence and actual level of the former to make inferences as to biological effect. If a marker goes from "not detected" to "detected but barely", then it might be indicative of dysregulated, aberrant expression behaviour, or perhaps stochastic low-level damage. Interesting, but perhaps not of biological import or diagnostic utility. If instead it goes from "not detected" to "readily detected, at high levels", then it's probably very useful as a diagnostic biomarker, and also indicative of some active biological process, be it widespread damage/release, or active expression of novel targets.
In either case you can make biological inferences without resorting to making up numbers so you can stick them on a volcano plot (and to be honest, if you get the kind of reviewers that demand volcano plots, you can always use the trick Albert suggests).
Volcano plots are primarily a way to take BIG DATA and present it in a manner that allows you to highlight the most interesting targets that have changed between groups: if you have whole swathes of genes that are instead present/absent, then those could be presented as a table, perhaps sorted by GO terms or something (if it looks like there are shared ontological categories you could use to infer underlying biology).
  • asked a question related to Proteomics
Question
3 answers
As of now, there is no public database available for this kind of sample to take as a control.
Relevant answer
Answer
To gain insights from your proteomic data in the context of pathways:
1. Protein-protein interaction networks: Construct protein-protein interaction (PPI) networks using available databases or tools. These networks represent the physical interactions between proteins and can provide insights into functional relationships and pathway associations. Analyze the network topology, identify highly connected proteins (hubs), and explore protein clusters or modules that may represent enriched pathways.
2. Functional enrichment analysis: Perform functional enrichment analysis using tools such as DAVID, Enrichr, or g:Profiler. These tools allow you to input a list of proteins and assess enrichment of Gene Ontology (GO) terms, biological pathways, or other functional annotations. This analysis can help identify overrepresented functions or pathways in your protein dataset.
3. Cross-referencing with gene-level data: If available, consider integrating your proteomic data with gene-level or transcriptomic data from the same samples or a related study. By mapping proteins to corresponding genes, you can leverage gene-level pathway analysis methods and identify pathways enriched with differentially expressed genes associated with the proteins of interest.
4. Literature-based analysis: Conduct a literature search to explore existing knowledge and studies related to the proteins identified in your proteomic dataset. Look for studies that have investigated the functions, interactions, or pathways associated with these proteins. This qualitative analysis can provide valuable insights into the potential involvement of specific pathways in your disease sample.
5. Pathway databases: Explore curated pathway databases such as Reactome, KEGG, or WikiPathways. These databases provide well-annotated pathways and can serve as a reference to investigate potential connections between your identified proteins and known pathways. Look for proteins within your dataset that are annotated to specific pathways of interest.
Remember that pathway analysis based solely on proteomic data has limitations.
Hope it helps (credit: AI).
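As a small illustration of point 2 above, over-representation analysis can also be scripted, e.g. with the gseapy wrapper around Enrichr. This is only a minimal sketch: the gene list is hypothetical, and the library names may have changed, so check what gseapy currently offers.

```python
import gseapy as gp  # pip install gseapy

# Hypothetical list of gene symbols mapped from the identified proteins.
genes = ["TP53", "EGFR", "MYC", "STAT3", "AKT1"]

# Over-representation analysis against Enrichr libraries; library names are
# assumptions, see gp.get_library_name() for what is currently available.
enr = gp.enrichr(gene_list=genes,
                 gene_sets=["KEGG_2021_Human", "GO_Biological_Process_2021"],
                 organism="human",
                 outdir=None)

print(enr.results[["Term", "Adjusted P-value", "Genes"]].head(10))
```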
  • asked a question related to Proteomics
Question
1 answer
Hi,
what is the maximum number of serum proteins that can be identified using ultra-performance liquid chromatography-mass spectrometry (UPLC-MS) without nano liquid chromatography (nano-LC)? Please give a reference.
Thanks!!
Relevant answer
For this, you can look at the available protein databases and tandem LC-MS datasets from various mass spectrometry repositories.
  • asked a question related to Proteomics
Question
3 answers
Recently, I did sample digestion of my protein standard using FASP and found that it had significantly more methionine oxidation than previous samples. Now, I am looking for any leads on how to minimize this. Any comments or suggestions are welcome. Thank you.
Relevant answer
Answer
You may use antioxidants to inhibit the oxidation. Catalase and methionine are typically used for this, but water-soluble antioxidants such as phenolic acids can also do the job. FASP is inherently a lengthy protocol; smaller sample volumes and larger membrane cut-off values can reduce the required processing time, so the whole procedure can be completed relatively quickly to limit Met oxidation.
By the way, you may also look at S-Trap as an alternative approach to SDS removal with on-membrane digestion. I have not compared them myself, but papers report that it has the same efficiency as FASP while completing the process faster...
  • asked a question related to Proteomics
Question
4 answers
Hello, I am a proteomics researcher. I am stuck on a StageTip problem.
I have usually used spin columns made by Harvard Apparatus for desalting.
However, I tried to replace them with in-house-made StageTips.
I followed the StageTip paper (Protocol for micro-purification, enrichment, pre-fractionation and storage of peptides for proteomics using StageTips, Nature Protocols, 2007).
Below is my protocol.
I inserted 3 C18 disks into a 200 µl tip.
Buffer A: 0.5% formic acid
Buffer B: 0.5% formic acid , 80% acetonitrile
1) Column conditioning
- Add 30 µl of MeOH to the StageTip and centrifuge (500 g, 2 min, 20 °C)
- Add 30 µl of Buffer B to the StageTip and centrifuge (500 g, 2 min, 20 °C)
- Add 30 µl of Buffer A to the StageTip and centrifuge (500 g, 2 min, 20 °C)
2) Sample loading
- Load about 6 µg of peptide sample onto the StageTip and centrifuge (500 g, 2 min, 20 °C)
3) Wash
- Add 50 µl of Buffer A to the StageTip and centrifuge (500 g, 2 min, 20 °C)
4) Elution
- Add 50 µl of Buffer B to the StageTip and centrifuge (500 g, 2 min, 20 °C)
I collected the flow-through of the sample loading step, the wash step, and the elution step separately.
All fractions were dried under vacuum and reconstituted in 0.1% formic acid, 2% ACN.
Peptide concentration was determined using a NanoDrop, and below are the results:
sample loading step: 0.2 mg/ml
wash step: 0.9 mg/ml
elution step: 0.01 mg/ml
Almost everything elutes in the wash step...
Do you know why?
Relevant answer
Answer
Hi Jaekwan Kim,
my first question is about how you performed the NanoDrop analysis of the peptide concentration. Did you set the baseline (blank) correctly by using the respective buffers for the different solutions (sample loading buffer, washing buffer, elution buffer)? If you did the blank measurement with distilled water for all solutions, your results might not be correct.
However, there are some points in the StageTip procedure that can be the reason for poor recovery.
The first is that if you push the C18 disks too loosely into the tip, there may be a leak where the buffer can flow through without retention of the peptides on the C18 matrix. The same will happen if the disk stack of the produced StageTip has wrinkles at the side (caused by using a blunt-ended Hamilton syringe needle with too large a diameter for punching out the disks and pushing them into the tip; 16 gauge in the original protocol, Rappsilber et al. 2007). I would always check freshly prepared StageTips for leakage with a short centrifugation step before possibly wasting important samples.
Another point which is rarely addressed in protocols is the activation state of the C18 matrix in the StageTip, which supports proper peptide binding/retention only as long as the material remains wet. That is, if you activate your C18 matrix with 100% methanol but leave the tips for a long time before proceeding with the next step, the matrix dries, the C18 becomes inactive, and your peptides will flow through. Drying of the C18 can also happen if your centrifugation steps between activation and sample loading are too long, or the centrifugation speed is too fast, so that all liquid is drained away from the C18. The appropriate centrifugation time and speed depend on how tightly the C18 disk stack is packed (which can vary considerably): the tighter the stack is packed, the longer the centrifugation or the higher the speed required. For this reason (and also because of the drying issue mentioned above), I would recommend evaluating the time and speed for each batch of StageTips.
Good luck,
Murat
  • asked a question related to Proteomics
Question
1 answer
We are planning to use a Q Exactive mass spectrometer for top-down proteomics. The instrument has an HCD collision cell. How well does top-down proteomics work on the Q Exactive?
Relevant answer
Answer
The Thermo Scientific Q Exactive mass spectrometer is a popular instrument for proteomics research, but it is primarily associated with bottom-up proteomics, where proteins are digested into peptides before analysis. While it is possible to use the Q Exactive for top-down proteomics, it may not be as effective as other specialized instruments for this approach.
Top-down proteomics involves analyzing intact proteins without prior digestion into peptides. This approach can provide valuable information about protein isoforms, post-translational modifications, and protein complexes. However, it comes with its own set of challenges, as intact proteins are larger and more complex than peptides.
Here are some factors to consider regarding the effectiveness of the Q Exactive for top-down proteomics:
  1. Resolution: The Q Exactive series offers high-resolution mass spectrometry, which is advantageous for resolving intact protein ions. However, specialized top-down instruments may offer even higher resolution, which can be critical for analyzing complex protein mixtures.
  2. Mass Range: The Q Exactive has a broad mass range, which can accommodate intact proteins. However, for very large proteins or protein complexes, other instruments with extended mass ranges might be more suitable.
  3. Fragmentation: Fragmentation of intact proteins in top-down proteomics is necessary to identify and characterize the protein's primary sequence and modifications. While the Q Exactive can perform fragmentation, other instruments designed for top-down proteomics may offer more advanced fragmentation techniques and options.
  4. Data Analysis: Top-down proteomics generates complex data, and specialized software tools are often used for data analysis. While you can process top-down data on the Q Exactive, dedicated top-down proteomics platforms may offer more comprehensive analysis capabilities.
  5. Sample Preparation: Sample preparation for top-down proteomics is crucial and can be more challenging than for bottom-up approaches. Ensuring efficient protein extraction, purification, and intact protein preservation is essential.
In summary, the Thermo Scientific Q Exactive can be used for top-down proteomics, and it offers high-resolution mass spectrometry capabilities that are beneficial for intact protein analysis. However, the field of top-down proteomics has advanced with the development of specialized instruments and methods, such as FT-ICR (Fourier-transform ion cyclotron resonance) and Orbitrap instruments. Researchers often choose these specialized platforms when focusing on top-down proteomics due to their improved performance and tailored features. Therefore, the choice of instrument for top-down proteomics depends on the specific research goals and the resources available to the researcher.
  • asked a question related to Proteomics
Question
1 answer
Hello Everyone,
I’m exploring the feasibility of a mobile application that assists in identifying contamination in microbial cell cultures. The concept involves the following steps:
  • Take a droplet from a flask containing the culture.
  • Place the droplet on a single-use microscope slide.
  • Capture an image of the droplet under a light microscope.
  • Upload the image to the app.
The application, using deep learning algorithms, would analyze the shape and color of cells to detect patterns indicating contamination. Users would need to provide specific details such as:
  • Buffer conditions
  • Type of microorganism being cultured
  • Hypothesis regarding potential contaminants
I would appreciate your insights on the following aspects:
  • Existing Solutions: Are there already existing tools or applications that execute a similar function? If so, what are their strengths and weaknesses?
  • Technical Feasibility: Given your expertise, do you see any technical challenges or limitations that might need special consideration from the biological perspective?
  • Specific Biological Markers: What specific biological markers or patterns should we prioritize when identifying contamination in cell cultures?
  • Practical Utility: How beneficial do you think such an application would be for researchers in the greater biological community in day-to-day lab activities?
Thank you very much for your valuable feedback and time!
Relevant answer
Answer
I think a lot of cell biologists could look at their cells under a microscope and immediately tell if their cell culture was contaminated, although they might not be able to identify which type of contamination. Trying to distinguish which type of contamination could be difficult. Personally, I don't think that microbiologists would be interested in the application. There might be more interest from biologists who study non bacterial eukaryotic cell types like HEK293, Hela, cancer cells, and other types of mammalian cells.
Size of prokaryotic cells is about 1 - 5 microns and size of eukaryotic cells is at least 10 microns and larger. On the basis of size at least by eye, it is easy to separate eukaryotic cells from prokaryotic bacterial cells.
Bacterial cells can be grouped by size and morphology. They can be spheres, rods, curved rods, etc., and grow singly, in chains, or in clusters. That to some degree tells you whether you are looking at Escherichia coli, Vibrio cholerae, Staphylococcus aureus, etc., although false positives are possible because some bacterial cell types have the same morphology and growth pattern.
A lot of mammalian cell culture biologists complain about Mycoplasma contamination because the size of the mycoplasma bacteria is smaller than 1 micron, so biologists might be interested in an application that could identify mycoplasma contamination. But cell biologists would probably look under the microscope and immediately recognise the mycoplasma contamination, because it is such a well-known problem.
ImageJ is pretty standard for analyzing biological images. It is very powerful, you can write scripts for it, and it is fairly easy to use, but I don't think it is already automated for tasks like this.
When microbiologists complain about contamination, it is viral contamination by bacteriophages that they are complaining about. Bacteriophages are on the order of nanometers in size, so they can't be seen with a light microscope, but evidence of contamination is detected by the lack of microbial cell growth due to lysis, and I believe you could detect the lysis of bacterial cells from the images. Maybe microbiologists would be interested in an application that could detect lysed bacterial cells.
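On the technical-feasibility side, the core of such an app is a fairly standard image-classification problem. Below is a minimal transfer-learning sketch in PyTorch/torchvision, just to indicate the scale of the modelling work; the folder layout, class names, and hyperparameters are all assumptions, and a real tool would need many curated, labelled images per contaminant class.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Assumed layout: one folder per class, e.g. images/clean/, images/bacterial/, images/fungal/
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("images", transform=tfm)
loader = DataLoader(data, batch_size=16, shuffle=True)

# Pretrained backbone with a new classification head for the contamination classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```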
  • asked a question related to Proteomics
Question
2 answers
What is the best method to extract proteins from serum and tissue samples?
What is the best method to identify signature proteins/peptides between serum and tissue samples using mass spectrometry?
Relevant answer
Answer
Hi Eef Dirksen,
Thanks for your comment.
I have some serum and tissue samples from patients with a certain disease, and I want to identify signature proteins or peptides that can differentiate them from healthy controls. I am wondering what the best method is to extract and analyze (top-down or bottom-up) these biomarkers. We have a Q Exactive mass spectrometer for this experiment. How should I handle the data analysis and interpretation? Is there any specific software or tool to process and visualize the results?
  • asked a question related to Proteomics
Question
13 answers
I am looking for a lower-cost IgG extraction method from human serum for mass spectrometry applications. I used the Melon Gel IgG Spin Purification Kit, but it is very expensive. Please suggest any alternative method.
Thanks!
Relevant answer
Answer
How about protein A or G resins? The Melon Gel works in flow-through mode of purification, while protein A works in bind-and-elute mode (orthogonal strategies). Without knowing the cost comparison, this application works well and the recovery should be satisfactory. If you are investigating the IgG CDR domains (Fab), the nSMOL kit is ready to use and a promising front-end approach.
  • asked a question related to Proteomics
Question
3 answers
Dear all,
I've had thoughts that our understanding of what drives the biology underlying health and disease may be skewed towards DNA and RNA level alterations.
Aside from the central dogma of biology (DNA -> RNA -> protein), I believe there may be a skewed understanding because the tools available to investigate DNA and RNA alterations are more advanced than those available for protein- or metabolite-level analysis.
DNA and RNA sequencing are cheaper, higher throughput, more accurate, and have better coverage than mass-spectrometry for proteomics. I'm wondering how much the availability of tools influences our understanding of at what level diseases occur.
Of course this is a gross simplification, as there will always be factors at play and interactions at multiple levels.
If the only tool you have is a hammer, you tend to see every problem as a nail.
Sam
Relevant answer
Answer
Heritable diseases tend to be the more obvious ones and are caused by genomic mutations, often at a single locus. This makes finding the causative locus easier. The underlying mechanisms are often then subsequently probed. Diseases caused by environmental exposure and lifestyle are harder to probe as they often involve the interaction of multiple gene products.
  • asked a question related to Proteomics
Question
1 answer
What are the steps in the regeneration of 'fresh' DEAE Sephacel, which comes in swollen form in 20% ethanol? And what is the significance of each chemical in these steps; can anybody explain?
Relevant answer
Answer
Here is the instruction manual for this resin.
To prepare the resin for the first use, you have only to replace the ethanol with the starting buffer until equilibrium is reached.
Regeneration is to be done by washing with a strong NaCl solution (1 M or 2M), which should elute just about anything that is bound by an ionic interaction.
If the column needs to be cleaned of hydrophobic substances, wash it with 0.01 M NaOH, then re-equilibrate it with the binding buffer until the pH is back to where it should be.
  • asked a question related to Proteomics
Question
7 answers
Once protein has been extracted and quantified, and before protein digestion, I adjust the concentrations to 1 mg/ml using U/T buffer (7 M urea, 2 M thiourea, 30 mM Tris). Then I re-quantify the protein to make sure that the concentration is 1 mg/ml. For some reason that I do not understand, the buffer and protein do not mix homogeneously; the more concentrated the original aliquot is and the more buffer is added, the less concentrated the final aliquot is. I repeated the protocol 3 times with fresh buffer and I always get the same result. However, I did not have this problem when the sample was diluted in water. Any ideas or suggestions? Thanks!
Relevant answer
Answer
Hi Waldo, I'd like to know if you solved this problem. Because when I want to make standard curve for Bradford assay, BSA cannot dissolve in the urea/thiourea buffer even after a day.
  • asked a question related to Proteomics
Question
1 answer
Dear all,
I'm working on the finer details of my experimental design, and have some questions regarding bridging channels for TMT based experiments.
I have two conditions to test, across nine biological replicates, in order to run as one 18-plex TMT-pro experiment.
I am aware of the use of one or more bridging channels being used with pooled samples to combine multiple TMT mixtures, however a colleague has mentioned that a bridging channel should also be considered for normalisation if only one set is used.
Does anyone have any experience using a bridging channel for normalisation in a single mixture? Is it worth sacrificing one or more biological replicates for?
I will be using MSstatsTMT for normalisation and summarisation.
Sam
Relevant answer
Answer
As an update to this discussion, I have decided to reduce my sample size and incorporate a pooled reference channel. Mostly to open up the possibility of integrating additional samples and conditions in the future.
Sam
  • asked a question related to Proteomics
Question
5 answers
Please suggest the best way to perform absolute quantification of an intact antibody light chain by mass spectrometry.
Relevant answer
Answer
The best way for this is to use the MALDI-TOF MS configuration. For ESI, a top-down or native-MS-like approach is needed for the analysis of either heavy or light chains. You may combine SEC with native analysis for ESI-LC-MS, or use gas-phase fragmentation of the antibody light chain to select an SRM-like product ion for quantification. ESI produces multiple charge states for large molecules, so the resulting charge envelopes reduce the precursor intensity if top-down quantification is the aim. MALDI gives reduced charge states, so either identification or quantification would be easier and more practical, provided the sample is a purified antibody rather than a protein complex.
More important than the choice of MS platform or acquisition technique is the sample preparation strategy: garbage in, garbage out for any MS system. What is your sample prep plan and what is your sample matrix? How will you purify/clean, reduce, and fractionate your sample? These points are more critical to assess prior to MS detection. Otherwise, interfering compounds and other protein-derived peptides will degrade your quantification, which is why MRM with a signature proteolytic peptide is generally the more appropriate route for absolute quantification.
  • asked a question related to Proteomics
Question
1 answer
I am trying to homogenize brain samples with glass beads in order to isolate microRNA. Would the affinity of RNA for glass influence the total yield?
Relevant answer
Answer
Glass beads should be avoided when processing samples for DNA/RNA extraction because nucleic acids tend to stick to glass. If you're working with RNA, it's wise to use beads pre-treated to be RNase-free: https://lab.plygenind.com/mastering-bead-selection-for-effective-homogenization
  • asked a question related to Proteomics
Question
5 answers
Dear all,
we are doing LC-MS/MS metabolomics and proteomics (research & diagnostics) and I wondered if you had experience with the Evoqua LaboStar PRO TWF water purification system?
It's a bit cheaper than the popular MilliQ (including the yearly budget for consumables).
I can find only very few publications referring to it for now.
Thank you!
Best
Julie
Relevant answer
In many SAIF labs, Milli-Q models are commonly used, with proven results and better instrument health. If you are planning to buy a system for your laboratory, I would point you towards one of the more advanced/latest Milli-Q models.
I will provide additional information, with real users' reviews, in some time.
  • asked a question related to Proteomics
Question
3 answers
As someone who does not have much experience with proteomics, it would be nice to know of any helpful software/resources to get started with analyzing the data!
Relevant answer
Answer
I'd suggest you look at PeptideShaker, it's easy to use, well integrated and supported;
Vaudel, Marc, Julia M Burkhart, René P Zahedi, Eystein Oveland, Frode S Berven, Albert Sickmann, Lennart Martens, and Harald Barsnes. “PeptideShaker Enables Reanalysis of MS-Derived Proteomics Data Sets.” Nature Biotechnology 33, no. 1 (January 2015): 22–24. https://doi.org/10.1038/nbt.3109.
  • asked a question related to Proteomics
Question
14 answers
My peptide is Cholecystokinin (CCK8), MW=1142.35 (COOH-D-Y-M-G-W-M-D-F-NH2).
Stock solution in NH4OH 0.05M and working solution in acetonitrile.
I do MS infusion at conc. 500 ng/ml in acetonitrile.
I use two LC/MS instruments: a Waters Micromass Quattro Premier XE (tandem quadrupole) and an Applied Biosystems API 3200 LC/MS/MS (triple quadrupole).
I run ESI+ but I cannot see peaks at 1+, 2+, 3+, 4+, ... for [M+H], [M+Na], or [M+K].
I wonder whether I have missed some other adduct ions that could be formed during ionization?
Or maybe my peptide is being degraded during sample preparation?
Please give me some advice! Thank you!
Relevant answer
Answer
Here are some other points you may lean on:
If the purpose is quantification or a purity check, LC-UV would be a good option, since the octapeptide contains several aromatic rings and would be highly responsive.
Secondly, 500 ppb may be too low a concentration for a full scan, especially when the ionization efficiency is low.
Third, I would prefer combined-flow scanning in place of plain infusion. In infusion mode, you are not taking advantage of mobile-phase additives that act as proton donors and improve ionization. You should tee the LC flow into the infusion (with acid and/or DMSO additives in the phases for positive ESI in this case) and retest the response.
By dissolving the peptide under alkaline conditions you are driving it towards deprotonation, which makes it more amenable to negative ESI. If it is directly soluble in ACN, prepare your stock in ACN and dilute it with the same solvent; I would avoid aggressive pH values, which most peptides do not tolerate well and which can lead to unintended H exchanges.
Last but not least, if the peptide is hydrophobic and dissolves only in organic solvents, it may be ionized more efficiently by APCI or APPI rather than ESI. You could look into these alternative ionization techniques if MS analysis remains the bottleneck and the suggestions above do not help.
Good luck!
  • asked a question related to Proteomics
Question
1 answer
Hi all,
I am using proteomics to conduct a microbiome experiment, and one of the sample types I will collect is the mouse cecum. I see various techniques in the literature: some papers wash the cecum tissue with PBS, while others just take the cecum contents straight from the GI tract. I was wondering if anyone knows why one would choose one method over the other? Is there an ideal way to handle mouse cecum samples in bottom-up proteomics?
Thank you in advance!
Relevant answer
Answer
Sadie rose Grant The decision to wash the cecum tissue with phosphate-buffered saline (PBS) versus taking the cecum contents straight from the GI tract can depend on the specific goals of your experiment and the nature of the microbial community you are trying to study.
Here are some considerations:
1. Sample composition: Washing with PBS can help remove transient bacteria and mucus, thereby allowing you to focus more on the microbes tightly adhering to or embedded within the gut lining. This could be beneficial if you're interested in the interaction between the host and microbiota. However, if you want a more comprehensive view of the total microbial community, including transient and luminal bacteria, you might prefer to avoid washing and take the contents directly.
2. Downstream analysis: Consider how washing might impact your downstream analysis. For instance, washing with PBS could potentially dilute the proteins you are trying to analyze, making it more challenging to detect low-abundance species. However, it might also reduce potential contaminants, making your proteomics data cleaner.
3. Sample handling consistency: It's crucial to handle all samples in a consistent manner to avoid introducing unnecessary variability into your dataset. If you're comparing cecum contents with other types of samples that are washed with PBS (like fecal samples), it might make sense to wash the cecum as well.
4. Study Reproducibility: If you're building upon previous work, consider the methods used in those studies. Replicating their methods can help make your results more comparable.
As for the ideal way to handle mouse cecum samples in bottom-up proteomics, it will ultimately depend on the specific scientific question you want to answer, and it may be worth testing both methods (i.e., with and without washing) on a small number of samples to see how much of an effect it has on your results. Also, make sure to control for any potential effects by including appropriate reference samples and replicate measurements.
It's also important to remember that the state of the cecum (e.g., whether it's relaxed or contracted) can also affect the amount of luminal content and thus the microbial composition, so careful and consistent sample collection is crucial.
  • asked a question related to Proteomics
Question
3 answers
I am interested in predicting the protein structure of my protein of interest. Using NCBI BLAST, I found an experimental structure that corresponds to a domain of my protein, showing 24% query coverage and 100% similarity. My question is whether I can confidently use this experimental structure as a template for homology modeling, or if I should explore alternative techniques such as threading, ab initio modeling, or any other suitable approach. I would also appreciate recommendations for relevant servers or software that can assist in this case.
Thank you for your insights and suggestions.
Relevant answer
Answer
Quite honestly, if your protein isn't too large (i.e., doesn't have too many amino acids), I would just use AlphaFold or ESMFold and compare the best model with the resolved structure by aligning on this region. I think the models listed in the previous post (or the variants of them that participated) all had lower performance in the recent CASP competitions than AlphaFold did, although I haven't checked this ^^
RoseTTAFold would also be a good option.
Of course homology modeling can still work pretty well, but usually only if you have good templates, and ideally many of them. If significant regions are essentially missing from your templates, it usually doesn't work that well.
  • asked a question related to Proteomics
Question
5 answers
Dear all,
I'm building a shiny application for analysis of mass-spectrometry based proteomics data (for non-coders), and am working on the section for GO term enrichment.
There are a wide variety of packages available for GO term enrichment in R, and I was wondering which of these tools is most suitable for proteomics data.
The two I'm considering are aGOtool (https://agotool.org/), which corrects for abundance bias with PTM data, and STRINGdb, which has an enrichment function.
What do you guys recommend?
Best regards,
Sam
Relevant answer
Answer
Hi, I don't know the extent of your shiny application, but you can also use enrichR (in R), which is also good, though it uploads the data to a server during calculation.
Plus, I am curious how you will handle the missing IDs during ID mapping. clusterProfiler can map IDs, but there is always some percentage of IDs that fail to map.
  • asked a question related to Proteomics
Question
3 answers
Hi all,
I'm busy building a shiny analysis pipeline to analyse proteomics data from mass spectrometry, and I was wondering what the exact difference is between the terms over-represented and upregulated. Can they be used interchangeably? Is one more appropriate for RNA or proteins?
Thanks,
Sam
Relevant answer
Answer
I would extend that and say we really need to be careful using upregulated. To me, that means the expression of the protein is increased, so there is a fundamental change in the amount of protein quantified as a result of that. I'd suggest we use the term increased (or decreased) abundance when quantifying the protein, unless we have clear evidence to say otherwise. There are a lot of reasons protein quant differs between samples, and it's not always due to expression level changes. I agree you will find such terms used interchangeably, that does not mean it's correct, or a good idea.
  • asked a question related to Proteomics
Question
3 answers
Hi everyone!
As there is an amazing amount of proteomic data being published every month, I became really curious whether (and where) these data are publicly available. The only site I found is http://www.proteomexchange.org/ which is a great site with a lot of files already collected, but maybe there are others that I couldn't find yet that contain data from other publications.
If you know any other sites, please let me know; I think they can be extremely helpful.
Relevant answer
Answer
Hi Peter,
There is also
1. MassIVE, which is curated by the University of California San Diego, USA, under the Center for Computational Mass Spectrometry of UC San Diego.
Once a deposition is marked "complete", with peak files matching raw files and mzID result files, MassIVE will also register/produce a ProteomeXchange accession, which may be required by some journals.
2. Another resource is GPM (the Global Proteome Machine) and its database, GPMDB.
Best,
Hediye.
  • asked a question related to Proteomics
Question
4 answers
Hi all,
I've noticed a suspicious pattern in some volcano-plots I've made of proteomics data.
Specifically, I've noticed points lining up, seeming to show a direct mathematical relationship rather than the expected volcano-shaped cloud.
I've normalised and quantified the data using the R package `MSstats`.
I've also noticed some proteins showing a p-value of 0. I've filtered these out, however I believe some people assign a very low number to replace this. What is the usual practice here?
Volcano plots attached.
Many thanks,
Sam
Relevant answer
Answer
Hi all,
I've narrowed down the cause of this phenomenon, (in case anyone else has this same problem), namely there's an issue with the dataset I'm using. Running the same analysis on a different dataset using the same script does not produce this pattern.
Notably, the pattern varies as a function of the normalisation method used in MSstats.
Sam
  • asked a question related to Proteomics
Question
3 answers
Hi,
as my question already indicates, I would like to do some multivariate analysis of my proteomics data, as I have multiple characteristics in my samples. I have successfully used MetaboAnalyst for multivariate analysis in metabolomics approaches. Should I expect some drawbacks when using MetaboAnalyst for proteomics data, or is there an equally easy tool like MetaboAnalyst but for proteomics?
Thank you for your help!
BR,
Timo
Relevant answer
Answer
MetaboAnalyst is primarily designed for metabolomics data analysis, but it also supports proteomics data analysis. Therefore, it is possible to use MetaboAnalyst for multivariate analysis of proteomics data, but there may be some limitations.
One potential issue is that the data preprocessing methods for proteomics data might differ from those used for metabolomics data. For example, the normalization and scaling methods for proteomics data might not be the same as those used for metabolomics data. Additionally, the statistical tests available in MetaboAnalyst might not be optimized for proteomics data.
However, despite these limitations, MetaboAnalyst is still a powerful tool that can be used for proteomics data analysis. It provides a user-friendly interface for data uploading, preprocessing, and multivariate analysis. It also supports a wide range of multivariate analysis techniques, including PCA, PLS-DA, OPLS-DA, and clustering analysis, which can be used to identify patterns and relationships in the data.
If you are looking for a specialized tool for proteomics data analysis, there are other software packages available, such as Progenesis QI for Proteomics, MaxQuant, and Perseus. These tools are designed specifically for proteomics data analysis and offer a wide range of advanced features and analysis methods.
  • asked a question related to Proteomics
Question
2 answers
I would like to ask a question. TMT labels peptides by reacting with amino groups (primary amines), but the sulfhydryl-acetamide structure (carbamidomethyl cysteine) formed after blocking cysteine with IAM (iodoacetamide) also contains what looks to me like a primary amine group. Why does TMT not react with alkylated sulfhydryl groups? After all, there is an -NH2 group in the blocked cysteine structure, right? (It seems this reaction is not considered when searching databases in TMT proteomics; only the N-terminal amino group and the K side chain are considered to react with TMT.)
Relevant answer
Answer
The product of the reaction of iodoacetamide with the side chain of cysteine is not a primary amino group. It is an acetamide. The reactivity of an amide is not the same as the reactivity of an amine.
  • asked a question related to Proteomics
Question
3 answers
I am isolating neutrophils using density gradient from human whole blood.
Relevant answer
Answer
If I understand correctly, this is the sample you want to use for your proteomic analysis. Therefore, you don't want to bring in media or serum, as this would totally mess up your proteomics!
Wash the cells 3x with PBS and transfer them into an Eppendorf tube for freezing of the cell pellet.
Later, you will use your lysis buffer and prepare the sample for the analysis.
  • asked a question related to Proteomics
Question
6 answers
I am very new to proteomics and I have formalin-fixed paraffin-embedded (FFPE) tissue from the temporomandibular joint of human fetuses, and I'd like to perform a proteomics analysis with it. Does someone know a protocol that works well for this type of tissue?
Relevant answer
Answer
Hi Filipe,
Here is a good publication which I followed and obtained results (protein identifications) from FFPE slides:
  • asked a question related to Proteomics
Question
2 answers
I am trying to isolate intracellular proteins from the biomass of a filamentous fungus (P. sporulosa; the cell wall is rich in glucans) for a proteomics study. I am facing problems in the sample preparation, especially the cell wall lysis step. I have tried grinding the biomass in liquid nitrogen, but the protein concentration is too low to proceed with proteomics. Which is the best method to obtain a higher protein yield: mechanical lysis with glass beads, chemical or enzymatic extraction, or grinding in liquid nitrogen? I have also used the following lysis buffers:
  1. 7 M urea, 2 M thiourea, 10 mM TCEP, 40 mM chloroacetamide, 0.4 M Tris (pH 8), 20% ACN.
  2. 6 M guanidine, 0.4 M ABC, 10% ACN, 5 mM TCEP, 15 mM chloroacetamide
Please let me know what has worked for you. Is anyone else doing proteomics studies on filamentous fungi?
Thanks
Relevant answer
Answer
Thank you. I was thinking of trying enzymatic degradation next. Thanks for sharing the detailed protocol. Did you get it from the literature? Can you please share the paper?
  • asked a question related to Proteomics
Question
4 answers
Other than "The Cancer Proteome Atlas (TCPA)" (https://tcpaportal.org/tcpa/index.html), which other cancer-based databases/tools can be used to associate a list of genes to identify different types of cancer.
Relevant answer
Answer
Antonis Tsintarakis thank you for your reply! I will look into the BEST tool you have suggested.
  • asked a question related to Proteomics
Question
2 answers
Our lab has recently conducted a proteomic study on a plant species we do research on. We asked the company doing the mass spec to also do the preliminary data analysis, which included normalising the data. The data returned to me shows only the significantly abundant proteins (already annotated), with the log fold change values as well as the normalised abundance values for each replicate. I am having some difficulty performing routine analysis, since most of the tutorials or R scripts I have found start with the raw data. I am not very proficient in R, so I am having trouble working out how best to deal with the data I have. Any recommendations on software or web tools that do not require raw data as input would be greatly appreciated.
Relevant answer
Answer
Thanks Marwan for your advice. I also spoke to some fellow researchers and they also suggested the following: REVIGO for GO enrichment, DAVID for functional annotation, and iDEP for proteomic analysis (it is very similar to metaboanalyst). Links below.
  • asked a question related to Proteomics
Question
2 answers
How can I reduce the viscosity of saliva samples for a proteomics project and homogenize these viscous samples?
It should be noted that I do not want proteins to be removed in the process.
Relevant answer
Answer
How to digest mucin in saliva?
  • asked a question related to Proteomics
Question
2 answers
I was planning to evaluate the protein expression profile of a gene of interest in breast cancer patients. Does anyone know if such a dataset exists (like the TCGA datasets we use to examine mRNA expression)?
Relevant answer
Answer
Hi Sir,
You can check gene expression at protein level using Human Protein Atlas (https://www.proteinatlas.org). Also, NCI Proteomic Data Commons can be explored (https://pdc.cancer.gov/pdc/)
  • asked a question related to Proteomics
Question
2 answers
Set (batch) effects are usually quite prominent and can mask sample characteristics when dealing with human samples and TMT-labelled non-targeted proteomics.
What is, in your view, the best approach to preserve the experimental differences while flattening out set effects (technical artifacts)?
Relevant answer
Answer
Roberto,
Here is the article partly covering this issue.
  • asked a question related to Proteomics
Question
1 answer
We have used a morpholino to block mRNA translation. Since our protein of interest does not have commercially available antibodies, is it better to proceed with a custom-made antibody or to go for the sequence and targeted mass spectrometry?
Relevant answer
Answer
Roberto, both approaches should work, and, as they are orthogonal, the results should nicely confirm each other.
  • asked a question related to Proteomics
Question
1 answer
Hi,
We want to investigate the extracellular vesicle proteome through proteomics. Reading the literature, it is not clear to us how many micrograms of extracellular vesicles are needed for this approach.
Thank you in advance for any responses we may receive on this.
Relevant answer
Answer
Hi Antonio,
the amount of extracellular vesicles (EVs) required for a proteomic approach depends on the protein content of the EVs of interest, which can differ between sources. I would suggest that you perform a protein assay on your EV preparation and adjust the amount you need for your proteomics approach based on that analysis. In one of our recent publications, Cilibrasi et al. 2022, "Definition of an Inflammatory Biomarker Signature in Plasma-Derived Extracellular Vesicles of Glioblastoma Patients", we used 100 ng of EV protein to perform SDS-PAGE and in-gel digestion of 5 slices from each lane to obtain tryptic peptides for downstream LC-MS analysis.
This amount worked well for us, but the required amount can be very different depending on the method and LC-MS equipment you are planning to use.
I hope this helps.
All the best,
Murat
  • asked a question related to Proteomics
Question
11 answers
Hi, I am a proteomics researcher.
I ask for your understanding, as my written English may be lacking.
This is my first time posting a question on ResearchGate and I am not a native English speaker.
I am doing proteomics research comparing protein abundance (label-free quantification) between two groups using LC-MS.
We have 6 samples and two groups to analyze (3 samples/group).
We are planning to run only two samples on one day and the other four samples several weeks later.
Is it possible to statistically analyze (e.g., t-test) the 6 samples together in this case?
I suspect it is not, because of retention time drift and changes in machine performance.
Please let me know your answer. (If you know of a manuscript about this issue, please let me know.)
Relevant answer
Answer
The issue you described is not a problem for proteomic analysis and your statistical tests will be valid provided you appropriately normalize the data. Most current proteomic search tools (e.g., MaxQuant, Fragpipe, ProteomeDiscoverer) will correct for mass and retention time drift. Of course, it is always best practice to ensure your instrument is calibrated before running your samples. Once you have protein-level data output from your search software, there are several statistical tools for controlling intensity (abundance) normalization. One of the more common methods is to perform Tukey's median polish. After you have performed some type of statistical test between your groups, remember to correct for multiple hypothesis testing using Benjamini-Hochberg or Bonferroni procedure. This will ensure you have adequately controlled your false-discovery rate (FDR).
  • asked a question related to Proteomics
Question
11 answers
I've conjugated a PEG4-maleimide (MW 613.66) onto an antibody fragment (52 kDa), but I'm trying to figure out how this affects A260/280 measurements on a Nanodrop afterwards. After conjugation and removal of unconjugated material on a PD-10 column, I measure the concentration of the eluted aliquots but the measurements seem really off, and the A260/A280 ratio is nowhere close to 0.5. I blanked the Nanodrop with the PD-10 elution buffer. Seems strange that such a small molecule could affect the readings that much. Does anyone have experience with PEG and if/how it could affect concentration measurements after conjugation?
Relevant answer
Answer
Cynthia,
Please contact me by email at
Sherman@mvpharm.com. I will try to "talk you through" the method of using the refractive index and UV absorption to calculate the protein and PEG concentrations without attempting a BCA assay or other chemical assay of the protein concentration. Our method is based on the Kunitani reference that I cited in this string of discussions in February 2015.
Best wishes,
Merry
  • asked a question related to Proteomics
Question
2 answers
I know this might be a bit too general a question, but:
in proteomic analysis (working with Perseus), when you deal with raw LFQ data, do you always use z-scores? And do you always log2-transform your data?
It doesn't seem to me that this is always needed. Besides, whether or not you do, the results on the graphs should come out the same, only differently scaled, right?
Maybe it's a dumb question, but thank you regardless!
Anja
Relevant answer
Answer
Working with log-transformed intensity (or concentration) data is very convenient. Protein expression is approximately log-normal, so it's approximately normal at the log scale, where you can use the standard toolbox for analysis (i.e. linear models like t-tests, ANOVA, regression, ANCOVA etc).
I think visualization is also better done with log concentrations or log fold-changes. The resulting picture (and possibly the interpretation) should not depend on the perspective, that is, which group is chosen as the "reference" and which as the "experimental" condition. To give an example: you study the expression of protein X in males and in females. The fold-change male/female is 5 (wow, what a drastic increase!), so the fold-change female/male is 0.2 (eh, well, yes, some kind of reduction). The numbers look very differently impressive although they express the very same difference (effect, or ratio if you like). The (base-2) logarithms are +2.3 and -2.3: same number, opposite sign. This is (imho) easier to understand and faithfully reflects the symmetry of the problem.
Using z-scores serves a different purpose. Here, z-scores are calculated from the log intensities, for the reasons explained above. The z-scores more closely resemble what a t-test or ANOVA "sees" in the data, as the differences (from the mean) are related to the variability (the standard deviation). What gets lost is the information about the base intensity (expression level), which is often less interesting than the difference in expression levels between samples and groups.
  • asked a question related to Proteomics
Question
8 answers
Dear All,
I extracted protein from various mouse tissues with RIPA buffer (with added protease/phosphatase inhibitor cocktails and PMSF).
I quantified the protein concentration via Bradford and loaded equal total protein amounts.
My housekeeping gene is quite stable across the various organs, but not 100% identical.
For publication I would like a blot where all normalizer bands are equal; this is why I adjust the protein loaded according to the previous blot. With protein from cell culture this works fine, but with a variety of mouse organs I do not obtain an equalized normalizer band.
Why is that? Is there anybody with experience doing multi-organ blots who has a good protocol, or advice on where to look for one?
Thanks for any advice or help :)
Relevant answer
Answer
This could happen due to pipetting errors, or to proteins leaching out of the wells while loading the gels. The whole idea of using a housekeeping gene is to nullify this effect. If you express the intensity of the protein of interest relative to that of the housekeeping gene, minor differences will not be a problem.
You could also try Ponceau staining of your blots and use that to normalize protein loading, if the housekeeping gene does not work!
Do let me know if that helps!
  • asked a question related to Proteomics
Question
2 answers
Which label-free proteomics method is best for quantitative proteomics analysis of extracellular vesicles: DDA vs DIA?
I have N=70 sample size of EV, Case A vs B.
Thanks.
Relevant answer
Answer
I believe this is highly dependent on what you are seeking. DIA has better reproducibility and higher data completeness, as Dr. Zepeda said; however, in terms of quantification, DIA can have lower sensitivity than DDA, as the complete spectrum must be scanned, reducing the acquisition time per data point.
I think DDA is suitable if you would like to perform an A vs. B comparison, as it focuses on highly abundant proteins.
  • asked a question related to Proteomics
Question
3 answers
Can you recommend a good kit for the depletion of the most abundant proteins (albumin and all immunoglobulins) from mouse plasma for use in proteomics analyses? Thank you very much.
Relevant answer
Answer
If you can afford it, try the Multiple Affinity Removal System (MARS) from Agilent. Mouse Multiple Affinity Removal columns and cartridges are ready-to-use columns for simultaneous removal of the 3 major interfering proteins: albumin, transferrin, and IgG. Otherwise, you need to follow a relatively cheap but long tandem procedure, as Dr. Wolfgang Schechinger indicated.
  • asked a question related to Proteomics
Question
5 answers
I have performed SMD of a protein at constant pulling velocity using the NAMD software and the CHARMM force field. Since this is my first time performing steered MD, I am not sure whether I should do umbrella sampling along with SMD. Are the results of SMD without any umbrella sampling meaningful? It would be helpful if I could get some references as well.
Relevant answer
Answer
Sharmi Mazumder could you figure out how to pull and fix multiple atoms in smd ? i'm interested to know about the same.
  • asked a question related to Proteomics
Question
4 answers
I have tissue samples digested in SDS lysis buffer which I would like to use for lipidomics analysis. What do you suggest? Do you believe this is going to be possible?
Relevant answer
Answer
You can't do this because the protein contamination has not been eliminated by the SDS buffer lysis, for this you will have to digest the sample with a protease enzyme.
  • asked a question related to Proteomics
Question
1 answer
Hi everyone,
I am trying to upload my proteomics data to ProteomeXchange as a "Complete Submission".
But the file validation (the first step after selecting the data) is stuck at 50%.
I have given it time, even an hour, but nothing happens.
How can I solve this problem?
Thank you in advance
Relevant answer
Answer
Dear Mashid,
it is important to always use the newest version of the submission tool. Furthermore, an explanation of all the steps is provided here: https://www.ebi.ac.uk/pride/markdownpage/pridesubmissiontool. If you are still experiencing problems, PRIDE has a very good support team, which I am sure will help you. If you are not able to manage the upload by yourself, they can even assist you in doing so. Their support/help email is: pride-support@ebi.ac.uk
Thumbs are pressed!
  • asked a question related to Proteomics
Question
10 answers
Is anyone aware of an SPE approach to isolating plasma metabolites, or at least substantially diminishing the protein/metabolite concentration ratio? The aim is to reach a balance of concentrations such that vibrational spectroscopic methods can pick up meaningful spectroscopic contributions from metabolites - contributions that are otherwise inaccessible as a result of overwhelming protein absorptions. Note that there need not be selectivity among the proteins - we are not interested in removing only the high-abundance ones (as is the case for proteomic MS work for example). Ideally, we want ALL of the proteins gone.
P.S. I am fully aware of the various lab methods in common use - crashing proteins out with organics (e.g. acetonitrile, methanol), and ultrafiltration methods. Interested in knowing more about less obvious options.
Relevant answer
Answer
Try using protein precipitation plates. They combine protein crash and filtration, which can be an automated and effective way of diminishing the protein content.
  • asked a question related to Proteomics
Question
1 answer
I am using desthiobiotin instead of biotin for pulling down a particular protein via streptavidin beads. The desthiobiotin is clicked onto the chemical probe I am using for the protein. Post-enrichment, I digest the proteins on-bead and then elute the desthiobiotinylated peptides. I am submitting the digested sample for proteomic analysis to determine the site of modification. However, I am not sure whether the desthiobiotin remains intact during the mass spectrometry analysis, and hence I can't predict the exact mass difference to expect.
Relevant answer
Answer
Hi Sauradip Chaudhuri,
If you do an on-bead digestion, you will only get peptides that are cut away from the streptavidin bead/desthiobiotin complex. Depending on the digestion enzyme and the available digestion sites around the desthiobiotin, you may also have some desthiobiotinylated peptides among your identified sequences after LC-MS analysis, but this should not be a problem if you perform your LC-MS analysis and downstream data analysis in the correct way.
Good luck!
Murat
  • asked a question related to Proteomics
Question
4 answers
Could anyone recommend some good online Proteomics courses and/or books for both beginners and advanced students?
Relevant answer
Answer
  • asked a question related to Proteomics
Question
3 answers
Long story short, I need to degrade 30 ug of RNA and I need to do it at 4 °C. I want to use only as much RNase A as necessary.
So, performing the reaction at 4 °C, how long an incubation and how much RNase A would I need to degrade 30 ug of RNA?
For example, would 1 ug of RNase A (20 ug/ml concentration) for 15 min at 4 °C be enough?
Relevant answer
Answer
You will probably have to determine this yourself.
Set up a few reactions (with and without RNase A, and with RNase A at different concentrations). Run the RNA out on a TBE-urea gel to monitor the extent of hydrolysis.
  • asked a question related to Proteomics
Question
9 answers
Hi,
I am looking for open-source tools for pathway and network analysis for proteomics and genomics. I would appreciate tools with tutorials and simple-to-follow documentation.
Relevant answer
Answer
Hi Victor
begin with STRINGdb and DAVID; they are easy to handle.
all the best
fred
  • asked a question related to Proteomics
Question
1 answer
I have three MS spectra of an unknown protein. The protein was separated and directly analyzed by MS without trypsinization (top-down approach). How can I determine the identity of the protein? I suppose I can search for a matching spectrum in a library. However, I have no experience with top-down proteomics and I don't know which software to use for protein identification. Any help please?
Relevant answer
Answer
Hello! Thank you for your reply. I used MaxQuant, but it is for the shotgun approach. I was having a look at FragPipe, but it seems it can also only be used for the shotgun approach.
  • asked a question related to Proteomics
Question
5 answers
For antibody sequencing
Relevant answer
Answer
It strictly depends on the overall approach to antibody sequencing. The main benefits of the timsTOF instrument are high sensitivity and high MS/MS speed. It best fits transcriptome-assisted sequencing of natively affinity-purified pools (e.g., repertoire analysis of convalescent sera). It gives acceptable-quality MS2 spectra of tryptic peptides with deep overall proteome coverage.
At the same time, the timsTOF is an inadequate instrument for true de novo sequencing of mAbs without database assistance. The modern Orbitraps offer multiple modes of fragmentation (CID, HCD, ETD, EThcD, UVPD) for different types of proteolytic peptides. This greatly improves the quality of identification of long peptides (especially DMAPA-treated V8 peptides) and allows Leu/Ile to be differentiated at the protein level. But this is true only for the high-end Tribrid-series Orbitraps (and partially for the ETD hybrids), not for the Exactive and Exploris series.
  • asked a question related to Proteomics
Question
8 answers
What advantages does the transcriptome have over the proteome, given that the final product of gene expression is protein? Why choose it?
Relevant answer
Answer
Not only is it much easier to work with transcriptomics, but transcriptomics also allows you to take into consideration the role played by non-coding sequences, which make up the majority of the genome and very often play a crucial role in biological regulation; see for example:
  • asked a question related to Proteomics
Question
4 answers
This might be a trivial question and operation, but I'm not experienced and haven't been able to find a direct solution online. I'm pretty sure I've overlooked something, as this should be a simple task, so I'm asking here.
I have a list of some 1500 protein IDs of bacterial origin identified in a proteomic experiment.
I would like to get GO annotations for those proteins so I can categorize them according to "biological process" and "cellular function". Is there a web service or a simple program that can retrieve those GO annotations?
I'm confident that the GO enrichment analysis offered on the main page is not appropriate for the data I have and the information I want (I may be wrong), and my organism is not available in the list.
Does anyone have any suggestions ?
Relevant answer
Answer
Hi Dennis,
You may already have found a workflow, but just in case you haven't, I would use the STRING tool: you will need to enter a FASTA file and select an organism, and STRING will do the rest. If your organism is not in their database, just select the closest one. It will give you a protein-protein interaction network and a lot of information (take a look at the download section). Alternatively, you can use the BlastKOALA tool, where you will also need to input a FASTA file, and you'll get KEGG Orthology (KO) annotations for your IDs.
hope it helps
Best
  • asked a question related to Proteomics
Question
13 answers
Dear scientists,
I got a set of around 4000 protein IDs from a proteomic experiment and I would like to globally analyse whether particular groups of proteins in my experiment are significantly more hydrophobic and/or aggregation-prone compared with other groups. I am looking for an R library or a web tool that will give me a quantitative hydrophobicity value per protein for my sets. One thing I could do is simply calculate the sequence-length-adjusted number of hydrophobic amino acids (C, L, V, I, M, F, W), but this seems a little naive and I am not sure about the biological relevance of such a simple calculation that does not take the structural aspects of the sequences into account. I would be glad for ideas on any smarter approaches. Please help.
Relevant answer
Answer
Basically, what you want to do is to determine the amino acid composition for each sequence (e.g. in this R package https://cran.r-project.org/web/packages/protr/vignettes/protr.html using extractAAC()) and multiply the number of times a given amino acid occurs with its hydrophobicity index in your chosen scale (see https://web.expasy.org/protscale/ for different scales) and sum up the values. However, in my experience, hydrophobicity is a poor predictor of the aggregation propensity of folded proteins, as aggregation is frequently linked to imperfect folding rather than to the association of properly folded molecules.
  • asked a question related to Proteomics
Question
3 answers
Hello everyone,
I'm planning on running my peptide samples on a high-resolution LC-MS instrument. I'm going to use ZipTip C18 tips for the extraction of peptides and desalting of the sample. However, I'll be sending these samples overseas and it might take 2-3 days for them to reach their destination. I can potentially keep them on dry ice throughout the delivery, but it is very costly and we have had issues before where the dry ice evaporated before reaching the destination.
If I freeze-dry my peptide samples, do you think they will be stable for a couple of days? I assume that without any humidity, enzymes cannot work on the peptides, but I just wanted to get the opinion of people with a lot of proteomics experience.
Relevant answer
Answer
Hi Bora,
leave your peptides on the ZipTips (after the washing step, without elution) and send them in an envelope at RT. The MS facility can then elute the peptides from the C18 tips and start the LC-MS analysis. I have done this several times with custom-made C18 StageTips (much cheaper than ZipTips) in a collaboration with colleagues from Acibadem.
Let me know if you need further assistance.
Good luck,
Murat
  • asked a question related to Proteomics
Question
8 answers
Multiplexed samples were labelled with TMT tags. I am trying to quantify the ion intensity for each channel; however, they are all being reported as 0.
Relevant answer
Answer
Hey Ben,
I don't know if you figured it out, but we find we get better TMT results with FragPipe than with MaxQuant (https://fragpipe.nesvilab.org). If it is a software problem, give that a try; it normalizes across TMT plexes better.
  • asked a question related to Proteomics
Question
3 answers
Current search engines for MS/MS protein identification, such as Mascot, MS Amanda, Sequest, etc., rely on the creation of a search library composed of computationally generated candidate peptides obtained by in silico cleavage (e.g., with trypsin) of proteins from a given database. Different PTMs can be added to these computationally generated peptides so that the search can be extended to address specific scientific questions, but this leads to significantly higher computational costs.
I have recently come across a case where a highly enriched short protein could not be identified by a standard search, given that it generated only a single peptide carrying 2 fixed modifications. The modifications were not among the most common ones, and finding the right combination to search was expensive in both time and computation.
I would like to open a discussion on the fact that pre-made peptidome libraries are a much better alternative to de-novo generated libraries of proteomes. Let’s get into the details!
As an example, I will use the ACE2 receptor, now infamously known to be the entry gate of Covid-19 into human cells.
The human ACE2 receptor undergoes a series of post-translational events, such as proteolytic cleavage by ADAM17 resulting in a soluble proteoform, glycosylation, and phosphorylation of tyrosine-781 and serine-783.
In current search engines, the tryptic peptides would be generated from the first methionine to the next positively charged residue, and so on until the very last residue of the protein. If one wanted to detect this protein in a sample and assess the presence of the mentioned PTMs, one would need to look for at least 2 phosphorylation sites per peptide and also check for S and Y phosphorylation. The search engine will then generate all possible combinations of S/Y singly and doubly phosphorylated tryptic peptides to search for, which leads to exponentially increasing computational costs.
Since the protein is also cleaved by another protease in vivo, the 2 peptides before and after this site will not be accounted for as they do not end/begin after a positive residue. Since this is not a small protein, other peptides will probably still be detected, and the protein will eventually be identified.
I imagine a tool which would generate the tryptic peptides as before, but accounting only for the known PTM sites. In the case of the 2 almost adjacent ACE2 phosphorylation sites, this would lead to only 3 additional peptides (pY, pS, and pYpS). If the research question is to identify novel phosphorylation sites, then a single phospho-site per peptide while looking for STY phosphorylation might already suffice, since the known sites will have already been accounted for. This can be applied to any combination of PTMs, massively reducing computational requirements. It is of course counterproductive to look for PTMs in sterically inaccessible regions (e.g., the hydrophobic core of the fold).
Databases of known, annotated PTM sites across the entire proteomes of many organisms are readily available. The tool could have a modular design, allowing the user to create a customized peptidome with any or all of the following characteristics: trypsin/other enzyme used, and/or accounting for known endogenous cleavage sites, known PTM sites, and natural variants.
I see a long list of advantages using this method and I would like to list the most important ones:
1. Identification of additional hits that could otherwise be missed for several reasons (e.g., tryptic peptides carrying fixed modifications that are not being searched for due to computational resource limitations, or worse, a small protein that would normally yield only a single peptide carrying 2 fixed modifications, one of which might be exotic)
2. Reduced computational time when trying to identify novel PTM sites
3. Lower false discovery rate since the peptidome used will be a much more closely related dataset to the actual sample composition than just a simple tryptic proteome and as a result newly identified spectra of interest can be more confidently assigned as the risk of artefacts is lower.
4. Single nucleotide polymorphisms can be analyzed analogously to PTM sites and would not result in exponentially larger search database.
5. More unique peptides could be assigned: If 2 proteins share a tryptic peptide, but one is known to be phosphorylated in this peptide but not the other, one could distinguish the phosphorylated peptide as having come only from one of the 2. In case of glycosylation this makes even more sense since some types of glycosylation only appear in a limited number of proteins, depending on their cellular localization
As the Human Proteoform Project takes off, maybe this would be a way for MS-based proteomics to quickly catch up and help that project while advancing itself.
What are your thoughts on this? Are there any ongoing projects that aim to do just that?
Relevant answer
Answer
To me, this debate seems somewhat reminiscent of the peptide-centric vs the spectrum-centric approaches.
Limiting the search space is generally a good idea for reducing FDR. Of course, if your peptide is not in your “limited” library you have no chance of identifying it. I see this as the biggest issue with this type of approach.
X!Tandem (and now other search tools) takes the approach that you do a broad initial search with few PTMs specified, then you broaden the PTMs once you have a smaller list of proteins to search. A neat approach in my view.
I’d be very careful with this source of information; “Databases of known annotated PTM sites of entire proteomes of many organisms are readily available.” I know of someone with a lot of de novo MS/MS experience who has undertaken an extensive manual review of phosphopeptides in the databases. The estimate (unpublished) is that around 30% are wrong. As tools progress and the amount of data increases, we look less at the raw MS/MS data. This is for very practical reasons, no one can manually verify 10,000 phosphopeptides, but we still need care when using this type of data.
Here are some papers that may be of relevance for you;
Lu, Yang Young, Jeff Bilmes, Ricard A Rodriguez-Mias, Judit Villén, and William Stafford Noble. “DIAmeter: Matching Peptides to Data-Independent Acquisition Mass Spectrometry Data.” Bioinformatics 37, no. Supplement_1 (July 1, 2021): i434–42. https://doi.org/10.1093/bioinformatics/btab284.
Searle, Brian C., Lindsay K. Pino, Jarrett D. Egertson, Ying S. Ting, Robert T. Lawrence, Brendan X. MacLean, Judit Villén, and Michael J. MacCoss. “Chromatogram Libraries Improve Peptide Detection and Quantification by Data Independent Acquisition Mass Spectrometry.” Nature Communications 9, no. 1 (December 3, 2018): 5128. https://doi.org/10.1038/s41467-018-07454-w.
Ludwig, Christina, Ludovic Gillet, George Rosenberger, Sabine Amon, Ben C. Collins, and Ruedi Aebersold. “Data‐independent Acquisition‐based SWATH‐MS for Quantitative Proteomics: A Tutorial.” Molecular Systems Biology 14, no. 8 (August 1, 2018): e8126. https://doi.org/10.15252/msb.20178126.
  • asked a question related to Proteomics
Question
4 answers
Hi everyone
I am looking to perform protein extraction from human aortas to send for mass spectrometry analysis. Does anyone have previous experience with these tissues and would they be willing to share their protocol with me?
Thank you in advance for any help you may provide :)
Relevant answer
Answer
Hi Lara
A quick and simple protocol:
Weigh the aorta sample, e.g. 100 mg
Add the tissue to a FastPrep-24 tube with Lysing Matrix D
It is best if you can snap-freeze the tissue in liquid N2 before weighing
Always work on ice (dry ice if possible)
Add lysis buffer (8 M urea / 100 mM Tris-HCl pH 8.0 / protease inhibitors)
1 mL buffer per 100 mg tissue
Homogenize following the instructions for the FastPrep-24 instrument
Centrifuge and transfer the supernatant (SN) into a new tube.
You can wash the beads once more with 1/2 volume of buffer and pool it with the first SN
With this lysate you can do whatever you want:
protein assay, run a gel, direct in-solution digestion ...
Best wishes and good luck
Greetings
Natasha
  • asked a question related to Proteomics
Question
1 answer
What concentrations of surfactants are used in protein isolation, purification and crystallisation, and what is the basis for selecting the surfactant concentration in the different steps in proteomics?
Relevant answer
Answer
Dear Subhrajit Mohanty, sorry to see that your very interesting technical question has not yet received any expert answers. Personally, I'm not expert enough in this field to give you a qualified answer. My suggestion would be to search the "Publications" and "Questions" sections of RG for relevant literature references and for closely related questions which have been asked earlier on RG. Moreover, please have a look at the following potentially useful review article which might help you in your analysis:
Successful amphiphiles as the key to crystallization of membrane proteins: Bridging theory and practice
This article has been posted by the authors as public full text on RG, so you can freely download it as a pdf file.
I hope this helps. Good luck with your work!
  • asked a question related to Proteomics
Question
8 answers
Hello,
I have very little knowledge of bioinformatics, and part of my research project is based on the analysis of proteomics and metabolomics data. However, I am struggling to find resources (webinars, courses, websites, ...) to help me get started with understanding and analyzing my data. I would appreciate it if anyone could give me some suggestions.
Thank you!
Relevant answer
Answer
Did you perform the experiment yourself? What kind of digestion? Do you have the raw files? How versed are you with LC-MS/MS? Do you have access to any proprietary software, e.g. ProteinScape or Proteome Discoverer? Would you like to analyse the data yourself? For proteomics data, MaxQuant is a wonderful and user-friendly resource, and a number of videos are available. Besides, the manual available on its website is quite self-explanatory. For metabolomics data, MetaboAnalyst is the analogous software. However, do make yourself familiar with the jargon.
  • asked a question related to Proteomics
Question
5 answers
Hi,
Protein buffer exchange is important for efficient protein immobilization; however, we often lose some of the protein during the exchange process.
Could we skip this step if the dilution factor is high, e.g. 50X or 100X? Is there a reference for that?
Thank you in advance.
Best Wishes,
Waleed
Relevant answer
Answer
I never really had a major issue with protein loss by dialysis. Typically I lose most protein during column purification steps. Are you using desalting columns to exchange buffer or dialysis? If your yield is low it may just come down to switching to a different product. If you try diluting in your new buffer and have to re-concentrate with centrifuge filters, you will probably lose a lot of protein that way as well. I don't think there is any one method that is going to give you close to 100% yield, but there are many methods that should give you more than 90%. If this is not enough, then the issue may be with whatever approach you are using to express your protein. Maybe by optimizing that a bit more, your yield will increase enough that loss from buffer exchange is insignificant.
  • asked a question related to Proteomics
Question
2 answers
I am looking for a tool (online, R, Python, or otherwise) which I can use to highlight peptide sequences on the full protein sequence in a visually nice way for publications and presentations.
Extended description: In several of my bottom-up proteomics research projects, I have identified proteins of interest for a given condition/disease. Often, these proteins are activated/deactivated by cleavage (e.g. the complement system, coagulation system, angiotensinogen, etc.). Therefore, I commonly perform a peptide-centric analysis after the protein centric analysis, to identify changing peptides and then I manually map these to the protein sequence. I am looking for a tool to help me with this; where I can submit the list of peptide sequences and have these visually mapped to the full protein sequence of origin. Ideally, the tool should include known cleavage products (e.g. from UniProt KB).
Any advice is most welcome and thank you for your time.
Sincerely yours,
Tue Bjerg Bennike
Relevant answer
Answer
Maybe Peptigram can be of use for you?
  • asked a question related to Proteomics
Question
4 answers
I have performed siRNA-mediated knockdown of a low-abundance protein in the SKOV3 cell line, followed by proteomic analysis in biological triplicate. The proteomics run was repeated three times. After retrieving the data, I found that my protein of interest (the knocked-down protein) is not present in either the transfected or the control (non-transfected) group. However, I am getting bands for the protein in western blot analysis. How can I justify my proteomics data?
Relevant answer
Answer
First, I would recommend checking the expression level of your protein in the cell line. The chance of detecting a protein of interest by MS is low if it is present at only a few copies per cell (WB is usually more sensitive). Check how many tryptic peptides of your protein can theoretically be detected; low molecular weight proteins can give too few tryptic peptides to claim a protein identification. It is also a question of how you performed the proteomics analysis: was it a targeted analysis (MRM, PRM)? How sensitive and clean is your MS? Are there any potential PTMs on the peptides? And so on. Good luck.
  • asked a question related to Proteomics
Question
3 answers
Hello,
I am looking to design a proteomics experiment looking at three treatment concentrations (Control, low-dose, high-dose) and two timepoints (24h, 48h) in an attempt to discover an unknown mechanism for lipid accumulation in THP-1 macrophages. I have never stepped into the omics world before so I thought I would start by asking:
What do you know now that you wish you had known when you started?
Relevant answer
Answer
Hi Braeden Giles
Mass spectrometry is a very sensitive and accurate method to determine the precise molecular weight of a protein.
Because it is very sensitive and is amenable to rapid processing of many samples, it is becoming very popular for determining the different proteins present in complex samples (e.g., the proteins in a given cellular organelle, separated by two-dimensional gel electrophoresis), in what is known as a proteomics analysis.
Techniques have now been developed by which proteins separated in two-dimensional gels can be digested within the gels using endoproteases and then injected directly into a mass spectrometer for analysis of the resulting fragments.
Best