Library - Science topic

Explore the latest questions and answers in Library, and find Library experts.
Questions related to Library
  • asked a question related to Library
Question
2 answers
Hello everyone. We have done library preparation for genotyping-by-sequencing (GBS). The sample source is gDNA from chili leaves, and the restriction enzymes are EcoRI and MseI. We designed a common adapter compatible with the restriction-site overhang and a universal adapter for Illumina sequencing, and then used Illumina DNA/RNA UDI indexes. For the sequencing step, the final GBS library was pooled with final amplicon and mRNA libraries and run as PE150. Unfortunately, we didn't get any reads after sequencing.
Could you give us any recommendations on how to optimize the library preparation and sequencing to get optimal sequencing output?
Your suggestions would be very helpful.
Relevant answer
Answer
I've done a bit of library preparation. I know there are steps where there is no way to do any sort of "let's check if this works before continuing" test, and you just have to keep going. But there are steps where you can check.
So, what was the last step where you checked and have good evidence that the protocol was working as planned?
It takes a really, really large amount of high-quality DNA to make a sequencing library. How much did you start with?
  • asked a question related to Library
Question
1 answer
I am working on a ddRAD-seq experiment and looking for a detailed protocol, particularly focusing on restriction enzyme selection, library preparation, and adapter selection. I would appreciate if anyone could share an optimized protocol or insights on choosing the best restriction enzymes for different genomes. Additionally, any recommendations on adapter design and ligation efficiency would be helpful.
Has anyone encountered challenges in enzyme compatibility or library preparation steps? Any troubleshooting tips would also be valuable.
Relevant answer
Answer
Restriction Enzyme Selection
The choice of restriction enzymes (REs) is a crucial step in ddRAD-seq, as they will determine the sequence complexity and distribution of loci across your genome.
Here are some general guidelines for selecting restriction enzymes:
- Fragment Size Considerations:
Fragment Length: You want to ensure that the resulting fragments are of an appropriate size for sequencing. Generally, you aim for fragments between 300-600 bp after digestion. A good starting point is to use restriction enzymes that produce fragments with an average size of 300-500 bp. You can estimate the expected fragment size distribution using bioinformatics tools like REBASE or NEBcutter.
If the genome you are studying is large or has a high level of repetitive sequences, consider using enzymes with longer recognition sites that might give larger fragments. Alternatively, for a smaller genome or when working with species with a low amount of repetitive DNA, you can use enzymes with shorter recognition sites to get smaller fragments.
- Enzyme Compatibility:
Two-Enzyme Combinations: Typically, ddRAD-seq uses two restriction enzymes: a frequent cutter with a 4-base recognition site (e.g., MseI) and a rarer cutter with a 6- or 8-base recognition site (e.g., PstI or SbfI). Pairing a frequent and a rare cutter allows effective fragment generation while avoiding over-digestion.
- Consider genome size and complexity:
For large, complex genomes, enzymes with longer recognition sites may work better, like SbfI (8 bp) or PstI (6 bp). These can help reduce the number of overly small fragments.
For smaller genomes, or genomes with fewer repetitive regions, using shorter-recognition-site enzymes, such as MseI (4 bp), will give you greater coverage of loci.
- Checking for Cutter Efficiency:
In silico analysis: Tools like REduce or RAD Sequence can be used to evaluate how well a restriction enzyme pair will cut a given genome in silico; a minimal digest sketch follows this list.
Avoiding repetitive regions: If possible, try to choose enzymes that avoid cutting too frequently in repetitive regions of the genome. A good balance between coverage of loci and avoidance of overly abundant repetitive regions is key.
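To make the in silico step concrete, here is a minimal sketch (not one of the tools named above) using Biopython's Restriction module; the FASTA path, enzyme pair, and 300-500 bp window are placeholders for your own choices:
```python
# Rough in silico double digest with Biopython; counts fragments in a size window.
# Assumes Biopython is installed and "genome.fasta" is a placeholder path.
from Bio import SeqIO
from Bio.Restriction import PstI, MseI

def fragment_lengths(seq):
    # Cut positions from both enzymes, merged and sorted
    cuts = sorted(set(PstI.search(seq)) | set(MseI.search(seq)))
    edges = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(edges, edges[1:])]

total = in_window = 0
for rec in SeqIO.parse("genome.fasta", "fasta"):
    for n in fragment_lengths(rec.seq):
        total += 1
        in_window += 300 <= n <= 500
print(f"{in_window} of {total} fragments fall in the 300-500 bp window")
```
A true ddRAD count would additionally keep only fragments flanked by one site of each enzyme, but this is enough for a first look at the size distribution.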
Library Preparation and Adapter Design
- Adapter Design:
ddRAD-seq typically uses two types of adapters: a P1 adapter (which ligates to one end of the restriction fragment) and a P2 adapter (which is designed to ligate to the other end of the fragment).
Adapter Length: Adapters are usually 40-50 bp in length. The P1 adapter contains a barcode sequence that allows for sample multiplexing, while the P2 adapter includes a primer binding site for subsequent amplification.
Unique Barcodes: If you plan on multiplexing several samples, include unique barcodes in your P1 adapters for sample identification. Barcodes should ideally be 8-12 nucleotides in length.
- Ligation Efficiency:
Adapter ligation can sometimes be inefficient, particularly when working with small or GC-rich fragments. To maximize ligation efficiency, ensure that:
The ligation reaction is carried out with a high-quality T4 DNA ligase.
The ligation temperature is optimized (generally around 16°C overnight is recommended).
The adapter-to-DNA molar ratio is optimized to ensure a high rate of ligation. A common ratio to try is between 1:1 and 10:1 (adapter to DNA fragment), but this can vary depending on the DNA input and the quality of the fragments; a quick molarity calculation is sketched below.
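Setting that molar ratio requires picomoles, not nanograms. A small helper under the standard ~660 g/mol-per-base-pair approximation (the input amounts below are illustrative):
```python
# Convert a dsDNA mass to picomoles (~660 g/mol per bp), then size an adapter
# spike for a chosen adapter:insert molar ratio.
def dsdna_pmol(ng, length_bp):
    return ng * 1000.0 / (length_bp * 660.0)

insert_pmol = dsdna_pmol(ng=100, length_bp=400)   # 100 ng of ~400 bp fragments
adapter_pmol = 10 * insert_pmol                   # e.g., a 10:1 adapter:insert ratio
print(f"{insert_pmol:.2f} pmol inserts -> add {adapter_pmol:.2f} pmol adapters")
```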
Library Amplification
After adapter ligation, the next step is PCR amplification:
Use high-fidelity polymerase to avoid introducing PCR errors.
PCR primers should correspond to the adapter sequences (P1 and P2).
Primer optimization: Ideally, perform a test PCR to determine the optimal number of cycles (typically between 12-16 cycles). Over-amplification can lead to PCR bias, so it's important to stop amplification once the fragments of the correct size are enriched.
Size Selection & Cleanup
After amplification, size selection and cleanup steps are necessary:
Use gel-based size selection (if you want to recover specific fragment sizes) or SPRI (solid-phase reversible immobilization) beads for size selection and to clean up the library.
You want to retain fragments within your target size range (e.g., 300-500 bp), removing both smaller and larger fragments.
Sequencing
Once the library is prepared and clean, it's ready for sequencing. Depending on your sequencing platform (e.g., Illumina), you can use paired-end reads to get more robust data for downstream analysis.
  • asked a question related to Library
Question
1 answer
I am working on a ddRAD-seq experiment and looking for a detailed protocol, particularly focusing on restriction enzyme selection, library preparation, and adapter selection. I would appreciate if anyone could share an optimized protocol or insights on choosing the best restriction enzymes for different genomes. Additionally, any recommendations on adapter design and ligation efficiency would be helpful.
Has anyone encountered challenges in enzyme compatibility or library preparation steps? Any troubleshooting tips would also be valuable.
Relevant answer
Answer
Hi Gemini
if you have access to an Illumina sequencer, this will help you:
all the best
fred
  • asked a question related to Library
Question
3 answers
It's a research topic.
Relevant answer
Answer
In my institution, we make good use of our repository (TERAS). TERAS was developed by TETFund and allows Federal University Libraries to collect, preserve, and disseminate their resources with ease. Its features include EBSCOhost, Eagle Scan, Thesis Repository, and BIMS.
  • asked a question related to Library
Question
2 answers
My study is about cell metabolomics. I'm currently analyzing MS/MS data using MS-DIAL 5 for untargeted metabolomics, but I'm facing an issue where I don't get any reference-matched/library-matched compounds (I got some, like 4 or 5, but they're not metabolites of interest and seem to be contamination). Instead, most of the annotated metabolites are either "suggested" or "w/o MS2" annotations, and some of the metabolites of interest for energy metabolism are also annotated "w/o MS2". From what I understand, you cannot take w/o MS2 results with confidence.
Previously, I have also tried increasing the concentration and extracting more cells for metabolites but still face similar issues. I have also played around with MS-DIAL parameters, but still, I can't get the library matched.
My questions are:
1. Can w/o MS2 annotations be considered confident identifications and be used as results or in publication?
2. Are there specific parameter settings in MS-DIAL that I should check to ensure MS2 spectra are being used correctly for annotation?
3. Could this be the problem with the sample preparation or instruments/methods rather than data analysis?
Any advice on troubleshooting or optimizing settings of either MSDIAL or the instrument would be greatly appreciated!
My Setup:
  • Instrument: Agilent 6520 Accurate-Mass Q-TOF LC-MS (Data Acquisition: Auto MS/MS or DDA)
  • Sample: metabolite extracted from human cell lines
  • File type: abf file using Abf File Converter / raw .d from Agilent (drag to MS-DIAL)
  • MS-DIAL 5.0 settings: Default parameters with public libraries (MSMS_Public-all-pos-VS19/MSMS_Public-all-neg-VS19)
Thank you!
Relevant answer
Answer
I would also suggest doing a targeted search to be sure the methodology has worked, i.e., to show that you get MS2 from some peaks, and checking whether non-MS-DIAL software finds MS2; a free software option is Skyline, which I often use for this.
It is also possible in the instrument method to assign some ions for preferential MS2, so it is good to have a suspect list of metabolites in the method; that way you should capture the MS2 if it is there.
Also, if you have the masses from the first run on the Agilent from the MS-DIAL data, you can re-inject the samples for MS2 using the MS1 masses as the target masses.
You can publish on MS1 alone, but then you have to state that the identification level is tentative; see, for example, Schymanski et al. (doi: 10.1007/s00216-015-8681-7) for rules on non-targeted analysis and what confidence level to assign your peaks (other papers on non-target rules are available but similar).
  • asked a question related to Library
Question
1 answer
Hello, I have a problem with creating a mathematical model using the Simscape Fluids library. Can someone help me with that?
Relevant answer
Answer
Sorry, no idea.
  • asked a question related to Library
Question
1 answer
I found an error; how do I report it? The "cover page" you create and attach to an article is the wrong one; it is for a different article.
I have the file attached to show you.
Gail Cathey, M.L.S.
Print Resources / Access Services Librarian
Interlibrary Loan
Course Reserves
Chestnut Hill College
Logue Library
9601 Germantown Ave.
Phila., PA. 19118
(215)248-7053
Relevant answer
Answer
If you have a ResearchGate account, you should be able to delete and repost the article if you are the author, or contact the scholar under whose name the article is posted to notify them of the error.
  • asked a question related to Library
Question
1 answer
Hi
I want to export ACM Digital Library articles for my SLR, but I have this problem: the export only gives me 1,000 records, and I still have 642 records left.
Can anyone suggest how I can export all of the citations?
Kind regards
Relevant answer
Answer
Unfortunately, I haven't found a solution for this either, but it's possible to divide the search into years to get below 1,000.
Or have you found a better solution?
  • asked a question related to Library
Question
1 answer
Can I know, step by step, how Spectragryph can be used to create libraries to identify whether specific compounds are present in my sample?
For example, if my sample contains many phytochemicals such as aucubin and curcumin, can I create a library of existing spectra of those specific phytochemicals and use it to compare against my FT-IR results to check for their presence?
Also, other than Spectragryph, are there any other library software tools that can perform this task?
Thanks
Relevant answer
Answer
Spectragryph is a software tool for visualizing and analyzing spectroscopic data, including FT-IR (Fourier Transform Infrared) spectra. Here's how you can use Spectragryph for FT-IR analysis:
Data Import
1. *Import spectra*: Load your FT-IR spectra into Spectragryph. The software supports various file formats, including ASCII, CSV, and JCAMP-DX.
2. *Data organization*: Organize your spectra into datasets, which can be further divided into subsets for easier analysis.
Data Analysis
1. *Spectral manipulation*: Perform various mathematical operations on your spectra, such as baseline correction, normalization, and smoothing.
2. *Peak analysis*: Identify and analyze peaks in your spectra, including peak picking, fitting, and integration.
3. *Spectral comparison*: Compare multiple spectra to identify similarities and differences; a minimal matching sketch follows this list.
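As a hedged illustration of what such a comparison does under the hood (this is not Spectragryph's internal algorithm), a library search can be as simple as cosine similarity between spectra interpolated onto a common wavenumber grid:
```python
# Toy FT-IR library match: cosine similarity on a shared wavenumber grid.
# The wn_* / y_* arrays are placeholders for measured wavenumbers and absorbances.
import numpy as np

def match_score(wn_sample, y_sample, wn_ref, y_ref):
    y_ref_i = np.interp(wn_sample, wn_ref, y_ref)  # resample reference onto sample grid
    a = y_sample - y_sample.mean()
    b = y_ref_i - y_ref_i.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# library = {"Aucubin": (wn_a, y_a), "Curcumin": (wn_c, y_c)}
# scores = {name: match_score(wn_s, y_s, wn, y) for name, (wn, y) in library.items()}
# print(max(scores, key=scores.get))
```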
Visualization
1. *Spectral plotting*: Visualize your FT-IR spectra as plots, with options for customizing axes, labels, and colors.
2. *Peak labeling*: Annotate peaks in your spectra with labels, making it easier to identify and analyze specific features.
3. *Spectral overlays*: Overlay multiple spectra to facilitate comparison and analysis.
Quantitative Analysis
1. *Calibration*: Create calibration models using Spectragryph's built-in regression analysis tools.
2. *Prediction*: Use your calibration models to predict properties or concentrations of unknown samples.
3. *Statistics*: Perform statistical analysis on your data, including mean, median, standard deviation, and more.
Reporting and Export
1. *Report generation*: Create customizable reports that include your spectra, peak tables, and analysis results.
2. *Data export*: Export your data in various formats, including ASCII, CSV, and JCAMP-DX, for further analysis or sharing with colleagues.
  • asked a question related to Library
Question
2 answers
I prepped a cDNA library from ovarian cancer cell line RNA (all RIN scores were 9+ on a TapeStation 4200) using a KAPA HyperPrep mRNA kit and UDI adapters for Illumina sequencing. When we ran the TapeStation (D1000 HS ScreenTapes) on the prepped libraries, we saw extra peaks around 230-250 bp. In some samples it was a pronounced peak; in others it was more of a shoulder. We are baffled about what this could be from. We did a QC sequencing run on an Illumina NextSeq P2 100-cycle kit and are wondering if our inner-distance plot could look like this because of the smaller fragments we see on the TapeStation. Some people have suggested the extra peak on the TapeStation is a bubble peak from over-amplification of the library, but bubble peaks would appear larger, not smaller. Our anticipated library size was 200-300 bp. For most samples the average size was 350 bp, so including the adapter sequences this seems right. We just have no idea what the smaller peak that appears in nearly every sample could be. I've included images of a couple of electropherograms and gel images from the TapeStation and the inner-distance plot. You can see these smaller extra peaks appear as bands on the gel images and how they vary slightly in size from sample to sample.
Relevant answer
Answer
Hi! It looks like an adapter-dimer peak at ~150 bp. Improve the clean-up after adapter ligation: use 0.8x beads, freshly prepared 80% ethanol, and pipette up and down 5-10 times during the ethanol cleaning step.
  • asked a question related to Library
Question
6 answers
Hello,
I am new to coding in R and have come up with the following code to perform a nested 2-way ANOVA (with Tukey post-hoc) to be able to account for individual animal variability within each group. I am wondering if someone can confirm this is correct or provide alternative methods? I am assessing the effects of diet and stress on certain cellular outcomes, with n=3-5 animals/group. Thank you!
# Load required packages
library(lme4)
library(emmeans)
library(ggplot2)
# Convert factors
Data_For_R$Diet <- as.factor(Data_For_R$Diet)
Data_For_R$Stress <- as.factor(Data_For_R$Stress)
Data_For_R$Animal <- as.factor(Data_For_R$Animal)
# Nested 2-way ANOVA models
model_aov_stress <- aov(SomaVolume ~ Stress / Animal, data = Data_For_R)
model_aov_diet <- aov(SomaVolume ~ Diet / Animal, data = Data_For_R)
model_aov_combine <- aov(SomaVolume ~ Diet * Stress / Animal, data = Data_For_R)
# Mixed-effects model accounting for animal variability
model_lmer <- lmer(SomaVolume ~ Diet * Stress + (1 | Animal), data = Data_For_R)
# Obtain estimated marginal means for Diet and Stress, considering random effect for Animal
emmeans_result <- emmeans(model_lmer, ~ Diet * Stress)
# Perform pairwise comparisons for the interaction between Diet and Stress, adjusting for animal variability
pairs(emmeans_result, adjust = "tukey")
# Create a new factor to represent the combination of Diet, Stress, and Animal
Data_For_R$Diet_Stress_Animal <- interaction(Data_For_R$Diet, Data_For_R$Stress, Data_For_R$Animal, drop = TRUE)
# Summaries of the models
summary(model_aov_stress)
summary(model_aov_diet)
summary(model_aov_combine)
Relevant answer
Answer
An Error(AnimalID/Diet/Stress) term in the model formula would account for the nested structure, where animals are nested within Diet and Stress conditions. This is crucial when you have repeated measures on the same animals.
Two-Way ANOVA: The model includes both Diet and Stress as factors and their interaction (Diet*Stress) to assess their main effects and any potential interaction between them.
TukeyHSD: This function performs Tukey's Honestly Significant Difference post-hoc tests to identify which specific groups (Diet or Stress levels) significantly differ from each other.
Alternative approach (using the lme4 package):
library(lme4)
library(emmeans)
# Fit the model using the lmer() function
model_lmer <- lmer(CellOutcome ~ Diet * Stress + (1 | AnimalID), data = data)
# Summary of the model
summary(model_lmer)
# Post-hoc tests (using the emmeans package)
emmeans(model_lmer, pairwise ~ Diet)
emmeans(model_lmer, pairwise ~ Stress)
  • asked a question related to Library
Question
1 answer
Greetings from Abuja, Nigeria. I suddenly and rudely discovered that my copy of Y. B. Usman's seminal publication Manipulation of Religion in Nigeria is missing from my library, at a time when I most needed the book. Could anyone with a copy to hand help with this? Thanks.
Relevant answer
Answer
I am also looking for the book. I would appreciate your kind help in getting it.
  • asked a question related to Library
Question
2 answers
Comsol Multiphysics
Relevant answer
Answer
Greetings. All of the predefined parameters and constants are readily available in COMSOL documentation.
  • asked a question related to Library
Question
2 answers
We were using the ZINC database for the virtual compound library in our studies, but there have been problems downloading large numbers of compounds for a while. Are there other databases that can be used, or how can I solve this problem with the ZINC database?
Relevant answer
Answer
Hi Zeynep Yağmur Babaoğlu,
Maybe you can refer to the databases provided by TargetMol (website: https://www.targetmol.com). TargetMol provides the REAL Database (6 billion compounds) and the TOPSCIENCE Database (26 million compounds) for virtual screening. For more information, you can also contact us at sales@targetmol.com.
  • asked a question related to Library
Question
7 answers
I need to identify the metabolites in blood. I have done LC-HRMS; since there is no library provided, I am having difficulty identifying the hits. Any suggestions?
Relevant answer
Answer
If you don't know the accurate mass and molecular formula, use software tools like Xcalibur, MassLynx, or open-source options like MZmine or MS-DIAL to calculate possible molecular formulas based on the accurate mass and isotopic patterns.
If you know the accurate mass and molecular formula, then follow this to interpret fragments. Manually interpret fragments or use software like MetFrag, SIRIUS, or Molecular Networking (via GNPS) to predict the structure from fragmentation data.
  • asked a question related to Library
Question
1 answer
This call for papers invites the submission of high-quality, unpublished manuscripts that explore the challenges and opportunities faced by banks in a historical context characterized by the energy transition of national economic systems.
Topics of interest include, but are not limited to, the following three areas:
(1) The evolution of the SSM ten years after its birth.
It is considered important to broaden the discussion on whether, and how, ten years after the SSM Regulation, the new Community architecture has contributed significantly to improving the stability of individual banks in several ways, such as: (i) Harmonisation of supervisory practices (Carretta et al., 2015; Scannella, 2015); (ii) Strengthening prudential supervision (Beccalli & Cesarini, 2021); (iii) Improving the stability of the banking system and the market power of European banks (Banfi & Pampurini, 2016; Bikker & Okolelova, 2022); (iv) Reduction of systemic and idiosyncratic risk (Beccalli & Poli, 2015; ECB, 2024b); (v) Improved risk disclosure (Altunbaş et al., 2022).
(2) The harmonisation of the Control and Supervision process: focus on sanctions.
A crucial aspect of the SSM is its sanctioning activity, which has evolved to raise new questions about: (i) the effectiveness of sanctions in improving banking stability (Caiazza et al., 2015), also in consideration of the information about the motivations behind them (Guerello et al., 2019); (ii) the impact of sanctions on stock prices (Linder, 2016); (iii) the effectiveness of supervisory action, comparing the European and US supervisory sectors (Götz & Tröger, 2017); (iv) the restrictiveness of supervisory action in Europe on the basis of the frequency with which sanctions are imposed and their contribution to systemic risk (Korzeb et al., 2024); (v) the contribution of sanctions to the risk of bank default (Murè et al., 2020); (vi) the combination of ESG and sanctions (Murè et al., 2021; Mango et al., 2023); (vii) the impact of sanctions on reputation (Armour et al., 2017); (viii) the impact of sanctions on banks' performance (Murè, 2014; Murè & Spallone, 2018); (ix) the probability of sanction (Murè et al., 2018); (x) the possible evolution of the legislation on the adequacy of top management bodies (ECB, 2017; MEF, 2020); (xi) the sanctioning activity of the Bank of Italy in the context of the SSM (Banca d'Italia, 2023).
(3) The evolution of the control governance process: the integration of the Compliance Function with strategic planning and outsourcing possibilities.
Compliance to support the strategic process in intermediaries, also in consideration of the possibility of resorting to the outsourcing of corporate functions (Murè & Bittucci, 2020; Murè, 2021; ECB 2024a).
We encourage all researchers to submit their work by the deadlines outlined above. Your contributions are vital for fostering discussions and advancing knowledge in our field. We look forward to receiving your submissions!
Links below for more information
  1. Website page: https://www.complianceandstrategyinbanking.eu/
  2. Submission page: https://complianceandstrategyinbanking.confnow.eu/
  3. LinkedIn page: https://www.linkedin.com/in/csibc-international-conference/
Please find attached more information, including opportunities for publication related to the JFMMI special issue. Other possibilities will be available soon.
***
References
  • Altunbaş, Y., Polizzi, S., Scannella, E. & Thornton, J. (2022). "European Banking Union and bank risk disclosure: the effects of the Single Supervisory Mechanism". Review of Quantitative Finance and Accounting.
  • Armour, J., Mayer, C. & Polo, A. (2017). "Regulatory Sanctions and Reputational Damage in Financial Markets". Journal of Financial and Quantitative Analysis, 52(4), 1429-1448.
  • Council Regulation (EU) No 1024/2013 of 15 October 2013 conferring specific tasks on the European Central Bank concerning policies relating to the prudential supervision of credit institutions.
  • ECB (2017). "Linee guida: Fit and proper assessment".
  • ECB (2024a). "Draft guide on governance and risk culture".
  • ECB (2024b). "Statement on SSM risk appetite".
  • Banca d'Italia (2023). "Relazione sulla gestione e sulle attività".
  • Banfi & Pampurini (2016). "Il grado di efficienza degli intermediari sottoposti alla vigilanza europea: una valutazione". Osservatorio Monetario.
  • Beccalli, E. & Poli, F. (2015). "Bank Risk, Governance and Regulation". Philip Molyneux, Houndmills.
  • Beccalli, E. & Cesarini, F. (2021). "Il sistema finanziario europeo. Cosa regolare, come regolare, chi deve regolare". Il Mulino.
  • Bikker, J. & Okolelova, I. (2022). "The Single Supervisory Mechanism: Competitive implications for the banking sectors in the euro area". International Journal of Finance & Economics, Wiley Online Library.
  • Caiazza, S., Cotugno, M., Fiordelisi, F. & Stefanelli, V. (2015). "Bank Stability and Enforcement Actions in Banking". CEIS Research Paper 334, Tor Vergata University, CEIS, revised 20 Mar 2015.
  • Carretta, A., Farina, V., Fiordelisi, F., Schwizer, P. & Stentella Lopes, F. S. (2015). "Don't Stand So Close to Me: The role of supervisory style in banking stability". Journal of Banking and Finance.
  • Götz, M. & Tröger, T. (2017). "Fines for misconduct in the banking sector – what is the situation in the EU?". Publications of the researchers of the Leibniz Institute for Financial Research SAFE.
  • Guerello, C., Murè, P., Rovo, N. & Spallone, M. (2019). "On the informative content of sanctions". The North American Journal of Economics and Finance, Elsevier, 48(C), 591-612.
  • Korzeb, Z., Bernardelli, M. & Niedziółka, P. (2024). "Enforcement actions against European banks in the years 2005–2022. Do financial penalties imposed on European banks follow any patterns?". Journal of Banking Regulation.
  • Linder, D. (2016). "Reputational risk of banks – a study on the effects of regulatory sanctions for major banks in Europe".
  • Mango, F., Murè, P., Cardi, M., Paccione, C. & Bittucci, L. (2023). "Supervisory Sanctions, ESG Practices and Bank Reputation: Market Performance Analysis of Sanctioned Banks". Corporate Ownership & Control.
  • Marzioni, S., Murè, P. & Spallone, M. (2020). "L'impatto delle sanzioni sulla probabilità di default. Il caso delle banche italiane". Il Risparmio, ISSN 0035-5615.
  • MEF (2020). Decreto MEF 169/2020.
  • Murè, P. (2014). "Le sanzioni amministrative per le banche italiane: effetti sulle performance". Rivista Bancaria. Minerva Bancaria.
  • Murè, P. & Spallone, M. (2018). "Gli effetti delle sanzioni amministrative sulle performance delle Banche Popolari Italiane". Rivista Bancaria. Minerva Bancaria.
  • Murè, P., Spallone, M., Rovo, N. & Guerello, C. (2018). "Un modello previsionale per le sanzioni bancarie in Italia". Rivista Bancaria. Minerva Bancaria.
  • Murè, P. & Bittucci, L. (2020). "Dalla traditional compliance al regtech. Soluzioni innovative per il sistema dei controlli interni". EGEA.
  • Murè, P. (2021). "La compliance in banca. Tra le soluzioni Regtech e l'integrazione dei fattori ESG". EGEA.
  • Murè, P., Spallone, M., Mango, F., Marzioni, S. & Bittucci, L. (2021). "ESG and reputation: The case of sanctioned Italian banks". Corporate Social Responsibility and Environmental Management, John Wiley & Sons, 28(1), 265-277, January.
  • Scannella, E. (2015). "Crisi economica e vigilanza unica europea sulle banche: alcuni riflessi sul mercato dei servizi finanziari". Economia dei Servizi, Il Mulino, n. 1, gennaio-aprile, pp. 65-82.
Relevant answer
Answer
Definitely not my subject please. Thank you
  • asked a question related to Library
Question
3 answers
I am infecting 4T1 cells with a lentiviral library and need to sort the cells to recover the transduced ones. I am running into problems with these cells clumping, even after filtering. I use trypsin with EDTA to get them off the dish, inactivate the trypsin with serum-containing media, wash twice with PBS + 4% FBS, and then filter. During the sort I end up with many doublets (two cells stuck together). This is also a problem during passage. Does anyone have any experience with these cells?
Relevant answer
Answer
Hey Dennis,
I've recently come across a similar issue. Can you offer any insights or solutions? Thanks. @Dennis A Simpson
  • asked a question related to Library
Question
3 answers
Dear researchers, I am trying to assess specific indirect effects in my model with three mediators. However, AMOS always gives a syntax error and my estimand will not run. When I try it in RStudio (with the lavaan and psych packages), I cannot assign parameters to calculate specific indirect effects. Could you please help me identify the problems and possible solutions?
Below is the code in R studio:
library(psych)
library(lavaan)
# I already input my CSV data so now I just describe it
describe(my.data)
A =~ A2+ A3 + A4 + A5 + A7 + A8
MS =~ MS1 + MS2 + MS3 + MS4 + MS6 + MS7+ MS8
M =~ M1 + M2 + M4 + MA8
IM =~ IM1 + IM2 + IM3 + IM4
FLA =~ Listen + Speak + Read + Write
# Regression paths from IV to mediators
M ~ a1*IM
A ~ a2*IM
MS ~ a3*IM
# Regression paths from mediators to DV (FLA)
FLA ~ b1*M + b2*A + b3*MS + c1*IM
# From this point, I tried to assign parameters to calculate specific indirect effects. However, none of the attempts below works!
direct : c1
Error: object 'direct' not found
direct:= c1
Error in `:=`(direct, c1) : could not find function ":="
direct<-c1
Error: object 'c1' not found
direct=c1
Error: object 'c1' not found
Relevant answer
Answer
As far as I recall, AMOS by default does not report indirect effects along individual paths when there is more than one indirect path between two factors/variables (e.g., in a parallel mediation model). Did you use user-defined estimands to get the estimates of the three indirect effects? If yes, maybe the syntax error is in the code defining these estimands, not in the model. On the lavaan side, note that := is lavaan model syntax, not an R operator: it only works inside the quoted model string passed to sem() or lavaan() (e.g., a line such as direct := c1 within the model string), which is why the R console reports that it cannot find the function ":=" when you type it at the prompt.
  • asked a question related to Library
Question
4 answers
Bibliometric analysis is a research method that uses quantitative analysis and statistics to assess and analyze scientific literature. It is often used to evaluate the impact and trends of research within a specific field by examining published articles, citation counts, and other metrics. Commonly used in fields like library and information science, bibliometric analysis helps in understanding research productivity, collaboration patterns, influential authors, and high-impact journals.
Relevant answer
Answer
Dear Anitha
In my eyes, this is an accurate definition.
Perhaps, you can have a look at the following reference work: Ball, R. (2021). Handbook bibliometrics. De Gruyter Saur. https://doi.org/10.1515/9783110646610.
Additionally, I would recommend considering five other metrics, outlined in this paper: Zhao, H., & Wang, X. (2024). Research on interdisciplinarity of five-metrics in China based on Chinese Citation Data under the background of open science. Journal of Information Science, 0(0). https://doi.org/10.1177/01655515241263286.
Best regards, Anne-Katharina
  • asked a question related to Library
Question
1 answer
I am reading an EndNote library file in VOSviewer; however, it gives me the attached message: "VOSviewer cannot read the file; there are no valid (%Authors) and (%Keywords) fields."
Can you please help me out this issue?
Best Regards,
Fazli
Relevant answer
Answer
VOSviewer reads the BibTeX file format; EndNote is a reference management tool, so export your EndNote library to BibTeX first and then open that file in VOSviewer.
  • asked a question related to Library
Question
1 answer
What are the criteria for selecting compounds for docking from an LC-MS compound library? Is it abundance, or are there other criteria?
Relevant answer
Answer
When selecting compounds from an LC-MS library for molecular docking, prioritize those with favorable drug-likeness (e.g., Lipinski's Rule of Five), documented biological activity, and strong predicted binding affinities. Abundance is important but should be considered alongside structural characteristics and computational predictions.
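If RDKit is available, a quick drug-likeness pre-filter over SMILES strings exported from your LC-MS annotations might look like this sketch (the SMILES below are placeholders, not real hits):
```python
# Shortlist candidate compounds by Lipinski's Rule of Five before docking.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_ro5(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                       # unparsable SMILES: skip it
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

candidates = ["CCO", "c1ccccc1C(=O)O"]    # replace with SMILES of your LC-MS hits
shortlist = [s for s in candidates if passes_ro5(s)]
print(shortlist)
```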
  • asked a question related to Library
Question
1 answer
Does anyone know of a free Python library for machine learning that can be used on a personal computer? I am particularly interested in neural network libraries similar to FANN.
Relevant answer
Answer
I would advise looking into PyTorch, which is free to use (BSD 3 license). It works well with the majority of PC hardware, has a lot of community support, and learning it develops coding skills that can be transferred to other very popular packages, like transformers.
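As a minimal illustration (toy random data, CPU only), a small feed-forward network in PyTorch covers roughly the ground FANN does:
```python
# Tiny binary classifier: two linear layers with ReLU, trained with Adam.
import torch
from torch import nn

X = torch.randn(256, 4)                        # 256 samples, 4 features
y = (X.sum(dim=1, keepdim=True) > 0).float()   # toy target: sign of the feature sum

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```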
  • asked a question related to Library
Question
3 answers
Good morning,
I am very new to ATAC-seq and library preparation.
I just did my first trial with Arabidopsis samples, and after tagmentation and library prep the Bioanalyzer profile doesn't look very promising (see attached).
What I really don't understand is the very concentrated peak around 1000-1500 bp (100s) in all samples. Even in the last one (which is genomic DNA), it can be seen.
Any idea of the origin of this band/peak? (I have my theories, but I want unbiased answers! xd)
Thanks!
Relevant answer
Answer
Hi Zhangyi,
In my experience, this is the upper marker, which runs inconsistently among the samples. It can be fixed manually after the run, but it denotes problems with the machine or the chips. Our machine was old and not well kept, which led to these problems. I have been troubleshooting a lot of issues with ATAC samples on the Bioanalyzer, and this guide from Scott Herke helped: https://www.linkedin.com/pulse/mastering-bioanalyzer-dna-high-sensitivity-chip-assay-scott-herke/
  • asked a question related to Library
Question
5 answers
Hi everyone,
While running certain materials available in the ecoinvent library, I came across negative water depletion values.
What could be the probable reason behind this?
Relevant answer
Answer
Please recheck your model for avoided burdens.
  • asked a question related to Library
Question
1 answer
The number of pores in the R10.4.1 flow cell decreased significantly, from roughly 1,400 to 291, after nanopore sequencing with only 24 multiplexed samples. I used the SQK-RPB114-24 kit for processing the 24 samples as one library. Would anyone recommend changes to the protocol? Has anyone had a similar experience, and what did you do to make it better?
Relevant answer
Answer
Hello Naomi, I have the same problem with my flow cell too.
Did you figure out how to solve it?
I also don't see increased pore numbers after washing the flow cells.
  • asked a question related to Library
Question
2 answers
The protocol for the new NEBNext UltraExpress® RNA Library Prep Kit (NEB #E3330S/L) closely follows the previous NEBNext Ultra II RNA Library Prep Kit (NEB #E7770S/L), but the random primer step is missing. How does first-strand synthesis work without it? Are the random primers now added in one of the mixes?
Relevant answer
Answer
In the UltraExpress kit, the random primers are integrated into the First Strand Synthesis Mix. This streamlines the workflow by reducing the number of steps and reagents you need to handle. The First Strand Synthesis Mix contains all the necessary components, including the random primers, to synthesize the first strand of cDNA from the RNA template.
This integration helps to simplify the protocol and reduce the overall preparation time, making the process more efficient while still maintaining high-quality results.
  • asked a question related to Library
Question
8 answers
We use the NEBNext Library Quant Kit for Illumina to determine our Illumina library concentrations. Before taking a library to qPCR, we usually run it on the Bioanalyzer to get an idea of the concentration. Based on the Bioanalyzer, we dilute the library down to 5,000 pM for qPCR. Most of the time, qPCR results will show libraries around 5,000 pM (maybe 4,000-6,000 pM). In some cases, the qPCR concentration can be double or even triple the Bioanalyzer concentration. When this happens we QC the library by Qubit, and in most cases the Qubit concentration will be similar to the Bioanalyzer concentration.
This makes it challenging for us to determine which concentration to use for loading the sequencer. If we use the qPCR concentration, the runs will be under-clustered. We are trying to understand why we would get higher concentrations from qPCR. It makes more sense if the qPCR concentrations were lower than Qubit/BA suggesting that the adapter ligation was not very successful.
Why do you think we would get such high qPCR concentrations? One thought is that there may be single stranded DNA in the library which is not detected by Qubit or BA, but qPCR is able to amplify. Curious to know your thoughts.
Thank you,
Karrie
Relevant answer
Answer
Oh, one more thing! We also started doing a re-conditioning PCR of our libraries, which gets rid of heteroduplexes in the higher-bp tail of our products. It was very helpful. I'm not sure what libraries you are running and whether you have a tail like we do; see my other post for what that looks like. We use the P5 and P7 primers and do one PCR cycle.
  • asked a question related to Library
Question
1 answer
Hi y'all,
I am here to ask for recommendations on software or platforms I can use to manage a massive database.
I am working on a big museum-sample barcoding project. For now, we are going through ~6,000 specimen drawers one by one and selecting two to four specimens of each species for the barcoding process. Our database keeps growing as we do this.
For each specimen, we record a specimen barcode (including species name, collection year, identifier names, collection locality, etc.), the drawer code (which drawer it was selected from), the DNA extraction plate code, the DNA extraction well code (we are using 96-well plates), the PCR plate code, the library pool code, the sequencing run number, the freezer code, and the freezer rack code (we have four -80 freezers and lots of racks to store DNA), plus a lot of other information.
I currently have 5 people working on this project, and I am using a Google spreadsheet to manage and share progress with all the collaborators. But the sheet is getting bigger and bigger, and lots of tabs have been created. In particular, it is not easy to spot errors, like typos, two specimens given the same code, or drawers sampled twice...
I am wondering if there is any specimen tracking system, software, or set of functions I can use to manage the dataset more easily, e.g., linking all the information together while avoiding duplication errors?
Thank you for your time and my best wishes,
Menglin
Relevant answer
Answer
You could build a relational database in SQL.
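As an illustrative sketch (all table and column names here are hypothetical), even SQLite via Python's built-in sqlite3 module will catch the duplicate-code and same-well errors described in the question through PRIMARY KEY and UNIQUE constraints:
```python
# Toy specimen-tracking schema: constraint violations raise errors at insert
# time instead of silently accumulating, unlike a shared spreadsheet.
import sqlite3

con = sqlite3.connect("specimens.db")
con.execute("PRAGMA foreign_keys = ON")  # make REFERENCES actually enforced
con.executescript("""
CREATE TABLE IF NOT EXISTS drawer (
    drawer_code TEXT PRIMARY KEY
);
CREATE TABLE IF NOT EXISTS specimen (
    specimen_code TEXT PRIMARY KEY,   -- a duplicated specimen code now raises an error
    species       TEXT NOT NULL,
    drawer_code   TEXT NOT NULL REFERENCES drawer(drawer_code),
    plate_code    TEXT,
    well          TEXT,
    UNIQUE (plate_code, well)         -- no two specimens in the same extraction well
);
""")
con.execute("INSERT OR IGNORE INTO drawer VALUES ('D-0042')")
try:
    con.execute("INSERT INTO specimen VALUES (?,?,?,?,?)",
                ("SP-0001", "Papilio demoleus", "D-0042", "PL-01", "A1"))
    con.commit()
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```
A purpose-built collections system (e.g., Specify) would add a GUI on top, but the constraints are the part that stops the duplication errors.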
  • asked a question related to Library
Question
7 answers
Is a spectral library in Python helpful for reading SAR images?
Relevant answer
Answer
If you are going to read GeoTIFF/IMG files, then you can't go wrong with rasterio. However, I would go with the pyroSAR library, which is more specialised for SAR, specifically Sentinel-1; it has a variety of preprocessing modules!
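For the plain raster-reading part, a minimal rasterio sketch could look like this (the file name is a placeholder for an exported Sentinel-1 band):
```python
# Read one band of a GeoTIFF into a numpy array and convert backscatter to dB.
import numpy as np
import rasterio

with rasterio.open("s1_vv.tif") as src:
    vv = src.read(1).astype("float32")   # first band as a 2-D numpy array
    print(src.crs, src.res, vv.shape)    # projection, pixel size, dimensions

# dB scaling (common for visualizing SAR backscatter); clip avoids log of zero
vv_db = 10 * np.log10(np.clip(vv, 1e-6, None))
```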
  • asked a question related to Library
Question
3 answers
South Indian Journal of Library and Information Science "Integration of E-Resources and Smart Technologies in Law College Libraries: Enhancing Access and Learning Experiences"
Relevant answer
Answer
See https://help.researchgate.net/hc/en-us/articles/14293139566353-Journals for explanations why some journals are missing. Unfortunately, it is not possible to add a journal to RG's database "by hand". In case of a missing journal in this database, I add the bibliographic data to the abstract field (like, e.g., in https://www.researchgate.net/publication/344474227 and https://www.researchgate.net/publication/268925009).
  • asked a question related to Library
Question
28 answers
latest trends or topics
Relevant answer
Answer
Artificial intelligence tools in libraries.
  • asked a question related to Library
Question
1 answer
Hi All,
I am ordering an overlapping peptide library to study the binding epitope of my antibody. I wonder if there is a formula to calculate the probability of a given number of epitope hits (e.g., single, double) for different epitope lengths, peptide lengths, and offsets (peptide overlaps). Knowing the probability of a double hit would help determine how many peptides to order (as they are quite expensive!). Thank you for your help.
Relevant answer
Answer
I did a *lot* of this kind of work as a post-doc, when multiple peptide synthesis was first becoming feasible. There is no easy answer to your question, as asked.
What do you know about your Ab, and what do you need to know about the epitope?
If you have a monoclonal, there is about a 12% chance it will bind to a short peptide. Most MAbs are specific for conformational epitopes, and it's infamously difficult to draw conclusions about their epitopes using short peptides as antigens. OTOH, if you're looking at antiserum, you're likely to find multiple epitopes, with one or two immunodominant.
What level of resolution do you need? If you plan to follow up with more synthesis and mapping you can get away with an initial experiment using fewer peptides (e.g., 12mers overlapping by 4).
Anyway, yes, making the peptides is a lot of work and expensive. So once you have them, test as many Ab as you can find.
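That said, for the purely combinatorial coverage part of the question (linear epitopes only, counting a peptide as a hit when it fully contains the epitope), a rough rule can be written down; the numbers below assume the 12-mer/offset-4 design mentioned above:
```python
# Peptides of length L tiled every o residues leave W = L - e + 1 start positions
# that fully contain an epitope of length e, so a randomly placed epitope is hit
# floor(W/o) or ceil(W/o) times, and W/o times on average.
def expected_hits(L, e, o):
    return (L - e + 1) / o if L >= e else 0.0

print(expected_hits(L=12, e=6, o=4))   # 1.75: most 6-mer epitopes hit twice, some once
```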
  • asked a question related to Library
Question
1 answer
I am looking for a Proteus library for the SCT-013 current transformer. Where can I find it?
Relevant answer
Answer
Hi, did you find it?
  • asked a question related to Library
Question
1 answer
...
Relevant answer
Answer
Look for any local or cloud-based backups of the files. If the workspace was shared, ask collaborators if they have copies of the files.
  • asked a question related to Library
Question
2 answers
Hello all,
I am trying to determine the dependence of the energy gap of silicon on temperature. In the literature, it is stated that the decrease in the energy gap of silicon with increasing temperature can be explained by thermal expansion and electron-phonon interaction.
First, I used the thermo_pw library (which uses the QHA approximation) to determine the lattice parameter of silicon as a function of temperature. Then, I ran the following calculations: SCF, NSCF, DOS, band, and finally plotband. I performed these calculations using the lattice parameters of Si corresponding to temperatures in a range from 4K to 800K. For this simulation, I am using PBE pseudopotentials, an ecutwfc of 25 Ry, and a unit cell with 2 atoms.
The problem is that the gap increases with temperature instead of decreasing. I obtained a gap of 0.6187 eV at 4K and 0.6315 eV at 800K.
I also tried calculating the band structure considering electron-phonon coupling using the EPW library, but the gap still increases with temperature.
Has anyone already tried to calculate the silicon gap as a function of temperature? What am I doing wrong?
Relevant answer
Answer
I found that to account for phonon effects, I needed to consider lattice vibrations. I performed MD calculations at different temperatures using a supercell of 512 atoms with PBE pseudopotentials, an energy cutoff of 25 Ry, and calculations at the gamma point only. Once the temperature was stabilized, I extracted 160 structures at each temperature.
For each structure, I ran an SCF calculation, and now I observe that the energy gap decreases with temperature. I am still waiting for all the SCF calculations to be completed, but currently, the average band gap is 0.54624 eV at 250K and 0.65271 eV at 4K.
Once all the calculations are done, I will correct the band gap using the GW method to obtain a better approximation.
  • asked a question related to Library
Question
5 answers
I received C. elegans nuclei (20 million nuclei) for ATAC-seq and prepared the library based on the Buenrostro 2015 protocol. The final cycle number was determined by one-third of maximal qPCR fluorescence, and the total cycle number used was 14. Could the profile we see be due to too high an input, or have others seen anything similar for a different reason? We are going to try lowering the input significantly and titrating the input amount to find the nucleosome pattern. Any input would be appreciated. Thank you.
Relevant answer
Answer
To determine if an ATAC-seq (Assay for Transposase-Accessible Chromatin using sequencing) library profile suggests under-tagmentation, you'll need to evaluate several key aspects of the sequencing data and the quality of the library preparation. Here’s how you can assess if under-tagmentation might be an issue:
1. Read Distribution
1.1 Peak Identification: Examine the read distribution across the genome. In ATAC-seq, you should expect to see enrichment of reads at regions of open chromatin, typically at promoters, enhancers, and other regulatory elements. If these regions show lower enrichment than expected, it might indicate under-tagmentation.
1.2 Fragment Size Distribution: Check the fragment size distribution. Properly tagmented samples should have a characteristic peak around 100-150 bp, reflecting the nucleosome-free regions, with smaller peaks at mono- and di-nucleosome spacing. A significant deviation from this pattern could suggest under-tagmentation; a sketch of this check follows below.
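As a rough sketch of that fragment-size check (assuming pysam and a coordinate-sorted BAM; the file name is a placeholder):
```python
# Insert-size histogram from paired-end ATAC-seq alignments using pysam, to
# eyeball the ~100-150 bp nucleosome-free peak and mono-nucleosome periodicity.
import pysam
from collections import Counter

sizes = Counter()
with pysam.AlignmentFile("atac.bam", "rb") as bam:
    for read in bam:
        # Count each properly paired fragment once via the positive template length
        if read.is_proper_pair and 0 < read.template_length <= 1000:
            sizes[read.template_length] += 1

for length in sorted(sizes):
    print(length, sizes[length])
```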
2. Library Complexity and Yield
2.1 Total Yield: Assess the total number of reads generated. If the library yield is significantly lower than expected, it might indicate that not enough DNA was fragmented, which could be a sign of under-tagmentation.
2.2 Complexity: Evaluate the library complexity by examining the number of unique reads or fragments. A low number of unique fragments compared to the total reads might indicate that the transposase didn’t adequately cut the DNA.
3. Tagmentation Efficiency
3.1 Positive Control: If you used a positive control (e.g., a cell line with known open chromatin regions), compare your results with the control. Lower accessibility in your sample compared to the control might suggest under-tagmentation.
3.2 Visual Inspection: Use genome browser tracks (e.g., IGV) to visualize the distribution of reads across various genomic regions. Under-tagmented samples often show fewer reads in expected regions of open chromatin.
4. Quality Control Metrics
4.1 QC Metrics: Utilize quality control metrics from tools like FASTQC or other ATAC-seq analysis pipelines. Look for warnings or low-quality indicators that might suggest issues with tagmentation.
4.2 Reproducibility: Compare the ATAC-seq profiles with replicates. Consistent patterns of low accessibility across replicates might indicate a systemic issue with tagmentation rather than random variability.
5. Consult Protocol
5.1 Protocol Review: Review your ATAC-seq protocol to ensure that the tagmentation step was performed according to the recommended conditions. Under-tagmentation could result from insufficient enzyme activity, incorrect buffer conditions, or suboptimal reaction times.
Conclusion
If you observe low peak intensity at expected regions, abnormal fragment size distribution, low library yield or complexity, or if your data deviates significantly from positive controls, these can be indicators of under-tagmentation. Ensuring that your protocol is followed precisely and troubleshooting any issues in the preparation process can help resolve these problems.
Take a look at this protocol list; it could assist in understanding and solving the problem.
  • asked a question related to Library
Question
2 answers
How can we train our model on the data so that it can identify the disease and recommend possible treatment (subject to review by the concerned expert)? Suggestions are also welcome on making maximum use of this proposed model by taking it public with the Streamlit library. I have built certain disease prediction models and am looking forward to building a multi-in-one model that accepts multiple types of values for better analysis.
Relevant answer
Answer
Yes, basically I have more than one model: one for diagnosis from imaging data, and the other a predictive one based on the input values provided.
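For the Streamlit deployment part specifically, a minimal sketch might look like this (the model file, features, and layout are all hypothetical; run with `streamlit run app.py`):
```python
# app.py - toy Streamlit front end for a saved scikit-learn-style classifier.
# "model.pkl" and the two input features are placeholders, not a real model.
import joblib
import streamlit as st

model = joblib.load("model.pkl")

st.title("Disease risk prediction (for expert review, not a diagnosis)")
age = st.number_input("Age", min_value=0, max_value=120, value=40)
glucose = st.number_input("Glucose (mg/dL)", min_value=0.0, max_value=500.0, value=100.0)

if st.button("Predict"):
    prediction = model.predict([[age, glucose]])[0]
    st.write(f"Predicted class: {prediction}")
```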
  • asked a question related to Library
Question
3 answers
I don't know how to submit my article. I can inform you that I have registered with AJOL.
Thank you in advance for your attention to this submission.
Relevant answer
Answer
thanks
  • asked a question related to Library
Question
4 answers
The idea of the Bayesian neural network (BNN) is as primitive as the answer of a failing student. You have a neural network model, but no matter how hard you train it, there is always a residual error, so what to do? And the student tells you: replace each scalar in the network with a normally distributed random variable and tune the expectations and variances to match the data.
Although this concept fails miserably, we can find a large group of scientists who keep pushing it into usage. I can easily provide proof of these strong statements. An elementary stochastic system, which anyone can reproduce at home, is a coin and dice. You pick two random inputs by rolling one die twice, say 3 and 5, and flip the coin. In case of heads you roll 3 dice and add the outcomes; otherwise 5 dice. The sum of the outcomes is your stochastic output. Simple, right? Now make a few hundred records and try to obtain the bimodal distribution with any publicly available library designed to support BNNs. The result will not be even remotely close to reality. But the solution is simple and has been known for at least 50 years: KNN. For each given input you find several similar records, each output is treated as the expectation of a normal distribution, you assign the variance from common sense, and you see a beautiful bimodal distribution very close to the real one. This is KDE, known for decades. Funny?
That is not all. The freely available TensorFlow library is supposedly capable of detecting gaps in the data and returning a confidence interval that becomes larger for sparse data. That is already a mockery of science. All you need to do to identify these gaps is to generate new inputs as evenly distributed points over the domain of definition, find the distance from each to the nearest dataset point, record it, and build a new model that tells you your training data density. Why use TensorFlow, when it needs 50 lines of code and can be done by a student in an hour?
I tested TensorFlow with the coin-and-dice data. The returned result was compared to the true distribution by the Cramér-von Mises criterion. The accuracy was 15%. KNN gives 85%. I made my own method, a slight improvement on KNN, and raised it to 90%.
I never believed that the scientists promoting BNNs are unaware that this technology is fake. My question is: what can we do about it? Say I publish my research and contact a scientist promoting BNNs directly; he ignores it and keeps promoting his research. We all dislike it when doctors prescribe expensive drugs where regular drugs are the cure, and when auto mechanics suggest replacing parts that still work. Isn't this the same thing?
I will add the links to my published research exposing the weaknesses of BNNs for those who are interested.
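A minimal sketch of the coin-and-dice generator and a KNN/KDE-style conditional density estimate as described above (numpy only; k, the bandwidth, and the sample size are arbitrary choices):
```python
# Coin-and-dice generator: two die rolls are the inputs; a coin flip decides
# whether the output is the sum of that many dice from the first or second roll.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    x = rng.integers(1, 7, size=(n, 2))              # two die rolls as inputs
    heads = rng.random(n) < 0.5                      # coin flip
    k = np.where(heads, x[:, 0], x[:, 1])            # how many dice to roll
    y = np.array([rng.integers(1, 7, size=m).sum() for m in k])
    return x, y

X, y = sample(500)

def knn_kde(x_query, X, y, k=30, bw=1.0):
    """Conditional density estimate p(y|x) from the k nearest neighbours (Gaussian kernels)."""
    d = np.abs(X - x_query).sum(axis=1)              # L1 distance in input space
    nbrs = y[np.argsort(d)[:k]]
    grid = np.arange(1, 31)
    dens = np.exp(-0.5 * ((grid[:, None] - nbrs[None, :]) / bw) ** 2).sum(axis=1)
    return grid, dens / dens.sum()

grid, p = knn_kde(np.array([3, 5]), X, y)
```
For inputs (3, 5) the true output mixes sums of 3 dice (mean 10.5) and 5 dice (mean 17.5), so any sensible estimator should recover two modes.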
Relevant answer
Answer
You did not understand the concept even remotely or approximately. Watch this:
  • asked a question related to Library
Question
7 answers
"Today several adjectival phrases have been used to describe English like ‘International Language’, ‘Lingua-Franca’, ‘Language for Globally Connecting’, ‘Library Language’, ‘Official Language’, ‘Administrative Language’, ‘Queen of Languages’, ‘Employment Passport’ and ‘the most Preferred Language’ etc." (Jabir, M. 2019)
Reference:
Jabir, M. (2019). The Use of ICT in Teaching English: A Study of ELT in the Secondary Schools of Kargil District . An M. Phil Dissertation Submitted to Jaipur National University, p. 5.
Relevant answer
Answer
I am in full support of
Zylfije Tahiri
She is a forward-looking researcher in language education, with respect to multiple intelligence development. Very good input from Bara Nesma; I also liked the input from Bulcsu Szekely about listening to favorite music.
  • asked a question related to Library
Question
2 answers
Dogwood RNA isolations from leaf tissue
used a modified Zymo protocol
KAPA mRNA-stranded library prep
RNA QC looks decent; input of RNA into the library is 1.5 micrograms
RNA QC looks good
libraries failed
Per Zymo, we ran the RNA eluates through a cleanup column and re-ran the libraries; all libraries still failed.
Relevant answer
Answer
Did you ever find an answer to this? We're having a similar problem.
  • asked a question related to Library
Question
7 answers
I have been preparing NGS libraries where the sample input volumes, conditions, and PCR cycles are the same, but the concentrations obtained are uneven and the fragment sizes also differ from sample to sample. What could be the possible reason for these uneven results?
Relevant answer
Answer
Vilte Cereskaite We do not specifically measure mRNA concentration; instead, we quantify total RNA using Qubit 4 and also assess RIN to ensure the quality of RNA.
  • asked a question related to Library
Question
7 answers
Until the early 2000s, the National Bibliography used to be an important source for the development of library collections, mainly for the acquisition of new items. I would like to know if in your country it remains important for research in your library. Please, could you tell me?
Relevant answer
Answer
In Indonesia, the National Bibliography remains an important resource for library acquisitions. It helps document and preserve national publications, supports library collection development, and serves as a crucial reference for academics and researchers. The National Library of Indonesia publishes the National Bibliography and ensures its accessibility through digitization. Despite challenges like compliance with legal deposit laws and competition with international resources, it continues to play a vital role in the country's library and information management.
  • asked a question related to Library
Question
6 answers
How can contextual bandits (used in recommendation systems) be implemented in code via a library package?
Relevant answer
Answer
An example using the `scikit-optimize` library in Python:
```python
import numpy as np
from skopt import Optimizer
from skopt.space import Real, Categorical

# Define the reward function (placeholder logic: plug in your own)
def reward_function(x):
    # x is [feature1, feature2, feature3] in the order defined below
    feature1, feature2, feature3 = x
    return feature1 + feature2 + (1.0 if feature3 == 'option2' else 0.0)

# Define the context/action space
context_space = [
    Real(0, 1, name='feature1'),
    Real(0, 1, name='feature2'),
    Categorical(['option1', 'option2', 'option3'], name='feature3'),
]

# Initialize the optimizer (Gaussian-process surrogate, expected-improvement acquisition)
optimizer = Optimizer(context_space, base_estimator="GP", acq_func="EI", random_state=42)

# Interact with the environment and update the model
num_iterations = 50
for iteration in range(num_iterations):
    # Get the next recommendation
    recommendation = optimizer.ask()
    # Observe the reward for the recommendation
    reward = reward_function(recommendation)
    # Update the optimizer; skopt minimizes, so pass the negative reward
    optimizer.tell(recommendation, -reward)

# Best recommendation seen so far (skopt stores points in Xi and objectives in yi)
final_recommendation = optimizer.Xi[int(np.argmin(optimizer.yi))]
print(final_recommendation)
```
In this example, we use the `scikit-optimize` library to drive the explore/exploit loop of a bandit-style recommender: the `Optimizer` class handles the surrogate model and acquisition, and `reward_function` is where you would implement your own logic to calculate the reward for a given recommendation. (Strictly speaking, plain skopt optimizes over one fixed context; for true per-round contexts you would refit per context or use a dedicated contextual-bandit library such as Vowpal Wabbit.)
The `context_space` is defined using the `Real` and `Categorical` classes from `skopt.space`, which represent the different features or context variables that the recommendation system can use.
The main steps are:
1. Define the reward function that calculates the reward for a given context.
2. Define the context space using the `Real` and `Categorical` classes.
3. Initialize the `Optimizer` object with the context space and other parameters.
4. Iterate through the interactions, where you:
- Get the next recommendation from the optimizer.
- Observe the reward for the recommendation.
- Update the optimizer with the observed reward.
5. Use the optimized model to make future recommendations.
Good luck; partial credit AI
  • asked a question related to Library
Question
2 answers
Dear all,
I've recently processed some samples for ATAC-seq. My corresponding ATAC-seq library looks different from the expected profile (see the attached Bioanalyzer picture). I was wondering if I can still sequence it, or whether it will be too biased.
Thank you for your help
Best,
Karim
Relevant answer
Answer
An ATAC library with this profile cannot be used for sequencing; even if you increase the depth, it will be very hard to find good alignments for the reads. You should look into the number of cells and the cell lysis timing. You can also increase the transposition time depending on the cell type. I have attached a QC report from one of my prepared libraries.
  • asked a question related to Library
Question
1 answer
Hello everyone,
I am facing a problem when making a plot in R. I am generating a ROC curve, but in the graph I have observed that the "0.0" point of the X-axis is far from the "0.0" point of the Y-axis. I don't understand where the problem is. I want to make a plot where both axes start from the same "0.0" point. I am giving you an example of what I get from R (please check the R figure) and also what I want (like the figure drawn in GraphPad).
If anyone can help me find the solution, I will be highly grateful. I am providing the script that I use.
# Install and load necessary packages
install.packages("pROC")
# install.packages("readxl")
library(pROC)
library(readxl)
# Read data from Excel file (replace with your file path)
data <- read_excel("D:\\Samsun medical center\\ELISA data analysis\\elisadata\\New prism analysis for AUC curve analysis\\Sample data for R.xlsx")
# Extract control and cancer patient data
control <- data$Control
cancer <- data$Cancer
# Combine data and create a grouping variable
data_combined <- c(control, cancer)
group <- factor(c(rep("Control", length(control)), rep("Cancer", length(cancer))))
# Create ROC curve
roc_data <- roc(group, data_combined)
# Plot ROC curve
plot(roc_data, main = "ROC Curve", col = c("blue", "red"), legacy.axes = TRUE,
     print.auc = TRUE,
     xlab = "100% - Specificity", ylab = "Sensitivity", asp = 1) # Set aspect ratio to 1:1
# Calculate AUC with confidence interval
auc_value <- auc(roc_data)
ci_value <- ci.auc(roc_data)
# Compare the control and cancer groups directly (a roc object has no
# per-group AUCs, so the Mann-Whitney U test is run on the raw values)
p_value <- wilcox.test(control, cancer, alternative = "greater")$p.value
# Display AUC, CI, and p-value on the plot
legend("bottomright",
legend = paste("AUC =", round(auc_value, 2),
"\n95% CI =", round(ci_value[1], 2), "-", round(ci_value[3], 2),
"\np-value =", signif(p_value, 3)),
bty = "n")
# Calculate Youden's Index
youden_index <- roc_data$thresholds[which.max(roc_data$sensitivities + roc_data$specificities - 1)]
cat("Youden's Index Cutoff:", youden_index, "\n")
# Find the index corresponding to Youden's Index
index <- which(roc_data$thresholds == youden_index)
# Extract sensitivity and specificity at Youden's Index
sensitivity_value <- roc_data$sensitivities[index]
specificity_value <- roc_data$specificities[index]
# Convert sensitivity and specificity values to percentages
sensitivity_percentage <- sensitivity_value * 100
specificity_percentage <- specificity_value * 100
# Print sensitivity and specificity values as percentages
cat("Sensitivity at Youden's Index:", sensitivity_percentage, "%\n")
cat("Specificity at Youden's Index:", specificity_percentage, "%\n")
Relevant answer
Answer
Use the following code:
```r
# Load required libraries
library(pROC)

# Assuming you have your ROC curve object 'roc_obj'
roc_obj <- roc(...)

# Remove R's default 4% axis padding so both axes meet exactly at (0, 0),
# and use a square plotting region
par(pty = "s", xaxs = "i", yaxs = "i")

# Plot the ROC curve with 1 - specificity on the x-axis (0 to 1)
plot(roc_obj, type = "s", print.auc = TRUE, legacy.axes = TRUE,
     xlim = c(0, 1), ylim = c(0, 1))
```
Here's a step-by-step explanation:
1. Load the `pROC` library, which provides functions for creating and manipulating ROC curves.
2. Assuming you have your ROC curve object `roc_obj`, you can plot the basic ROC curve using the `plot()` function with the `type = "s"` argument to display a stepped plot.
3. To adjust the x-axis and y-axis scales to start from (0.0), set the `xlim` and `ylim` parameters in the `plot()` function to `c(0, 1)` and remove R's default 4% axis padding with `par(xaxs = "i", yaxs = "i")`; `par(pty = "s")` additionally forces a square plotting region. With `legacy.axes = TRUE` the x-axis shows 1 - specificity running from 0 to 1.
This will create a ROC curve where both the x-axis and y-axis start from the same point (0.0) and end at (1.0). The `print.auc = TRUE` argument will display the Area Under the Curve (AUC) value on the plot.
Note that the `roc()` function in the `pROC` library is used to create the ROC curve object, and the specific arguments you provide will depend on your data and the context of your analysis.
Hope it helps; partial credit AI
  • asked a question related to Library
Question
1 answer
Hi, I am trying to make synthetic phage library. Right now , do people still use Kunkel method or other methods? Since it looks like Kunkel method is an old method. I am looking for more convenient method to build the library.
  • asked a question related to Library
Question
1 answer
I researched this, but most of the instructions online are for AutoDock 4. I tried the same steps by adding the needed parameters to AD4_Parameter.dat and AD4.1_bound.dat, but I cannot find the .gpf and .dpf files, so the changes I made to the parameters were useless. Please help me: what should I do if I cannot find the .gpf and .dpf? Thank you.
Relevant answer
Answer
Hello Ferlyn Macapagal De Jesus,
I have the same problem, did you figure out how to add atom types to ADFR? Thank you!
  • asked a question related to Library
Question
8 answers
Some of my in-text citations show up with an 'a' at the end (e.g., Smith, 2023a). The author does not have a second publication in my Mendeley library, and I have checked for duplicates. Can someone please advise how to fix this, other than continually updating the citation manually?
Thank you!
Relevant answer
Answer
I faced the same problem this week and exhausted myself until I found the solution.
After updating and correcting duplicates,
you should delete any citations already recorded or inserted in your Word file before trying to insert any citation again.
This is because the Word file keeps a memory of the duplicated citations: it still sees two copies of your cited paper and is not updated properly from the Mendeley manager.
Best of Luck
  • asked a question related to Library
Question
2 answers
I'm performing an antibody phage display with a VHH library and I consistently get frameshift mutants (mainly frame +2) after biopanning. I'm using TG-1 cells for amplification of phagemids and VCSM13 as helper phage. Biopannings are performed in target protein-coated immunotubes and PBS-milk is used as blocking agent. I have tried to coat the immunotubes with different protein concentrations (10-100 ug/mL in carbonate coating buffer) with the same results. Also tried the microtiter plate format. When I analyze the original library, all the clones are in the correct frame. I would appreciate any explanation or suggestion. Thanks!
Relevant answer
Answer
Hi Daniel,
that is indeed curious, I can only think of the two following factors contributing to this effect:
1. The Helper Phage Infection Efficiency:
It could be that the VCSM13 helper phage might be contributing to the observed high mutation rates. To address this, I suggest ensuring you are using a fresh, high-titer VCSM13 stock. Sometimes, issues arise from aged or low-quality helper phage stocks. As an additional step, it might be worth considering a switch to a different helper phage, such as M13KO7, just to ascertain if the helper phage itself is the culprit.
2. The Bacterial Host Strain:
TG-1 cells are commonly utilized in phage display; however, they may not be the optimal strain for your specific application. It is worth verifying that the strain you are using is both competent and healthy, as stressed or suboptimal bacterial cultures can introduce mutations. Experimenting with another E. coli strain, like XL1-Blue or SS320, could potentially provide valuable insights.
I find myself leaning towards the first point, suspecting that the helper phage you are currently using might be more likely to introduce some form of frame shift mutations.
Cheers
Stöpa
  • asked a question related to Library
Question
4 answers
What is the scope of implementing LIS classification and cataloguing in different fields?
Relevant answer
Answer
William Badke I fully support your suggestion. Through my profession I have encountered different conditions (handling tools and spares) in maintaining locomotives and other aviation subsidiaries. We need a more intricate system to handle these, which could be derived from LIS principles.
  • asked a question related to Library
Question
1 answer
Dear ResearchGate Community,
I am currently engaged in a thesis project involving the analysis of essential oils using gas chromatography-mass spectrometry (GC-MS), PerkinElmer, Clarus 690. Specifically, I am examining tea tree (Melaleuca alternifolia) essential oil, which is expected to contain terpinene-4-ol as its main constituent.
My challenge lies in the identification process, particularly when utilizing the NIST library for peak identification. Despite following standard protocols and procedures, I consistently encounter very low probabilities for matches, even for well-known compounds like terpinene-4-ol. These low probabilities persist across all unknown peaks, making it difficult to confidently identify compounds present in the essential oil samples.
Attached to this inquiry are screenshots illustrating the methodology employed, chromatograms, spectrograms, and the peak identification results from the NIST library.
I am reaching out to the community for insights, suggestions, or potential solutions to address this issue. Any advice on improving the accuracy and reliability of peak identification in GC-MS analysis of essential oils would be greatly appreciated.
Thank you for your time and assistance.
Best regards,
Achwek Meftehi
PhD Student Neurosciences and Biochemistry
Faculty of Sciences, Tunis
Relevant answer
Answer
  1. Verify Instrument Calibration: Ensure that your GC-MS instrument is properly calibrated and optimized for the analysis of essential oils. This includes verifying parameters such as temperature settings, injection volume, and detector sensitivity.
  2. Optimize Sample Preparation: Proper sample preparation is crucial for obtaining reliable results. Ensure that your essential oil samples are properly diluted and prepared according to established protocols. Any contaminants or impurities in the sample can affect the accuracy of the analysis.
  3. Use Multiple Libraries: In addition to the NIST library, consider using other libraries or databases for compound identification, such as Wiley, FFNSC, or your own in-house library. Cross-referencing results from multiple libraries can help improve the accuracy of compound identification.
  4. Evaluate Spectral Quality: Assess the quality of the mass spectra obtained from your GC-MS analysis. Low-quality spectra with high noise levels or insufficient peak resolution can lead to inaccurate identifications. Optimize instrument parameters to improve spectral quality, if necessary.
  5. Consider Retention Indices: Retention indices (RI) can provide additional information for compound identification, especially for volatile compounds present in essential oils. Compare the retention indices of your peaks with literature values to aid in the identification process.
  6. Verify Peak Matches: For low probability matches, manually inspect the spectra and compare them with reference spectra of known compounds. Look for characteristic fragment ions and retention times to confirm or refute the identification.
  7. Perform Additional Analyses: If uncertainty remains after initial analysis, consider performing additional experiments such as co-injection with authentic standards, alternative separation techniques, or complementary spectroscopic methods to confirm the identities of unknown compounds.
  8. Document Results and Uncertainties: Keep detailed records of your GC-MS analyses, including instrument parameters, sample preparation methods, and identified compounds. Document any uncertainties or limitations associated with low probability matches.
  9. Consult with Experts: If you encounter persistent challenges in identifying compounds, seek advice from experienced analysts or consultants specializing in GC-MS analysis of essential oils. They may offer valuable insights and assistance in resolving difficult identification issues.
By following these steps and employing a systematic approach, you can effectively address low probability matches in the NIST library for GC-MS analysis of essential oils and improve the accuracy of compound identification.
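To make step 5 concrete: the linear retention index commonly used for temperature-programmed GC runs is the van den Dool-Kratz formula. Here is a minimal Python sketch; the retention times in the example are made-up placeholders, not values from the question:
```python
def linear_retention_index(t_x, t_n, t_n1, n):
    """van den Dool & Kratz retention index for temperature-programmed GC.
    t_x  : retention time of the unknown peak
    t_n  : retention time of the n-carbon alkane eluting just before it
    t_n1 : retention time of the (n+1)-carbon alkane eluting just after it
    """
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Hypothetical example: a peak eluting between the C11 and C12 alkanes
print(linear_retention_index(t_x=14.2, t_n=13.5, t_n1=15.1, n=11))  # ~1143.8
```
Comparing the computed index against published RI values for the suspected compound, in combination with the spectral match, greatly reduces false identifications.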
  • asked a question related to Library
Question
1 answer
Can anyone give a clear explanation of how to add parameters for Ni to the AutoDock parameter library? It seems the Ni atom is not in the library. Step-by-step instructions would be appreciated.
Relevant answer
Answer
Hello, thank you, we will take it into account in our future work.
  • asked a question related to Library
Question
9 answers
Why all the buzz about AI-assisted writing? Think about it—haven’t we already embraced tools like Grammarly and Quillbot and other AI-assisted and Computer Assisted Writing to help us write better(Wang, 2022)? And remember when we switched from digging through library cards to hopping onto research databases? Evidently, each has advantages and disadvantages (Falagas, 2008). Sure, there was a time when many educators were wary about students using computers for writing, worried it might spoil their writing skills (Billings, 1986) or second language acquisition (Lai, 2006; Gündüz, 2005). But look how that turned out: we adapted and learned to see the value in the technology. So, what's the big deal now? AI writing tools are just the next step. Instead of pushing back, why not dive in, learn how it works, and show others how to use it? Let's make the most of what tech can offer and keep up with the times!
Billings, D. M. (1986). Advantages and disadvantages of computer-assisted instruction. Dimensions of Critical Care Nursing, 5(6), 356-362.
Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., & Pappas, G. (2008). Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. The FASEB Journal, 22(2), 338-342.
Gündüz, N. (2005). Computer-assisted language learning. Journal of Language and Linguistic Studies, 1(2), 193-214.
Lai, C. C., & Kritsonis, W. A. (2006). The advantages and disadvantages of computer technology in second language acquisition. Online Submission, 3(1).
Wang, Z. (2022). Computer-assisted EFL writing and evaluations based on artificial intelligence: a case from a college reading and writing course. Library Hi Tech, 40(1), 80-97.
Relevant answer
Assisted writing by AI has been getting a lot of attention in recent years for several reasons:
  1. Efficiency: AI-powered writing tools can increase efficiency by helping writers generate content more quickly. They can offer word suggestions, correct grammar and spelling errors, and even assist in organizing ideas.
  2. Accessibility: These tools make writing more accessible for people with learning difficulties, physical disabilities, or other limitations that might make traditional writing challenging.
  3. Quality: While they don't completely replace human creativity, AI-powered writing tools can help improve the quality of the text by offering suggestions to make it clearer, more cohesive, and persuasive.
  4. Diverse Applications: These tools have a wide range of applications, from assisting in writing emails to generating content for blogs, social media, technical reports, and more.
  5. Technological Advancement: The advancement of AI and natural language processing technologies has enabled the development of increasingly sophisticated and useful writing-assistance tools.
However, there are also concerns about the overreliance on these tools, especially regarding their dependence for tasks that could benefit more from human creativity and sensitivity. Additionally, ethical issues such as authorship attribution and data privacy are relevant when it comes to AI-assisted writing.
  • asked a question related to Library
Question
2 answers
When I run autogrid4 it says: autogrid4: ERROR: Unknown receptor type: "Se" -- Add parameters for it to the parameter library first!
How do I handle it? Thanks
Relevant answer
Answer
Sorry for the silly question, but where should I add "atom_par Se 4.21 0.291 14.000 -0.00110 0.0 0.0 0 -1 -1 4 # Non H-bonding"?
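For anyone hitting the same error: the usual AutoDock 4 convention (as I understand it; double-check against your local documentation) is to keep a copy of the parameter file in the working directory and reference it from both input files:
```text
# In a copy of AD4_parameters.dat in your working directory, append the line:
atom_par Se 4.21 0.291 14.000 -0.00110 0.0 0.0 0 -1 -1 4 # Non H-bonding
# Then add this as the first line of both your .gpf and .dpf files:
parameter_file AD4_parameters.dat
```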
  • asked a question related to Library
Question
1 answer
I have 10 pre-processed studies for which I have prepared ASV tables, taxa tables, metadata, and phylogenetic trees. Now I want to merge these studies and create a single, merged phyloseq object for further downstream processing.
The ASV tables, taxa tables, and metadata are CSV files, while the trees are in text format.
# Load required libraries
library(phyloseq)
library("ape")

# Function to load metadata files from a folder
load_metadata_files <- function(folder_path) {
  metadata_files <- list.files(path = folder_path, pattern = "\\.csv", full.names = TRUE)
  metadata_list <- lapply(metadata_files, read.csv, header = TRUE, row.names = NULL)
  return(metadata_list)
}

# Function to load ASV files from a folder
load_asv_files <- function(folder_path) {
  asv_files <- list.files(path = folder_path, pattern = "\\.csv", full.names = TRUE)
  asv_list <- lapply(asv_files, read.csv, header = TRUE, row.names = 1)
  return(asv_list)
}

# Function to load taxonomy files from a folder
load_taxonomy_files <- function(folder_path) {
  taxonomy_files <- list.files(path = folder_path, pattern = "\\.csv", full.names = TRUE)
  taxonomy_list <- lapply(taxonomy_files, read.csv, header = TRUE, row.names = 1)
  return(taxonomy_list)
}

# Function to load phylogenetic tree files from a folder
load_tree_files <- function(folder_path) {
  tree_files <- list.files(path = folder_path, pattern = "\\.txt", full.names = TRUE)
  trees <- lapply(tree_files, read.tree)
  return(trees)
}

# Specify folder paths
metadata_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/Metadata_SB"
asv_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/ASV_SB"
taxonomy_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/Taxa_SB"
tree_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/Tree_SB"

# Load metadata, ASV, and taxonomy files
metadata_list <- load_metadata_files(metadata_folder)
asv_list <- load_asv_files(asv_folder)
taxonomy_list <- load_taxonomy_files(taxonomy_folder)
tree_list <- load_tree_files(tree_folder)

create_phyloseq <- function(asv_list, taxonomy_list, metadata_list, tree_list) {
  # Merge ASV tables based on sample IDs
  merged_asv <- do.call(rbind, asv_list)
  # Combine taxonomy tables into a single tax_table
  tax_table <- do.call(rbind, taxonomy_list)
  # Combine metadata tables into a single sample_data object
  sample_data <- do.call(rbind, metadata_list)
  # Merge phylogenetic trees
  merged_tree <- lapply(tree_list, function(x) list(phylo(x)))
  # Create phyloseq object
  ps <- phyloseq(otu_table(merged_asv, taxa_are_rows = TRUE),
                 tax_table = tax_table,
                 sample_data = sample_data,
                 phy_tree = merged_tree)
  return(ps)
}

ps <- create_phyloseq(asv_list, taxonomy_list, metadata_list, tree_list)
I am using this code but I encounter an error:
ps <- create_phyloseq(asv_list, taxonomy_list, metadata_list, tree_list)
Error in rbind(deparse.level, ...) : numbers of columns of arguments do not match
  • asked a question related to Library
Question
7 answers
My EndNote library doesn't accept adding more than 10 references! I want to make a library to use for citing while writing that can exceed 10 references. Any help?
Relevant answer
Answer
I have faced the same challenge. My EndNote library doesn't allow me to add more than 10 references; it frequently says "You have reached the maximum limit...". Thus, I am obliged to create too many "My EndNote Library.enl" files by renaming, which is a very tedious task. Please help me fix this.
  • asked a question related to Library
Question
4 answers
I've been working in the library department for the past six years, and I've noticed that only about 10% of students visit the library. I'm wondering if there is, for example, a set of data anywhere that can help me understand how we can increase the number of users of academic libraries.
Relevant answer
Answer
Sanjay Kumar, we need to understand the distribution of students' areas of interest and provide the reference books, periodicals, and other CAS facilities they require, extend library operating hours to match students' needs, and increase funds for proactive procurement of publications matching those needs.
  • asked a question related to Library
Question
1 answer
I'm working on library prep for ITS NGS using the Earth Microbiome Protocol and am getting double banding and smearing on my gels. What might be the cause of this? I should be seeing a band around 230 bp.
Relevant answer
Answer
Your products are smeared and degraded.
  • asked a question related to Library
Question
6 answers
I am searching for all the possible ways to measure this gap and validate it statistically.
Relevant answer
Answer
A comparative analysis can be carried out over the years.
  • asked a question related to Library
Question
8 answers
Hi,
I was lucky enough to get my paper on the cover of Deep Sea Research Part I: Oceanographic Research Papers in 2022. Now I would like a good picture of the cover to frame it. Unfortunately, they did not send us the cover with the final design, and on the DSR page the quality is very (very) low. They offer the cover for download, but just the picture without the journal's graphic design. To add more sadness and touch your souls, we don't get the journal in our library... So, if someone by any chance has access to volume 186 of Deep Sea Research Part I and can send a scan, or a good picture, I would be deeply grateful. It was one of my PhD thesis papers, and having a cover was a nice thing to keep in my studio.
thanks in advance
Iván
Relevant answer
Answer
Ivan Hernandez Ivan, I also have the PDF of the paper with the art. My questions: (1) I can send the full set of PDF pages; will that be sufficient? (2) Or do you need the images cropped, in other words separated from the pages? If you want separate art files I need to know the dimensions, resolution (pixels per inch), and file format; I used Adobe software to crop the graphic art. (3) Can you crop pages and check the resulting dimensions and image resolutions, or find others who can do this?
I attached two samples of "Water-mass-transformation-in-the-Iceland-Sea" as PDF and PNG files, both at the same resolution: 4.333 inch x 3.447 inch at 300 pixels/inch
(110.07 mm x 87.55 mm at 118.11 pixels/cm). N.B. ResearchGate condenses art attached to a post; please download the files rather than viewing them in the displayed message.
  • asked a question related to Library
Question
6 answers
The sentences above are largely rhetorical, and perhaps it would be fairer to ask how these attacks are responded to.
The British Library recently suffered a cyber attack by a criminal gang (I have done work on the Russian involvement with such criminal gangs, but other than disruption it is difficult to see what could be obtained by Putin's government), and their personnel's data was dumped on the Dark Web when they refused to pay ransom. The BL's chief executive expressed the view that such people were against everything which libraries represent: openness, empowerment, access to knowledge. Such attacks have been slowly rising. Boston Public Library was shut down in a ransomware attack in 2021. Toronto Public Library suffered a massive cyber attack in October. The city responded by declaring that such attacks were directed essentially against civilised values.
While these attacks are criminal ones, Putin funded many of these criminal cyber groups and eventually they helped construct Russian misinformation and began working directly with Russia.
Are these actions to do with authoritarian states? Traditionally the Romans are suspected of destroying the Alexandrian Library. Generally, information is cut off in religious societies. Information dealing with understanding reality and the senses is attacked.
The bombing of Ukrainian information centres, archives, schools, museums, universities have been noted in the present war as in Gaza. Is the fate of Hypatia to be renewed?
Hypatia
📷
They attacked her in mid exploration Cutting away her golden thoughts As they cut away her flesh, destroying A mind that they couldn’t destroy in Debate, a sparkling old woman Whose thoughts were spun from steel.
The screaming mob desecrated her tiny form Dragging it into the dust, through the rubbish And shit. Tearing off her clothes The Parabalani exposed her to celestial winds crossing The aurora, rubbing Spoilt Alexandrian soil into her unexplored vagina. She did not die as a philosopher, calculating and Learning, but, torn apart, the old woman Screamed out for her father, Terrified, in sacrificial pain so much worse Than beheadings and crucifixion. Her modesty, Kept for 60 years, mutilated by a 1000 killers in a single Minute.
Her head bounced in the forum, Her arms thrown to the 4 corners, Her soul stamped into the gutter, As the new religion cried out for tolerance. In a morning thinking became forbidden Books burnt, laughs ignored and fires built for heretics.
Hypatia was a female philosopher in Alexandria in the 4th century who was torn apart by a Christian mob, her skin scraped from her bones.
Relevant answer
Answer
Libraries form at least a metaphorical, if not literal, representation of truth and critical thought. Repressive regimes thrive in an environment of mis/disinformation. To create the impression that a library is "broken" is to attack the reality that truth is a sure bastion against falsehood.
In the ancient world, conquering nations often destroyed temples in the places they conquered, a symbolic way of saying, "Your god is weak and we have prevailed."
  • asked a question related to Library
Question
5 answers
I need to teach how to conduct research using internet or library sources; therefore, I need to develop a successful curriculum for the proposed training on how to conduct research.
Relevant answer
Answer
Independent of internet
  • asked a question related to Library
Question
2 answers
I have papers
Relevant answer
Answer
See this help page ("How to add research") for instructions: https://help.researchgate.net/hc/en-us/articles/14293005132305
  • asked a question related to Library
Question
1 answer
On May 30, 2023, I gave a lecture on "The Ontology of Computer Games" at the Immanuel Kant Baltic Federal University Research Library.
Here's a link to the full lecture on YouTube: https://www.youtube.com/watch?v=7QEFsrQcJak
The lecture is in Russian.
The questions posed by my lecture are:
What is the reality of computer games?
Do they affect the human mentality?
Do they change moral principles?
Do games encourage violence?
Do games weaken empathy?
Relevant answer
Answer
Dear Doctor
"In computer and information science, ontology is a technical term denoting an artifact that is designed for a purpose, which is to enable the modeling of knowledge about some domain, real or imagined.
The term had been adopted by early Artificial Intelligence (AI) researchers, who recognized the applicability of the work from mathematical logic and argued that AI researchers could create new ontologies as computational models that enable certain kinds of automated reasoning . In the 1980's the AI community came to use the term ontology to refer to both a theory of a modeled world (e.g., a Naïve Physics [5]) and a component of knowledge systems. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy."
  • asked a question related to Library
Question
2 answers
I am trying to generate topology file in GROMACS for an enzyme (PDB code =1HBN). But I encountered the following error.
Fatal error:
"The residues in the chain ALA2--ALA549 do not have a consistent type. The
first residue has type 'Protein', while residue MHS257 is of type 'Other'.
Either there is a mistake in your chain, or it includes nonstandard residue
names that have not yet been added to the residuetypes.dat file in the GROMACS
library directory. If there are other molecules such as ligands, they should
not have the same chain ID as the adjacent protein chain since it's a separate
molecule."
Can anyone please kindly help me to solve this.
Relevant answer
Answer
Firstly, check your 1HBN.pdb in a text editor. You have this line:
MHS D 257 HIS N1-METHYLATED HISTIDINE
So the MHS residue cannot be processed by the standard force field, because there is no such amino acid in it.
Secondly, you need to deal with it. You can manually add MHS to the selected force field, or, if you don't need your histidine to be N1-methylated, you can remove the methyl from the N1 atom. There are several ways, but I deleted the undesired atoms in the PyMOL interface and then renamed MHS to HIS in the .pdb file (open it in a text editor and just replace MHS -> HIS).
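If you take the renaming route, here is a tiny Python sketch of that text replacement (a crude approach; the file names are assumptions, and a blanket replace is only safe if "MHS" occurs nowhere else in the file):
```python
# Rename the modified histidine so pdb2gmx accepts it; assumes the
# N1-methyl atoms were already deleted (e.g., in PyMOL).
with open("1HBN_edited.pdb") as f:
    text = f.read()

with open("1HBN_fixed.pdb", "w") as f:
    f.write(text.replace("MHS", "HIS"))
```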
I hope this will work for you.
  • asked a question related to Library
Question
2 answers
Hi, I have estimated an ARMAX model using the Python SIPPY library. The estimation gives me two transfer functions, H and G. How can I combine them into a single one to predict the model output for a new input u(t) or to compute the unit step response? I thought I might somehow derive a state-space representation, maybe...
Relevant answer
Answer
André Kummerow In your case, you've estimated two transfer functions, H and G, using the Python SIPPY library. These transfer functions represent different aspects of the ARMAX model:
  1. H: This transfer function typically represents the relation between the output and the noise in the system.
  2. G: This is the transfer function that relates the exogenous input u(t) to the output.
Now, to predict the model output for new inputs or to compute the unit step response, you need to combine these transfer functions. Here's a simple way to understand the combination process:
  1. Conceptual Understanding: Think of the ARMAX model as a system where your input signal u(t) passes through a filter (represented by G) and adds to a noise component (represented by H) to produce the output. In a more technical sense, the output of the system is the sum of the responses from each transfer function to their respective inputs.
  2. Mathematical Approach: To combine H and G, you'd typically use the principle of superposition, which is valid for linear systems like ARMAX. The total output y(t) of the system can be expressed as the sum of the output due to the input u(t) (processed by G) and the output due to the noise (processed by H).
  3. Implementation: In Python, using libraries like SIPPY or control systems libraries, you can simulate this behavior. For a given input u(t), you can simulate the response of G to this input and separately simulate the response of H to the noise input. Adding these two responses gives you the total system output.
  4. State-Space Representation: Converting to a state-space representation can be a good idea if you're comfortable with it. State-space models offer a more general framework for representing linear systems and can be more intuitive for simulation and control purposes. Each transfer function (H and G) can be represented in state-space form, and you can then combine these state-space models appropriately.
  5. Practical Tips: Ensure that the data you use for simulation is well-prepared, and the noise characteristics (for H) are well understood. The accuracy of your predictions heavily relies on the quality of your model and the data.
  6. Advanced Considerations: If you're delving deeper, consider the frequency response of your combined system and its stability. These are crucial for ensuring that your model behaves as expected in various conditions.
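As a minimal sketch of points 2-3 above (superposition), assuming you have already extracted discrete-time numerator/denominator coefficients from your SIPPY fit; the coefficient values below are placeholders:
```python
import numpy as np
from scipy import signal

# Placeholder discrete-time transfer functions; substitute the polynomial
# coefficients returned by SIPPY for your identified G and H.
G = signal.TransferFunction([0.5], [1.0, -0.8], dt=1.0)        # u(t) -> y(t) path
H = signal.TransferFunction([1.0, 0.2], [1.0, -0.8], dt=1.0)   # e(t) -> y(t) path

n = 50
u = np.ones(n)    # unit step input
e = np.zeros(n)   # replace with noise samples to include H's contribution

# Superposition of linear responses: y = (G response to u) + (H response to e)
_, y_u = signal.dlsim(G, u)[:2]
_, y_e = signal.dlsim(H, e)[:2]
y = y_u.ravel() + y_e.ravel()
```
The same decomposition carries over to a combined state-space realization if you prefer that route.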
  • asked a question related to Library
Question
1 answer
I am using HADDOCK 2.4 to check docking parameters between DNA and a ligand. My DNA file contains an Na ion, which I formatted as given in the HADDOCK library, but it still gives the error:
"Error in PDB file. Issue when parsing the PDB file at line 308. ATOM/HETATM line does not meet the expected format"
The format of the ions in my pdb file is:
HETATM 307 NA+1 NA1D 101 8.118 -6.766 34.223 0.50 15.72 NA
HETATM 308 NA+1 NA1D 101 8.498 -6.164 34.656 0.50 20.12 NA
I have attached my pdb file of DNA
Relevant answer
Answer
Probably the problem with your PDB is that the residue order or name order is not correct. Use the YASARA software to renumber it.
  • asked a question related to Library
Question
3 answers
I have encountered an error while measuring the light intensity of my laser source (650 nm) (see attached image). The serial plot remains constant even when I change the intensity of my light source; I have even tried both extremes, a dark environment and placing the sensor close to the laser source, yet there are no changes in the serial plot. Has anyone encountered a similar problem? How do I solve this error?
Here is the code used for the complete setup of the BH1750 light sensor and Arduino Nano:
/*
Advanced BH1750 library usage example
This example has some comments about advanced usage features.
Connection:
VCC -> 3V3 or 5V
GND -> GND
SCL -> SCL (A5 on Arduino Uno, Leonardo, etc or 21 on Mega and Due, on esp8266 free selectable)
SDA -> SDA (A4 on Arduino Uno, Leonardo, etc or 20 on Mega and Due, on esp8266 free selectable)
ADD -> (not connected) or GND
ADD pin is used to set sensor I2C address. If it has voltage greater or equal to
0.7VCC voltage (e.g. you've connected it to VCC) the sensor address will be
0x5C. In other case (if ADD voltage less than 0.7 * VCC) the sensor address will
be 0x23 (by default).
*/
#include <Wire.h>
#include <BH1750.h>
/*
BH1750 can be physically configured to use two I2C addresses:
- 0x23 (most common) (if ADD pin had < 0.7VCC voltage)
- 0x5C (if ADD pin had > 0.7VCC voltage)
Library uses 0x23 address as default, but you can define any other address.
If you had troubles with default value - try to change it to 0x5C.
*/
BH1750 lightMeter(0x23);
void setup() {
  Serial.begin(9600);
  // Initialize the I2C bus (BH1750 library doesn't do this automatically)
  Wire.begin();
  // On esp8266 you can select SCL and SDA pins using Wire.begin(D4, D3);
/*
BH1750 has six different measurement modes. They are divided in two groups;
continuous and one-time measurements. In continuous mode, sensor continuously
measures lightness value. In one-time mode the sensor makes only one
measurement and then goes into Power Down mode.
Each mode, has three different precisions:
- Low Resolution Mode - (4 lx precision, 16ms measurement time)
- High Resolution Mode - (1 lx precision, 120ms measurement time)
- High Resolution Mode 2 - (0.5 lx precision, 120ms measurement time)
By default, the library uses Continuous High Resolution Mode, but you can
set any other mode, by passing it to BH1750.begin() or BH1750.configure()
functions.
[!] Remember, if you use One-Time mode, your sensor will go to Power Down
mode each time, when it completes a measurement and you've read it.
Full mode list:
BH1750_CONTINUOUS_LOW_RES_MODE
BH1750_CONTINUOUS_HIGH_RES_MODE (default)
BH1750_CONTINUOUS_HIGH_RES_MODE_2
BH1750_ONE_TIME_LOW_RES_MODE
BH1750_ONE_TIME_HIGH_RES_MODE
BH1750_ONE_TIME_HIGH_RES_MODE_2
*/
  // begin returns a boolean that can be used to detect setup problems.
  if (lightMeter.begin(BH1750::CONTINUOUS_HIGH_RES_MODE)) {
    Serial.println(F("BH1750 Advanced begin"));
  }
  else {
    Serial.println(F("Error initialising BH1750"));
  }
}

void loop() {
  float lux = lightMeter.readLightLevel();
  Serial.print("Light: ");
  Serial.print(lux);
  Serial.println(" lx");
  delay(1000);
}
Relevant answer
Answer
James Garry Hey mate, really appreciate your response to my question! It may sound really silly, but apparently I solved the issue by resoldering my microsensor. Now it works fine. Once again, thanks for the help mate!
  • asked a question related to Library
Question
2 answers
In a metagenomic library profile the desired size is 450-550 bp, but I am getting unwanted fragments, which causes failure of the library or makes it difficult to get the desired data after sequencing.
Relevant answer
Answer
Generally, column-based PCR cleanup protocols or bead-based purification are not that efficient at removing short fragments from the samples. I suggest doing a gel extraction of the PCR product to select the desired size before starting library preparation. During library preparation, you can do bead-based purification as suggested by the manufacturer. However, the starting material is very crucial in NGS. We follow this in our lab, and we always get a good library with the desired sequencing data. Regards
Venkat
  • asked a question related to Library
Question
7 answers
My analyte standards are highly pure. The R match and F match were good (800-900), but the probability of a match with the compound in the NIST library was low (20-60%). How can I improve it? I am using 10 ppm standards prepared in HPLC-grade n-hexane, with helium as my carrier gas.
Relevant answer
Answer
Asset Mopidevi: Two very basic issues must be addressed to answer the question. NIST library matches are "suggestions" only. They are only as good as:
(1) The proposed GC-MS method used must be shown to be selective and fit-for-purpose following good chromatography fundamentals. Poor quality methods will yield poor quality results (and misleading or false matches / purity values);
(2) The MS instrument settings used and the NIST library settings used in the method for the method must be appropriate.
Accurate peak assignment requires that a properly trained GC-MS operator use a high-quality method to obtain quality data. The databases only have value when this is true. GC-MS operation takes several years of full-time training and experience to learn even the basics.
  • Have your GC-MS method evaluated by an experienced, professional GC-MS chromatographer to insure it follows good fundamentals. Once the method has been found to be valid, then utilize the library database to make qualitative peak ID's and then check with standards and orthogonal methods for accuracy.
  • asked a question related to Library
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"As the training of the models in deep learning takes extremely long because of the large amount of data, using TensorFlow makes it much easier to write the code for GPUs or CPUs and then execute it in a distributed manner."
  • asked a question related to Library
Question
3 answers
Would the high cost of an article and its unavailability in the library be a good reason for exclusion when working on a systematic review?
Relevant answer
Answer
The objective of a systematic review is to find and use ALL studies that address your research question. Short of that, you could be satisfied with a random sample of all studies. Exclusion of studies on the basis of criteria unrelated to the research question (e.g., cost of article, language the article is printed in) can result in a biased set of studies. Thus, in my opinion, exclusion of an article on the basis of cost is not acceptable. Seek out financial support from your research mentor, your department, etc.
  • asked a question related to Library
Question
1 answer
I am researching information on what defines outreach.
Relevant answer
Answer
Hi Barbara Carouthers, could you elaborate on the issue behind your question?
  • asked a question related to Library
Question
3 answers
In chapter 35 of Don Quijote, Cervantes used a scene from "The Golden Ass" (an unfortunate translation of the title) by Apuleius. The rare version Cervantes read in Catholic Italy was a censored one. As he later read the original version in the King of Algiers's library, he thought his copying would never be spotted. By the way, what was the Manchego slave doing in the King of Algiers's library?
(9) (PDF) Miguel de Cervantes, slave, and his master Hassan Pacha Veneziano (researchgate.net)
Relevant answer
Answer
Well, if fighting with bags full of wine is not enough for you... the same scene appears in Apuleius and Don Quijote. It cannot be pure coincidence.
  • asked a question related to Library
Question
6 answers
Please give your valuable opinions.....
Relevant answer
Answer
Library automation: this denotes the broader spectrum, indicating that the total library, i.e., all the inventorying and operations of a library, is automated.
Automated systems in libraries: this denotes that only a small portion or a single operation is automated; such systems are somewhat more independent to operate and run.
  • asked a question related to Library
Question
5 answers
Dear colleagues!
Does anybody have any information on how many public libraries are currently available in Egypt, Pakistan, or Nigeria? It is very difficult to obtain this data when it is not available in English; it surely exists in Urdu or Arabic.
This information is necessary for research on public library trends. I could share library statistics for more than 20 countries from all over the world.
Relevant answer
Answer
Thanks, William! I finally obtained data on Pakistan, but Nigeria is still unknown. The Wiki lists are incomplete and therefore not a reliable source.
  • asked a question related to Library
Question
1 answer
Good greeting
I am interested in getting researchers' publications (books, theses, dissertations, papers, etc.), so we would be pleased if you could supply us with these references in order to store them in the electronic library of the Ministry of Youth and Sport in Iraq, where they would be useful to society in general and to researchers and students in the specialty.
Thank you very much
Dr. Nagham Ali Hussien
Relevant answer
Answer
I think most scientific communication channels, like RG, other social media, Google, etc., are rich sources of references for an electronic library.
  • asked a question related to Library
Question
1 answer
Good greeting
Can you help us to get references (books, theses, dissertations, etc.) in all fields and in legal form, in order to store them in the electronic library of the Ministry of Youth and Sport in Iraq? We would be pleased if you could supply us with these references, which would be useful to society in general and to researchers and students in the specialty.
Thank you very much
Dr. Nagham Ali Hussien
Relevant answer
Answer
You can download them from various search engines and save them.
Get permission from the organizations that provided the references.
  • asked a question related to Library
Question
1 answer
I want to design an aptamer for morphine by an in silico method, and I need some basic sequences, but I cannot find them. If you have a library or know of articles that can help me, please let me know. Thanks.
Relevant answer
Answer
E-AB sensors, or electrochemical aptamer-based sensors, are a flexible type of sensing platform that can identify targets in complicated matrices quickly and reliably. But these sensors' low sensitivity has made it difficult for them to go from proof-of-concept to commercial goods. In order to bind targets and then fold for signal transduction, surface-bound aptamers need to be appropriately spaced apart. We postulated that conventionally produced electrodes produce sensing surfaces with only a portion of aptamers suitably positioned to actively respond to the target. Alternatively, we introduced a new method for immobilising aptamers that promotes microscale spacing between aptamers for the best possible target binding, folding, and signal transduction.
  • asked a question related to Library
Question
2 answers
Utilization of ChatGPT for effective library services
Relevant answer
Answer
Oili Sivula, I have no ideas either, sir. I need opinions to include in the paper.
  • asked a question related to Library
Question
1 answer
Dear All,
I want to prepare a library of compounds for virtual screening, but I keep running into several issues whose published solutions do not work when I apply them to my compounds.
1. There are no z-coordinates in the SDF files I downloaded from the Enamine database. How do I add them?
2. The obabel command line does not open these files from Enamine; it gives various errors that I checked online, some related to the shell and some to bugs in the software.
3. Is there any other software or method to generate these compound structures?
4. Any idea how to download a specific library from the ZINC database, such as only the FDA-approved compounds?
I shall be grateful for the help.
Thank you
Best regards
Ayesha Fatima
Relevant answer
Answer
I will try to address these one by one.
For queries 1 and 2, try other tools, e.g.:
Jmol: a free and open-source molecular visualization tool that can be used to open SDF files and add z-coordinates.
Another simple tool is DataWarrior, which can be used to convert CSV or SMILES files.
For query 4: to download a specific library from the ZINC database, use the ZINC Tranche Browser. It can filter the ZINC database by a variety of criteria, including drug-likeness, lead-likeness, and FDA approval status. You can use the link below to get FDA-approved drugs directly from ZINC.
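For queries 1 and 2, another hedged option is Open Babel's Python bindings, which can generate the missing z-coordinates directly (assuming Open Babel 3.x is installed; the file names are placeholders):
```python
from openbabel import pybel

# Read 2D SDF records, generate 3D coordinates, and write them back out.
# make3D() can be slow on large libraries, so test on a small slice first.
out = pybel.Outputfile("sdf", "enamine_3d.sdf", overwrite=True)
for mol in pybel.readfile("sdf", "enamine.sdf"):
    mol.make3D()
    out.write(mol)
out.close()
```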
  • asked a question related to Library
Question
2 answers
I am new to NGS and just trying to understand the numbers. It appears the actual number of DNA fragments resulting from most NGS library prep protocols (in the pmol range) far exceeds the number of reads offered on most Illumina platforms (far below the pmol range).
Two real questions:
1. what proportion of the library-prepared DNA fragments is actually loaded onto a flow cell?
2. what proportion of the DNA fragments loaded onto a flow cell actually attaches to the flow cell, clusters and results in an eventual read?
(I presume these both will depend on what sequencing system is used - so perhaps we can take something like a MiSeq system as an example?)
Thanks in advance.
Relevant answer
Answer
The proportion of DNA that makes it onto an Illumina flow cell and results in a read can vary based on several factors, including the quality of the library preparation and the specifics of the sequencing run. In a well-optimized NGS (Next-Generation Sequencing) workflow, the efficiency of cluster generation and the sequencing chemistry can be quite high, but not all DNA molecules will be sequenced. Here's a general overview of the process:
  1. Library Preparation: During library preparation, DNA is fragmented, and sequencing adapters are ligated to the ends of the fragments. However, not all DNA fragments are successfully adapter-ligated, and some may be lost in the process. The efficiency of this step can vary depending on the quality of the starting material and the library preparation protocol.
  2. Cluster Generation: In Illumina sequencing, library fragments are immobilized on the flow cell surface to create clusters of DNA fragments. Cluster generation efficiency can be high, but it may not capture every fragment, and some fragments may fail to form clusters.
  3. Sequencing: Sequencing chemistry and detection processes are highly efficient. However, not all clusters will result in high-quality reads. Factors like phasing and prephasing may cause some clusters to yield lower-quality data.
  4. Image Analysis and Base Calling: During the sequencing run, images are captured, and bases are called from the signal intensities. The base-calling process has error correction built in, but some reads may still be of lower quality and may not pass the quality filters.
The exact proportion of DNA that results in a high-quality read can vary depending on the specific conditions of the sequencing run, library quality, and the Illumina instrument used. In general, the efficiency of cluster generation and sequencing chemistry can be quite high, resulting in a substantial proportion of input DNA generating reads. However, it's not uncommon for a portion of the DNA to be lost or result in low-quality data.
To get more accurate estimates for your specific NGS experiment, it's essential to consult the quality control metrics provided by the sequencing facility or software analysis tools. These metrics often include cluster density, yield, and the percentage of clusters passing filter. These values can provide insights into the efficiency of library preparation and sequencing on a case-by-case basis.
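To put rough numbers on the original observation, here is a back-of-envelope sketch (the 25 million read figure is a nominal MiSeq v3 output; actual loading concentrations and cluster counts will differ):
```python
AVOGADRO = 6.022e23                 # molecules per mole

library_pmol = 1.0                  # a typical final library amount
molecules = library_pmol * 1e-12 * AVOGADRO   # ~6.0e11 fragments
miseq_reads = 25e6                  # nominal MiSeq v3 cluster count

# Even if every cluster yielded a read, only a tiny fraction of the
# prepared fragments could ever be sequenced:
fraction = miseq_reads / molecules  # ~4e-5
print(f"{molecules:.2e} fragments; at most {fraction:.1e} become reads")
```
So the short answer to both questions is "a very small fraction": most fragments are never loaded (libraries are diluted to low-picomolar loading concentrations before denaturation), and of those loaded only a subset clusters and passes filter.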
  • asked a question related to Library
Question
5 answers
Staff structure for central university library
Relevant answer
Answer
Ayman Daoud A very good insight, indeed! In addition, it depends on the mission and vision of the institution itself and the talent-management ability of the librarian herself/himself. Effective communication between and among staff of the university and within the library is of paramount importance. There is also a need for a scholarly-communications information scientist who keeps the scholarly community connected, without any gaps, and carries it along toward increased visibility of the university and its staff.
  • asked a question related to Library
Question
2 answers
---
title: "Study Area Map"
author: "Musab Isak"
date: "2023-08-27"
output: html_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
**Load packages**
```{r}
library(tidyverse) #provide tools for data manipulation
library(ggplot2) #Used for creating intricate and customizable data visualizations
library(dplyr) #Offers a set of functions for efficient data manipulation
library(sf) #Provides tools for working with spatial data
library(rnaturalearth) #Allows you to access and retrieve geographical data from the Natural earth dataset for mapping and analysis.
library(rnaturalearthdata) # Provides functions to load and manage Natural Earth data within R
library(raster) #Focuses on the manipulation and analysis of gridded spatial data
library(viridis) #Offers a set of perceptually uniform color scales for creating informative and visually appealing visualizations.
library(ggspatial) #Extends ggplot2 for spatial data, allowing you to combine the power of ggplot2 with spatial visualizations and data.
```
**Get Shape Data of the country and study areas**
```{r}
turkey <- st_as_sf(getData("GADM", level = 1, country = "TUR"))
# level 1 refers first level of administrative divisions, it could correspond to major regions or states within the country, like provinces or states in some other countries.
turkey1 <- st_as_sf(getData("GADM", level = 2, country = "TUR"))
#level 2 refers second level of administrative divisions within a country. It's a more detailed breakdown of administrative regions.
trapzon <- st_as_sf(subset(turkey1, NAME_1== "Trabzon")) #This object likely contains the spatial information for the administrative region named "Trabzon."
giresun <- st_as_sf(subset(turkey1, NAME_1== "Giresun"))
ordu <- st_as_sf(subset(turkey1, NAME_1== "Ordu"))
samsun <- st_as_sf(subset(turkey1, NAME_1== "Samsun"))
```
**Plot Location without legend names**
```{r}
turkey_ggplot <- ggplot() +
  geom_sf(data = turkey, aes(fill = "Turkey")) +
  geom_sf(data = subset(turkey1, NAME_1 == "Trabzon"), fill = "yellow") +
  geom_sf(data = subset(turkey1, NAME_1 == "Ordu"), fill = "green") +
  geom_sf(data = subset(turkey1, NAME_1 == "Samsun"), fill = "skyblue") +
  geom_sf(data = subset(turkey1, NAME_1 == "Giresun"), fill = "purple") +
  geom_rect(aes(xmin = 35, xmax = 41, ymin = 40, ymax = 41.8), color = "black", fill = NA) +
  scale_fill_manual(values = "pink") +
  theme_minimal() +
  theme(plot.background = element_blank())

print(turkey_ggplot)
```
**Plot Location with legend names**
```{r}
# Create a new data frame for the legend
legend_data <- data.frame(
  City = c("Trabzon", "Ordu", "Samsun", "Giresun"),
  Color = c("yellow", "green", "skyblue", "purple")
)

turkey_ggplot <- ggplot() +
  geom_sf(data = turkey, aes(fill = "Turkey")) +
  geom_sf(data = subset(turkey1, NAME_1 %in% c("Trabzon", "Ordu", "Samsun", "Giresun")), aes(fill = NAME_1)) +
  geom_rect(aes(xmin = 35, xmax = 41, ymin = 40, ymax = 41.8), color = "black", fill = NA) +
  scale_fill_manual(values = c("Turkey" = "pink", "Trabzon" = "yellow", "Ordu" = "green", "Samsun" = "skyblue", "Giresun" = "purple"),
                    guide = guide_legend(override.aes = list(fill = "white"))) +
  theme_minimal() +
  theme(plot.background = element_blank()) +
  guides(fill = guide_legend(title = "Cities", label.theme = element_text(color = "black", face = "bold"))) +
  labs(title = "STUDY AREA")

print(turkey_ggplot)
```
**NOTE** You can add an annotation and a scale bar using the *scalebar()* and *annotation_north_arrow()* functions if you are interested.
Relevant answer
Answer
Tom Koch This is the answer.
  • asked a question related to Library
Question
1 answer
Could anybody give me the diagnosis of the acritarch genus Pterosphaeridia Mädler, 1963? I cannot find the original article of Mädler (1963) in our library.
Mädler, K.A., 1963: III. Die figurierten organischen Bestandteile der Posidonienschiefer. Geologisches Jahrbuch, Beihefte, v.58, p.287-406, pl.15-30.
Thank you in advance!
Relevant answer
Answer
I have obtained it now. Thanks a lot to the colleague who gave me the information.
  • asked a question related to Library
Question
3 answers
Hi everyone,
I am trying to clone a CRISPR sequencing library. After transformation by electroporation, I picked single colonies and did minipreps, then digested with a single cutter enzyme. Most of my colonies look good, with a single band at the right size. However, some of my colonies have the correct band, plus three additional larger bands. What could be happening here? Could it be concatemers or incomplete digestion? Could my single colonies have two plasmids, one at the right size and one much bigger that's not being cut?
Any advice on what might be happening, and how to figure out what's going on with my plasmids, would be really appreciated. I want to know what is in the plasmid ideally!
Thank you
Relevant answer
Answer
Try running cut and uncut plasmids side by side. If the larger band is from partial digestion, you will see that band size in all uncut samples. If it's still a puzzle, don't use those odd ones for analysis.
  • asked a question related to Library
Question
1 answer
I have modeled coastal vulnerability in GeNIe and I want to know how to integrate Bayesian Networks with GIS (Geographic Information Systems). Alternatively, are there R or Python (or other language) libraries, or any open-source software, that can successfully integrate BN and GIS? The GIS data formats are tif, ascii, or .shp.
Relevant answer
Answer
I’m glad you are interested in integrating Bayesian Networks with GIS. There are several ways to do that, depending on your needs and preferences. Here are some possible options:
  • You can use the PMAT plugin for QGIS, which allows you to perform probabilistic map algebra using Bayesian networks. You can download the plugin from the QGIS plugin repository or from the PMAT website, and read more about the plugin and its applications in the accompanying blog post.
  • You can use the BNAS model, which is an agent-based modeling approach for urban land-use change simulation using Bayesian networks. The model uses BN learning algorithms, fine spatial modeling units, real data, and census data. You can find more details about the model and its implementation in the accompanying paper.
  • You can use the R package bnspatial, which provides functions for spatial analysis and mapping of Bayesian networks. You can install the package from CRAN or GitHub. You can also check out the vignettes and tutorials on how to use the package.
  • You can use the Python library pyBN, which is a collection of functions for creating, manipulating, and learning Bayesian networks. You can install the library from PyPI or GitHub. You can also read the documentation and examples on how to use the library.
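As a rough illustration of the simplest integration pattern, exporting the BN's posteriors as a lookup table and applying them cell-by-cell to a raster, here is a hedged Python sketch. The file names, class breaks, and probability values are hypothetical placeholders; the posteriors would come from your GeNIe model:
```python
import numpy as np
import rasterio

# Hypothetical posterior P(high vulnerability | elevation class),
# exported from the Bayesian network (e.g., a GeNIe model).
posterior = {0: 0.85, 1: 0.40, 2: 0.10}      # 0 = low, 1 = mid, 2 = high elevation

with rasterio.open("elevation.tif") as src:  # placeholder input raster
    elev = src.read(1)
    profile = src.profile

# Discretize the continuous raster into the BN node's states
classes = np.digitize(elev, bins=[5.0, 20.0])   # hypothetical class breaks (m)

# Map each cell's class to its posterior probability
prob = np.vectorize(posterior.get)(classes).astype("float32")

profile.update(dtype="float32", count=1)
with rasterio.open("vulnerability.tif", "w", **profile) as dst:
    dst.write(prob, 1)
```
The same pattern generalizes to multiple evidence rasters: discretize each layer, query the BN once per unique combination of states, and map the resulting posteriors back onto the grid.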
  • asked a question related to Library
Question
11 answers
Why could it be of any great value teaching Library and Information Science scholars various Software Development and web development Packages?
Relevant answer
Answer
Otherwise, how would you know anything about Libraries and Information Sciences, if you have no clue as to how it is created and disseminated?
  • asked a question related to Library
Question
6 answers
Greetings,
for my systematic review I have 23 research articles, 21 of which I got from PubMed, CINAHL, OVID, and WOS. I got 2 articles from the British Library's "explore further" option, and these articles are cited as peer-reviewed.
Can I put the British Library as a database in my PRISMA flow diagram, or do I indicate in my methodology that two articles were obtained from the British Library? Your answers are highly appreciated.
Relevant answer
Answer
Go ahead, Oluchukwu Okoye!
  • asked a question related to Library
Question
4 answers
Good day everyone.
I have been doing some GRACE data processing in GEE, but from what I can tell, only the first mission's data (April 2002 to January 2017) is accessible through the library for import.
Any recommendations on how I can access more recent data from the GRACE-FO mission for analysis in GEE?
Any feedback is greatly appreciated.
Best wishes.
CV
Relevant answer
Answer
Cindy Viviers, have you been able to incorporate GRACE-FO data in GEE? I am working on something similar and need assistance.
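One workaround in the meantime (my own suggestion, not an officially documented GEE dataset) is to download the GRACE-FO mascon grids from NASA PO.DAAC yourself, ingest the GeoTIFFs as your own Earth Engine assets (via the Code Editor asset uploader or the `earthengine upload image` command-line tool), and then load them with the Python API. A minimal sketch, with a hypothetical asset path:
```python
import ee

ee.Initialize()

# Hypothetical asset ID: replace with the path of your ingested GRACE-FO grids
grace_fo = ee.ImageCollection("users/your_username/grace_fo_mascons")

# Keep only dates after the original GRACE record ends (January 2017)
recent = grace_fo.filterDate("2018-06-01", "2024-12-31")
print("Images found:", recent.size().getInfo())
```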
  • asked a question related to Library
Question
2 answers
I used the LC-ESI/MS method on natural fat and obtained masses in negative ion mode. How can I identify the lipids and determine the molecular weights of the natural fat's components?
Relevant answer
Answer
Thank you, sir, for your reply.
  • asked a question related to Library
Question
4 answers
I use ThermoFisher .raw files to explore materials that absorb UV light, working in a specific program called XCalibur Qual Browser. I called ThermoFisher customer support, and they said that there is no in-house computer language that can be used to automate data processing of these .raw files. I was wondering if someone here might know of a Python library, or some other computer language, that can be used to process these .raw files. In the photo provided, the top panel shows what the materials are, and the bottom panel shows their light-absorbing properties. I am trying to have the computer language process the bottom panel of data, not the top one.
Relevant answer
Answer
One caveat first: the Thermo .raw format is proprietary, and most open-source Python libraries, including `pyopenms`, cannot read it directly. The usual workflow is to first convert the .raw files to the open mzML format (for example with ProteoWizard's msconvert tool or with ThermoRawFileParser) and then process the mzML files with the `pyopenms` library. Here's an example:
1. Install the `pyopenms` library using pip:
```bash
pip install pyopenms
```
2. Import the necessary module in your Python script:
```python
import pyopenms
```
3. Load the converted mzML file into an in-memory experiment:
```python
filename = "path/to/your/file.mzML"  # converted from the original .raw file
exp = pyopenms.MSExperiment()
pyopenms.MzMLFile().load(filename, exp)
```
4. Iterate over the spectra and extract the desired information:
```python
for spectrum in exp:
    # Each spectrum exposes paired m/z and intensity arrays
    mz, intensity = spectrum.get_peaks()
    # process these arrays based on your specific needs
```
With the `pyopenms` library you can access the spectral data and perform various processing tasks, such as extracting intensity values, m/z ratios, or other properties of interest from the bottom panel of data you mentioned.
Apart from `pyopenms`, there are other Python libraries for mass spectrometry data analysis, such as `pymzml`, `pyteomics`, and `ms_peak_picker`. You can explore these libraries as well if they better suit your requirements.
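For example, here is a minimal sketch with `pyteomics` (again reading a converted mzML file, since `pyteomics` does not parse .raw directly either; the file path is a placeholder):
```python
from pyteomics import mzml

# Iterate over the spectra in a converted mzML file
with mzml.read("path/to/your/file.mzML") as reader:
    for spectrum in reader:
        mz = spectrum["m/z array"]
        intensity = spectrum["intensity array"]
        # filter, integrate, or plot these arrays as needed
```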
Good luck
  • asked a question related to Library
Question
2 answers
I want to use an AC motor in Proteus, but I couldn't find one. How can I get that motor?
Relevant answer
Answer
Do you want to use a single-phase or a 3-phase motor? A single-phase motor is not available in Proteus; only the 3-phase one is.
  • asked a question related to Library
Question
29 answers
Suggest a subject or topic for a Ph.D.
Relevant answer
Answer
Disaster management and library ethics
Open source software use among library users
  • asked a question related to Library
Question
3 answers
In the realm of data visualization in Python, which library stands out as the most versatile and effective tool, accommodating diverse data types and producing impactful visual representations?
Relevant answer
Answer
I believe it's more about how you use the tool rather than which tool. But Matplotlib can do quite well for many common requests. They have a gallery of examples where you can learn how to create more complex plots: https://matplotlib.org/stable/gallery/index.html.
There's seaborn (built on top of Matplotlib) as well; if you want something done really quickly, try that. I'd suggest you start with Matplotlib first, understand its principles, and then move on to seaborn, where customization requires you to add more Matplotlib parameters. See the gallery here: https://seaborn.pydata.org/examples/index.html.
There's also plotly (https://plotly.com/examples/), suitable for developing interactive apps. It looks really promising, but it is a platform independent of Matplotlib, so it will take you a while to learn.
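To give a feel for the division of labor, here is a minimal, self-contained sketch using the "tips" demo dataset that ships with seaborn: one seaborn call draws the plot, and plain Matplotlib handles the finishing touches.
```python
import matplotlib.pyplot as plt
import seaborn as sns

# "tips" is a small example dataset bundled with seaborn
tips = sns.load_dataset("tips")

# One seaborn call on top of a Matplotlib figure
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="day")

# Plain Matplotlib for titles and layout
plt.title("Tip vs. total bill")
plt.tight_layout()
plt.show()
```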
  • asked a question related to Library
Question
1 answer
Hi,
I have a synonymous variant library of a protein, and it has hundreds of variants. They are cloned in the Flp-In T-REx expression vector to work with the Flp-In system. We have used this library to measure protein levels in the Flp-In HEK293 cell line, and it has always worked. Now I would like to transfect this library into other human cell lines; unfortunately, these new cells do not have the Flp-In T-REx landing pad, and it would require a lot of work to generate them.
I wanted to ask if there is any other high throughput method to measure the protein levels of these variants in human cells.
Thanks a lot.
Relevant answer
Answer
It won't be as clean as having all your variants in the same genomic locus, but you can transfer your library into a lentiviral vector and infect at a low MOI to get one integration per cell.
  • asked a question related to Library
Question
4 answers
Among users, there are preferences for a modern UI, availability, a rich package library, and accessibility. Taking all of these criteria into account, which LaTeX editors would you recommend?
  • asked a question related to Library
Question
1 answer
I am currently conducting research on shrimp stock assessment using the ‘TropFishR’ package to analyze a monthly carapace length frequency dataset. The package allows for the analysis of one year of data, specifically data collected from January to December of a particular year. Sample code for opening the library, working with an Excel file, and opening the dataset from the working directory is provided below:
## Open the TropFishR library
library(TropFishR)
## Open the Excel data file
library(openxlsx)
## Set the working directory where the data is located (placeholder path)
setwd("path/to/your/data")
## Open the dataset in the working directory
data <- read.xlsx("frequency.xlsx")
## To reproduce the result
set.seed(1)
## Define the dates, assuming the 15th as the midpoint of the sampling days
## 1:12 indicates data collected from January to December
## "2022" indicates the year; the rest of the code stays the same
dates <- as.Date(paste0("15-", 1:12, "-2022"), format = "%d-%m-%Y")
However, if we have more than one year of data, how can we feed it into the ‘TropFishR’ package?
Relevant answer
Answer
Thank you, Dr. Jayasankar and Dr. Eldho, for your kind responses and support. The "lubridate" R package has been instrumental in facilitating my work with diverse years of length frequency data in TropFishR.
##load package
library(TropFishR)
library(lubridate)
library(openxlsx)
###set wd
setwd("C:/Users/UNUFTP/OneDrive - United Nations University, Fisheries Training Programme/Desktop/PhD/Pilot Stock assessment/Lagoon/Lagoon")
###load data
lfq3 <- read.xlsx("Moo.xlsx")
lfq3
set.seed(1)
###select dates column
dates <- colnames(lfq3)[-1]
dates
##format the dates
dates3 <- dmy(dates)
dates3
#### To create midLengths vector
midLengths = lfq3$Lengthclass
midLengths
## To create catch matrix
catch = as.matrix(lfq3[,2:ncol(lfq3)])
catch
## Now, we need to create a lfq object which is a list
lfq <- list(dates = dates3,midLengths = midLengths,catch = catch)
lfq
## assign lfq as the class of object lfq
class(lfq) <- "lfq"
  • asked a question related to Library
Question
1 answer
In different construction contexts, standards are not all alike; therefore, the available BIM objects may or may not be suitable.
Relevant answer
Answer
By having a localized BIM object library, organizations can enhance productivity, ensure accuracy and compliance, foster collaboration, and benefit from the accumulated knowledge and best practices specific to their region or project type.