Library - Science topic
Explore the latest questions and answers in Library, and find Library experts.
Questions related to Library
Hello everyone, we have done library preparation for genotyping-by-sequencing (GBS). The sample source is gDNA from chili leaves, and the restriction enzymes are EcoRI and MseI. We designed a common adapter compatible with the restriction-site overhang and a universal adapter for Illumina sequencing, then used Illumina DNA/RNA UDI indexes. For sequencing, the final GBS library was pooled with final amplicon and mRNA libraries and run as PE 150. Unfortunately, we did not get any reads after sequencing.
Could you give us any recommendations on how to optimize the library preparation and sequencing to obtain optimal sequencing output?
Your suggestions would be very helpful.
I am working on a ddRAD-seq experiment and looking for a detailed protocol, particularly focusing on restriction enzyme selection, library preparation, and adapter selection. I would appreciate if anyone could share an optimized protocol or insights on choosing the best restriction enzymes for different genomes. Additionally, any recommendations on adapter design and ligation efficiency would be helpful.
Has anyone encountered challenges in enzyme compatibility or library preparation steps? Any troubleshooting tips would also be valuable.
My study is about cell metabolomics. I'm currently analyzing MS/MS data in MS-DIAL 5 for untargeted metabolomics, but I'm facing an issue where I get almost no reference-/library-matched compounds (I got 4 or 5, but they are not metabolites of interest and look like contaminants). Instead, most of the annotated metabolites carry "suggested" or "w/o MS2" annotations, including some metabolites of interest related to energy metabolism. From what I understand, "w/o MS2" results cannot be taken with confidence.
Previously, I tried increasing the concentration and extracting more cells for metabolites, but I still face similar issues. I have also played around with the MS-DIAL parameters, but I still cannot get library matches.
My questions are:
1. Can w/o MS2 annotations be considered confident identifications and be used as results or in publication?
2. Are there specific parameter settings in MS-DIAL that I should check to ensure MS2 spectra are being used correctly for annotation?
3. Could this be the problem with the sample preparation or instruments/methods rather than data analysis?
Any advice on troubleshooting or optimizing settings of either MSDIAL or the instrument would be greatly appreciated!
My Setup:
- Instrument: Agilent 6520 Accurate-Mass Q-TOF LC-MS (Data Acquisition: Auto MS/MS or DDA)
- Sample: metabolite extracted from human cell lines
- File type: abf file using Abf File Converter / raw .d from Agilent (drag to MS-DIAL)
- MS-DIAL 5.0 settings: Default parameters with public libraries (MSMS_Public-all-pos-VS19/MSMS_Public-all-neg-VS19)
Thank you!
Hello, I have a problem with creating a mathematical model using the Simscape Fluids library. Can someone help me with that?
I found an error; how do I report it? The "cover page" you create and attach to an article is the wrong one: it is for a different article.
I have the file attached to show you.
Gail Cathey, M.L.S.
Print Resources / Access Services Librarian
Interlibrary Loan
Course Reserves
Chestnut Hill College
Logue Library
9601 Germantown Ave.
Phila., PA. 19118
(215)248-7053
Hi
I want to export ACM Digital Library articles for my SLR, but I have a problem: the export only gives me 1,000 records, leaving 642 records I cannot export.
Can anyone suggest how I can export all of the citations?
Kind regards
Can I know, step by step, how Spectragryph can be used to create libraries to identify whether specific compounds are present in my sample?
For example, if my sample contains many phytochemicals such as aucubin and curcumin, can I create a library of existing spectra of those specific phytochemicals and compare it with my FT-IR results to check for their presence?
Also, other than Spectragryph, are there any other library software packages that can perform this task?
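As a rough illustration of what any such spectral-library matching does under the hood (a minimal sketch, not Spectragryph's actual algorithm; the compound names and spectra below are invented, and it assumes sample and references are resampled onto a shared wavenumber grid), one can score a sample spectrum against each library spectrum by Pearson correlation and report the best hit:

```python
# Minimal sketch of spectral library matching by Pearson correlation.
# Reference spectra here are made-up illustrative vectors, not real
# FT-IR data for any phytochemical.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_match(sample, library):
    """Return (name, score) of the library spectrum most similar to the sample."""
    scores = {name: pearson(sample, ref) for name, ref in library.items()}
    return max(scores.items(), key=lambda kv: kv[1])

library = {
    "compound_A": [0.1, 0.8, 0.3, 0.9, 0.2],
    "compound_B": [0.9, 0.1, 0.7, 0.2, 0.8],
}
sample = [0.12, 0.75, 0.33, 0.85, 0.25]  # absorbances at shared wavenumbers
name, score = best_match(sample, library)
print(name, round(score, 3))
```

A real workflow would also baseline-correct and normalize the spectra before scoring, since FT-IR intensities vary with sample thickness.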
Thanks
I prepped a cDNA library from ovarian cancer cell line RNA (all RIN scores were 9+ on the TapeStation 4200) using a KAPA HyperPrep mRNA kit and UDI adapters for Illumina sequencing. When we ran the TapeStation (D1000 HS ScreenTapes) on the prepped libraries, we saw extra peaks around 230-250 bp. On some samples it was a pronounced peak; on others it was more of a shoulder. We are baffled as to what this could be from. We did a QC sequencing run on an Illumina NextSeq P2 100-cycle flow cell and are wondering if our inner distance plot could look like this because of the smaller fragments we see on the TapeStation. Some people have suggested the extra peak on the TapeStation is a bubble peak from over-amplification of the library, but bubble peaks would appear larger, not smaller. Our anticipated library size was 200-300 bp. For most samples the average size was 350 bp, so including the adapter sequences this seems right. We just have no idea what the smaller peak that appears in nearly every sample could be. I've included images of a couple of electropherograms and gel images from the TapeStation and the inner distance plot. You can see these smaller extra peaks appear as bands on the gel images and how they vary slightly in size from sample to sample.





Hello,
I am new to coding in R and have come up with the following code to perform a nested two-way ANOVA (with Tukey post hoc) to account for individual animal variability within each group. I am wondering if someone can confirm this is correct or suggest alternative methods. I am assessing the effects of diet and stress on certain cellular outcomes, with n = 3-5 animals/group. Thank you!
# Load required packages
library(lme4)
library(emmeans)
library(ggplot2)
# Convert factors
Data_For_R$Diet <- as.factor(Data_For_R$Diet)
Data_For_R$Stress <- as.factor(Data_For_R$Stress)
Data_For_R$Animal <- as.factor(Data_For_R$Animal)
# Nested 2-way ANOVA models
model_aov_stress <- aov(SomaVolume ~ Stress / Animal, data = Data_For_R)
model_aov_diet <- aov(SomaVolume ~ Diet / Animal, data = Data_For_R)
model_aov_combine <- aov(SomaVolume ~ Diet * Stress / Animal, data = Data_For_R)
# Mixed-effects model accounting for animal variability
model_lmer <- lmer(SomaVolume ~ Diet * Stress + (1 | Animal), data = Data_For_R)
# Obtain estimated marginal means for Diet and Stress, considering random effect for Animal
emmeans_result <- emmeans(model_lmer, ~ Diet * Stress)
# Perform pairwise comparisons for the interaction between Diet and Stress, adjusting for animal variability
pairs(emmeans_result, adjust = "tukey")
# Create a new factor to represent the combination of Diet, Stress, and Animal
Data_For_R$Diet_Stress_Animal <- interaction(Data_For_R$Diet, Data_For_R$Stress, Data_For_R$Animal, drop = TRUE)
# Summaries of the models
summary(model_aov_stress)
summary(model_aov_diet)
summary(model_aov_combine)
Greetings from Abuja, Nigeria. I suddenly and rudely discovered that my copy of Y. B. Usman's seminal publication Manipulation of Religion in Nigeria is missing from my library, at a time when I most needed the book. Could anyone with a copy to hand help with this? Thanks.
We were using the ZINC database as the virtual compound library in our studies, but for a while now we have had problems downloading large numbers of compounds. Are there other databases we could use, or how can I solve this problem with the ZINC database?
I need to identify the metabolites in blood. I have done LC-HRMS. Since no library is provided, I am finding it difficult to identify the hits. Any suggestions?
This call for papers invites the submission of high-quality, unpublished manuscripts that explore the challenges and opportunities faced by banks in a historical context characterized by the energy transition of national economic systems.
Topics of interest include, but are not limited to, the following three areas:
(1) The evolution of the SSM ten years after its birth.
It is considered important to broaden the discussion on whether, and how, ten years after the SSM Regulation, the new Community architecture has contributed significantly to improving the stability of individual banks in several ways, such as: (i) Harmonisation of supervisory practices (Carretta et al., 2015; Scannella, 2015); (ii) Strengthening prudential supervision (Beccalli & Cesarini, 2021); (iii) Improving the stability of the banking system and the market power of European banks (Banfi & Pampurini, 2016; Bikker & Okolelova, 2022); (iv) Reduction of systemic and idiosyncratic risk (Beccalli & Poli, 2015; ECB, 2024b); (v) Improved risk disclosure (Altunbaş et al., 2022).
(2) The harmonisation of the Control and Supervision process: focus on sanctions.
A crucial aspect of the SSM is its sanctioning activity, which has evolved to raise new questions about: (i) The effectiveness of sanctions in improving banking stability (Caiazza et al., 2015), also in consideration of the information about the motivations behind them (Guerello et al., 2019); (ii) The impact of sanctions on stock price (Linder, 2016); (iii) The effectiveness of supervisory action, comparing the European and US supervisory sectors (Götz & Tröger, 2017); (iv) The restrictiveness of supervisory action in Europe on the basis of the frequency with which sanctions are imposed and their contribution to systemic risk (Korzeb et al., 2024); (v) The contribution of sanctions to the risk of bank default (Murè et al., 2020); (vi) The combination of ESG and sanctions (Murè et al., 2021; Mango et al., 2023); (vii) The impact of sanctions on reputation (Armour et al., 2017); (viii) The impact of sanctions on banks’ performance (Murè, 2014; Murè & Spallone, 2018); (ix) The probability of sanction (Murè et al., 2018); (x) The possible evolutions of the legislation on the adequacy of top management bodies (ECB, 2017; MEF, 2020); (xi) The sanctioning activity of the Bank of Italy in the context of the SSM (Banca d’Italia, 2023).
(3) The evolution of the control governance process: the integration of the Compliance Function with strategic planning and outsourcing possibilities.
Compliance as support to the strategic process in intermediaries, also considering the possibility of outsourcing corporate functions (Murè & Bittucci, 2020; Murè, 2021; ECB 2024a).
We encourage all researchers to submit their work by the deadlines outlined above. Your contributions are vital for fostering discussions and advancing knowledge in our field. We look forward to receiving your submissions!
Links below for more information
- Website page: https://www.complianceandstrategyinbanking.eu/
- Submission page: https://complianceandstrategyinbanking.confnow.eu/
- LinkedIn page: https://www.linkedin.com/in/csibc-international-conference/
Please find attached more information, including opportunities for publication related to the JFMMI special issue. Other possibilities will be available soon.
***
References
- Altunbaş Y., Polizzi S., Scannella E. & Thornton J. (2022). “European Banking Union and bank risk disclosure: the effects of the Single Supervisory Mechanism”. Review of Quantitative Finance and Accounting.
- Armour J, Mayer C. & Polo A. (2017). “Regulatory Sanctions and Reputational Damage in Financial Markets”. Journal of Financial and Quantitative Analysis. 52(4):1429-1448.
- COUNCIL REGULATION (EU) No 1024/2013 of 15 October 2013 conferring specific tasks on the European Central Bank concerning policies relating to the prudential supervision of credit institutions.
- ECB (2017). “Linee guida: Fit and proper assessment”.
- ECB (2024, a) “Draft guide on governance and risk culture”.
- ECB (2024, b). “Statement on SSM risk appetite”.
- Banca d’Italia (2023). “Relazione sulla gestione e sulle attività”.
- Banfi & Pampurini (2016). “Il grado di efficienza degli intermediari sottoposti alla vigilanza europea: una valutazione”. Osservatorio Monetario.
- Beccalli E. & Poli F. (2015). “Bank Risk, Governance and Regulation”. Philip Molyneux, Houndmills.
- Beccalli E. & Cesarini F. (2021). “Il sistema finanziario europeo. Cosa regolare, come regolare, chi deve regolare”. Il Mulino.
- Bikker, J. & Okolelova, I. (2022). “The Single Supervisory Mechanism: Competitive implications for the banking sectors in the euro area”. International Journal of Finance & Economics, Wiley Online Library.
- Caiazza S., Cotugno M., Fiordelisi F. & Stefanelli V. (2015). “Bank Stability and Enforcement Actions in Banking”. CEIS Research Paper 334, Tor Vergata University, CEIS, revised 20 Mar 2015.
- Carretta A., Farina V., Fiordelisi, F., Schwizer P., Stentella Lopes F.S. (2015). “Don’t Stand So Close to Me: The role of supervisory style in banking stability”. Journal of Banking and Finance.
- Götz, M. & Tröger, T. (2017). “Fines for misconduct in the banking sector – what is the situation in the EU?”. Publications of researchers at the Leibniz Institute for Financial Research SAFE.
- Guerello C., Murè P., Rovo N. & Spallone M. (2019). “On the informative content of sanctions”. The North American Journal of Economics and Finance, Elsevier, vol. 48(C), pages 591-612.
- Korzeb, Z., Bernardelli, M. & Niedziółka, P. (2024). “Enforcement actions against European banks in the years 2005–2022. Do financial penalties imposed on European banks follow any patterns?”. Journal of Banking Regulation.
- Linder D. (2016). “Reputational risk of banks – a study on the effects of regulatory sanctions for major banks in Europe”.
- Mango F., Murè P., Cardi M., Paccione C. & Bittucci L. (2023) “Supervisory Sanctions, ESG Practices and Bank Reputation: Market Performance Analysis of Sanctioned Banks”. Corporate Ownership & Control.
- Marzioni, S., Murè, P. & Spallone, M. (2020). “L’impatto delle sanzioni sulla probabilità di default. Il caso delle banche italiane”. Il Risparmio, ISSN 0035-5615.
- MEF (2020), Decreto MEF 169/2020.
- Murè P. (2014). “Le sanzioni amministrative per le banche italiane: effetti sulle performance”. Rivista Bancaria. Minerva Bancaria.
- Murè P. & Spallone M. (2018). “Gli effetti delle sanzioni amministrative sulle performance delle Banche Popolari Italiane”. Rivista Bancaria. Minerva Bancaria.
- Murè P., Spallone M., Rovo N. & Guerello C. (2018). “Un modello previsionale per le sanzioni bancarie in Italia”. Rivista Bancaria. Minerva Bancaria.
- Murè P. & Bittucci L. (2020). “Dalla traditional compliance al regtech. Soluzioni innovative per il sistema dei controlli interni”, EGEA.
- Murè P. (2021). “La compliance in banca. Tra le soluzioni Regtech e l’integrazione dei fattori ESG”. EGEA.
- Murè P., Spallone M., Mango F., Marzioni S. & Bittucci L. (2021). “ESG and reputation: The case of sanctioned Italian banks”. Corporate Social Responsibility and Environmental Management, John Wiley & Sons, vol. 28(1), pages 265-277, January.
- Scannella E. (2015). “Crisi economica e vigilanza unica europea sulle banche: alcuni riflessi sul mercato dei servizi finanziari”. Economia dei Servizi, Il Mulino, n. 1, gennaio-aprile, 2015, pp. 65-82.
I am infecting 4T1 cells with a lentiviral library and need to sort the cells to recover the transduced ones. I am running into problems with these cells clumping even after filtering. I use trypsin with EDTA to detach them from the dish, inactivate the trypsin with serum-containing media, wash twice with PBS + 4% FBS, and then filter. During the sort I end up with many doublets (two cells stuck together). This is also a problem during passaging. Does anyone have experience with these cells?
Dear researchers, I am trying to assess specific indirect effects in my model with three mediators. However, AMOS always gives a syntax error and my estimand will not run. When I try it in RStudio (with the lavaan and psych packages), I cannot assign parameters to calculate specific indirect effects. Could you please help me identify the problems and solutions?
Below is the code in R studio:
library(psych)
library(lavaan)
# I already input my CSV data so now I just describe it
describe(my.data)
# lavaan model syntax must be passed as a single quoted string
model <- '
  A =~ A2 + A3 + A4 + A5 + A7 + A8
  MS =~ MS1 + MS2 + MS3 + MS4 + MS6 + MS7 + MS8
  M =~ M1 + M2 + M4 + MA8
  IM =~ IM1 + IM2 + IM3 + IM4
  FLA =~ Listen + Speak + Read + Write
  # Regression paths from IV to mediators
  M ~ a1*IM
  A ~ a2*IM
  MS ~ a3*IM
  # Regression paths from mediators to DV (FLA)
  FLA ~ b1*M + b2*A + b3*MS + c1*IM
  # Defined parameters: ":=" only works inside this quoted model string
  direct := c1
  ind_M := a1*b1
  ind_A := a2*b2
  ind_MS := a3*b3
'
fit <- sem(model, data = my.data)
# From this point, I tried to assign parameters in the console to calculate specific indirect effects. However, none of the calls below works!
direct : c1
Error: object 'direct' not found
direct:= c1
Error in `:=`(direct, c1) : could not find function ":="
direct<-c1
Error: object 'c1' not found
direct=c1
Error: object 'c1' not found

Bibliometric analysis is a research method that uses quantitative analysis and statistics to assess and analyze scientific literature. It is often used to evaluate the impact and trends of research within a specific field by examining published articles, citation counts, and other metrics. Commonly used in fields like library and information science, bibliometric analysis helps in understanding research productivity, collaboration patterns, influential authors, and high-impact journals.
I am reading an EndNote library file in VOSviewer; however, it gives me the attached message: VOSviewer cannot read the file, as there are no valid (%Authors) and (%Keywords) fields.
Can you please help me with this issue?
Best Regards,
Fazli
What would be the criteria for selecting compounds for docking from an LC-MS compound library? Is it abundance, or other criteria?
Does anyone know of a free Python library for machine learning that can be used on a personal computer? I am particularly interested in neural network libraries similar to FANN.
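scikit-learn (with `MLPClassifier`/`MLPRegressor`) and PyTorch are both free and run fine on a personal computer, and are the usual modern stand-ins for FANN. To show the kind of model such libraries wrap, here is a minimal single-neuron sketch in pure standard-library Python (illustrative only, not a substitute for a real library):

```python
# Tiny single-neuron "network" trained with the perceptron rule on AND,
# in pure standard-library Python -- the sort of model that libraries
# like scikit-learn's MLPClassifier generalize to multi-layer networks.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # perceptron update on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # AND is linearly separable
```

With scikit-learn the equivalent would be a few lines around `MLPClassifier(hidden_layer_sizes=...)` plus `fit`/`predict`.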
Good morning,
I am very new to ATAC-seq and library preparation.
I just did my first trial in Arabidopsis samples and after tagmentation and library prep the bioanalyzer profile doesn't look very promising (see attached).
What I really don't understand is the very concentrated peak around 1000-1500 bp (100s) in all samples; it can be seen even in the last one (which is genomic).
Any idea of the origin of this band/peak? (I have my theories but I want unbiased answers! xd)
Thanks!
Hi everyone,
While running certain materials available in the ecoinvent library, I came across negative water depletion values.
What could be the probable reason behind this?
The number of pores in the R10.4.1 flow cell decreased significantly, from roughly 1,400 to 291, after nanopore sequencing with only 24 multiplexed samples. I used the SQK-RPB114-24 kit to process the 24 samples as one library. Would anyone recommend anything about the protocol, or changing something in it? Has anyone had a similar experience, and what did you do to improve it?
The protocol for the new NEBNext UltraExpress® RNA Library Prep Kit (NEB #E3330S/L) closely follows the previous version, the NEBNext Ultra II RNA Library Prep Kit (#E7770S/L), but the random primer step is missing. How does first-strand synthesis work without it? Is the random primer now added in one of the mixes?
We use the NEBNext Library Quant Kit for Illumina to determine our Illumina library concentrations. Before taking a library to qPCR, we usually run it on the Bioanalyzer to get an idea of the concentration. Based on the Bioanalyzer, we dilute the library down to 5,000 pM for qPCR. Most of the time, qPCR results show libraries around 5,000 pM (maybe 4,000-6,000 pM). In some cases, though, the qPCR concentration can be double or even triple the Bioanalyzer concentration. When this happens we QC the library by Qubit, and in most cases the Qubit concentration is similar to the Bioanalyzer concentration.
This makes it challenging for us to determine which concentration to use for loading the sequencer. If we use the qPCR concentration, the runs will be under-clustered. We are trying to understand why we would get higher concentrations from qPCR. It makes more sense if the qPCR concentrations were lower than Qubit/BA suggesting that the adapter ligation was not very successful.
Why do you think we would get such high qPCR concentrations? One thought is that there may be single stranded DNA in the library which is not detected by Qubit or BA, but qPCR is able to amplify. Curious to know your thoughts.
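One sanity check worth doing is the mass-to-molarity conversion itself, since Qubit/Bioanalyzer report mass concentration while qPCR reports molar concentration from a standard curve. A minimal sketch, assuming the usual ~660 g/mol per base pair for dsDNA (ssDNA is roughly half that per base, which is one way single strands could inflate the qPCR-to-mass ratio, consistent with the hypothesis above):

```python
# Convert a dsDNA library's mass concentration to molarity, the same
# quantity a qPCR quant kit reports. Assumes the standard average mass
# of 660 g/mol per base pair of double-stranded DNA.
def library_molarity_nM(ng_per_ul, mean_fragment_bp):
    grams_per_mole = 660.0 * mean_fragment_bp   # approx. MW of the fragment
    return ng_per_ul / grams_per_mole * 1e6     # ng/uL -> nM

# e.g. a 350 bp library at 2 ng/uL:
print(round(library_molarity_nM(2.0, 350), 2))  # ~8.66 nM
```

If the Bioanalyzer mean size you feed into this conversion is skewed by adapter dimers or a secondary peak, the mass-derived molarity will disagree with qPCR even when both instruments are accurate.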
Thank you,
Karrie
Hi y'all,
I am here to ask for recommendations on software or platforms I can use to manage a massive database.
I am working on a big museum samples barcoding project. For now, we are going through ~6000 specimen drawers one by one and selecting two to four specimens of each species for the barcoding process. Our database is getting bigger and bigger as we keep doing this.
For each specimen, we have a specimen barcode (including species name, collect year, identifier names, collect locality et al.), the drawer code (which drawer it was selected from), the DNA extraction plate code, the DNA extraction well code (we are using the 96-well plate), PCR plate code, Library pool code, Sequencing run No., Freezer code, freezer rack code (we have four -80 freezers and lots of racks to store DNAs) and a lot of other information.
I currently have 5 people working on this project, and I am using a Google spreadsheet to manage and share progress with all the collaborators. But the sheet is getting bigger and bigger, and lots of tabs have been created. In particular, it is not easy to catch errors, like typos, two specimens given the same code, or some drawers sampled twice...
I am wondering if there is any specimen tracking system, software, or functions I can use to manage the dataset easier, like linking all the information together while avoiding duplication errors?
Thank you for your time and my best wishes,
Menglin
Is the spectral library in Python helpful for reading SAR images?
South Indian Journal of Library and Information Science "Integration of E-Resources and Smart Technologies in Law College Libraries: Enhancing Access and Learning Experiences"
Hi All,
I am ordering an overlapping peptide library to study the binding epitope of my antibody. I wonder if there is a formula to calculate the probability of a given number of epitope hits (e.g. single, double) for different epitope lengths, peptide lengths, and offsets (or peptide overlaps). Knowing the probability of a double hit would help determine how many peptides to order (as they are quite expensive!). Thank you for your help.
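For a library tiled along the whole protein, the number of peptides that fully contain an epitope alternates between floor((peptide − epitope)/offset) + 1 and one less, depending on where the epitope falls relative to the peptide grid, so direct enumeration over epitope positions is safer than a closed formula. A minimal sketch (all parameter values below are illustrative, not a recommendation):

```python
# Count how many peptides of an overlapping library fully contain an
# epitope. Peptides of length pep_len start every `offset` residues
# along a protein of length protein_len.
def hits_for_epitope(pep_len, epi_len, offset, protein_len, epi_start):
    """Number of library peptides that fully contain the epitope."""
    count = 0
    start = 0
    while start + pep_len <= protein_len:
        # peptide [start, start+pep_len) must cover [epi_start, epi_start+epi_len)
        if start <= epi_start and epi_start + epi_len <= start + pep_len:
            count += 1
        start += offset
    return count

# 15-mers with an offset of 3 (12-residue overlap) on a 60-residue protein:
# an internal 8-residue epitope is hit by 2 or 3 peptides depending on
# where it sits relative to the tiling grid.
counts = {hits_for_epitope(15, 8, 3, 60, s) for s in range(20, 30)}
print(sorted(counts))
```

Averaging this count over all possible epitope start positions (weighted by however you model the epitope-length distribution) gives the expected number of hits, from which the single- vs double-hit probabilities follow.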
I am looking for a Proteus library for the current transformer SCT-013. Where can I find it?
Hello all,
I am trying to determine the dependence of the energy gap of silicon as a function of temperature. In the literature, it is stated that the decrease in the energy gap of silicon with increasing temperature can be explained by thermal expansion and electron-phonon interaction.
First, I used the thermo_pw library (which uses the QHA approximation) to determine the lattice parameter of silicon as a function of temperature. Then, I ran the following calculations: SCF, NSCF, DOS, band, and finally plotband. I performed these calculations using the lattice parameters of Si corresponding to temperatures in a range from 4K to 800K. For this simulation, I am using PBE pseudopotentials, an ecutwfc of 25 Ry, and a unit cell with 2 atoms.
The problem is that the gap increases with temperature instead of decreasing. I obtained a gap of 0.6187 eV at 4K and 0.6315 eV at 800K.
I also tried calculating the band structure considering electron-phonon coupling using the EPW library, but the gap still increases with temperature.
Has anyone already tried to calculate the silicon gap as a function of temperature? What am I doing wrong?
I received C. elegans nuclei (20 million nuclei) for ATAC-seq and prepared the library based on the Buenrostro 2015 protocol. The final cycle number was determined by 1/3 of maximal qPCR fluorescence, and the total cycle number used was 14. Could the lack of a nucleosome pattern be due to too-high input, or have others seen something similar for a different reason? We are going to try lowering the input significantly and titrating the input amount to recover the nucleosome pattern. Any input would be appreciated. Thank you.
How can we train our model using the data so that it can identify the disease and recommend possible treatment (subject to the review of a concerned expert)? Suggestions are also welcome on making maximum use of this proposed model by making it public with the Streamlit library. I have built certain disease-prediction models and am looking to build an all-in-one model that accepts multiple types of values for better analysis.
I don't know how to submit my article. I have registered with AJOL.
Thank you in advance for your attention to this submission.
The idea of the Bayesian neural network is as primitive as the answer of a failing student. You have a neural network model, but no matter how hard you train it there is always a residual error, so what do you do? And the student tells you: replace each scalar in the network by a normally distributed random variable and tune the expectations and variances to match the data.
Although this concept fails miserably, a large group of scientists keeps pushing it into usage. I can easily provide proof of these strong statements. The elementary stochastic system, which anyone can reproduce at home, is a coin and dice. You pick two random inputs by rolling one die twice, say they are 3 and 5, and flip the coin. In the case of heads you roll 3 dice and add the outcomes; otherwise, 5 dice. The sum of the outcomes is your stochastic output. Simple, right? Now make a few hundred records and try to obtain the bimodal distribution with any publicly available library designed to support BNNs. The result will not be even remotely close to reality. But the solution is simple and has been known for at least 50 years: it is KNN. For each given input you find several similar records. Each output is treated as the expectation of a normal distribution, you assign a variance from common sense, and you see this beautiful bimodal distribution very close to the real one. This is KDE, known for decades. Funny?
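The coin-and-dice experiment described above is easy to simulate, and a plain nearest-neighbour lookup does recover both modes. This is a sketch of the idea in standard-library Python, not the author's exact method (the sample size, k, and the cut at 13 are arbitrary illustrative choices):

```python
# Reproduce the coin-and-dice experiment and recover the conditional
# output distribution with a nearest-neighbour lookup.
import random

random.seed(0)

def sample():
    a, b = random.randint(1, 6), random.randint(1, 6)  # two die rolls = input
    n = a if random.random() < 0.5 else b              # coin picks which count
    return (a, b), sum(random.randint(1, 6) for _ in range(n))

records = [sample() for _ in range(2000)]

def knn_outputs(query, records, k=50):
    """Outputs of the k records whose inputs are closest to the query."""
    dist = lambda r: (r[0][0] - query[0]) ** 2 + (r[0][1] - query[1]) ** 2
    return [y for _, y in sorted(records, key=dist)[:k]]

# For input (3, 5) the true output is a mixture: sum of 3 dice (mean 10.5)
# or sum of 5 dice (mean 17.5), i.e. a bimodal distribution.
outs = knn_outputs((3, 5), records)
low = [y for y in outs if y <= 13]
high = [y for y in outs if y > 13]
print(len(low), len(high))  # both modes are populated
```

Placing a kernel (e.g. a Gaussian) on each neighbour's output and summing gives the KDE estimate of the full conditional density.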
That is not all. The freely available TensorFlow library is supposed to detect gaps in data and return a confidence interval that becomes larger for sparse data. That is already a mockery of science. All you need to do to identify these gaps is generate new inputs as evenly distributed points over the domain of definition, find the distance from each to the nearest dataset point, record it, and build a new model that tells you your training-data density. Why use TensorFlow, when this needs 50 lines of code and can be done by a student in an hour?
I tested TensorFlow with the coin-and-dice data. The returned result was compared to the true distribution by the Cramér-von Mises criterion. The accuracy was 15%; KNN gives 85%. I made my own method, a slight improvement of KNN, and raised it to 90%.
I do not believe that the scientists promoting BNNs are unaware that this technology is fake. My question is: what can we do about it? Say I publish my research and contact a scientist promoting BNNs directly; he ignores it and keeps promoting his research. We all dislike it when doctors prescribe expensive drugs when regular drugs are a cure, and when auto mechanics suggest replacing parts that still work. Isn't this the same thing?
I will add links to my published research exposing the weaknesses of BNNs for those who are interested.
"Today several adjectival phrases have been used to describe English like ‘International Language’, ‘Lingua-Franca’, ‘Language for Globally Connecting’, ‘Library Language’, ‘Official Language’, ‘Administrative Language’, ‘Queen of Languages’, ‘Employment Passport’ and ‘the most Preferred Language’ etc." (Jabir, M. 2019)
Reference:
Jabir, M. (2019). The Use of ICT in Teaching English: A Study of ELT in the Secondary Schools of Kargil District. M.Phil. dissertation, Jaipur National University, p. 5.
Dogwood RNA isolations from leaf tissue
used a modified Zymo protocol
KAPA mRNA-stranded library prep
RNA QC looks good; RNA input into the library prep is 1.5 micrograms
libraries failed
Per Zymo, ran the RNA eluates through a cleanup column and re-ran the libraries; all libraries still failed.
I have been preparing an NGS library where the sample input volumes, conditions, and PCR cycles are the same, but the concentrations obtained were uneven and the fragment sizes also differed from sample to sample. What could be the possible reason for these uneven results?
Until the early 2000s, the national bibliography used to be an important source for the development of library collections, mainly for the acquisition of new items. I would like to know whether, in your country, it remains important for research in your library. Please, could you tell me?
How can contextual bandits (used in recommendation systems) be implemented in code via a library package?
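Packages such as Vowpal Wabbit (which has contextual-bandit modes) and, I believe, the Python `contextualbandits` package implement this; the core loop they wrap can be sketched in plain Python. This is a minimal epsilon-greedy sketch with discrete contexts (all names and the simulated environment are illustrative only):

```python
# Minimal epsilon-greedy contextual bandit: one running mean-reward
# estimate per (context, arm) pair. The loop is the same one real
# libraries implement: observe context -> choose arm -> get reward -> update.
import random

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {}   # (context, arm) -> number of pulls
        self.values = {}   # (context, arm) -> running mean reward

    def choose(self, context):
        if random.random() < self.epsilon:          # explore
            return random.choice(self.arms)
        return max(self.arms,                       # exploit best estimate
                   key=lambda a: self.values.get((context, a), 0.0))

    def update(self, context, arm, reward):
        key = (context, arm)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        mean = self.values.get(key, 0.0)
        self.values[key] = mean + (reward - mean) / n   # incremental mean

random.seed(1)
bandit = EpsilonGreedyBandit(arms=["A", "B"], epsilon=0.1)
# Simulated environment: arm "A" pays off for "mobile" users, "B" for "desktop".
best = {"mobile": "A", "desktop": "B"}
for _ in range(3000):
    ctx = random.choice(["mobile", "desktop"])
    arm = bandit.choose(ctx)
    reward = 1.0 if arm == best[ctx] else 0.0
    bandit.update(ctx, arm, reward)
print(bandit.values[("mobile", "A")], bandit.values[("desktop", "B")])
```

Production systems replace the per-context lookup table with a model over context features (e.g. LinUCB or a logistic policy), which is exactly what the library packages provide.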
Dear all,
I've recently processed some samples for ATAC-seq. My corresponding ATAC-seq library looks different (see picture: Bioanalyzer) from the expected profile. I was wondering if I can still sequence it, or whether it will be too biased.
Thank you for your help
Best,
Karim

Hello everyone,
I am facing a problem when making a plot in R. I am generating a ROC curve, but in the graph I have observed that the "0.0" on the x-axis is far from the "0.0" on the y-axis. I don't understand where the problem is. I want to make the plot so that "0.0" starts from the same point. I am giving you an example of what I get from R (please check the R figure) and also what I want (like the figure drawn in GraphPad).
If anyone can help me find the solution, I would be very grateful. I am providing the script that I use.
# Install and load necessary packages
install.packages("pROC")
# install.packages("readxl")
library(pROC)
library(readxl)
# Read data from Excel file (replace with your file path)
data <- read_excel("D:\\Samsun medical center\\ELISA data analysis\\elisadata\\New prism analysis for AUC curve analysis\\Sample data for R.xlsx")
# Extract control and cancer patient data
control <- data$Control
cancer <- data$Cancer
# Combine data and create a grouping variable
data_combined <- c(control, cancer)
group <- factor(c(rep("Control", length(control)), rep("Cancer", length(cancer))))
# Create ROC curve
roc_data <- roc(group, data_combined)
# Plot ROC curve
plot(roc_data, main = "ROC Curve", col = c("blue", "red"), legacy.axes = TRUE,
     print.auc = TRUE,  # note the comma: it was missing and broke the call
     xlab = "100% - Specificity", ylab = "Sensitivity", asp = 1)  # 1:1 aspect ratio
# Calculate AUC with confidence interval
auc_value <- auc(roc_data)
ci_value <- ci.auc(roc_data)
# Compare the control and cancer values directly with a Mann-Whitney U test
# (a pROC roc object has a single overall AUC; there is no per-group $aucs field)
p_value <- wilcox.test(control, cancer, alternative = "greater")$p.value
# Display AUC, CI, and p-value on the plot
legend("bottomright",
legend = paste("AUC =", round(auc_value, 2),
"\n95% CI =", round(ci_value[1], 2), "-", round(ci_value[3], 2),
"\np-value =", signif(p_value, 3)),
bty = "n")
# Calculate Youden's Index
youden_index <- roc_data$thresholds[which.max(roc_data$sensitivities + roc_data$specificities - 1)]
cat("Youden's Index Cutoff:", youden_index, "\n")
# Find the index corresponding to Youden's Index
index <- which(roc_data$thresholds == youden_index)
# Extract sensitivity and specificity at Youden's Index
sensitivity_value <- roc_data$sensitivities[index]
specificity_value <- roc_data$specificities[index]
# Convert sensitivity and specificity values to percentages
sensitivity_percentage <- sensitivity_value * 100
specificity_percentage <- specificity_value * 100
# Print sensitivity and specificity values as percentages
cat("Sensitivity at Youden's Index:", sensitivity_percentage, "%\n")
cat("Specificity at Youden's Index:", specificity_percentage, "%\n")
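For the axis-origin problem itself, a minimal sketch: base-R plots pad each axis by about 4% by default (`par("xaxs") == "r"`), which is why the two "0.0" labels do not meet. Switching both axes to `"i"` (internal) removes that padding, giving the GraphPad-style corner. This assumes `roc_data` was created with `pROC::roc()` as in the script above.

```r
# Minimal sketch: make the x- and y-axes meet at the origin (GraphPad-style).
op <- par(pty  = "s",               # square plotting region (1:1 box)
          xaxs = "i", yaxs = "i")   # drop the default ~4% axis padding
plot(roc_data,
     legacy.axes = TRUE,            # x-axis drawn as 1 - specificity
     xlab = "100% - Specificity", ylab = "Sensitivity",
     main = "ROC Curve", col = "blue", print.auc = TRUE)
par(op)                             # restore the previous graphics settings
```

Setting the padding through `par()` before plotting works regardless of which arguments `plot.roc()` forwards to the underlying base plot call.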


Hi, I am trying to make a synthetic phage library. Do people still use the Kunkel method, or are there other methods? The Kunkel method seems quite old, and I am looking for a more convenient way to build the library.
I researched this, but most of the instructions online are for AutoDock 4. I tried the same steps by adding the needed parameters to AD4_Parameter.dat and AD4.1_bound.dat, but I cannot find the .gpf and .dpf files, so the changes I made to the parameters were useless. Please help me: what should I do if I cannot find the .gpf and .dpf files? Thank you.

Some of my in-text citations show up with an 'a' at the end (e.g., Smith, 2023a). The author does not have a second publication in my Mendeley library, and I have checked for duplicates. Can someone please advise how to fix this, other than continually updating the citation manually?
Thank you!
I'm performing an antibody phage display with a VHH library and I consistently get frameshift mutants (mainly frame +2) after biopanning. I'm using TG-1 cells for amplification of phagemids and VCSM13 as helper phage. Biopannings are performed in target protein-coated immunotubes and PBS-milk is used as blocking agent. I have tried to coat the immunotubes with different protein concentrations (10-100 ug/mL in carbonate coating buffer) with the same results. Also tried the microtiter plate format. When I analyze the original library, all the clones are in the correct frame. I would appreciate any explanation or suggestion. Thanks!
What is the scope of implementing LIS classification and cataloguing in different fields?
Dear ResearchGate Community,
I am currently engaged in a thesis project involving the analysis of essential oils using gas chromatography-mass spectrometry (GC-MS), PerkinElmer, Clarus 690. Specifically, I am examining tea tree (Melaleuca alternifolia) essential oil, which is expected to contain terpinene-4-ol as its main constituent.
My challenge lies in the identification process, particularly when utilizing the NIST library for peak identification. Despite following standard protocols and procedures, I consistently encounter very low probabilities for matches, even for well-known compounds like terpinene-4-ol. These low probabilities persist across all unknown peaks, making it difficult to confidently identify compounds present in the essential oil samples.
Attached to this inquiry are screenshots illustrating the methodology employed, chromatograms, spectrograms, and the peak identification results from the NIST library.
I am reaching out to the community for insights, suggestions, or potential solutions to address this issue. Any advice on improving the accuracy and reliability of peak identification in GC-MS analysis of essential oils would be greatly appreciated.
Thank you for your time and assistance.
Best regards,
Achwek Meftehi
PhD Student Neurosciences and Biochemistry
Faculty of Sciences, Tunis




Can anyone give a clear, step-by-step explanation of how to add parameters to the AutoDock parameter library? It seems the Ni atom is not in the library.
Why all the buzz about AI-assisted writing? Think about it—haven’t we already embraced tools like Grammarly and Quillbot and other AI-assisted and computer-assisted writing tools to help us write better (Wang, 2022)? And remember when we switched from digging through library cards to hopping onto research databases? Evidently, each has advantages and disadvantages (Falagas et al., 2008). Sure, there was a time when many educators were wary about students using computers for writing, worried it might spoil their writing skills (Billings, 1986) or second language acquisition (Lai & Kritsonis, 2006; Gündüz, 2005). But look how that turned out: we adapted and learned to see the value in the technology. So, what's the big deal now? AI writing tools are just the next step. Instead of pushing back, why not dive in, learn how it works, and show others how to use it? Let's make the most of what tech can offer and keep up with the times!
Billings, D. M. (1986). Advantages and disadvantages of computer-assisted instruction. Dimensions of Critical Care Nursing, 5(6), 356-362.
Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., & Pappas, G. (2008). Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. The FASEB Journal, 22(2), 338-342.
Gündüz, N. (2005). Computer-assisted language learning. Journal of Language and Linguistic Studies, 1(2), 193-214.
Lai, C. C., & Kritsonis, W. A. (2006). The advantages and disadvantages of computer technology in second language acquisition. Online Submission, 3(1).
Wang, Z. (2022). Computer-assisted EFL writing and evaluations based on artificial intelligence: a case from a college reading and writing course. Library Hi Tech, 40(1), 80-97.
When I run autogrid4 it says:
autogrid4: ERROR: Unknown receptor type: "Se" -- Add parameters for it to the parameter library first!
How do I handle this? Thanks.
I have 10 pre-processed studies for which I have prepared ASV tables, taxa tables, metadata, and phylogenetic trees. Now I want to merge these studies into a single, merged phyloseq object for further downstream processing.
The ASV tables, taxa tables, and metadata are CSV files, while the trees are in text format.
# Load required libraries
library(phyloseq)
library("ape")
# Function to load metadata files from a folder
load_metadata_files <- function(folder_path) {
  metadata_files <- list.files(path = folder_path, pattern = "\\.csv", full.names = TRUE)
  metadata_list <- lapply(metadata_files, read.csv, header = TRUE, row.names = NULL)
  return(metadata_list)
}
# Function to load ASV files from a folder
load_asv_files <- function(folder_path) {
  asv_files <- list.files(path = folder_path, pattern = "\\.csv", full.names = TRUE)
  asv_list <- lapply(asv_files, read.csv, header = TRUE, row.names = 1)
  return(asv_list)
}
# Function to load taxonomy files from a folder
load_taxonomy_files <- function(folder_path) {
  taxonomy_files <- list.files(path = folder_path, pattern = "\\.csv", full.names = TRUE)
  taxonomy_list <- lapply(taxonomy_files, read.csv, header = TRUE, row.names = 1)
  return(taxonomy_list)
}
# Function to load phylogenetic tree files from a folder
load_tree_files <- function(folder_path) {
  tree_files <- list.files(path = folder_path, pattern = "\\.txt", full.names = TRUE)
  trees <- lapply(tree_files, read.tree)
  return(trees)
}
# Specify folder paths
metadata_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/Metadata_SB"
asv_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/ASV_SB"
taxonomy_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/Taxa_SB"
tree_folder <- "C:/Users/Saesha Verma/OneDrive/Desktop/Tree_SB"
# Load metadata, ASV, and taxonomy files
metadata_list <- load_metadata_files(metadata_folder)
asv_list <- load_asv_files(asv_folder)
taxonomy_list <- load_taxonomy_files(taxonomy_folder)
tree_list <- load_tree_files(tree_folder)
create_phyloseq <- function(asv_list, taxonomy_list, metadata_list, tree_list) {
  # Merge ASV tables based on sample IDs
  # (rbind() requires every table to have identical columns)
  merged_asv <- do.call(rbind, asv_list)
  # Combine taxonomy tables; tax_table() expects a matrix
  merged_tax <- as.matrix(do.call(rbind, taxonomy_list))
  # Combine metadata tables into a single data frame
  merged_meta <- do.call(rbind, metadata_list)
  # phyloseq holds a single tree; a list of trees cannot be passed to
  # phy_tree(), so the study trees must first be combined into one
  # phylo object (here simply the first tree, as a placeholder)
  merged_tree <- tree_list[[1]]
  # phyloseq() takes unnamed component objects, not named arguments
  ps <- phyloseq(otu_table(as.matrix(merged_asv), taxa_are_rows = TRUE),
                 tax_table(merged_tax),
                 sample_data(merged_meta),
                 phy_tree(merged_tree))
  return(ps)
}
ps <- create_phyloseq(asv_list, taxonomy_list, metadata_list, tree_list)
I am using this code but I encounter error :
ps <- create_phyloseq(asv_list, taxonomy_list, metadata_list, tree_list)
Error in rbind(deparse.level, ...) :
numbers of columns of arguments do not match
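A sketch of an alternative that avoids the rbind() column-mismatch error: build one phyloseq object per study, then let merge_phyloseq() align taxa and samples across studies. It assumes asv_list, taxonomy_list, metadata_list, and tree_list are the per-study lists loaded above, in the same order, and that each metadata table carries the sample IDs as row names (sample_data() matches samples by row name).

```r
library(phyloseq)

# Build one phyloseq object per study
ps_list <- lapply(seq_along(asv_list), function(i) {
  phyloseq(otu_table(as.matrix(asv_list[[i]]), taxa_are_rows = TRUE),
           tax_table(as.matrix(taxonomy_list[[i]])),
           sample_data(metadata_list[[i]]),
           phy_tree(tree_list[[i]]))
})

# merge_phyloseq() takes the individual objects as separate arguments,
# so unpack the list with do.call()
ps_merged <- do.call(merge_phyloseq, ps_list)
```

Note that merging across studies is only meaningful if the studies share feature identifiers (e.g. identical ASV sequences or names); otherwise the merged table will be block-diagonal, and a joint tree may need to be rebuilt.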
My EndNote library won't accept more than 10 references! I want to build a library to use for citing while writing that can exceed 10 references. Any help?
I've been working in the Library Department for the past six years, and I've noticed that only about 10% of students visit the library. I'm wondering whether there is, for example, a set of data that can help me understand how we can increase the number of users of academic libraries.
I'm working on library prep for ITS NGS using Earth Microbiome Protocol and am getting double banding and smearing on my gels. What might be the cause for this? I should be seeing a band around 230 bp.

I am searching for all the possible ways to measure this gap and validate it statistically.
Hi,
I was lucky enough to get my paper on the cover of Deep Sea Research Part I: Oceanographic Research Papers in 2022. Now I would like a good picture of the cover to frame it. Unfortunately, they didn't send us the cover with the final design, and on the DSR page the quality is very (very) low. They offer a download of the cover, but just the picture without the journal's graphic design. To add more sadness and touch your souls, we don't get the journal in our library... So, if anyone by any chance has access to volume 186 of Deep Sea Research Part I and can send a scan, or a good picture, I would be deeply grateful. It was one of my PhD thesis papers, and having a cover was a nice thing to keep in my studio.
thanks in advance
Iván
The sentences above are largely rhetorical, and perhaps it would be fairer to ask how these attacks are reacted to.
The British Library recently suffered a cyber attack by a criminal gang (I have done work on Russian involvement with such criminal gangs, but other than disruption it is difficult to see what could be obtained by Putin's government), and its personnel's data was dumped on the Dark Web when it refused to pay the ransom. The BL's chief executive expressed the view that such people were against everything which libraries represent: "openness, empowerment, access to knowledge." Such attacks have been slowly rising. Boston's city library was shut down in a ransomware attack in 2021. Toronto Public Library suffered a massive cyber attack in October. The city responded by declaring that such attacks were directed essentially against civilised values.
While these attacks are criminal ones, Putin funded many of these criminal cyber groups and eventually they helped construct Russian misinformation and began working directly with Russia.
Are these actions to do with authoritarian states? Traditionally the Romans are suspected of destroying the Alexandrian Library. Generally, information is cut off in religious societies. Information dealing with understanding reality and the senses is attacked.
The bombing of Ukrainian information centres, archives, schools, museums, universities have been noted in the present war as in Gaza. Is the fate of Hypatia to be renewed?
Hypatia
They attacked her in mid exploration
Cutting away her golden thoughts
As they cut away her flesh, destroying
A mind that they couldn’t destroy in
Debate, a sparkling old woman
Whose thoughts were spun from steel.
The screaming mob desecrated her tiny form
Dragging it into the dust, through the rubbish
And shit. Tearing off her clothes
The Parabalani exposed her to celestial winds crossing
The aurora, rubbing
Spoilt Alexandrian soil into her unexplored vagina.
She did not die as a philosopher, calculating and
Learning, but, torn apart, the old woman
Screamed out for her father,
Terrified, in sacrificial pain so much worse
Than beheadings and crucifixion. Her modesty,
Kept for 60 years, mutilated by a 1000 killers in a single
Minute.
Her head bounced in the forum,
Her arms thrown to the 4 corners,
Her soul stamped into the gutter,
As the new religion cried out for tolerance.
In a morning thinking became forbidden
Books burnt, laughs ignored and fires built for heretics.
Hypatia was a female philosopher in Alexandria who was torn apart by a Christian mob in 415 CE, her skin scraped from her bones.
I need to teach how to conduct research using internet or library sources, so I need to develop a successful curriculum for the proposed training on how to conduct research.
On May 30, 2023, I gave a lecture on "The Ontology of Computer Games" at the Immanuel Kant Baltic Federal University Research Library.
Here's a link to the full lecture on YouTube: https://www.youtube.com/watch?v=7QEFsrQcJak
The lecture is in Russian.
The questions posed by my lecture are:
What is the reality of computer games?
Do they affect the human mentality?
Do they change moral principles?
Do games encourage violence?
Do games weaken empathy?
I am trying to generate a topology file in GROMACS for an enzyme (PDB code: 1HBN), but I encountered the following error.
Fatal error:
"The residues in the chain ALA2--ALA549 do not have a consistent type. The
first residue has type 'Protein', while residue MHS257 is of type 'Other'.
Either there is a mistake in your chain, or it includes nonstandard residue
names that have not yet been added to the residuetypes.dat file in the GROMACS
library directory. If there are other molecules such as ligands, they should
not have the same chain ID as the adjacent protein chain since it's a separate
molecule."
Can anyone please kindly help me to solve this.
Hi, I have estimated an ARMAX model using the Python SIPPY library. The estimation gives me two transfer functions, H and G. How can I combine them into a single one to predict the model output for a new input u(t), or to compute the unit step response? I thought I might somehow derive a state-space representation, maybe...
I am using HADDOCK 2.4 to check docking parameters between DNA and a ligand. My DNA file contains Na ions, which I formatted as given in the HADDOCK library, but it still gives the error:
"Error in PDB file. Issue when parsing the PDB file at line 308. ATOM/HETATM line does not meet the expected format"
Format of the ions in my PDB file:
HETATM 307 NA+1 NA1D 101 8.118 -6.766 34.223 0.50 15.72 NA
HETATM 308 NA+1 NA1D 101 8.498 -6.164 34.656 0.50 20.12 NA
I have attached my pdb file of DNA
I have encountered an error while measuring the light intensity of my laser source (650 nm) (see attached image). The serial plot remains constant even when I change the intensity of the light source; I have even tried both extremes, a dark environment and placement close to the laser source, yet there are no changes in the serial plot. Has anyone encountered a similar problem? How do I solve this error?
Here is the code used for the complete setup of the BH1750 light sensor and Arduino Nano:
/*
Advanced BH1750 library usage example
This example has some comments about advanced usage features.
Connection:
VCC -> 3V3 or 5V
GND -> GND
SCL -> SCL (A5 on Arduino Uno, Leonardo, etc or 21 on Mega and Due, on esp8266 free selectable)
SDA -> SDA (A4 on Arduino Uno, Leonardo, etc or 20 on Mega and Due, on esp8266 free selectable)
ADD -> (not connected) or GND
ADD pin is used to set sensor I2C address. If it has voltage greater or equal to
0.7VCC voltage (e.g. you've connected it to VCC) the sensor address will be
0x5C. In other case (if ADD voltage less than 0.7 * VCC) the sensor address will
be 0x23 (by default).
*/
#include <Wire.h>
#include <BH1750.h>
/*
BH1750 can be physically configured to use two I2C addresses:
- 0x23 (most common) (if ADD pin had < 0.7VCC voltage)
- 0x5C (if ADD pin had > 0.7VCC voltage)
Library uses 0x23 address as default, but you can define any other address.
If you had troubles with default value - try to change it to 0x5C.
*/
BH1750 lightMeter(0x23);
void setup(){
Serial.begin(9600);
// Initialize the I2C bus (BH1750 library doesn't do this automatically)
Wire.begin();
// On esp8266 you can select SCL and SDA pins using Wire.begin(D4, D3);
/*
BH1750 has six different measurement modes. They are divided in two groups;
continuous and one-time measurements. In continuous mode, sensor continuously
measures lightness value. In one-time mode the sensor makes only one
measurement and then goes into Power Down mode.
Each mode, has three different precisions:
- Low Resolution Mode - (4 lx precision, 16ms measurement time)
- High Resolution Mode - (1 lx precision, 120ms measurement time)
- High Resolution Mode 2 - (0.5 lx precision, 120ms measurement time)
By default, the library uses Continuous High Resolution Mode, but you can
set any other mode, by passing it to BH1750.begin() or BH1750.configure()
functions.
[!] Remember, if you use One-Time mode, your sensor will go to Power Down
mode each time, when it completes a measurement and you've read it.
Full mode list:
BH1750_CONTINUOUS_LOW_RES_MODE
BH1750_CONTINUOUS_HIGH_RES_MODE (default)
BH1750_CONTINUOUS_HIGH_RES_MODE_2
BH1750_ONE_TIME_LOW_RES_MODE
BH1750_ONE_TIME_HIGH_RES_MODE
BH1750_ONE_TIME_HIGH_RES_MODE_2
*/
// begin returns a boolean that can be used to detect setup problems.
if (lightMeter.begin(BH1750::CONTINUOUS_HIGH_RES_MODE)) {
Serial.println(F("BH1750 Advanced begin"));
}
else {
Serial.println(F("Error initialising BH1750"));
}
}
void loop() {
float lux = lightMeter.readLightLevel();
Serial.print("Light: ");
Serial.print(lux);
Serial.println(" lx");
delay(1000);
}


In a metagenomic library profile, the desired size is 450-550 bp, but we are getting unwanted fragments, which either causes the library to fail or makes it difficult to get the desired data after sequencing.
My analyte standards are highly pure. The R match and F match were good (800-900), but the probability of a match with the compound in the NIST library was low (20-60%). How can I improve it? I am using 10 ppm standards prepared in HPLC-grade n-hexane, with helium as the carrier gas.
Would citing the high cost of an article and its unavailability in the library be a good justification for an exclusion criterion when working on a systematic review?
I am researching information on what defines outreach.
In chapter 35 of Don Quijote, Cervantes used a scene from "The Golden Ass" (an unfortunate translation) by Apuleius. The rare version Cervantes read in Catholic Italy was a censored one. As he later read the original version in the King of Algiers's library, he thought his copying would never be spotted. By the way, what was the Manchego slave doing in the King of Algiers's library?
(9) (PDF) Miguel de Cervantes, slave, and his master Hassan Pacha Veneziano (researchgate.net)
Please give your valuable opinions.....
Dear colleagues!
Does anybody have any information on how many public libraries are currently available in Egypt, Pakistan, or Nigeria? It is very difficult to obtain this data, as it is not available in English; it surely is available in Urdu or Arabic.
This information is necessary for research on public library trends. I could share library statistics for more than 20 countries around the world.
Greetings,
I am interested in obtaining researchers' publications (books, theses, dissertations, papers, etc.), so we would be pleased if you could supply us with these references so that we can store them in the electronic library of the Ministry of Youth and Sport in Iraq, where they would be useful to society, researchers, and students in the specialty.
Thank you very much
Dr. Nagham Ali Hussien
Greetings,
Can you help us obtain references (books, theses, dissertations, etc.) in all fields and in a legal form, so that we can store them in the electronic library of the Ministry of Youth and Sport in Iraq? We would be pleased if you could supply these references to be useful to society, researchers, and students in the specialty.
Thank you very much
Dr. Nagham Ali Hussien
I want to design an aptamer for morphine by an in silico method, and I need some basic sequences but cannot find them. If you have a library or know of articles that could help, please let me know. Thanks.
Utilization of ChatGPT for effective library services
Dear All,
I want to prepare a library of compounds for virtual screening, but I keep running into multiple issues that are supposedly already resolved; when I apply the solutions to my compounds, nothing seems to work.
1. There are no z-coordinates in the SDF files I downloaded from the Enamine database. How do I add them?
2. The obabel command line does not open these files from Enamine; it gives various errors that I checked online. Some are related to the shell and some to bugs in the software.
3. Is there any other software or method to generate these compound structures?
4. Any idea how to download a specific library from the ZINC database, such as only the FDA-approved compounds?
I shall be grateful for the help.
Thank you
Best regards
Ayesha Fatima

I am new to NGS and just trying to understand the numbers. The actual number of DNA fragments resulting from most NGS library prep protocols (in the pmol range) appears to far exceed the number of reads offered by most Illumina platforms (far below the pmol range).
Two real questions:
1. what proportion of the library-prepared DNA fragments is actually loaded onto a flow cell?
2. what proportion of the DNA fragments loaded onto a flow cell actually attaches to the flow cell, clusters and results in an eventual read?
(I presume these both will depend on what sequencing system is used - so perhaps we can take something like a MiSeq system as an example?)
Thanks in advance.
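A rough back-of-the-envelope sketch of why the gap is so large. All numbers below are illustrative assumptions (a typical MiSeq loading of ~600 µL of a ~20 pM denatured library, and roughly 25 million reads for a MiSeq v3 run), not exact figures for any specific kit:

```r
# Back-of-the-envelope: fragments loaded vs. reads obtained (MiSeq-like run)
avogadro    <- 6.022e23
loaded_mol  <- 20e-12 * 600e-6        # mol loaded: 20 pM in 600 uL
loaded_frag <- loaded_mol * avogadro  # ~7e9 fragments on the flow cell
reads       <- 25e6                   # approximate reads per run

reads / loaded_frag                   # fraction of loaded fragments that
                                      # cluster and yield a read: well under 1%

prepped_frag <- 1e-12 * avogadro      # a 1 pmol library is ~6e11 fragments
loaded_frag / prepped_frag            # only a small fraction of the prepared
                                      # library is ever loaded at all
```

So under these assumptions both steps discard the vast majority of molecules: only a few percent of a pmol-scale library is diluted onto the flow cell, and far less than 1% of the loaded fragments become clusters and reads.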
Staff structure for central university library
---
title: "Study Area Map"
author: "Musab Isak"
date: "2023-08-27"
output: html_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
**Load packages**
```{r}
library(tidyverse) #provide tools for data manipulation
library(ggplot2) #Used for creating intricate and customizable data visualizations
library(dplyr) #Offers a set of functions for efficient data manipulation
library(sf) #Provides tools for working with spatial data
library(rnaturalearth) #Allows you to access and retrieve geographical data from the Natural earth dataset for mapping and analysis.
library(rnaturalearthdata) # Provides functions to load and manage Natural Earth data within R
library(raster) #Focuses on the manipulation and analysis of gridded spatial data
library(viridis) #Offers a set of perceptually uniform color scales for creating informative and visually appealing visualizations.
library(ggspatial) #Extends ggplot2 for spatial data, allowing you to combine the power of ggplot2 with spatial visualizations and data.
```
**Get Shape Data of the country and study areas**
```{r}
turkey <- st_as_sf(getData("GADM", level = 1, country = "TUR"))
# level 1 refers first level of administrative divisions, it could correspond to major regions or states within the country, like provinces or states in some other countries.
turkey1 <- st_as_sf(getData("GADM", level = 2, country = "TUR"))
#level 2 refers second level of administrative divisions within a country. It's a more detailed breakdown of administrative regions.
trapzon <- st_as_sf(subset(turkey1, NAME_1== "Trabzon")) #This object likely contains the spatial information for the administrative region named "Trabzon."
giresun <- st_as_sf(subset(turkey1, NAME_1== "Giresun"))
ordu <- st_as_sf(subset(turkey1, NAME_1== "Ordu"))
samsun <- st_as_sf(subset(turkey1, NAME_1== "Samsun"))
```
**Plot Location without legend names**
```{r}
turkey_ggplot <- ggplot() +
geom_sf(data = turkey, aes(fill = "Turkey")) +
geom_sf(data = subset(turkey1, NAME_1 == "Trabzon"), fill = "yellow") +
geom_sf(data = subset(turkey1, NAME_1 == "Ordu"), fill = "green") +
geom_sf(data = subset(turkey1, NAME_1 == "Samsun"), fill = "skyblue") +
geom_sf(data = subset(turkey1, NAME_1 == "Giresun"), fill = "purple") +
geom_rect(aes(xmin = 35, xmax = 41, ymin = 40, ymax = 41.8), color = "black", fill = NA) +
scale_fill_manual(values = "pink") +
theme_minimal() +
theme(plot.background = element_blank())
print(turkey_ggplot)
```
**Plot Location with legend names**
```{r}
# Create a new data frame for the legend
legend_data <- data.frame(
City = c("Trabzon", "Ordu", "Samsun", "Giresun"),
Color = c("yellow", "green", "skyblue", "purple")
)
turkey_ggplot <- ggplot() +
geom_sf(data = turkey, aes(fill = "Turkey")) +
geom_sf(data = subset(turkey1, NAME_1 %in% c("Trabzon", "Ordu", "Samsun", "Giresun")), aes(fill = NAME_1)) +
geom_rect(aes(xmin = 35, xmax = 41, ymin = 40, ymax = 41.8), color = "black", fill = NA) +
scale_fill_manual(values = c("Turkey" = "pink", "Trabzon" = "yellow", "Ordu" = "green", "Samsun" = "skyblue", "Giresun" = "purple"),
guide = guide_legend(override.aes = list(fill = "white"))) +
theme_minimal() +
theme(plot.background = element_blank()) +
guides(fill = guide_legend(title = "Cities", label.theme = element_text(color = "black", face = "bold"))) + labs(title = "STUDY AREA")
print(turkey_ggplot)
```
**NOTE** You can add a scale bar and a north arrow using the *annotation_scale()* and *annotation_north_arrow()* functions from ggspatial, if you are interested.
Could anybody give me the diagnosis of the acritarch genus Pterosphaeridia Maedler, 1963? I cannot find the original article of Maedler (1963) in our library.
Mädler, K.A., 1963: III. Die figurierten organischen Bestandteile der Posidonienschiefer. Geologisches Jahrbuch, Beihefte, v.58, p.287-406, pl.15-30.
Thank you in advance!
Hi everyone,
I am trying to clone a CRISPR sequencing library. After transformation by electroporation, I picked single colonies and did minipreps, then digested with a single cutter enzyme. Most of my colonies look good, with a single band at the right size. However, some of my colonies have the correct band, plus three additional larger bands. What could be happening here? Could it be concatemers or incomplete digestion? Could my single colonies have two plasmids, one at the right size and one much bigger that's not being cut?
Any advice on what might be happening, and how to figure out what's going on with my plasmids, would be really appreciated. I want to know what is in the plasmid ideally!
Thank you

I have modeled coastal vulnerability in GeNIe, and I want to know how to integrate Bayesian networks with GIS (geographic information systems). Are there R or Python (or other language) libraries, or any open-source software, that can successfully integrate BNs and GIS? The GIS data formats are tif, ascii, or .shp.
Why could it be of great value to teach Library and Information Science scholars various software development and web development packages?
Greetings,
for my systematic review I have 23 research articles, 21 of which I got from PubMed, CINAHL, OVID, and WOS. I got 2 articles from the British Library's "Explore Further" option, and these articles are cited as peer-reviewed.
Can I put the British Library as a database in my PRISMA flow diagram, or do I indicate in my methodology that two articles were obtained from the British Library? Your answers are highly appreciated.
Good day everyone.
I have been doing some GRACE data processing in GEE, but from what I can tell - only the first mission's data (dating from 2002/04 to 2017/01) is accessible through the library for import.
Any recommendations on how I can access more recent data from the GRACE-FO mission for analysis in GEE?
Any feedback is greatly appreciated.
Best wishes.
CV
I used the LC-ESI/MS method on natural fat and obtained masses in negative-ion mode. How can I identify the lipids and determine the molecular weight of the natural fat?
I use ThermoFisher .raw files to explore materials that absorb UV light, and I use a specific program called XCalibur Qual Browser. I called ThermoFisher Customer support and they said that there is no in-house computer language that can be used to automate data processing of these .raw files. I was wondering if someone here might know of a Python library or some other computer language that can be used to data process these .raw files. In the photo provided, the top panel shows what materials are, and the bottom panel shows the light-absorbing properties of the materials. I am trying to make the computer language data process the bottom panel of data, not the top one.

I want to use an AC motor in Proteus but couldn't find one. How can I get that motor?
Suggest a subject or topic for a Ph.D.
In the realm of data visualization in Python, which library stands out as the most versatile and effective tool, accommodating diverse data types and producing impactful visual representations?
Hi,
I have a synonymous variant library of a protein, and it has hundreds of variants. They are cloned in the Flp-In T-REx expression vector to work with the Flp-In system. We have worked with this library to measure the protein levels using the Flp-In HEK293 cell line and it has always worked. Right now I would like to transfect this library into other human cell lines and unfortunately, these new cells do not have the Flp-In T-REx landing pad and it would require a lot of work to generate them.
I wanted to ask if there is any other high throughput method to measure the protein levels of these variants in human cells.
Thanks a lot.
Among users, there are preferences for a modern UI, availability, a rich library, and accessibility. Taking all of these into account, which LaTeX programs would you recommend?
I am currently conducting research on shrimp stock assessment using the ‘TropFishR’ package to analyze a monthly carapace length frequency dataset. The package allows for the analysis of one year of data, specifically data collected from January to December of a particular year. Sample code for opening the library, working with an Excel file, and opening the dataset from the working directory is provided below:
## Open the TropFishR library
library(TropFishR)
## Open the Excel data file
library(openxlsx)
## Set the working directory where the data is located
setwd("path/to/data")  # replace with the folder that contains the data
## Open the dataset in the working directory
data <- read.xlsx("frequency.xlsx")
## To reproduce the result
set.seed(1)
## Define the date, assuming 15 as the midpoint of sampling days
## 1:12 indicates data collected from January to December
## -2022 indicates the year, with the remaining codes remaining the same
dates <- as.Date(paste0("15-",01:12,"-2022"),format="%d-%m-%Y")
However, if we have more than one year of data, how can we feed it into the ‘TropFishR’ package?
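For more than one year of data, the dates vector from the code above can simply be extended across years. A minimal sketch, assuming the same 15th-of-the-month convention and two years of sampling (2021-2022, hypothetical years for illustration):

```r
# Sketch: a dates vector spanning two years, keeping the 15th as the
# assumed midpoint of the sampling days, as above
dates <- as.Date(paste0("15-", rep(1:12, times = 2), "-",
                        rep(2021:2022, each = 12)),
                 format = "%d-%m-%Y")
length(dates)  # 24 sampling dates, Jan 2021 - Dec 2022
```

The lfq list passed to TropFishR then needs a catch (length-frequency) matrix with one column per entry of this dates vector, so the Excel sheet must supply 24 monthly frequency columns in the same order.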
In different construction contexts, standards are not all similar; therefore, are the available BIM objects suitable or not?