
Library - Science topic

Explore the latest questions and answers in Library, and find Library experts.
Questions related to Library
  • asked a question related to Library
Question
2 answers
I'm doing a compression test on four self-reinforced composites (fibre/matrix):
1. PET / PET
2. PLA / PLA
3. CARBON / PET
4. GLASS / PET
Fabric style: 2x2 twill weave.
I need the material properties to add to the ANSYS library for the analysis.
Thanks, all.
  • asked a question related to Library
Question
2 answers
I want to use the human antibody fragment repertoires in phagemid format, as distributed by Source BioScience on behalf of the Medical Research Council, for non-profit research purposes (e.g., the Tomlinson I & J libraries). Does anyone know how to request these libraries? I haven't heard back from Source BioScience and can't find any contact information for the MRC regarding antibody libraries.
Thanks
Relevant answer
Answer
Hi,
It doesn't exist anymore unfortunately!
  • asked a question related to Library
Question
3 answers
I am planning to run 16S metagenomic sequencing on libraries prepared from the colon content of C57BL/6 mice, to understand the gut microbiota diversity of a control group and a DSS-induced colitis group. I am having a problem with very low library concentrations for some samples from the DSS group when quantifying by qPCR, although Qubit shows a considerable library concentration. I have repeated the library preparation on the same samples, increasing the DNA input and the number of PCR cycles, but the result is still the same. Can I dilute the libraries based on the Qubit concentration for further sequencing? Can I use fecal samples instead of colon samples for preparing these libraries? Any information would be greatly appreciated. I have used the following kits for library preparation and quantification.
16S Library preparation kit : Ion 16S™ Metagenomics Kit, A26216
Library quantification kit : Ion Universal Library Quantitation Kit, A26217
Relevant answer
Answer
Man Kit Cheung is probably right. In any case, I think this article might answer your second question.
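As a side note on the dilution arithmetic raised in the question: mass-based readings such as Qubit's can be converted to molarity once a mean fragment size is known (e.g., from a Bioanalyzer trace), using ~660 g/mol per base pair. A minimal sketch; the 2 ng/µl and 400 bp figures below are hypothetical:

```python
def dsdna_nM(ng_per_ul, mean_fragment_bp):
    """Convert a dsDNA mass concentration (e.g. a Qubit reading) to
    molarity in nM, assuming ~660 g/mol per base pair."""
    return ng_per_ul * 1e6 / (660 * mean_fragment_bp)

# Hypothetical library: 2 ng/ul at a mean fragment size of 400 bp
print(round(dsdna_nM(2.0, 400), 2))  # ~7.58 nM
```

Note that qPCR-based kits report an amplifiable molarity directly, which is why the two numbers can legitimately differ.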
  • asked a question related to Library
Question
8 answers
Which is the most useful topic in the LIS field?
Relevant answer
Answer
Hi,
It seems that the most valuable area for research in LIS is bibliometrics. The reason is that within this vast field you could study every area or subarea of LIS, using quantitative research methods.
  • asked a question related to Library
Question
2 answers
I have created a library of materials using Nicolet FTIR spectroscopy. Does anyone know if the library can be opened in a different FTIR database, such as Bruker or PerkinElmer software?
Thanks
Relevant answer
Answer
Hope this could be an alternative way, Mr. Zakir.
  • asked a question related to Library
Question
3 answers
Hello there,
I am searching for some freely available pixel-based datasets (derived from satellites, or mixed products such as the CRU or IRI data libraries) with a resolution finer than 500 m (preferably finer than 100 m). It would be nice if you could name some!
Thank you so much for your attention and participation.
Relevant answer
Answer
Dear Sakib,
There are literally thousands of freely available data sets worldwide. What exactly do you need? No instrument is perfect and each data set has its own advantages and drawbacks. You should select those that are most appropriate for your purposes, and in particular determine your accuracy requirements, as they will imply close looks at the calibration issues as well as considerations regarding post-processing. Once you have clearly identified the parameters you need, the spatial and temporal extents and resolutions required, and the minimum accuracy needed, then you can search for the best inputs for your purpose.
By the way, NASA does offer a wide range of data sets but it is not the only source of information: the European Space Agency (ESA), as well as national space agencies of Japan, China, France, UK or Brazil (and many others) also have worthwhile offerings. You will find useful links to those data sources by searching the web.
Best regards, Michel.
  • asked a question related to Library
Question
4 answers
For datasets like
1. BCI motor imagery EEG signals (e.g., BCI Competition IV), and
2. the SEED dataset,
which Python library is best suited for processing and feature-extraction tasks?
Relevant answer
Answer
Thank you Shima Shafiee , Ignas Laude , Mohamed Alseddiqi for the helpful insights!
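Whichever library one settles on (MNE-Python and SciPy are common choices for this kind of work), a typical motor-imagery feature is spectral band power. Below is a minimal NumPy-only sketch of the idea on a synthetic signal; the 250 Hz sampling rate and band edges are illustrative assumptions, not tied to either dataset:

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within the frequency `band` (lo, hi) in Hz,
    estimated from a single-window FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

# Synthetic 10 Hz "alpha" oscillation sampled at 250 Hz (a typical EEG rate)
fs = 250
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)

alpha = band_power(eeg, fs, (8, 13))   # the 10 Hz component lands here
beta = band_power(eeg, fs, (13, 30))
print(alpha > beta)  # True
```

In practice one would use Welch averaging (scipy.signal.welch) on epoched, artifact-cleaned channels rather than a raw single-window FFT.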
  • asked a question related to Library
Question
22 answers
latest trends or topics
Relevant answer
Answer
Indeed, it is a good topic you are proposing. I only want to advise you to look into the status of the proposed library and the probable amount of information you may get from one library alone, from the point of view of whether it is sufficient to draw a conclusion.
  • asked a question related to Library
Question
2 answers
We use the NEBNext Library Quant Kit for Illumina to determine our Illumina library concentrations. Before taking a library to qPCR, we usually run it on the Bioanalyzer to get an idea of the concentration. Based on the Bioanalyzer result, we dilute the library down to 5,000 pM for qPCR. Most of the time, qPCR results will show libraries at around 5,000 pM (maybe 4,000-6,000 pM). In some cases, though, the qPCR concentration can be double or even triple the Bioanalyzer concentration. When this happens we QC the library by Qubit, and in most cases the Qubit concentration is similar to the Bioanalyzer concentration.
This makes it challenging for us to determine which concentration to use for loading the sequencer. If we use the qPCR concentration, the runs will be under-clustered. We are trying to understand why we would get higher concentrations from qPCR. It makes more sense if the qPCR concentrations were lower than Qubit/BA suggesting that the adapter ligation was not very successful.
Why do you think we would get such high qPCR concentrations? One thought is that there may be single stranded DNA in the library which is not detected by Qubit or BA, but qPCR is able to amplify. Curious to know your thoughts.
Thank you,
Karrie
Relevant answer
Answer
I also recently had this happen where our qPCR is double/triple our Qubit.
  • asked a question related to Library
Question
3 answers
I have a library of covalent compounds, and I want to run covalent docking for them using AutoDock 4. Is there any way to run multiple compounds at once? For example, is it possible to put all ligands in a single file, or is there a command that can run all of them in one go?
  • asked a question related to Library
Question
1 answer
Dear all,
I'm currently working on ChIP-Seq for a transcription factor. I prepared libraries using the Qiagen QIAseq Ultralow Input Library Kit, checked the ligation of adapters with the KAPA Library Quantification Kit for Illumina sequencing, and performed size selection with AMPure XP beads as described in the library kit protocol. The libraries were then checked on a Bioanalyzer High Sensitivity DNA chip.
The first replicate worked fine for me (some samples are included on the Bioanalyzer chips; sometimes the chip didn't run nicely, don't mind that). However, in the second replicate, strange larger peaks appear in the Bioanalyzer trace, and there are multiple peaks/bands instead of a normal distribution.
The shearing of both replicates was fine (done mechanically with a Bioruptor), and targets could be recovered, as checked by ChIP-qPCR. All samples that had been prepared in parallel looked weird (ChIPs and inputs), for both antibodies used, and in replicate 1 for antibody 3, although all other samples were fine. When I purified those samples in parallel with a re-purification of an old sample, the old sample still looked fine, so I assume it has to do with the library preparation itself. Adapters were not reused and were diluted 1:10, as I was using 5 ng of input DNA. The number of cycles was chosen according to the KAPA library quantification, and the minimum number of cycles was used. For the PCR reaction I prepared a master mix and added it to the samples; maybe that was a problem? Or do you think an enzyme could have gone bad? Is it just junk that got over-amplified, or could it be a problem with the adapter ligation? Or the simplest question: has anyone seen peaks like this before? Do you think it might be worth trying to sequence them nonetheless, or is it just junk? For the libraries with antibody 1: could it be that those are "just" over-amplified? And if so, could I try to sequence them, maybe after another round of size selection to remove the larger peaks?
The ChIPs were performed with primary cells (so I would be super happy if I could use any of the samples) using two different conditions in wt and KO mice, and the analysis therefore "just" needs the overlay between the different conditions (yes in wt, no in KO, etc.): only peak calling, nothing quantitative. Is there a chance the sample quality might be sufficient for this setting, at least with antibody 1? Might this fragmentation-into-single-peaks phenomenon get lost once I pool the libraries for sequencing?
Thank you so much in advance.
Relevant answer
Answer
Have you found an answer to this question? If so, can you explain it to me? I have the same issue.
  • asked a question related to Library
Question
1 answer
When I run autogrid4 it says: autogrid4: ERROR: Unknown receptor type: "Se" -- Add parameters for it to the parameter library first!
How do I handle this? Thanks.
Relevant answer
Answer
Hi,
First, you should add the Se atom parameters to the AD4_parameters.dat file.
Se parameters:
atom_par Se 4.21 0.291 14.000 -0.00110 0.0 0.0 0 -1 -1 4 # Non H-bonding
Save the modified .dat file into the folder with the autodock4 and autogrid4 executables (C:\Program Files (x86)\The Scripps Research Institute\Autodock\4.2.6).
Then add the line "parameter_file AD4_parameters.dat" (without the quotes) at the top of the .gpf and .dpf files.
Now AutoDock is ready for docking of Se-containing compounds :)
A similar question was also resolved here:
Good luck!
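For reference, after the change described in this answer the grid parameter file would begin with the parameter_file line; a sketch (npts, spacing, and the filenames are placeholders from a typical AutoGrid run, not values for any particular system):

```
parameter_file AD4_parameters.dat   # load the extended parameter set first
npts 60 60 60                       # number of grid points in x, y, z
gridfld receptor.maps.fld           # grid data file
spacing 0.375                       # grid spacing in Angstroms
```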
  • asked a question related to Library
Question
4 answers
In his name is the judge
Hi
I have to use a TSK fuzzy system in Python for my research.
Please recommend a library for TSK (not Mamdani) fuzzy systems in Python.
Also, if such a library exists, please point me to a source for learning it in Python.
wish you best
Take refuge in the right.
Relevant answer
Answer
Thank you, dear Kishore Bingi.
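For readers who want the TSK mechanics rather than a packaged library: the inference step is compact enough to sketch directly. Below is a minimal first-order TSK (Takagi-Sugeno-Kang) system in plain Python/NumPy; the two rules and their Gaussian membership parameters are invented purely for illustration:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_infer(x, rules):
    """First-order TSK inference for a single input x.
    Each rule is ((c, s), (a, b)): a Gaussian antecedent and a linear
    consequent y = a*x + b. The crisp output is the firing-strength-
    weighted average of the rule consequents."""
    w = np.array([gauss(x, c, s) for (c, s), _ in rules])
    y = np.array([a * x + b for _, (a, b) in rules])
    return float(np.dot(w, y) / w.sum())

# Two illustrative rules: "x is LOW -> y = 0.1*x" and "x is HIGH -> y = 2*x + 1"
rules = [((0.0, 1.0), (0.1, 0.0)),
         ((5.0, 1.0), (2.0, 1.0))]

print(tsk_infer(0.0, rules))  # dominated by the LOW rule, close to 0
print(tsk_infer(5.0, rules))  # dominated by the HIGH rule, close to 11
```

The weighted average of linear consequents is exactly what distinguishes TSK from Mamdani systems, which defuzzify an output fuzzy set instead.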
  • asked a question related to Library
Question
4 answers
I am currently designing a better version of my lentiviral construct, which I am using in reporter assays to test a library of regulatory elements by transduction. The element is cloned upstream of a minimal CMV promoter, and both drive eGFP. As a transduction control, I have BFP driven by a weak PGK promoter. I am experiencing a high background relative to signal, most probably coming from ectopic activation at active loci where the lentivirus integrates into the genome. I was wondering whether anyone can suggest if cloning an insulator would help minimize this ectopic effect and increase the signal-to-background ratio, and if so, which insulator (e.g., CTCF or another known sequence) and in which part of the construct?
Any help is appreciated.
Relevant answer
Answer
Hi Roberto
During my internship at Kinderspital Zurich, I was involved in work with a lentiviral vector that carried a UCOE element, which acts as an anti-silencing element or insulator preventing methylation-associated epigenetic silencing of the transgene. The link to the article is attached below:
  • asked a question related to Library
Question
4 answers
Hello, I'm Pary.
I am a PhD student in Library and Medical Information Science.
I want to choose a dissertation topic related to health literacy. Can you help me, or introduce me to faculties that offer this field in their PhD programs?
Thanks a lot.
Relevant answer
Answer
Have you considered exploring the link between computer and information literacy of healthcare professionals on the one hand and the development of patient health literacy on the other? You can do one research on computer and information literacy of healthcare professionals and another research on health literacy of patients. You can format the questions so that certain cause-and-effect relationships can be identified in the analysis of the answers. Good luck !
  • asked a question related to Library
Question
5 answers
For molecular docking analysis using AutoDock Vina, how can I energy-minimize hundreds of ligands from the ZINC15 drug library and also convert them to PDBQT?
Thank you very much!
Relevant answer
Answer
Of course, you can.
Extract your minimized ligands from the working-directory folder 'LIGANDS', and your protein in PDBQT format. For convenience, put them in the same folder and create a box/config .txt file.
You can then run docking with Vina from the command line, or with AutoDock4.
  • asked a question related to Library
Question
4 answers
By comparing some discrepancies between the results of Over Representation Analyses (one-sided Fisher exact test, a.k.a. hypergeometric test) performed with enricher() (from ClusterProfiler R library) and with other web tools such as MsigDB, I realized there is an unaddressed ambiguity (it was at least for me) in the definition of genes in the query list (eg. upregulated genes) and genes in the universe/background (attached image).
While other tools and general workshops suggest that k should be the complete query list and N the universe of measurable genes (e.g. the whole transcriptome for RNAseq), ClusterProfiler (I think the most widely used library for pathway analysis in R) restricts the analysis to only genes present in the annotation set in use.
That leads of course to generally larger p-values than what we would get with the conventional approach. I feel that restricting the analysis to only annotated genes is reasonable and more specific, but I think it's worth opening a discussion about that. Which approach do you usually use/recommend? Do you have any opinions to share about it?
P.S. I also opened a discussion on the GitHub page of ClusterProfiler (https://github.com/YuLab-SMU/clusterProfiler/discussions/478)
Relevant answer
Answer
Philippe Fort I probably didn't state it clearly in my initial post. The restriction of the background applied by clusterProfilers is not related to the expressed transcripts in the tissue of interest but to the genes with an associated pathway in the reference annotation set:
- MSigDB: 4383 genes associated with at least one pathway;
- Kegg: 5245 genes associated with at least one pathway;
- Reactome: 10646 genes associated with at least one pathway;
- Gene Ontology - Biological Process: 17872 genes associated with at least one pathway;
Stats extracted from the msigdbr (7.5.1) R library for the human species
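The effect discussed in this thread is easy to probe numerically. Below is a stdlib-only sketch of the one-sided hypergeometric test, showing that for a fixed overlap, shrinking the universe N (as happens when unannotated genes are dropped) yields a larger p-value; all counts are made up for illustration:

```python
from math import comb

def ora_pvalue(x, n, K, N):
    """One-sided over-representation p-value: the probability of drawing
    at least x pathway genes (out of K pathway genes in a universe of N)
    when sampling n query genes without replacement."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(x, min(n, K) + 1)) / comb(N, n)

# Same overlap under two universes: the full transcriptome vs only the
# genes carrying an annotation. A smaller N raises the expected overlap
# (n*K/N), so the same observed overlap becomes less surprising.
p_full = ora_pvalue(x=10, n=100, K=50, N=20000)
p_annotated = ora_pvalue(x=10, n=100, K=50, N=5000)
print(p_full < p_annotated)  # True
```

In a real comparison the query size n and pathway size K would also shrink when the universe is restricted, but the direction of the effect on p is the one shown here.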
  • asked a question related to Library
Question
2 answers
Hi,
I intend to genotype an organism (without a reference genome) using the double-digest restriction-site-associated DNA (ddRAD) sequencing method. I'm not sure how to choose a good pair of enzymes for preparing my library, or what criteria I should consider to make a good choice.
Thank you
Relevant answer
Answer
Hi Santoze,
There are a couple of ways to finalise the restriction enzymes.
1. You can try an in-silico digestion of a closely related species' genome if the genome of your species is not available.
2. You can also digest the DNA of your species using various combinations of enzymes and then check the size distribution (using a TapeStation or Bioanalyzer).
I recommend doing both of the above before finalising the enzyme combination.
You can try the combination of a rare (6-base) cutter and a frequent (4-base) cutter to see whether you get enough fragments in the desired size range.
Of course, you can also take guidance from the enzyme combinations frequently used in the literature on related species and test them on your species.
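Step 1 above (in-silico digestion) can be approximated in a few lines. The sketch below runs a double digest on a random sequence; it is simplified in that it cuts at the start of each recognition site and counts all fragments, whereas a real ddRAD analysis keeps only fragments flanked by one site of each enzyme (tools such as Biopython's Bio.Restriction, or the R package SimRAD, handle this properly). The EcoRI/MseI pair and the 300-500 bp window are illustrative choices:

```python
import random

def digest_sites(seq, site):
    """0-based start positions of every occurrence of a recognition site."""
    pos, hits = seq.find(site), []
    while pos != -1:
        hits.append(pos)
        pos = seq.find(site, pos + 1)
    return hits

def double_digest_fragments(seq, site_a, site_b):
    """Fragment lengths after cutting at every site of both enzymes
    (cut position approximated as the start of the recognition site)."""
    cuts = sorted(set(digest_sites(seq, site_a) + digest_sites(seq, site_b)))
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# Simulated 100 kb genome; EcoRI (GAATTC, 6-base) + MseI (TTAA, 4-base)
random.seed(1)
genome = "".join(random.choice("ACGT") for _ in range(100_000))
frags = double_digest_fragments(genome, "GAATTC", "TTAA")
in_range = [f for f in frags if 300 <= f <= 500]  # a typical ddRAD size window
print(len(frags), len(in_range))
```

Running the same count on the real (or a related) genome, for several enzyme pairs, gives a quick estimate of how many loci each pair would yield in your size-selection window.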
  • asked a question related to Library
Question
3 answers
Hello all,
TL;DR
I made a Python package that parses LifeSpec .FL files. It also installs a command-line tool for converting .FL files to .csv.
-----
We use a LifeSpec machine in our lab to perform TRPL and other measurements. It appeared that the only way to get data out of the .FL format was to use the F980 software to convert each file manually. This became tedious, so I created a Python package and command-line tool for converting them.
The Python package uses another package called `parse_binary_file` to parse the .FL file. You can then load the data from the .FL file directly into your Python script for analysis.
The command-line tool is installed along with the package. It converts .FL files to .csv and can do so in batch, with several options for how to output the data.
I thought this may be a useful tool for others, so wanted to share here. If you have any questions or comments the best place to post them is on the Issues page of the project's GitHub repo (https://github.com/bicarlsen/lifespec_fl/issues)
Relevant answer
Answer
Great effort, very nice.
  • asked a question related to Library
Question
11 answers
I am thinking of pursuing a PhD but am confused about choosing a good topic. What are today's most attractive and relevant topics in Library and Information Science, especially in the fields of ICT, AI, and robotics?
Relevant answer
Answer
In the 21st century, the central purpose of most types of library is no longer to provide access to information but to energize connection and creation. In light of this, data and web specialists are the right people to turn libraries into knowledge services centers but they need new orientations, skills, and what might be termed knowledge behaviors. Despite the fact that data and web management teams rarely comprise more than one person, the disproportionate value they can bring to libraries demands that they rally together. Any research on how data and web specialists in libraries might collaborate would be welcome.
  • asked a question related to Library
Question
1 answer
We have received 2 ml of pleural fluid and need to extract RNA for mRNA library preparation. Please suggest which method or kit to use for the extraction.
Relevant answer
Answer
You can use, for example, the Plasma/Serum Circulating and Exosomal RNA Purification Maxi Kit (Slurry Format, Norgen Biotek, Canada, cat. no. 50900). We used it here in a study of ascites:
First determine what type of material you are isolating: the whole pleural fluid, or the cell-free supernatant fraction. You can also separate the cells. For cell isolation, you can use TRI Reagent/TRIzol, mirVana, etc. For fluids, alternative isolation kits are available; for cell-free fractions you can use kits intended for plasma RNA isolation. Always make sure the kit also recovers the small RNA fraction.
Good luck.
Luděk
  • asked a question related to Library
Question
4 answers
I am currently trying to add the results of my ancestral state estimation to my phylogenetic tree with ggtree's geom_nodelab. However, when I execute my command I always get the error:
Error in `check_aesthetics()`: ! Aesthetics must be either length 1 or the same as the data (33): label
I see that it wants values for the tips as well (17 tip labels + 16 nodes). However, I just want the ancestral states on the internal nodes, without touching the tip labels. Does anyone have an idea? I have spent a lot of time on this seemingly easy problem and am at the end of my wits.
Reproducible example:
library(geiger)
library(phytools)
library(ggtree)
tree <- sim.bdtree(b = 0.1, d = 0, stop = "time", t = 20, seed = 12345)
cont.trait.mode <- data.frame(trait= runif(length(tree$tip.label)))
rownames(cont.trait.mode) <- tree$tip.label
anc_trait <- fastAnc(tree, cont.trait.mode$trait, CI= T)
anc_trait_df <- as.data.frame(anc_trait$ace)
ggtree(tree) + geom_nodelab(aes(subset=!isTip,label = anc_trait_df$`anc_trait$ace`))
Relevant answer
Answer
Hi Xavier Navarri,
thanks for your reply. I thought about putting in "dummies" for the tip labels, but somehow I couldn't believe that this make-do approach is the only option for selectively labelling nodes.
  • asked a question related to Library
Question
5 answers
I would like to know how to calculate the return on investment (ROI) of a particular book/journal in an academic library context when it has never been used (or circulated among the users by the library). Can you suggest essential criteria, software, or statistical tools for calculating the ROI of a particular book?
Relevant answer
Answer
OK, what is your definition of ROI in your research? That might help in understanding your question.
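On definitions: under the plain financial definition, ROI = (value generated - cost) / cost, an item that was never circulated generates no direct use value, so its ROI is -100% by construction; the hard part is estimating the value term, which library ROI studies typically approach via contingent valuation or cost-per-use rather than a formula. A trivial sketch with hypothetical figures:

```python
def roi(value_generated, total_cost):
    """Return on investment as a fraction: (value - cost) / cost."""
    return (value_generated - total_cost) / total_cost

# Hypothetical figures: a book that cost 50 (purchase + processing +
# shelving) and was never circulated generates no direct use value.
print(roi(value_generated=0.0, total_cost=50.0))    # -1.0, i.e. -100%
print(roi(value_generated=120.0, total_cost=50.0))  # 1.4, i.e. +140%
```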
  • asked a question related to Library
Question
1 answer
Hi all,
1. How do you identify whether you have a bubble library or just insufficient size selection?
2. How do you deal with a bubble library? Would reconditioning PCR work? Is it recommended to use a single primer, or both forward and reverse primers?
3. How do you quantify a bubble library for library pooling? qPCR? How?
Thanks.
Lux
Relevant answer
Answer
Heat the libraries for 3-5 min at 90 °C and recheck the QC. The concentration will decrease if they are bubble libraries.
  • asked a question related to Library
Question
4 answers
While doing bio-panning, there was a significant increase in plaques from the first to the fourth round, but in the fifth round the count decreased drastically. Can anyone explain what the probable reason may be?
Relevant answer
Answer
Hi Satya, I believe the proportion of wild-type phages is increasing after round 4 in your case. However, there are three things you can check:
1. Try the next round of panning right after obtaining the freshly amplified eluate from the previous round. If this cannot be done on the same day, store the eluate as it is, without adding glycerol, at 4 °C (it stays fine for a week). I have found that glycerol stocks don't perform that well after thawing.
2. If you are using the same E. coli culture or re-inoculating your amplification cultures with it, try inoculating afresh from streak plates or from the original E. coli glycerol stock in each round.
3. Try using fresh X-gal/IPTG plates. If using plates stored at 4 °C, warm them for at least an hour.
Hope this helps.
  • asked a question related to Library
Question
3 answers
I wonder if there is a tool to export a large number of citations (more than 10,000) from Google Scholar at once into a reference library such as EndNote.
Relevant answer
Answer
From RG they can't be exported. Note also that the number of citations is not the same as the h-index.
  • asked a question related to Library
Question
1 answer
By industrial case studies I mean a full library with animations of different parts of mechanical equipment, such as motors, conveyors, pipes, compressors, etc.
I am actually working in Unity 3D but cannot find such libraries.
Thank you.
  • asked a question related to Library
Question
1 answer
Dear all,
I've recently processed some samples for ATAC-seq. The corresponding ATAC-seq library looks different (see picture: Bioanalyzer) from the expected profile. I was wondering whether I can still sequence it, or whether it will be too biased.
Thank you for your help
Best,
Karim
Relevant answer
Answer
Hi Karim! Did you get an answer? I ask because I got a similar profile with my ATAC sample recently. It looked like the nucleosome peaks were not very pronounced, and the only prominent peak came from an excess of index primer. I was advised not to proceed with that sample. It would be really helpful if you could share your views on this. Thanks!
  • asked a question related to Library
Question
1 answer
To all the brilliant scientists out there,
Recently I've been preparing single-cell DNA libraries for Illumina NGS, and there is an issue that has bothered me for weeks. I wonder whether anyone has encountered the same situation or understands what the problem is.
When I checked my library size on a Bioanalyzer, I noticed there were adapter dimers, so I tried a reduced AMPure XP bead ratio (from the recommended 0.8X twice to 0.7X twice) for the library clean-up, and found I could remove almost all of the empty adapters.
However, when I analyzed the sequencing data from these two libraries (0.8X twice vs. 0.7X twice), I found the mapping rates were 40% vs. 10%. I wonder whether the reduced bead ratio caused the loss of a good proportion of my fragments?
My expected length is ≥ 300 bp.
Thank you so much for all your help!
Relevant answer
Answer
You got replies in your other post related to this question.
And yes, a double size selection is possible in your situation.
  • asked a question related to Library
Question
4 answers
I have performed several assays with the Takara yeast two-hybrid system. I cloned a small 14 kDa protein as bait into pGBKT7 and used a commercial library from Takara as prey. In these assays I find preys/interactors that can activate the Gal4 system in the presence of any pGBKT7 construct (without bait, with the lamin negative control, and with my bait), but not with other constructs.
Bait (17 kDa) + prey: blue
Empty + prey: blue
Lam + prey: blue
Other baits (3, of different kDa): these do not grow
I found that many sticky, highly charged, or stringy proteins (excluding degradation pathways) can appear as artifacts. But none of these proteins show up in my other screens with different baits. I also checked whether it was due to protein size: no.
Does anyone have any ideas?
Does anyone have any ideas?
Relevant answer
Answer
Sorry, I'm talking about reporter plates. Cells in rich media grow as usual. All the preys that I assay for my protein are active with the empty plasmid or with negative-control plasmids carrying the binding domain. But if I try a different bait, the preys don't work. We think it is a false positive, but we don't understand why it doesn't occur with other baits.
  • asked a question related to Library
Question
3 answers
Hello,
I've made a Python library called PyScaps (https://pypi.org/project/pyscaps/) that allows one to analyze SCAPS-1D (https://scaps.elis.ugent.be) models and simulation results. I thought I would share it here in case others find it useful.
P.S.
Don't forget the `s` at the end of `pyscaps`, otherwise you will end up at a different project.
Relevant answer
Answer
Muhammad Ali Great :) If you find any issues please feel free to raise them on the GitHub page: https://github.com/bicarlsen/pyscaps/issues.
  • asked a question related to Library
Question
3 answers
What statistical test should I use if my independent variable is binary (Y/N, converted to 1/0) and my dependent variable is continuous (a score from 1-5)? I've tried logistic and linear regression, but neither seems right.
I'm doing it on R Studio.
The logistic regression was as follows:
library(readr)
CrateAvSRB2col <- read_csv("~/OneDrive - Newcastle University/DISSERTATION/CrateAvSRB2col.csv")
View(CrateAvSRB2col)
DataA=(CrateAvSRB2col)
head(DataA)
summary(DataA)
sapply(DataA, sd)
Model102<-glm(Crate~AvSRB, family= binomial, data=DataA)
summary(Model102)
anova(Model102, test="Chisq")
table(Model102)
## unable to form the table due to the error 'all arguments must have the same length' ##
I was unsure how to write up the logistic regression results, as they were not significant.
The linear regression was as follows:
lmCrate<-lm(AvSRB ~ Crate, data = DataA)
summary(lmCrate)
plot(DataA, pch = 16, col = "blue")
abline(lmCrate)
The scatter plot produced didn't look right because of the two columns of data for the binary 0 and 1.
Any help on which statistical test is right, and how to write up logistic regression results, would be much appreciated. Thank you!
Relevant answer
Answer
Hey Ellie Merson, you state in your question that your dependent variable is continuous, but just to clarify: are you implying that a value of 1.2 is possible? If not, a chi-square test is what I would use.
If so, just do linear regression. I would be happy to run your data in SPSS and tell you what I get, if that would be helpful to you.
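To complement the answer above: with a binary predictor and a continuous outcome, the slope of a simple linear regression is just the difference between the two group means, so Welch's two-sample t-test is the natural test. A stdlib-only sketch with invented score data (in practice R's t.test(score ~ group) reports the p-value as well):

```python
from statistics import mean, variance

def welch_t(group0, group1):
    """Welch's two-sample t statistic and degrees of freedom.
    With a binary predictor, the regression slope equals the difference
    in group means, so this comparison is the equivalent test."""
    m0, m1 = mean(group0), mean(group1)
    v0, v1 = variance(group0), variance(group1)
    n0, n1 = len(group0), len(group1)
    se2 = v0 / n0 + v1 / n1
    t = (m1 - m0) / se2 ** 0.5
    df = se2 ** 2 / ((v0 / n0) ** 2 / (n0 - 1) + (v1 / n1) ** 2 / (n1 - 1))
    return t, df

# Invented scores (1-5 scale) for the No (0) and Yes (1) groups
no_group = [2.1, 3.0, 2.5, 2.8, 3.2, 2.4]
yes_group = [3.5, 4.1, 3.8, 4.4, 3.9, 3.6]
t, df = welch_t(no_group, yes_group)
print(round(t, 2), round(df, 1))
```

The p-value then comes from the t distribution with the computed df (e.g., scipy.stats.ttest_ind with equal_var=False does all of this in one call).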
  • asked a question related to Library
Question
4 answers
I have a relatively small dataset (~160 observations) with a mix of data types, including continuous, nominal, and binary. I am trying to compute a continuous variable from that dataset (across its multiple data types) that maximizes the likelihood of a binary outcome X. Ideally, the algorithm would scale to datasets in the range of the ~1000s.
Relevant answer
Answer
Chris Hornung Quantitative variables are either discrete or continuous. A categorical variable, by contrast, has a limited number of unique groupings or categories, which may or may not be in a logical order. Gender, material type, and payment method are examples of categorical predictors.
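The task in the question, deriving a continuous score that maximizes the likelihood of a binary outcome, is essentially logistic regression: the fitted linear predictor is that score, and it scales comfortably to thousands of rows. Below is a dependency-free sketch with a toy dataset; in practice one would use scikit-learn or statsmodels, and one-hot encode the nominal features first:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression: learns weights so that
    sigmoid(w . x + b) approximates P(outcome = 1). The linear score
    w . x + b is the continuous variable that maximizes the likelihood
    of the binary outcome under this model."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy data: one continuous feature and one binary feature (nominal
# features would be one-hot encoded into extra 0/1 columns)
X = [[0.5, 0], [1.0, 0], [1.5, 1], [2.0, 1], [2.5, 1], [3.0, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
score = lambda x: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
print(score([0.5, 0]) < 0.5 < score([3.0, 1]))  # True
```

With ~160 observations, regularization (e.g., scikit-learn's default L2 penalty) is advisable to avoid overfitting the mixed feature set.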
  • asked a question related to Library
Question
3 answers
How do I get a deposit number from the National Library and Documentation House for my book?
Relevant answer
Answer
You can access it through the library OPAC.
  • asked a question related to Library
Question
3 answers
I am hoping to source an archive of laryngoscopy images for a project relating to the Cormack and Lehane classification system.
Are there any accessible resources for this?
Relevant answer
Answer
For biomedical images, visit https://openi.nlm.nih.gov/
  • asked a question related to Library
Question
2 answers
Good day everyone.
I have been doing some GRACE data processing in GEE, but from what I can tell, only the first mission's data (from 2002/04 to 2017/01) is accessible through the library for import.
Any recommendations for how I can access more recent data, from the GRACE-FO mission, for analysis in GEE?
Any feedback is greatly appreciated.
Best wishes.
CV
Relevant answer
Answer
You can download the data from https://grace.jpl.nasa.gov/
and then import it into Google Earth Engine.
  • asked a question related to Library
Question
2 answers
Could anyone help me download an old PhD thesis without paying? Or can anybody help me get this thesis (link provided below)?
Relevant answer
Answer
Some dissertations are digitised, some are not; some are available and free to download, some are restricted. This one, I guess because it is an old one, is neither: it says anyone interested can order a printed copy for a fee.
My advice: ask someone who can actually visit the university and scan it there.
  • asked a question related to Library
Question
1 answer
For transcription factors in CUT&Tag, is there any way to check whether a particular antibody works before going for sequencing? Has anyone tried qPCR after library preparation? What antibodies have you validated for CUT&Tag in your experiments?
  • asked a question related to Library
Question
1 answer
Hello,
Does anyone have experience with the Zymo-Seq RRBS Library Kit and subsequent Illumina sequencing? Part of the project I work on is methylation profiling of brain tissues. I wanted to use Agilent's SureSelectXT Methyl-Seq Library Preparation Kit for targeted methylation sequencing, but it is much more expensive, and I don't think that for our purposes it is necessary to target hotspot regions. I would like to know whether you are satisfied with the Zymo kit and how you sequence the libraries. The manufacturer recommends at least 30 million reads and a read length > 50 bp, so would Illumina's NextSeq 500/550 Mid Output Kit v2.5 (150 cycles) be sufficient for four libraries?
Relevant answer
Answer
Dear Madam - No Experience.
  • asked a question related to Library
Question
5 answers
Dear all,
I am writing a paper and I have some issues with citations from Mendeley in Word.
Every citation in the text itself updates with no problem (currently working on macOS Big Sur, Word version 16.58). The problem appears in a table where I am citing various vegetation-index formulas: these references are not updated.
I tried
- cmd+A, fn+F9,
- updating the Mendeley plug-in in Word ("Update From Library"),
- selecting one citation directly in the table via the three dots (which appear when placing the cursor on the reference number) and pressing fn+F9.
What do I overlook? Do you have any suggestions?
Thank you in advance!
Melanie
Relevant answer
Answer
Thank you all for your answers!
I solved it for the moment semi manually...
1) mark the reference in the table
2) in the Mendeley plugin there appears a blue field named "edit citation" -
click on the underlined text (in this blue field) "manually override the citation"
3) click on the green button "revert to default"
4) save changes for the specific reference
greetings,
Melanie
  • asked a question related to Library
Question
2 answers
For a library prep, I need to generate 5-100 bp DNA fragments from genomic DNA. I do have a sonicator available but no idea how long I need to sonicate the DNA. Any protocol suggestions?
Relevant answer
Answer
Thank you for your answer. Yes, I really need small 5-100 bp DNA pieces; most protocols are designed to produce larger pieces, which is why I was hoping for suggestions. Seems like I simply have to try.
  • asked a question related to Library
Question
1 answer
Is this library okay to use for subthreshold simulations, or should I use other libraries? The supply for that library is 1.8 V/3.3 V, which is the standard supply. The library is made by TSMC.
Relevant answer
Answer
yes you can broda.:)
  • asked a question related to Library
Question
2 answers
I have a PI controller (E-871) and I want to control it with Python. I found one library, PIPython, but I didn't find any relevant documentation for it. If somebody is working with it, please feel free to share your experiences and documentation (if any).
Thank you.
Relevant answer
Answer
Hi,
I looked at your library, and indeed the manufacturer does not include any documentation, but in the "samples" section you can find a few examples of the functionality the library provides. The library is provided by the manufacturer Physik Instrumente and the code is under a proprietary licence. If you want more help, could you describe your project's needs?
Kind regards,
  • asked a question related to Library
Question
4 answers
Dear colleagues, I am looking for a user-friendly tool to conduct some sensitivity analysis from simulation-based experiments. It would be good if it includes procedures for specifying the parameters to explore, to compute and generate samples and to evaluate the sensitivity with sophisticated modern methods like the Morris method.
I know about two of them already: SimLab and Dakota. However, SimLab seems to not be available any more (https://joint-research-centre.ec.europa.eu/sensitivity-analysis-samo/simlab-and-other-software_en) and I cannot find an alternative download site. This tool was my preferred one. I also know about a Python library, SALib. Any other ideas and suggestions?
Relevant answer
Answer
Any non-Matlab based solutions ? :-) My university does not provide a license.
  • asked a question related to Library
Question
3 answers
Hello researchers,
I am modeling a reinforced concrete structure using CYPECAD. Is there a possibility to perform a pushover analysis of my structure?
Also, does a shape-memory-alloy material exist in the CYPECAD library?
Relevant answer
Answer
Interesting... please share the best answer you find.
  • asked a question related to Library
Question
5 answers
Can anyone suggest the best way to find the accurate execution time of math.h library functions such as sqrt(), log(), log10(), sin(), cos(), etc.?
Relevant answer
Answer
Although this question is a little old, I have something to contribute.
Here's a code snippet (time.h and math.h need to be #included):
clock_t start = clock();
double x = sqrt(125.0); // or any other function
clock_t finish = clock();
double exec_time = (double)(finish - start) / CLOCKS_PER_SEC;
Note that a single call usually finishes in far less than one clock tick, so for a measurable result call the function in a long loop and divide the elapsed time by the iteration count.
If you are using a Unix variant, then from the command prompt within the shell you may run:
$ time ./myexec
where myexec is the name of the executable file.
The Unix time command prints the time spent by the executable as real time, user-mode CPU time and system-mode CPU time.
  • asked a question related to Library
Question
4 answers
I intend to do CRISPR screening on a cell line made from HEK293T cells, and I have the whole-genome CRISPR library (Brunello) plasmid. This plasmid has 17,441 sgRNAs targeting almost 900 non-essential genes, but it doesn't have Cas9. Can I make viral particles with this plasmid and use them for CRISPR screening without co-infection with a Cas9 plasmid? Do wild-type HEK293T cells have enough Cas9 to run the CRISPR screening?
Relevant answer
Answer
If you read the methods of Sanson et al., Nat Commun 2018 Dec 21;9(1):5416, doi: 10.1038/s41467-018-07901-8, you will find that they introduced Cas9 with a lentivirus, making a stable cell line before using the Brunello pooled library on the cells:
Lentiviral pLX_311-Cas9: SV40 promoter expresses blasticidin resistance; EF1a promoter expresses SpCas9 (Addgene 96924).
  • asked a question related to Library
Question
5 answers
I want to know whether it is possible to collect Twitter data using the twint library; if it is, please give me details of the process.
Relevant answer
Answer
The tool is already available here, but it has faced some problems recently.
  • asked a question related to Library
Question
3 answers
How best can you set up your method using GC-MS to scan the NIST library for unknown compounds in a sample matrix?
Relevant answer
Answer
  • asked a question related to Library
Question
3 answers
I am doing clustering on the cloud using AWS to check the clustering time. I used two different libraries: SageMaker (a library provided by AWS) and the sklearn library that I downloaded onto EC2. I got totally different results from each; the sklearn library on EC2 gave a much lower clustering time. I am wondering which one to use in my project and why, considering that I need the clustering time to be as low as possible.
Relevant answer
Answer
You can use AWS SageMaker to train and deploy a model using custom scikit-learn code, so you are not forced to choose the built-in algorithm. (The SageMaker built-in you compared against is an unsupervised algorithm for determining topics in a set of documents; it doesn't use example data with answers during training.)
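Whichever library you pick, measure both under identical conditions. Below is a minimal, stdlib-only sketch of a fair timing harness; the naive 1-D k-means here is just a self-contained stand-in for your actual sklearn or SageMaker clustering call, not either library's implementation:

```python
import random
import time

def run_clustering(points, k, iterations=10):
    """Stand-in for the real call (e.g. sklearn's KMeans.fit):
    a naive 1-D Lloyd-style k-means so the example is self-contained."""
    centers = points[:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster (keep old if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

random.seed(0)
data = [random.uniform(0, 100) for _ in range(1000)]

start = time.perf_counter()            # high-resolution wall-clock timer
centers = run_clustering(data, k=3)
elapsed = time.perf_counter() - start
print(f"clustering took {elapsed:.4f} s")
```

Repeat the measurement several times and take the median: a single run is dominated by warm-up effects (and, on SageMaker, by provisioning overhead), which may explain part of the gap you saw.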
  • asked a question related to Library
Question
4 answers
Hi,
I am trying to use adegenet to determine the optimal number of k in my data. However, I am getting a BIC graph that is constantly increasing and a PCA graph showing that each PC explains very little variance. I have attached the graphs.
Also, I am wondering if it has something to do with my structure file. My structure file codes for 1-4, with missing data as -9. In my genind file it has NA, 0, 1, and 2 (subset attached).
I am using ddRAD data that has been assembled using Stacks. I have only used 1 random SNP from each locus. I then filtered the data using PLINK, particularly --min_maf 0.01 and --geno 0.2.
I have tried using the unfiltered data and I get the same result. I have tried filtering the data to exclude any SNP that has a single missing data point and it is the same. Currently I am running Structure, and the file runs in that.
install.packages("adegenet", dep=TRUE)
install.packages("ade4", dep = TRUE)
install.packages("hierfstat", dep = TRUE)
install.packages("genetics", dep = TRUE)
setwd("\\Users\\morri\\Desktop\\Buccinum_Data\\Methods\\Adegenet")
library(adegenet)
library(ade4)
library(hierfstat)
library(genetics)
whelk <- read.structure("whelkm3filtered.stru", n.ind = 141, n.loc = 3865, onerowperind = FALSE, col.lab = 1, col.pop = as.integer(2), col.others = 0, row.marknames = 1, NA.char = "-9", quiet = TRUE)
whelk
/// GENIND OBJECT /////////
// 141 individuals; 3,865 loci; 7,730 alleles; size: 6.2 Mb
// Basic content
@tab: 141 x 7730 matrix of allele counts
@loc.n.all: number of alleles per locus (range: 2-2)
@loc.fac: locus factor for the 7730 columns of @tab
@all.names: list of allele names for each locus
@ploidy: ploidy of each individual (range: 2-2)
@type: codom
@call: read.structure(file = "whelkm3filtered.stru", n.ind = 141,
n.loc = 3865, onerowperind = FALSE, col.lab = 1, col.pop = as.integer(2),
col.others = 0, row.marknames = 1, NA.char = "-9",
quiet = TRUE)
// Optional content
@pop: population of each individual (group size range: 6-16)
@other: a list containing: X
#finds K, keep all PCs first, and then select number of clusters based on lowest BIC.
clusters <-find.clusters(whelk, max.n.clust=40)
table(pop(whelk), clusters$grp)
#visual representation of how actual groups (”ori”) compare to o inferred groups ("inf")
table.value(table(pop(whelk), clusters$grp), col.lab=paste("inf", 1:12),
row.lab=paste("ori", 1:12))#1:X as I have tried multiple K, but do not want to pick one to "suit" me.
Relevant answer
Answer
But why do we get such a disparity in the number of clusters when we use the Structure software and the adegenet package to calculate the maximum number of clusters?
  • asked a question related to Library
Question
7 answers
What does sustainability in the field of LIS (Library and Information Science) mean to you? Please provide a one-sentence answer. No web links, no copy-pasting, kindly please.
Relevant answer
Answer
The maximum use of library information resources for a longer period of time without degrading the environment.
  • asked a question related to Library
Question
1 answer
We are unable to find control inhibitors for the proteins given below. If anyone knows them, please let me know, along with the source you used to find them (e.g. literature, research papers, etc.).
1. P01009 - Alpha-1 antitrypsin
2. P01834 - Immunoglobulin kappa constant
3. P07355 - 5LPU - Annexin A2
4. P07737 - Profilin-1
5. P27824 - Calnexin
6. P01011 - Alpha-1 antichymotrypsin
7. P01857 - IGHG1_Human
8. Q5VTE0 - Putative elongation factor 1-alpha-like 3
9. P10599 - Thioredoxin
Relevant answer
Answer
Consider the functions of the proteins on your list. What do you mean by an inhibitor of, for example, alpha-1-antitrypsin, which is a protein that inhibits the enzyme trypsin? It is not itself an enzyme. In most of the cases, the inhibitor would have to inhibit a protein-protein interaction. Unlike enzyme inhibitors, such inhibitors are difficult to find and may not exist.
  • asked a question related to Library
Question
4 answers
fLoRa is a library based on the INET framework in OMNeT++
Relevant answer
Answer
Can I contact you ?
  • asked a question related to Library
Question
2 answers
Hello,
I'm using GARCH code, where data is a file with 204 values and train is a test sample of size 50 with a shift of +1 at each step (25 columns and 178 rows).
To make a prediction, I'm using mod_fitting=ugarchfit(train[(90:96),],spec=mode_specify,out.sample=20) and
forc=ugarchforecast(fitORspec = mod_fitting,n.ahead=20), but I get only one column as output, whereas with train[(90:96),] I would like to get 7 columns as a result.
So at the moment I need to shift and change the number manually: train[(9),], train[(8),], train[(20),], ...
Could you tell me please, is it possible to create a dataframe or something to get a result with multiple columns?
Thank you very much for your help
Code is below:
library(forecast)
library(fGarch)
library(timeDate)
library(timeSeries)
library(fBasics)
library(quantmod)
library(astsa)
library(tseries)
library(quadprog)
library(zoo)
library(rugarch)
library(dplyr)
library(tidyverse)
library(xts)
#y <- read.csv('lop.txt', header =TRUE)
data <- read.csv('k.csv')
a <- data[,1]
mi<-a
shift <- 50
S <- c()
for (i in 1:(length(mi)-shift+1))
{
s <- mi[i:(i+shift-1)]
S <- rbind(S,s)
}
train<-S
mode_specify=ugarchspec(mean.model=list(armaOrder=c(0,0)),variance.model = list(model="gjrGARCH",garchOrder=c(1,0)),distribution.model='sstd')
mod_fitting=ugarchfit(train[(90:96),],spec=mode_specify,out.sample=20)
mod_fitting
train[(2),]
forc=ugarchforecast(fitORspec = mod_fitting,n.ahead=20)
Relevant answer
Answer
Sorry Valeriia, can you better explain the data and goal? Does the "train" contain a single time series (i.e. it is a vector) or many (i.e. it is a matrix)? For example, I see that you select 6 rows and all the columns (hence I think that train is a matrix with many time series) for fitting the GARCH model. First, I am worried that 6 rows are not enough to obtain an accurate estimation. Second, only one column (a single time series) should be selected if you fit a univariate GARCH model . Third (maybe it is my fault), I did not get what you mean by "I need to shift and change a number manually train[(9),train[(8),train[(20),...".
  • asked a question related to Library
Question
2 answers
I have purchased a lentiviral CRISPR library. However I wish to replace the marker gene in the library backbone with one of my own. I'm trying to decide what the best strategy to do this is:
1) Pick one colony of the library prep, clone in the new marker gene, then use this as the backbone to clone back in the sgRNA library.
2) Make a large library prep with the original library, and then do a large-scale cut and paste back in of my new marker gene. This would involve running the whole library on a gel and I have concerns about efficiency of the process, but would be a lot less steps.
Any practical advice/ resources I should check out please?
Relevant answer
Answer
Hi, in theory both methods can work, though I agree that option 1 is better; make sure that there is no carryover of the "library" portion when you replace the selection marker, which is not an easy task to verify. A better course of action would be to obtain the original backbone (which should be available), replace your selectable marker, and then go for the transfer. Another critical aspect is how you will "transfer" your library back and ensure that you have as little empty vector as possible (Golden Gate, Gibson, etc.), which will depend on the library itself, and that you have no vector carryover.
The bigger problem is library bias/shift/loss of representation, regardless of which way you proceed. This is because by the time you re-make your library, it will have gone through 3 or 4 (and possibly more) amplification steps since its original creation by synthesis (a combination of bacterial amplification and/or PCR), and each amplification skews the library in various ways. Unfortunately, the "best" way would be to re-synthesize and clone into your new backbone, but that requires a more substantial investment up front. That being said, it is what it is, and the important step is to determine your library bias and verify that it is acceptable, which will require a higher depth of sequencing.
  • asked a question related to Library
Question
4 answers
I've just installed Matlab 2021 b to run a pre-written code and when I try to run it (the only thing I've added to the code is the file location link) Matlab gives me this error:
"Unable to open the file because of HDF5 Library error. Reason:Unknown"
Does anybody have any idea of what is the problem?
Thank you,
Giulia
Relevant answer
Answer
Dear Giulia:
hdf5read is not recommended. Use h5read instead.
data = hdf5read(filename,datasetname)
attr = hdf5read(filename,attributename)
[data, attr] = hdf5read(...,'ReadAttributes',BOOL)
data = hdf5read(hinfo)
[...] = hdf5read(..., 'V71Dimensions', BOOL).
For more information you can benefit from this link:
#####################
Also you can benefit from this valuable Links:
1- Importing HDF5 Files:
2- Read data from HDF5 datasets:
Best regards....
  • asked a question related to Library
Question
4 answers
I am modeling the propulsion system of a ferry with a hybrid fuel cell and battery. The ferry requires 300 kW for propulsion and 100 kW for electric power. There are two shafts, two induction motors and DC-DC converters. The problem is that the Simulink library has a 50 kW PEM fuel cell.
Relevant answer
Thank you, Anil. As you understand, I am a beginner in Simulink. Can you suggest a really applicable paper describing the modeling of parallel fuel cell systems? After modeling the FC and battery, motor and converters, I will model the energy management strategy.
  • asked a question related to Library
Question
2 answers
Test #2: Fortran + C + NetCDF + MPI
Test 1 was successful, but while doing Test 2 I got the following error:
bash: mpif90: command not found...
[shantij@master1 ~]$ cd TESTS
[shantij@master1 TESTS]$ mpif90 -c 02_fortran+c+netcdf+mpi_f.f
bash: mpif90: command not found...
[shantij@master1 TESTS]$ mpiifort -c 02_fortran+c+netcdf+mpi_f.f
bash: mpiifort: command not found...
[shantij@master1 TESTS]$
I have also attached a screenshot showing that I have successfully installed the MPICH library.
How do I fix the "command not found" error?
Relevant answer
Answer
  • asked a question related to Library
Question
1 answer
Dear colleagues,
If anybody works with neural networks, could you please check my loop for the test sample?
I have 4 sequences (with the goal of predicting prov; monthly data, 22 data points in each sequence) and I would like to construct the forecast for each next month using a training sample size of 5 months.
That means I need to shift by one month each time, with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I should get 17 columns as the output result.
The loop is:
shift <- 4
number_forecasts <- 1
d <- nrow(maxmindf)
k <- number_forecasts
for (i in 1:(d - shift + 1))
{
The code:
require(quantmod)
require(nnet)
require(caret)
prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
df=data.frame(prov,temp,soil,rain)
mydata<-df
attach(mydata)
mi<-mydata
scaleddata<-scale(mi$prov)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
go<-maxmindf
forecasts <- NULL
forecasts$prov <- 1:22
forecasts$predictions <- NA
forecasts <- data.frame(forecasts)
# Training and Test Data
trainset <- maxmindf()
testset <- maxmindf()
#Neural Network
library(neuralnet)
nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)
nn$result.matrix
plot(nn)
#Test the resulting output
#Test the resulting output
temp_test <- subset(testset, select = c("temp","soil", "rain"))
head(temp_test)
nn.results <- compute(nn, temp_test)
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
}
minval<-min(x)
maxval<-max(x)
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
as.data.frame(Map(denormalize,results,minvec,maxvec))
Could you tell me please, what I can put in trainset and testset (using the loop), and how to display all the predictions in a loop so that the results are shown with a shift of one, with a test sample of 5?
I am very grateful for your answers
  • asked a question related to Library
Question
6 answers
Dear experts,
I am new to fMRI and I am working with rat fMRI data. I am performing the preprocessing steps in FSL (FMRIB Software Library) and, since it is designed for the human brain, I must scale up the voxel size of the rat brain image by 10.
I know that I probably must work with scl_slope and scl_inter in the NIfTI header, and I used the fsledithd command to change scl_slope to 10 (it is currently set to 1). But when I change scl_slope in the NIfTI header, the image doesn't change (the voxel size remains unchanged). I think I must use the fslcreatehd command to apply these changes to my image, but I don't know how to do that. If this procedure is right, please tell me how to do it.
Also, I edited the voxel size directly with fsledithd (editing pixdim1, pixdim2 and pixdim3 instead of scl_slope and scl_inter) and it seems to work, but I don't know whether it is correct to edit those parameters instead of scl_slope.
Thank you.
Relevant answer
Answer
Thank you for your responses.
I found that in FSL we can change the NIfTI header info using the fsledithd command, and then change the dx, dy, and dz parameters (x10) as well as the sto_xyz matrix.
  • asked a question related to Library
Question
5 answers
Basically, I am trying to get the latitude and longitude to visualize on a map using an address (zip code). With the Nominatim library, I keep getting certificate-verification errors. What do I put under the user_agent parameter?
Relevant answer
Answer
geopy is a Python client for a number of well-known geocoding web services. Using third-party geocoders and other data sources, geopy makes it simple for Python developers to get the coordinates of addresses, cities, countries, and landmarks all around the world.
The steps are:
1. Install geopy with pip.
2. Import Nominatim from geopy.geocoders.
3. Import RateLimiter from geopy.extra.rate_limiter.
4. place = geocode.reverse((lat, long))
5. Inspect place.raw.
6. zipcode = place.raw['address']['postcode']
After we've created the address column, we can begin geocoding:
#1 - We begin by delaying our geocoding by one second between each address.
#2 - Using the geocoder we created, add a df['location'] column.
#3 - Finally, we can build a single tuple column with latitude, longitude, and altitude.
  • asked a question related to Library
Question
1 answer
Hi everyone,
I want to simulate different EnergyPlus models with different occupancy profiles from Excel using Python. Could you suggest a library for doing so, and in which format should I import my occupancy profiles? As of now, the Excel sheets have activity data such as sleeping, cooking and so on, and I have different IDF files with the default 24/7 occupancy schedule from DesignBuilder.
Thanks in advance!
Divyanshu
Relevant answer
Answer
Hey Divyanshu,
We have an open source platform BESOS developed at the Energy Systems and Sustainable Cities lab at UVic for running E+ parametric simulations using Python. You can modify any parameter of your occupancy profile in the IDF within the platform. Let me know if that is something you are after.
  • asked a question related to Library
Question
1 answer
How can I have water-LiBr mixture properties in MATLAB library?
Relevant answer
Answer
One possibility would be to link Matlab with Refprop. More information can be found in the following sources:
GitHub - jowr/librefprop.so: Create a shared library from the Fortran sources provided by Refprop from NIST. This project provides an alternative to the refprop.dll that comes with the software. Please use the official instructions if possible
  • asked a question related to Library
Question
1 answer
I find myself in a predicament that I'm not sure how to resolve:
I have managed to isolate a particular multi-cellular structure from postmortem human brain tissue with the intention of isolating RNA from that structure and building libraries for RNA-sequencing.
Reagents and kits used:
Single Cell RNA Purification Kit from Norgen
RNase-Free DNase I Kit from Norgen (on column DNase treatment)
SMARTer Stranded Total RNA-Seq Kit v3 - Pico Input Mammalian library kit from Takara
So far, RNA extraction from that structure has been successful and so has library building, with one issue: there is a second, consistently present, smaller peak at a larger size than the library. So I have the recurring issue of gDNA (despite performing DNase treatment) and/or over-amplified products in the library traces. No matter how I adjust the RNA input and PCR cycle number, a second peak keeps cropping up in the library traces.
I think I've managed to reduce the size of the second peak as much as I can, to the extent that I don't think my libraries are over-amplified, and whatever it is, it is too large relative to my library to be sequenced (see attached file Takara V3 Library Kit Optimization Conditions 6, specifically well B1). Would such a library be adequate for sequencing? I have received the criticism that my desired library peak may also contain gDNA - how likely is this, given the shape of the trace? I know some genomic contamination is inevitable, but I'm hoping to keep it as low as possible.
Side note for those who don't work with postmortem brain: lower RINe, lower yields in general, and lower quality "everything" is to be expected. So I’m also concerned that lowering amplification more will not be sufficient for a number of lower quality samples.
Any advice would be greatly appreciated!
Relevant answer
Answer
Hi! I'm not certain about your reagents and procedures. I've never used Norgen; is it similar to Qiagen? That being said, you can still get good information if you have an appropriate relative-comparison analysis method with a good number of biological replicates and a matching number of control samples. It is possible to remove, or mask, the sequences that correspond to contaminating RNA, like ribosomal RNA, at the alignment stage. From the count matrix for the gene subset, if you then do PCA and the differences between control and sample are larger than the differences between the replicates, you'll get some answers. From this you can develop methods for refinement. There are probably some alternative RNA cleanups that will help, but if your intended analysis is "profiling", you'll have bigger downstream problems than simple extraction and library production. Good luck!!
  • asked a question related to Library
Question
1 answer
For instance, in a research to do with knowledge management in the field of library and information science.
Relevant answer
Answer
Hi Aminu,
Yes, you can do that. But why?
The question is why would you want to restrict yourself to one single view. If you wish to make a significant contribution to the body of KM knowledge, you should strive to look at the issue from many different angles and find the new way to integrate these.
I have attached for you a chapter from my book that may be of interest to you.
  • asked a question related to Library
Question
5 answers
System information
  • OS Platform and Distribution (e.g., Linux Ubuntu 18.04): Ubuntu 20.04
  • Python version: 3.6
  • Installed using virtualenv
  • CUDA/cuDNN version: 11.5 / 8.1.0.77
  • GPU model and memory: RTX 3090 24GB nvidia driver 460.39
  • TensorFlow version: 2.4.0 pip install tensorflow-gpu==2.4.0
Describe the problem
Installed cuda 11.2 and cudnn 8.1.0.77. Faced the following problem when I run train.py
Could not load dynamic library 'libcupti.so.11.0'; dlerror: libcupti.so.11.0: cannot open shared object file
Solved the problem
  • List lib files on /usr/local/cuda-11.2/extras/CUPTI/lib64/lib*
$ ls /usr/local/cuda-11.2/extras/CUPTI/lib64/lib*
  • I could not find libcupti.so.11.0. The other files were there, such as libcupti.so, libcupti.so.11.2, ...
  • Create a symbolic link between libcupti.so.11.2 and libcupti.so.11.0 using the command 'sudo ln -s':
$ sudo ln -s /usr/local/cuda-11.2/extras/CUPTI/lib64/libcupti.so.11.2 /usr/local/cuda-11.2/extras/CUPTI/lib64/libcupti.so.11.0
This fixed the problem for me
Relevant answer
Answer
Bro, look at the installation requirements. If you're installing tensorflow-gpu version 2.4.0, then you should use CUDA 11.0 and cuDNN 8.0. If you install a different version, it will cause errors.
  • asked a question related to Library
Question
9 answers
Dear all,
I'm trying to reproduce the following paper, but using Quantum Espresso instead of CASTEP:
Unfortunately, I couldn't find norm-conserving pseudopotentials (PBE) for some of the atoms in the QE library. I'm using the PPs from PseudoDojo to perform the calculation, but they are "Optimized Norm-Conserving Vanderbilt Pseudopotentials", and I think the differences I'm finding in the DOS and PDOS could also be explained by the PP used (CASTEP generates "normal" NC PPs).
Could someone help me to find these "normal" NC pseudopotentials?
Relevant answer
Answer
There are more QE pseudopotentials here:
  • asked a question related to Library
Question
1 answer
Hello, I am intending to screen a library of compounds in an in vitro enzymatic assay. The target enzyme follows a bi-bi mechanism. The assay development phase is complete, and the kinetic parameters for both substrates have been determined. I am looking for a rational approach to choosing the concentrations of both substrates and compounds that satisfies the following criteria:
- Inhibition-mechanism-blind design: the mode of inhibition, and which substrate the inhibitor competes with, are unknown.
- Minimize the compound concentration to a reasonable level that can still pick up any possible inhibitory mechanism.
Successful compounds will later be used to identify hits with dose-response curves and IC50s.
Are there any recommendations/guidelines followed in pharma for dealing with this? Is there an upper limit for a hit to be accepted?
Thanks
Relevant answer
Answer
In my experience, the usual concentration of test compounds from a generic compound library used in high-throughput screening to search for enzyme inhibitors in an in vitro assay is 10 µM. For fragment libraries, a higher concentration, such as 100 µM may be used because the molecules are smaller and simpler, overall, than the molecules in generic screening libraries.
Going to higher compound concentrations may seem appealing, but the result is not necessarily better because you run into increasing problems with compound insolubility and interference with the assay readout as you increase the compound concentration.
As for the substrate concentrations, there is no "right" answer. There are trade-offs. Since you know the kinetic mechanism and the substrate kinetic parameters, you can calculate the ratio IC50/Ki using the appropriate Cheng-Prusoff relationship for inhibitors competitive with one substrate or the other as you vary the substrate concentrations. This ratio can be used as a measure of the sensitivity of the assay to find inhibitors. (For pure non-competitive inhibitors, the substrate concentrations don't affect the sensitivity.) I suggest you prepare a spreadsheet with columns for [A], [B] and IC50/Ki. Vary A and B and calculate IC50/Ki.
Typically, the lower you make one substrate concentration, the more sensitive the assay is for being inhibited by compounds competitive with that substrate (lower IC50/Ki), but the less sensitive it is for finding inhibitors competitive with the other. You can try to find a pair of substrate concentrations that achieve a balance.
To get around this issue, you could run two separate screens, if you can afford to, with one optimized for finding inhibitors competitive with A and the other optimized for finding inhibitors competitive with B.
You also have to consider the substrate concentrations needed to get a sufficient signal to run the screen successfully. Higher substrate concentrations allow a larger signal to be obtained while still under initial rate conditions, but higher substrate concentrations increase IC50/Ki. Cost and/or substrate availability may also have to be considered.
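The spreadsheet suggested above is easy to sketch in code. This is a minimal sketch assuming the simplest single-substrate competitive Cheng-Prusoff form, IC50 = Ki(1 + [S]/Km); the exact bi-bi expressions depend on your mechanism and should replace this function, and `ic50_over_ki_competitive` is a hypothetical helper name:

```python
def ic50_over_ki_competitive(s_conc, km):
    """Cheng-Prusoff for a competitive inhibitor: IC50 = Ki * (1 + [S]/Km),
    so the assay-sensitivity ratio IC50/Ki is simply 1 + [S]/Km."""
    return 1.0 + s_conc / km

# Scan the substrate concentration (in multiples of Km): the lower [S],
# the closer IC50 sits to Ki, i.e. the more sensitive the screen is to
# inhibitors competitive with that substrate.
for s_over_km in (0.1, 0.5, 1.0, 2.0, 5.0):
    ratio = ic50_over_ki_competitive(s_over_km, 1.0)
    print(f"[S]/Km = {s_over_km:>4}: IC50/Ki = {ratio:.2f}")
```

Tabulating this ratio over a grid of [A] and [B] (one column per substrate, using the appropriate mechanism-specific expression) is exactly the spreadsheet exercise described, and makes the sensitivity trade-off between the two substrates explicit.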
  • asked a question related to Library
Question
2 answers
I have around 4000 manuscript PDFs on my mac. For many years I used 'Papers' developed by Mekentosj to organize the library, with the PDFs neatly organized into folders. Sadly over time as Papers transitioned from Papers 2 and then to Papers 3 , the software became less good and more buggy (and much more expensive). My recent experiences with ReadCube (effectively Papers 4) are not so positive.
What software do others recommend? Ideally I should like to export all the PDFs within their folders into a new platform
thank you!
yaacov
Relevant answer
Answer
You should definitely give Endnote and Mendeley a try!
  • asked a question related to Library
Question
6 answers
For a scoping review in the field of health professions education I want to search the WHO, UN and EU resources, as well as hand-search resources of other international bodies or NGOs. I know that the WHO has the library database IRIS; how about the others? Does any register of such resources exist?
Or is there even some kind of "guide" or literature out there on how to tackle this systematically?
Any help will be greatly appreciated!
Relevant answer