ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.
Browse by research topic to find out what others in your field are discussing.
- 2 I need an algorithm that I can turn into a parallel algorithm. Can you help me choose a good algorithm for this project? Thanks.
parallel processing algorithm
Aniello - I am guessing that Yasin is looking for an algorithm that would be useful in translating existing algorithms into parallel algorithms. I wish him luck in finding one, because if he does, there is much $$ to be made from it.
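For what it's worth, the most common route from a sequential algorithm to a parallel one is data parallelism: identify independent work items and map them across workers. A minimal Python sketch, where the prime-testing workload is just a made-up example of independent per-item work:

```python
# Data parallelism: each work item is independent, so the sequential map
# can be replaced by a parallel map over a pool of workers.
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    """Deliberately naive per-item work; each call is independent of the others."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

numbers = range(2, 50)

# Sequential version: results = [is_prime(n) for n in numbers]
# Parallel version: same logic, mapped across 4 workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(is_prime, numbers))

print([n for n, p in zip(numbers, results) if p])
```

For CPU-bound work in Python a process pool would be the analogous choice; the point is only that algorithms whose iterations do not depend on each other parallelize almost mechanically, while algorithms with sequential dependencies need restructuring first.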
- 3 Fabrics for dresses generated by yeasts and bacteria?
Do you have references on fabrics for dresses generated by yeasts and bacteria?
I know the experience of Susan Lee and Gary Cass, I could not find much ...
Take a look at US Patent no. 7,000,000, by John O'Brien of DuPont Corp., for one possibility.
- 4 Could anyone kindly let me know the quickest way to draw the functional isomers?
Functional isomers can be drawn for aldehydes, ketones, alcohols, ethers, esters and epoxides. Also, structural isomers can be drawn for alkanes, alkenes and the above molecules. I am wondering if I can find a quick way to work out the possible number of isomers for organic molecules.
I have found that the symmetry of the molecules plays an important role in determining the possible isomers that can be drawn.
Thank you very much for your kind help & time, Professor Karaman :)
- 8 How is nonparametric research different from parametric research? What is the difference between parametric and non-parametric research?
Can anybody advise me about data transformation? How do you know whether you need a log(x+1) transformation or a square-root transformation? Some of my response variables have lots of zeros but no negative values. Can anybody please suggest something?
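Since the choice between log(x+1) and square root often comes down to which transform better symmetrises the response, one quick check is to compare sample skewness before and after each transform. A minimal Python sketch with made-up, zero-inflated numbers (not your data):

```python
import math
import statistics

# Hypothetical zero-inflated response values (counts with many zeros,
# as described in the question) -- illustrative numbers only.
y = [0, 0, 0, 1, 2, 0, 5, 0, 12, 3, 0, 7, 0, 1, 25]

# log(x+1) handles zeros, since log(0) is undefined but log(0+1) = 0.
log_y = [math.log(v + 1) for v in y]

# sqrt(x) also handles zeros and is a milder transform than the log.
sqrt_y = [math.sqrt(v) for v in y]

def skewness(xs):
    """Adjusted sample skewness: a rough guide to which transform symmetrises better."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    n = len(xs)
    return sum(((v - m) / s) ** 3 for v in xs) * n / ((n - 1) * (n - 2))

# Pick whichever transform brings skewness closest to 0.
print(round(skewness(y), 2), round(skewness(log_y), 2), round(skewness(sqrt_y), 2))
```

On data like these, both transforms pull the skewness toward zero, with log(x+1) usually pulling harder; plotting histograms of the raw and transformed values is the complementary visual check.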
- 9 Why does nobody see that the Higgs boson and gravitational waves are pure theoretical failures?
The Higgs boson does not interact with gluons, which provide the energy for 99% of the proton's mass. Nobody has a clear answer to how the Higgs boson gives mass to the proton. Why are we closing our eyes and tolerating a model that does not have a minimal correspondence with physical existence? Energy and mass are both inherent physical properties of every particle. No particle can give mass to another particle; this is a complete misunderstanding. The Higgs boson does not prove anything; it is an artificially made flux of energy and nothing more.
Energy and mass have their origin in the diminished energy density of the quantum vacuum. A given particle cannot be examined without this diminished energy density of the quantum vacuum, which determines its energy and mass. The idea that particles exist in an empty space deprived of physical properties is the biggest theoretical failure of physics since its beginning. It has led to the ideas of the Higgs field and gravitational waves, which are both pure theoretical failures. And no wave can transmit gravity.
Like the time one would wait at the edge of the event horizon of a black hole, one will wait an infinite amount of time for the Higgs boson to be proven anything but a fantasy and an event of quantum fluctuations.
You are correct: there is no way for them to prove what they would love to prove. A field is a field and a particle is a particle. The established scientific community would like everyone to follow them to the point of total confusion in the science. Either we have to give up on common sense or on the common line in science. The fantasy world that is created and explored by things like the Higgs boson is remarkable to me. It has nothing to do with understanding physics; it only has to do with understanding the people who promote the "hope" that they will find what they are looking for.
In my lab I use the understanding of how the physical world works to come up with very creative solutions to the problems put in front of me. Sometimes I make a discovery that puts in question the science that I thought I knew well. When this happens I do not just disregard the information as wrong; I explore it until I answer the question: was the old science lacking, or was my experiment bad? Sometimes I realize that we were missing information in the current science that needs to be modified, and I make that adjustment.
At no time do I just disregard the information that I have on hand and move forward to build a new model out of the old understanding with a disregard for the new information. That would lead to the situation that we have today: ideas are only considered valid if they fit the old understanding and conform to the old theories. If they do not, then the scientists who make the new discoveries are disregarded and, worse, disgraced. At this point they give up on trying to make the change for fear of not being published or not being considered relevant.
The system is broken in favor of mediocre science. This is not all people in science, but there are plenty who yell very loudly when someone tries to knock them off the top of the hill. Where are the true philosophers? Where are the checks and balances? Where are the people who know better?
Science needs a good swift kick in the pants.
Amrit, I am not sure you are totally correct but I know you are on the right path.
- 6 Is there a better way to study apoptosis and necrosis in cell culture of cancer cells?
I'm currently using Hoechst and propidium iodide stains for apoptosis and necrosis, but counting the cells on the fluorescence microscope is somewhat tedious. Does anyone have experience with a kit that allows easier analysis of results?
Maybe the attached articles can be of help.
- 6 Is there a detailed protocol for measuring the antioxidant activity of orange juice and orange using the DPPH method?
I am presently doing a short undergraduate project assessing the antioxidant activity of orange and orange juice using the DPPH method. However, upon incubating my positive control (different known concentrations of ascorbic acid, ranging from 1 to 10 nanograms per ml) with DPPH reagent, my spectroscopic readings are very inconsistent: they actually make no sense and thus cannot yield a useful standard curve. Could someone provide a detailed protocol to execute this assay successfully, please? Thanks in advance.
Thanks, Mr O'Sullivan. Another question: what parameters do I use to plot the standard curve (that is, on the axes)? Do I use the absorbance of my positive control (propyl gallate) in determining DPPH % inhibition?
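For reference, the usual way to plot the curve is concentration of the standard on the x-axis and % inhibition, (A0 - A)/A0 * 100, on the y-axis, where A0 is the absorbance of the DPPH blank without antioxidant. A minimal Python sketch with entirely hypothetical absorbance readings:

```python
def percent_inhibition(a_blank, a_sample):
    """DPPH % inhibition = (A0 - A) / A0 * 100, A0 being the DPPH blank."""
    return (a_blank - a_sample) / a_blank * 100.0

# Hypothetical standard-curve data: x-axis = standard concentration,
# y-axis = % inhibition computed from absorbance readings.
concentrations = [1, 2, 4, 6, 8, 10]                   # e.g. ug/mL of standard
absorbances = [0.62, 0.55, 0.43, 0.33, 0.24, 0.17]     # made-up readings
a0 = 0.70                                              # made-up blank absorbance

inhibitions = [percent_inhibition(a0, a) for a in absorbances]

# Ordinary least-squares fit of % inhibition vs concentration (the standard curve).
n = len(concentrations)
mx = sum(concentrations) / n
my = sum(inhibitions) / n
slope = sum((x - mx) * (y - my) for x, y in zip(concentrations, inhibitions)) / \
        sum((x - mx) ** 2 for x in concentrations)
intercept = my - slope * mx

# Unknown sample: convert its absorbance to % inhibition, then interpolate
# its standard-equivalent concentration from the fitted line.
sample_inhibition = percent_inhibition(a0, 0.38)
equivalent_conc = (sample_inhibition - intercept) / slope
print(round(slope, 2), round(sample_inhibition, 1), round(equivalent_conc, 2))
```

The positive control serves to check that the assay responds as expected; the unknown's antioxidant activity is then read off the standard curve as above.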
- 11 What are the disadvantages of using Western music notation to transcribe indigenous non-Western ethnic music, and what can be used as an alternative?
Western music notation is used widely in transcribing and notating non-Western music. It looks like a medium, alongside the oral transmission of music, that can help in grasping the structure of a certain type of music, but it is not. Western music notation forces its limitations onto transcriptions, and through history the notated version remains the document or "original version" against which changes taking place in oral versions over time are measured. This is in spite of the modifications the written version already underwent in transcribing musical sound into written notation.
Although I have directed a Western conservatorium for many years, I have grave reservations about using Western notation for any purpose other than supporting people who play Western music and are familiar with the sound of that music and its conventions. No notation system works without a thorough knowledge of the music it refers to. I have been playing Hindustani music (sitar) for forty years, one of the most refined music traditions in the world, and Western staff notation is simply not capable of capturing its essence, or even its basics. Even more upsetting is the extensive use of staff notation in music education (often without reference to the sound), which runs the risk of making most world music traditions appear like bad Western music by ignoring the musical qualities (rhythmic subtleties, timbre, expression, etc.) that characterise them. I've lectured and published about this extensively, including in my monograph Facing the Music: Shaping Music Education from a Global Perspective (OUP, 2010).
- 6 For someone without a financial background, what is the best way to analyse the financial statements (annual report) of a company?
For someone without a financial background, what is the best way to analyse the financial statements (annual report) of a company? Thanks
There is no single best way of analyzing financial statements; it depends on the purpose of the analysis. However, there are some limitations that govern financial statement analysis. A financial analyst should consider such limitations whenever he or she uses different tools of analysis, and must use some other tools to get the big picture of the past, present and expected performance of the company. Nowadays, using financial measures that depend only on financial statements is not recommended, as there are many non-financial matters that should be considered and evaluated. Some tools may depend on customer analysis, while others may use both financial and non-financial measures, such as the Balanced Scorecard. You may find my paper of some interest; the details are below:
Ismail, T. (2007), "Performance evaluation measures in the private sector: the Egyptian practice", Managerial Auditing Journal, Vol. 22, No. 5.
- 5 Can you give me suggestions of journals that deal with mindfulness research in performance?
I'm doing research about mindfulness and actor training. I myself practice a method of actor training directly related to mindfulness.
Thank you all! That is very helpful; how lovely to get your support.
- 13 Does anyone know about autocorrelation in spatial modeling by logistic regression?
I am wondering if anyone here has ever dealt with spatial autocorrelation using Logistic Regression in GIS.
In the literature I have read so far, sometimes the issue is not even addressed. In other instances, the authors used the geographic coordinates as covariates. For example, quoting from Hu, Z., & Lo, C. P. (2007). Modeling urban growth in Atlanta using logistic regression. Computers, Environment and Urban Systems, 31, 667–688: "The second step was including spatial coordinates of data points into the list of independent variables. Spatial autocorrelation can be alleviated to some extent by attempting to introduce location into the link function to remove any such effects present (Bailey & Gatrell, 1995). For example, spatial coordinates of observations might be introduced as additional covariates, or to classify regions in terms of their broad location and treat this classification as an extra categorical explanatory factor in the model."
To the best of my understanding, the latter approach is termed "autocovariate" modeling by: Dormann, C. F., McPherson, J. M., Araújo, M. B., Bivand, R., Bolliger, J., Carl, G., … Wilson, R. (2007). Methods to account for spatial autocorrelation in the analysis of species distributional data: A review. Ecography, 30(5), 609–628. http://doi.org/10.1111/j.2007.0906-7590.05171.x
I would like to know your opinion on the issue, and what approach you happened to use.
Coordinates...PLUS the previous sampling strategy, do not forget!!!
As for the residuals: what do you mean? The residuals of LR are... residuals. They are continuous. I have read a number of articles in which the (logistic) model residuals were tested for spatial autocorrelation. If you want, tomorrow I could try to locate those references.
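To make the autocovariate idea concrete: for each site you compute a distance-weighted average of the responses of its neighbours and add that as an extra covariate in the logistic regression. A minimal Python sketch with toy coordinates and responses; the neighbourhood radius and inverse-distance weights are illustrative choices, not a prescription:

```python
import math

# Toy spatial data for illustration: (x, y) coordinates and a binary response.
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
response = [1, 1, 1, 0, 0, 0]

def autocovariate(points, response, radius=2.0):
    """For each site, the inverse-distance-weighted mean of the responses of
    neighbours within `radius` -- the extra covariate that Dormann et al.
    (2007) call an autocovariate."""
    cov = []
    for i, (xi, yi) in enumerate(points):
        wsum, vsum = 0.0, 0.0
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if d <= radius:
                w = 1.0 / d          # inverse-distance weight
                wsum += w
                vsum += w * response[j]
        cov.append(vsum / wsum if wsum > 0 else 0.0)
    return cov

ac = autocovariate(points, response)
# ac would now be appended to the design matrix of the logistic regression,
# alongside (or instead of) the raw coordinates.
print([round(v, 2) for v in ac])
```

After fitting, one would test the model residuals for remaining spatial autocorrelation (e.g. with Moran's I) to check whether the autocovariate did its job.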
- 3 How can I obtain the UV-Vis spectrum using Gaussian 09?
I'm quite new to this software and can only design quite basic molecules, not run calculations just yet. I have already tried Job Type (Energy), Method (TD-SCF, N=6 states), Solvation (Default, Default) from other guides, but the UV-Vis result is not an available option to select in the Results tab once the job has run. Are there any key steps that I have missed? An example for a simple molecule would be very much appreciated.
You should check out the example problem similar to yours in this G09 reference:
- 16 Is the flipped classroom working for you and your students?
Today, a colleague shared with me that students are having to do more work with the flipped classroom: they have to do the in-class activities and then go home and do another set of reading, etc. Are you finding that the flipped classroom is causing students to have too much work? How has the flipped classroom been working for you and your students?
One-Square-Meter ran a documentary on flipped learning at NTU Singapore. I was able to record a one-minute segment at the end of the documentary. Flipped learning is more about an approach to learning than a gathering in a classroom; it is about first-contact learning rather than third-contact reading. In my homeland, any crawling baby seeing a burning lamp or candle on the floor is excitedly drawn to it. I have observed that the child is moved away from the lamp only once, out of parental moral obligation. A second attempt towards the lamp earns the baby the right to first-hand experience. After the baptism from the hot lamp, the baby knows better. Research can be conducted in four ways: 1. Study; 2. Questionnaire; 3. Experiment; 4. Observation. Of these four, Observation and Experiment are strictly beyond compare, and I see that this is where flipped learning is heading.
These days are getting more competitive; challenges and crises are on the rise, and with them the need for instant, assured solutions. Thus the next generation of experts and researchers must be up to speed. But we are still far below that, hence the integration of humanoids, androids, drones, robotics, et cetera in the application of BIG DATA to generating solutions.
So let us not stifle the process. We must allow it to run. Those students who cannot keep up with the pace will eventually stop at a bus stop, but will still get educated. The students who finally reach the process output will eventually become those German machines on which we can all ASSUREDLY depend, today and in the future.
- 99+ Does the responsibility of researchers end with the scientific publication of their findings? Or should they also ensure that these findings find their way to a) the non-scientific public and b) the implementing authorities/institutions (with a practical reference provided)? I have made it my habit, for every scientific contribution, to compose another layman's contribution and publish it in order to create the possibility of practical implementation. All non-academic partners are extremely grateful for it. Without access to databases they would probably know nothing about these results and findings. What other ways are there to make research applicable?
Sure, Michael, there could be no expropriation where there was no property. But this idea sets up an unrealizable fantasy, for the undisclosed idea or knowledge achieved and held originally by one person is de facto that person's property. It is a separate, case-by-case question whether a rational person not contractually bound to do so would find motivation to dump his or her hard-won knowledge into a collective cesspool where any accessor can use it for any purpose, moral or not. Exactly which individuals are benefited by a claimed "public benefit" is a decision that is either made contractually by those same individuals in a free market, or else by decision makers who have been delegated those decisions by "the public" or have expropriated that right. In any case, the question that your posting raises is the locus of decision making.
- 40 Why do we often use a GMM approach? It is often argued that the GMM approach is a second-best identification strategy compared to the IV approach in the case of endogeneity of the explanatory variables. Sometimes it is also hard to believe that the dependent variable lagged one period can be included as an additional explanatory variable. GMM is more of an econometric trick than a proper solution for endogeneity. Is that argument valid?
First, a general comment. The introduction of GMM with much fanfare in the mid-80s has served to obfuscate matters and has led to a great deal of confusion. Despite its claims, GMM is not a new estimation technique that includes as special cases many others that preceded it. It is simply what we used to call generalized least squares; it is simply a way of getting "efficient" estimators when, in models with additive error terms, the covariance matrix of the error is not a scalar matrix.
This was first observed by the British statistician Aitken in the first third of the twentieth century. His idea was that if we knew the covariance matrix we could transform the data so that in the transformed context the error has a scalar covariance matrix, and thus least squares is BLUE by the Gauss-Markov theorem. This is the Aitken estimator, when the initial covariance matrix is known. Since it is rarely known, the Aitken estimator is infeasible. But if we could estimate it consistently, which is possible in a large number of cases common in empirical practice, we could use this estimate instead. When we do so we have a feasible Aitken estimator. This appears in my book Econometrics: Statistical Foundations and Applications, published in 1970. In the statistical literature this same procedure has been referred to as generalized least squares for some time now.
Enter GMM in the 80s. What is GMM? It is simply weighted least squares. Specifically, we look at the vector of differences between the observations on the dependent variable and the linear or nonlinear combination of the predetermined or exogenous variables that our model uses to "explain" or "forecast" the dependent variable within the sample, together with some weighting matrix. This might be a generalization if the weighting matrix led to interesting choices that suggested useful estimators previously unknown.
But all it does is tell you that if you choose the weighting matrix to be the inverse of the covariance matrix of the error, you get efficient estimators; i.e., it tells you nothing that you didn't know from the previous literature. Nonlinearity in parameters is not a serious econometric issue; it has mathematical repercussions in that the existence (and uniqueness) of solutions for the equations defining the estimator(s) is more complicated, but given a solution, the econometric properties continue to hold.
To get to Mr. Eberle's question: it is not clear why the weighting matrix is singular. If this reflects the singularity of the model's error covariance matrix, then this is the canonical way of dealing with such issues. To this end, see Definition 12.23 and Remark 12.23, p. 539, of the attached chapter. There are very few places in the econometrics literature where this is discussed, and it is quite likely not widely known.
The implication of this definition and convention is that test statistics involving singular distributions entail the use of the generalized inverse of the covariance matrix. An instance of this is found in "A Comment on LR, Lagrange Multiplier and Wald Statistics", which I recently uploaded to ResearchGate.
I do not understand what is meant by "two step GMM is far from ideal".
If we just move away from GMM, which only serves to confuse, you are dealing with something like
Usually "two-step" means you first estimate Sigma consistently, then you insert it into the minimand and minimize with respect to beta. Under many circumstances the estimator of beta resulting from this operation has the same asymptotic distribution as the one resulting when you estimate beta and Sigma simultaneously. It is analogous to the relation between 3SLS and full-information maximum likelihood estimators in simultaneous equations.
Your last inquiry, about robustness to autocorrelation etc.: obtaining the efficient estimator confers no immunity against complications involving the violation of some of your assumptions regarding the error term. Both the two-step estimator as I have defined it and the efficient (one-step) estimator will suffer if such a violation occurs.
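To illustrate the two-step idea numerically: step one is a consistent but inefficient estimate (OLS), and step two re-estimates with the inverse of the estimated error covariance as the weighting matrix (the feasible Aitken/GLS estimator described above). A minimal Python sketch for a simple heteroskedastic linear model; the data and the assumed variance structure are made up for illustration:

```python
def wls(x, y, w):
    """Weighted least squares for y = b0 + b1*x with weights w.
    Choosing w_i = 1/sigma_i^2 gives the efficient (Aitken) estimator."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / \
         sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    b0 = my - b1 * mx
    return b0, b1

# Hypothetical data roughly following y = 2x with heteroskedastic noise.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.5, 11.4]

# Step 1: OLS (all weights equal) -- consistent but inefficient.
b0_ols, b1_ols = wls(x, y, [1.0] * len(x))

# Assume sigma_i^2 proportional to x_i^2 (a common heteroskedasticity model);
# in practice one would estimate this structure from the OLS residuals.
weights = [1.0 / (xi ** 2) for xi in x]

# Step 2: re-estimate with the inverse of the estimated variances as weights.
b0_gls, b1_gls = wls(x, y, weights)
print(round(b1_ols, 3), round(b1_gls, 3))
```

Both steps give slopes near the true value here; the gain from step two is in efficiency (smaller sampling variance), not in consistency, which matches the point about the two-step and one-step estimators sharing the same asymptotic distribution.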
If some points are not clear, feel free to ask follow-up questions.
- New How to dissolve humic acid (Pahokee peat humic acid standard) in water?
Can I dissolve it in NaOH solution first and then bring down the pH?
- 1 Can anybody confirm the determination of this moth?
Determination of moths by photo is difficult to impossible, I know. This applies especially to the Coleophoridae. I determined this moth as Coleophora lixella Zeller, 1849. There is a similar species, Coleophora tricolor Walsingham, 1899, which is not part of the German fauna.
Specifications: Germany, Bayerischer Wald, open wood with oaks, hornbeam and pines, sunny and dry, July 26th, 2013, light trap.
Thank you very much for your answers
B.W Rudolf Ritt
It looks pretty much like Coleophora lixella Zeller. In order to confirm this identity you need to dissect the genitalia. There is an excellent photograph of the male genitalia of C. lixella that can help you confirm it at the following link:
- 2 Can regression analysis be adopted as a tool for predicting water levels?
I have come across studies attempting to predict future dam water levels (in Nigeria and Pakistan), using different approaches (artificial neural networks, SWAT model). I am interested in developing a tool that would use readily-available climatic data (mainly precipitation, evaporation and temperature) to conduct analysis of variance and ultimately predict water levels for a given reservoir/catchment.
There are many regression algorithms that can be used in your case, such as multiple linear regression, generalized linear modeling, generalized additive modeling, etc.
You can implement these methods easily in R through the stats, glm and gam packages.
In my experience, generalized additive modeling is often the most accurate, leading to better predictions than the other methods.
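As a concrete starting point before moving to GLMs/GAMs, here is a minimal multiple-linear-regression sketch (ordinary least squares via the normal equations) relating water level to climate covariates. All numbers are hypothetical; in practice you would fit this in R with lm() or gam() on your own records:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) beta = X'y,
    with an intercept column prepended."""
    X = [[1.0] + row for row in X]
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Hypothetical monthly records: precipitation (mm), evaporation (mm), mean temp (C).
X = [[120, 60, 22], [80, 75, 26], [150, 50, 20], [60, 90, 28], [110, 65, 24]]
levels = [31.5, 29.0, 33.0, 27.5, 30.5]   # made-up observed water levels (m)

beta = fit_ols(X, levels)
predicted = beta[0] + sum(b * v for b, v in zip(beta[1:], [100, 70, 25]))
print([round(b, 3) for b in beta], round(predicted, 2))
```

With real reservoir data you would want many more observations than covariates, out-of-sample validation, and possibly lagged predictors, since this month's level depends strongly on last month's.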
- New How to begin membrane potential imaging as a beginner?
I'm a beginner who is about to have his first experience working with voltage-sensitive dyes. I've spent a lot of time learning the basic principles, photonics, different imaging procedures, noise, ...
Now I'm completely confused about how to begin. I must say that all I need is simplicity and cheapness. Could you please point me to some articles that are useful for beginners?
Is it possible to record fluorescence images all by myself?
- 10 Which statistical software is suitable and easy to use?
Among the statistical tools OriginLab, SPSS, SigmaPlot and R, which one is reliable and easy to operate for biology-related experimental analysis?
I find SPSS easy and helpful.
- 9 Any suggestion on which genus this species is?
Collected in mangrove mud.
From Southeast Brazil (São Paulo).
Thank you =)
Let me toss Sayella Dall, 1885 (type by original designation: Leucotina hemphilli Dall, 1884) into the mix for discussion, as a potential earlier synonym of Parodizia and Petitella. See Wise, 1996, fig. 15.
- 2 What are the medical effects of RO water on human health?
Reverse osmosis (RO) is now very common in the world as an alternative source of fresh water. The composition of water varies widely with local geological conditions. Neither groundwater nor surface water has ever been chemically pure H2O, since water contains small amounts of gases, minerals and organic matter of natural origin. The total concentrations of substances dissolved in fresh water considered to be of good quality can be hundreds of mg/L. Thanks to epidemiology and advances in microbiology and chemistry since the 19th century, numerous waterborne disease causative agents have been identified. The knowledge that water may contain some undesirable constituents is the point of departure for establishing guidelines and regulations for drinking-water quality. Maximum acceptable concentrations of inorganic and organic substances and microorganisms have been established internationally and in many countries to assure the safety of drinking water. The potential effects of totally unmineralised water had not generally been considered, since this water is not found in nature except possibly in rainwater and naturally formed ice. Although rainwater and ice are not used as drinking water, is RO water, being deficient in minerals, healthy to drink?
The effect of RO water on human health has been addressed in WHO reports. Knowledge of some effects of consumption of demineralised water is based on experimental and observational data. Experiments have been conducted in laboratory animals and human volunteers, and observational data have been obtained from populations supplied with desalinated water, individuals drinking reverse-osmosis-treated demineralised water, and infants given beverages prepared with distilled water. Because limited information is available from these studies, we should also consider the results of epidemiological studies where health effects were compared for populations using low-mineral (soft) water and more mineral-rich waters. Demineralised water that has not been remineralised is considered an extreme case of low-mineral or soft water, because it contains only small amounts of dissolved minerals, such as calcium and magnesium, that are the major contributors to hardness.
The possible adverse consequences of low-mineral-content water consumption are discussed in the following categories:
• Direct effects on the intestinal mucous membrane, metabolism and mineral homeostasis or other body functions.
• Little or no intake of calcium and magnesium from low-mineral water.
• Low intake of other essential elements and microelements.
• Loss of calcium, magnesium and other essential elements in prepared food.
• Possible increased dietary intake of toxic metals.
1. Direct effects of low mineral content water on the intestinal mucous membrane,
- 2 Why can we not obtain any results when using our telomere length assay kit?
We have employed the telomere length assay kit provided by Roche. Unfortunately, after trying several times, we could not obtain any result. DNA is first extracted from lymphocytes of CLL patients and then digested according to the protocol; electrophoresis and transfer to the nylon membrane are performed manually. Finally, with all steps done, the marker bands can be observed on the X-ray films, while the telomere smear is absent. However, our first attempt, juxtaposing the membrane with the film, led to complete darkness of the markers, so we decreased the exposure time to 10 and 5 minutes in our later attempts. It is also notable that the control DNA provided by the kit itself never showed the telomere smear in any of our experiments. In another attempt, we extended the digestion time from 2 hours to overnight and again obtained no result. Given that the kit is so expensive and difficult to procure, and given the limited number of reactions that can be performed, we would like to ask you for any probable solutions to these problems.
I agree with Daniela's comments. We also used the kit and followed the protocol that was provided. The quality of the DNA, the extent of UV cross-linking to the membrane and the extent of hybridization all contribute to the final outcome. Therefore, practically speaking, performing a trial run with the positive control should work before using unknown or test samples.
- 1 How to prepare nanotube and nanoparticle powders for scanning electron microscopy
Hello everyone,
Recently in my lab I have been working on a scanning electron microscope (Quanta 450, FEI). I have imaged many ready samples, but I have no idea how to prepare nanotube and nanoparticle powders for SEM imaging.
Please, I need any protocol or method for preparing nano powders for SEM.
I would suggest preparing thin films of the nanotubes/nanoparticles to obtain SEM images. You may try to disperse the nanotubes/nanoparticles in a solution and spin- or drop-cast the solution into solid films.
See our SEM images in the paper:
- 4 How to tell if an excited-state frequency calculation is progressing (Gaussian 09)?
I haven't been able to find anything about this online and I'm the only one in my institution working with G09, so some help would be much appreciated!
My question simply is: how do you monitor excited state frequency calculations while they are running to see if they are in fact converging?
I am performing excited state optimizations, and I would like to do frequency calculations on these structures. When performing any optimization, I am able to see Max/RMS of the force/displacement and predicted change in energy in the output .log file updated over the course of the calculation. However, when I perform an excited state frequency calculation, I can see these max/RMS numbers and if they meet the convergence criteria only at the very end of the entire calculation. Am I missing some other important information in the .log file for this kind of calculation that would indicate the status of the calculation?
Miguel, here is the input command line for the kind of calculation I am asking about. I am reading from the checkpoint file from a successfully optimized excited state structure completed already. I have made sure to keep the theory exactly the same between the Opt and this Freq calculation.
# freq td=(root=1) ub3lyp/6-31+g(d) scrf=(cpcm,solvent=water)
guess=read geom=check empiricaldispersion=gd3
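While waiting for a better answer, one pragmatic option is to scan the growing .log file for convergence-table lines as they appear. A small Python sketch; the patterns are taken from typical G09 optimization output and are an assumption you may need to adjust to your own log's exact wording:

```python
import sys

# Lines of the usual G09 optimization convergence table; adjust these to match
# the exact wording in your own .log files (they are an assumption here).
PATTERNS = ("Maximum Force", "RMS Force",
            "Maximum Displacement", "RMS Displacement",
            "Predicted change in Energy")

def scan_log(path):
    """Return (line_number, text) for every convergence-related line found."""
    hits = []
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            squeezed = " ".join(line.split())   # collapse runs of spaces
            if any(p in squeezed for p in PATTERNS):
                hits.append((lineno, line.rstrip()))
    return hits

if __name__ == "__main__" and len(sys.argv) > 1:
    for lineno, text in scan_log(sys.argv[1]):
        print(f"{lineno:6d}  {text}")
```

If, as described above, the frequency job only emits its convergence table at the very end, an empty result from this scan at least confirms that nothing has been written yet, rather than the job having silently failed.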
- New Prediabetes: is it a health problem unto itself? Does it carry more risk of heart disease and stroke than normoglycemia?
I've recently been wondering about overdiagnosis and the problem of treating people to target surrogate markers rather than health outcomes.
- 6 What is the efficient way to deal with polyploid species having genome sizes ranging from 3.5 to 4.0 pg, following the ddRAD protocol?
We are going to do ddRAD-protocol-based sequencing using the Illumina HiSeq platform. Our model species are polyploid, with genome sizes ranging from 3.5 to 4.0 pg. Our main concern is how many individuals we need to pool and how many barcodes we need to use in library preparation to get a sufficient amount of data with high coverage. Your suggestions and links to related literature will be highly appreciated. Sincerely
I will attach it to you in an email... Within the .tar.gz archive you will find:
- fragsim.pl: Perl script to simulate RE fragmentation using any number of restriction sites
- radsim.tsv: example tab-delimited output of the program; shows the number of fragments of each size, for each "type" of fragment (e.g. flanked by PstI on both ends, etc.)
- radsim.pdf: example of the graphical outputs
I don't know how well versed you are in working from the command-line, but you will need to have the following installed (possibly incomplete list):
- Perl 5: programming language
- R: statistical programming language; there is some R code embedded within my Perl script
- Statistics::R package for Perl; forms an interface for the embedded R code
- GetOpt::Long Perl package; parses command-line input
If you have trouble installing any of them, feel free to message me. I am sorry I do not have a manual written up, as this was not intended for distribution; however, you are obviously free to use it. Just please acknowledge me if it ends up being helpful.
To use the program, you will need to execute it and provide the command-line input. If you run the program as:
You will see the argument options which are currently built in. If you do not, then most likely some dependency is not installed.
You can provide restriction sites as so:
./fragsim.pl -r G^AATTC C^CGG
Where the "^" indicates the cleavage site in a single direction of the sequence. The output will be a tab-delimited table showing the number of fragments for each type at each size, taking the form of:
Length Missing R1-R1 R2-R2 R1-R2 Sum
1 0 0 26 2 28
Where "Missing" indicates fragments which were cut off by contig boundaries, i.e. prematurely terminated due to the incompleteness of the assembly. More "missing-site" fragments means a less accurate estimate of the number of loci in a given size range. If you have a contig-level assembly, I recommend you concatenate fragments together at random and perform replicate concatenations to assess the variation in your estimate.
To calculate the number of loci in a size range (say 250bp to 350bp) you can either use some basic scripting skills, or just load the tsv table into excel.
The "algorithm" is actually pretty simple- if anyone wants details on how to do this simulation let me know and I can elaborate.
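For anyone curious, the core of such a simulation can be sketched in a few lines: find every recognition site, cut, and tabulate the resulting fragments by length and by the pair of enzymes flanking them. A toy Python version (the sequence is illustrative, and fragsim.pl itself does considerably more):

```python
import re
from collections import Counter

def digest(seq, sites):
    """sites: dict enzyme_name -> recognition sequence with '^' at the cut point.
    Returns (length, flank_pair) for every internal fragment of a double digest."""
    cuts = []  # (cut position, enzyme name)
    for name, site in sites.items():
        offset = site.index("^")          # cut offset within the motif
        motif = site.replace("^", "")
        for m in re.finditer(motif, seq):
            cuts.append((m.start() + offset, name))
    cuts.sort()
    frags = []
    for (p1, e1), (p2, e2) in zip(cuts, cuts[1:]):
        pair = "-".join(sorted((e1, e2)))  # e.g. "EcoRI-MspI"
        frags.append((p2 - p1, pair))
    return frags

# Toy example: two enzymes, one short made-up sequence.
sites = {"EcoRI": "G^AATTC", "MspI": "C^CGG"}
seq = "TTGAATTCAAACCGGTTTTGAATTCCCCCGGAA"
frags = digest(seq, sites)
print(Counter(pair for _, pair in frags))
```

For ddRAD planning, only the fragments flanked by the two different enzymes and falling in the size-selection window count toward the expected number of loci, which is exactly the per-type, per-size tally the script above produces.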
If you have any questions or hit any bugs, let me know. And anyone else who wishes to use this, please feel free to email me; just acknowledge me if you find it helpful.
- 4 For crude glycerol analysis, which method is better: HPLC or GC? Crude glycerol contains glycerol, soap, salt, moisture, methanol and MONG.
The biodiesel byproduct crude glycerol undergoes physical and chemical treatment for purification and is then analysed by HPLC or GC.
I would recommend HPLC because of the moisture, soap and salt content. If you use GC, you may want to add an extraction step and drying of the organic solvent, as well as a derivatization step (such as methylation of fatty acids), beforehand.
- New What is the possible potential of Markov chains in categorical data analysis?
Has anyone used Markov chains for modeling categorical variables as a function of continuous parameters for class-sequence prediction?
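A first-order Markov chain for categorical sequences is straightforward to sketch: estimate transition probabilities from observed class pairs, then predict the most likely next class. A minimal Python example with arbitrary labels (making the transition probabilities depend on continuous covariates, as the question asks, would require something like multinomial logistic transition models, which is beyond this sketch):

```python
from collections import Counter, defaultdict

def fit_transitions(sequence):
    """Estimate first-order transition probabilities P(next | current)
    from the observed pairs of consecutive classes."""
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def predict_next(trans, state):
    """Most likely next class given the current class."""
    return max(trans[state], key=trans[state].get)

# Arbitrary example sequence of categorical classes.
seq = ["A", "A", "B", "A", "B", "B", "C", "A", "A", "B"]
trans = fit_transitions(seq)
print(trans["A"])             # estimated P(next | current = "A")
print(predict_next(trans, "A"))
```

This maximum-likelihood estimate of the transition matrix is the standard starting point; sparse transitions may need smoothing, and higher-order dependence can be handled by treating k-tuples of classes as states.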
- 9 Could someone help me with the identification of this butterfly?
It appeared on the leaves of some Crassulaceae. I think it could be a problem, as this is a known pest from NW Michoacán, Mexico.
OK, thanks Luis Miguel!
Yes, I think the distribution coincides! Nice image to compare!