Questions related to Math Biology
If I prepare a stock of undiluted platelet-rich plasma from whole blood, I would then like to perform a count on a flow cytometer. I take a small sample of, e.g., 20 uL, and the flow cytometer tells me, e.g., x events/uL in the 20 uL sample.
Now I would like to find out how many events were in the original undiluted stock per mL.
Do I multiply my x value by 1000 to find the platelets/mL in the original stock, or is this too simple an assumption?
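A minimal sketch of the unit conversion, assuming the 20 uL sample really was taken neat from the stock (if it was diluted first, that factor must come back in):

```python
# Convert a flow-cytometer reading (events/uL) back to the undiluted stock (events/mL).
# Assumes the sample was taken neat; if it was diluted, pass that dilution factor.

def events_per_ml(events_per_ul: float, dilution_factor: float = 1.0) -> float:
    """1 mL = 1000 uL, so events/mL = events/uL * 1000 * dilution_factor."""
    return events_per_ul * 1000 * dilution_factor

# Hypothetical reading: 250 events/uL in a neat sample
print(events_per_ml(250))        # 250000.0 events/mL in the stock
# Same reading, but from a 1:10 diluted sample
print(events_per_ml(250, 10))    # 2500000.0
```

So multiplying by 1000 is correct for the unit change alone; the assumption only breaks if a dilution step was made along the way.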
I would like to ask whether anyone knows if the volume of buffer I resuspend a pellet in affects cell yield.
Scenario 1: I resuspend my cell pellet in 50 uL PBS (say there are some antibodies I am incubating the cells with). Then I wash with 100 uL PBS and centrifuge at a final volume of 150 uL.
Scenario 2: I resuspend my cell pellet in 100 uL PBS (incubate with antibodies). Then I wash with 200 uL PBS and centrifuge at a final volume of 300 uL.
Scenario 3: I resuspend my cell pellet in 100 uL PBS (incubate with antibodies). Then I wash with 300 uL PBS and centrifuge at a final volume of 400 uL.
Scenario 4: I resuspend my cell pellet in 200 uL PBS (incubate with antibodies). Then I wash with 300 uL PBS and centrifuge at a final volume of 500 uL.
In which scenario will I have the greatest cell yield and the best pellet quality after the centrifugation step? I would like to lose as few cells as possible. How do people determine the buffer volume for resuspending a pellet and the volume used for washes? I have noticed that in multiple papers they go for higher volumes. Do higher volumes mean better cell yield? I need to retain as many cells in the pellet as possible and lose as few as possible during the wash and centrifugation steps. I think the resuspension volume and the volume in the centrifuge matter, and I would like your opinions on the best scenario for keeping as many cells as possible.
I have a 200 mg/mL ampicillin solution. How much should I add to 300 mL of LB medium to get a final concentration of 40 ug/mL? And how much bacterial culture (in mL) should I add to 75 mL of medium to dilute the culture 50 times?
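Both questions are C1·V1 = C2·V2 arithmetic; a short sketch (the 1:50 answer depends on whether 75 mL is the final volume or the medium volume, so both readings are shown):

```python
# C1*V1 = C2*V2 dilution arithmetic for both questions above.

def stock_volume_needed(c_stock: float, c_final: float, v_final: float) -> float:
    """Volume of stock giving c_final in v_final (both concentrations in the same units)."""
    return c_final * v_final / c_stock

# Ampicillin: 200 mg/mL = 200000 ug/mL stock, target 40 ug/mL in 300 mL of LB
v_amp_ml = stock_volume_needed(200000, 40, 300)
print(v_amp_ml)   # 0.06 mL, i.e. 60 uL

# 1:50 culture dilution with 75 mL of medium:
# if 75 mL is the FINAL volume, add 75/50 = 1.5 mL of culture;
print(75 / 50)    # 1.5
# if 75 mL is the MEDIUM volume, then v/(75 + v) = 1/50, so v = 75/49 (~1.53 mL)
print(75 / 49)
```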
Is it possible to calculate the E/I ratio at the single-cell level based only on miniature signals (excitatory and inhibitory)?
If yes, which parameters should I use to calculate it, and how?
Currently, I can get many parameters from mIPSPs and mEPSPs, such as the number of events, decay and rise times, area, baseline, noise, half-width, 10-90 slope, etc.
Which of these can I use to calculate the E/I balance, and what is the mathematical formula?
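One hedged sketch (an assumption, not the only published definition): estimate E/I balance as the ratio of total excitatory to total inhibitory synaptic drive, using two of the parameters listed above, event frequency and mean per-event area (charge transfer):

```python
# Hedged sketch: E/I balance as the ratio of total synaptic drive,
# i.e. event frequency times mean per-event area, computed separately
# from the mEPSP and mIPSP records. All numbers below are hypothetical.

def total_drive(n_events: int, recording_s: float, mean_area: float) -> float:
    """Event frequency (Hz) times mean event area -> total drive per second."""
    return (n_events / recording_s) * mean_area

def ei_ratio(n_e, t_e, area_e, n_i, t_i, area_i) -> float:
    return total_drive(n_e, t_e, area_e) / total_drive(n_i, t_i, area_i)

# Hypothetical: 120 mEPSPs in 60 s (mean area 4.0) vs 90 mIPSPs in 60 s (mean area 6.0)
print(ei_ratio(120, 60, 4.0, 90, 60, 6.0))   # 8.0 / 9.0, i.e. ~0.89
```

Frequency alone, or mean amplitude alone, can be used the same way; which combination is appropriate depends on whether you want a count-based or a charge-based measure.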
In the last few months, SIR-like models have been used intensively to represent the propagation dynamics of COVID-19. Even accepting that, to some degree, the different underlying hypotheses of SIR-like models are fulfilled, it has been reported that they fail to predict some relevant features of the pandemic.
To acknowledge behavioral differences between distinct populations, we proposed a multigroup SEIRA model. Nevertheless, when analyzing real data from several single populations (which forces our model to behave as a SIR-like model), not all the observed dynamics could be easily represented. To solve that problem, noting delays in the reporting of new cases, we proposed a methodology to reclassify cases to the day when contagion was most likely to have occurred. That has worked well so far. The latter raised a question: if there is a problem representing trends using SIR-like models, what is to blame? The SIR-like models, the data-reporting protocols, or something else?
Thanks in advance for your opinions/comments!
SIR models are simple epidemic models, but their generalizations are used in many instances for decision making in the face of crises like the present COVID-19 epidemic. A population of N individuals is, at time t, partitioned into susceptible s(t), infected i(t), and recovered r(t); this last class includes recovered, immune, and dead people. A simple SIR model (a system of differential equations) can be written as
ds(t)/dt = - b s(t) i(t)
di(t)/dt = (b s(t) - a) i(t)
dr(t)/dt = a i(t)
where a, b are positive parameters of the model.
The question is whether this model can be considered consistent, taking into account that s(t), i(t), and r(t) are positive and add up to N (or any constant, like N = 1). Do the solutions and parameters depend on N? Is positivity of the solutions guaranteed? Are the derivatives meaningful?
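A minimal Euler sketch of the system above can probe two of these questions numerically, conservation of the total and positivity. Note that the solutions do depend on the normalization: summing the three equations gives d(s+i+r)/dt = 0, and if the compartments count N individuals rather than fractions, b must be rescaled to b/N to keep the same dynamics.

```python
# Euler integration of the SIR system above, normalized to s + i + r = 1,
# checking numerically that the sum is conserved and the states stay positive.

def sir_step(s, i, r, b, a, dt):
    ds = -b * s * i            # ds/dt = -b s i
    di = (b * s - a) * i       # di/dt = (b s - a) i
    dr = a * i                 # dr/dt = a i
    return s + ds * dt, i + di * dt, r + dr * dt

s, i, r = 0.99, 0.01, 0.0      # initial condition, N = 1
b, a, dt = 0.4, 0.1, 0.01      # example parameter values
for _ in range(20000):         # integrate out to t = 200
    s, i, r = sir_step(s, i, r, b, a, dt)

print(round(s + i + r, 6))     # 1.0 -- the total is conserved (d/dt(s+i+r) = 0)
print(s > 0 and i > 0 and r > 0)   # True: all compartments stay positive here
```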
I'm looking for a book on microarray data analysis. I'm a mathematician, and I'm interested in finding a book that gives a framework for microarray data analysis from beginning to end (background correction, normalization, dimensionality reduction, clustering, etc.). I found this: http://www.springer.com/gp/book/9781402072604
Are there more appropriate ones?
I am working with an enzyme that makes 2-5A chains from ATP. I am assuming the chains are 4 adenine residues long (there is no way to find out for sure; I can't do mass spec). I am able to figure out how much PPi (pyrophosphate) is made when I mix the enzyme with ATP. So how do I find out how much of the 4-residue 2-5A chains is being made? Using Avogadro's number?
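A hedged stoichiometry sketch, under the stated assumption of 4-residue chains: each phosphodiester bond formed during oligoadenylate synthesis releases one PPi, so a chain of n residues releases (n - 1) PPi, and Avogadro's number is only needed if you want molecule counts rather than moles:

```python
# Chains of n adenylate residues release (n - 1) PPi during synthesis,
# so moles of chains = moles of PPi / (n - 1). For the assumed tetramer, divide by 3.

AVOGADRO = 6.022e23   # molecules per mole

def chains_from_ppi(moles_ppi: float, chain_length: int = 4) -> float:
    """Moles of 2-5A chains implied by the measured PPi, for an assumed chain length."""
    return moles_ppi / (chain_length - 1)

moles_ppi = 3.0e-9                    # hypothetical measurement: 3 nmol PPi
chains = chains_from_ppi(moles_ppi)   # -> 1 nmol of tetramer chains
print(chains)                          # 1e-09 mol
print(chains * AVOGADRO)               # number of individual chain molecules
```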
I am currently doing research that mainly tests the influence of a problem-based learning (PBL) system on the self-directed learning readiness (SDLR) of medical students. Two groups of medical students (PBL, non-PBL) will be identified and their SDLR measured. I think an unpaired t-test is most appropriate here; am I right?
Also, in the same research, I am going to relate SDLR to the academic year of the participants, in which case three groups (years 1, 3, 5) will be identified. I think ANOVA with a post hoc test is most appropriate; am I right?
Also, I am going to correlate SDLR with academic performance (grades), but I'm not sure which test is most appropriate. Pearson's r, maybe?
Hopefully someone answers soon.
Thanks in advance.
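The three analyses named in the question map onto standard SciPy calls; a hedged sketch on made-up SDLR scores (all numbers hypothetical):

```python
# Sketch of the three proposed analyses on simulated data, using scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pbl    = rng.normal(160, 15, 40)   # hypothetical SDLR scores, PBL group
nonpbl = rng.normal(150, 15, 40)   # hypothetical SDLR scores, non-PBL group

# 1) Two independent groups -> unpaired (independent-samples) t-test
t, p_t = stats.ttest_ind(pbl, nonpbl)

# 2) Three academic years -> one-way ANOVA, then a post hoc test (e.g. Tukey HSD)
y1, y3, y5 = rng.normal(145, 15, 30), rng.normal(155, 15, 30), rng.normal(160, 15, 30)
f, p_anova = stats.f_oneway(y1, y3, y5)

# 3) Two continuous variables (SDLR vs grades) -> Pearson's r
grades = pbl * 0.1 + rng.normal(0, 2, 40)   # hypothetical grades correlated with SDLR
r, p_corr = stats.pearsonr(pbl, grades)

print(round(t, 2), round(f, 2), round(r, 2))
```

The proposed choices look reasonable, with the usual caveat that the t-test and ANOVA assume roughly normal scores with similar variances; otherwise nonparametric analogues (Mann-Whitney, Kruskal-Wallis, Spearman) are the common fallback.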
Do you think the iThenticate/CrossCheck Similarity Index causes heavy and serious confusion in mathematics? Could it even destroy, ruin, or damage mathematics? Mathematicians should follow and inherit the symbols, phrases, terminology, notions, and notations of previous papers, but now we have to change these to avoid, escape, or hide from the Similarity Index and decrease it! This is very ridiculous for mathematics and mathematicians! Mathematics is disappearing, being damaged!
Hi guys, I have recently conducted a meta-analysis comparing 3 different drugs against each other, and I am struggling to know which statistic from the meta-analysis to use to compare the 3 drugs.
Am I correct in saying that you would just compare the 3 WMDs in each subgroup alongside their confidence intervals? I have attached a picture of my meta-analysis below.
Researchers at the University of Pretoria's veterinary faculty reported that mathematics was the best predictor of student performance in veterinary training.
I saw the formula above this question, but I can't understand it in detail. Can anyone explain the formula in detail, along with how to run the analysis in statistical software (for example, Excel or any other package)? Please explain.
I am currently doing my thesis and one of the experiments I did was to quantify the Fucose (a sugar) in my sample, using a colorimetric reaction.
I have already obtained the line equation, but I am not confident that I will arrive at the correct figure for the fucose content (mg/mL or ug/mL) of the sample.
Ultimately, I would like to know the %Fucose in the sample.
I have made and attached a file outlining all the procedures I followed and the data obtained. I humbly hope you can show me how to get the concentration of fucose, and the percentage (%), in the sample.
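The general back-calculation from a colorimetric standard curve can be sketched as follows, with hypothetical slope, intercept, and sample values standing in for the attached data (yours will differ):

```python
# Invert the standard-curve line y = slope*x + intercept to recover concentration,
# then apply any dilution factor and convert to a percentage. All numbers hypothetical.

def conc_from_abs(absorbance: float, slope: float, intercept: float,
                  dilution: float = 1.0) -> float:
    """Concentration in the original sample, in the units of the standards."""
    return (absorbance - intercept) / slope * dilution

slope, intercept = 0.012, 0.05   # hypothetical curve: A = 0.012*[fucose, ug/mL] + 0.05
a_sample = 0.65                  # hypothetical sample absorbance, read at a 1:10 dilution
c = conc_from_abs(a_sample, slope, intercept, dilution=10)
print(round(c, 2))               # 500.0 ug/mL fucose in the original sample

# % fucose (w/w), if the sample was prepared at e.g. 10 mg/mL = 10000 ug/mL total solids
print(round(c / 10000 * 100, 2)) # 5.0 %
```

The percentage step requires knowing the total sample concentration (mg of sample per mL of solution); without it, only the ug/mL figure is obtainable.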
Has anyone read the Mol Ecol paper by Ferretti et al. 2013: Population genomics from pool sequencing? Specifically, has anyone tried to use their calculation of Fst from estimated nucleotide diversity?
For some reason, whatever I do, I keep getting an Fst of zero, even for very simple data with only two populations and very different nucleotide diversities. I have attached the equations, a description of the parameters, and my calculations to this question.
It could be that I have misread something important, or my order of operations in the summations is wrong.
For pooled data, being able to use nucleotide diversity to calculate Fst would be such a huge advantage to any study. Can anyone see what I have done wrong?
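As a sanity check, the generic diversity-based estimator Fst = (pi_total - pi_within) / pi_total (the quantity the pooled-sequencing corrections estimate, not Ferretti et al.'s exact formulas) can be tried on a toy example. One common way to get Fst = 0 by mistake is to compute pi_total as the mean of the within-population diversities instead of from the combined allele frequencies:

```python
# Generic diversity-based Fst on a toy single-SNP example with two equal-size
# populations. pi_total must come from the POOLED allele frequency; averaging
# the within-population pi values instead forces Fst to zero.

def fst_from_pi(pi_within: float, pi_total: float) -> float:
    return (pi_total - pi_within) / pi_total

p1, p2 = 0.1, 0.9                    # allele frequencies in the two populations
pi1 = 2 * p1 * (1 - p1)              # 0.18
pi2 = 2 * p2 * (1 - p2)              # 0.18
pi_within = (pi1 + pi2) / 2          # 0.18

p_bar = (p1 + p2) / 2                # pooled allele frequency, 0.5
pi_total = 2 * p_bar * (1 - p_bar)   # 0.5

print(round(fst_from_pi(pi_within, pi_total), 2))   # 0.64 -- strongly differentiated
# The wrong route: "total" as the mean of within-population diversities
print(fst_from_pi(pi_within, (pi1 + pi2) / 2))      # 0.0 for any p1, p2
```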
I need urgent help calculating the absolute values of the standard deviations of surface and volume calculations for cylindrical shapes. The tricky part is to calculate them according to the law of error propagation:
The 2 formulas are the following:
length of mantle: m = sqrt[(R - r)^2 + h^2]
mantle surface: M = (R + r) * Pi * m
measurement values: r = 0.89, R = 1.43, h = 27; uncertainties: Δr = 0.79, ΔR = 0.08, Δh = 0.5
Can somebody help me out with the exact formula for the standard deviation of the measures?
Thanks for help! Verena Hoelzer
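Under the usual Gaussian error-propagation law for uncorrelated inputs, Δf = sqrt(Σ (∂f/∂xᵢ · Δxᵢ)²), the two formulas above give (decimal commas read as points):

```python
# Gaussian error propagation for m = sqrt((R - r)^2 + h^2) and M = (R + r)*pi*m,
# using the measurement values quoted in the question.
import math

R, r, h = 1.43, 0.89, 27.0       # measured values
dR, dr, dh = 0.08, 0.79, 0.5     # their uncertainties

m = math.sqrt((R - r)**2 + h**2)
dm_dR = (R - r) / m              # partial derivatives of m
dm_dr = -(R - r) / m
dm_dh = h / m
dm = math.sqrt((dm_dR * dR)**2 + (dm_dr * dr)**2 + (dm_dh * dh)**2)

M = (R + r) * math.pi * m        # chain rule through m for the surface
dM_dR = math.pi * (m + (R + r) * dm_dR)
dM_dr = math.pi * (m + (R + r) * dm_dr)
dM_dh = math.pi * (R + r) * dm_dh
dM = math.sqrt((dM_dR * dR)**2 + (dM_dr * dr)**2 + (dM_dh * dh)**2)

print(f"m = {m:.3f} +/- {dm:.3f}")
print(f"M = {M:.2f} +/- {dM:.2f}")
```

With these numbers the uncertainty in M is dominated by the large Δr = 0.79 term, which is worth double-checking, since it is almost as large as r itself.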
I am working in food toxicology. I am about to get a set of results on the toxicity of various substances, in terms of ug per mL or ug per mg. With these data, can I derive a risk index for food substances? If so, how?
The golden ratio is widely used in mathematics and in the arts, and it is based on the Fibonacci numbers. Of course, we adopted this technique after the great mathematician, i.e. nature. Nature has used it everywhere: in the sunflower, in stem and branch arrangement in plants, in petals and, most importantly, in DNA. The DNA molecule's 21 Å width and 34 Å length per helical turn are two Fibonacci numbers obeying this rule, and the minor and major grooves of DNA show similar results. Taking account of the chromosomal arrangement of DNA, there should be a mathematical view of genome function: DNA replication and repair, gene expression and mutation, the breaking and joining of double- or single-stranded DNA, RNA and protein structure, etc.
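The numerical claim above is easy to verify: 21 and 34 are consecutive Fibonacci numbers, and ratios of consecutive Fibonacci numbers converge to the golden ratio phi:

```python
# 21 and 34 are consecutive Fibonacci numbers; their ratio is already close
# to the golden ratio phi = (1 + sqrt(5)) / 2, and later ratios converge to it.
import math

phi = (1 + math.sqrt(5)) / 2
print(round(34 / 21, 4))   # 1.619
print(round(phi, 4))       # 1.618

a, b = 1, 1                # walk the Fibonacci sequence further
for _ in range(20):
    a, b = b, a + b
print(round(b / a, 6), round(phi, 6))   # the two agree to 6 decimal places
```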
I am interested in developing a mathematical model of biopolymer-coated fertilizers using a multiscale modelling approach. I would be grateful if anybody could guide me further on this.
I have a dataset with 48 sites, 6 site covariates, and 5 sample covariates, modelled over 5 occasions. I think I can fit at least 20 parameters. Whenever I run a base model psi(.)p(.), I get estimates with SEs. However, when I increase the number of parameters, the estimates go haywire (infinity, etc.). Even running a correlated-detection base model, I get the same. Does anyone have a fix for this in PRESENCE, or is the data just bad?
I have a set of continuous data that presents a very strange distribution, with multiple symmetrical peaks to the left and right of a very sharp mean value.
Can someone suggest how to fit the Thomae distribution (see the attached paper) to my data, in order to test the hypothesis of a discrete sampling that is only apparently continuous?
'b' and 'd' are negative exponents; 'a', 'c', and 'e' are positive constants. I have been working with numerical solutions, but I am most interested in any equation that could at least approximate the solution for 'x'.