Science topic

# Statistical Inference - Science topic

Explore the latest questions and answers in Statistical Inference, and find Statistical Inference experts.
Questions related to Statistical Inference
Question
I am working with a distribution in which the support of x depends on the scale parameter. When I obtain the Fisher information of the MLE, it exists and takes a constant value. So, to find the asymptotic variance of the parameter, can I take the inverse of the Fisher information matrix even though the Cramér-Rao regularity conditions are violated, and will the asymptotic normality of the MLE still hold?
Please suggest how I can proceed to find a confidence interval for the parameter.
Interesting
Question
Dear Colleagues ... Greetings
I would like to compare some robust regression methods based on the bootstrap technique. The comparison will use Monte Carlo simulation (the regression coefficients are known), so I wonder how I can bootstrap the coefficient of determination and the MSE.
Huda
Here you can follow:
1- Draw a bootstrap sample (size m).
2- Estimate the coefficients you need from this sample using the robust method.
3- Find the MSE of the coefficients.
4- Repeat the above bootstrap process B times; you can then estimate the bootstrap MSE of the coefficients by taking the average.
5- This loop can be repeated N times in the Monte Carlo simulation.
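A minimal Python sketch of these steps (the data, the use of ordinary least squares as a stand-in for the robust method, and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_coefficients(X, y):
    # Stand-in for the robust fit (e.g. a Huber-type estimator);
    # ordinary least squares keeps the sketch self-contained.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_mse(X, y, true_beta, B=200):
    """Steps 1-4: bootstrap estimate of the MSE of the coefficients."""
    n = len(y)
    sq_errors = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)           # 1) draw a bootstrap sample
        beta_b = fit_coefficients(X[idx], y[idx])  # 2) estimate the coefficients
        sq_errors[b] = np.mean((beta_b - true_beta) ** 2)  # 3) MSE vs known beta
    return sq_errors.mean()                        # 4) average over B replicates

# Step 5: repeat the whole bootstrap N times in the Monte Carlo loop.
true_beta = np.array([1.0, 2.0])
mses = []
for _ in range(20):
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    y = X @ true_beta + rng.normal(size=100)
    mses.append(bootstrap_mse(X, y, true_beta))
print(round(float(np.mean(mses)), 4))
```

Because the coefficients are known in the simulation, each bootstrap replicate's error can be measured against the true values directly.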
Question
Nowadays there is an increasing debate among researchers around the globe regarding significant p-values in statistical testing. Many conclude that instead of the p-value we can use point and interval estimates (OR, RR & CI) to judge whether a result is significant or not.
I recently read an article in Nature which concludes that scientific inference is more important than statistical inference. What, then, is the importance of statistical testing in research if it cannot tell us what the reality is? Is it just circling around old, established scientific relationships? Is this a failure of statistics? What would be the way forward for researchers in choosing the proper statistical test?
The link to the article I read in Nature is here
@ Gang (John) Xie ,
I don't agree with the opinion that the Null Hypothesis Significance Test (NHST) is a never-justified practice. Hypothesis testing is based on scientific logic/philosophy and hence is a justified practice.
It may be acceptable that statistical inference plays a limited role in the scientific enterprise. However, this role is too significant to ignore when drawing conclusions and/or inferences and/or interpreting results in the analysis of observations/data obtained from experiments or surveys.
It is to be noted that the role of every branch of science is limited within some domain.
Question
What are currently the most frequently used measures of frequency, statistical inferences and/or health indicators in studies on outpatient production / health workforce?
For at least one test administration, almost all papers contained some kind of reliability study. The overwhelming favorite among coefficient types was coefficient alpha.
Question
I measured call rates for manatee vocalizations and have a range of 0-11 calls/min. The data set over all observations is not normally distributed; however, I want to report call rates for individual group sizes (i.e., call rates for one manatee, groups of two, etc.). Is it more appropriate to use the median and interquartile range to describe each set, or would the mean and standard deviation be acceptable? I do not have a large standard deviation when looking at individual groups, e.g., the call rate for groups of 6 animals was 6.28 calls/min with a standard deviation of 4.21.
Question
In general terms, Bayesian estimation provides better results than MLE. Is there any situation where maximum likelihood estimation (MLE) gives better results than Bayesian estimation?
I think the answer may vary depending on what you consider better results. In your case, I will assume that you are referring to better results in terms of smaller bias and mean squared error. As stated above, if you have poor knowledge and assume a prior that is very far from the true value, the MLE may return better results. In terms of bias, if you work hard you can remove the bias of the MLE using formal rules, and you will get better results in terms of bias and MSE. In terms of point estimation, the MLE can be seen as the MAP when you assume a uniform prior distribution.
On the other hand, the question is much more profound in terms of treating your parameter as a random variable and including uncertainty in your inference. This kind of approach may assist you during the construction of the model, especially if you have a complex structure, for instance, hierarchical models (with many levels) are handled much easier under the Bayesian approach.
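As a small numeric illustration of the MLE-as-MAP remark, assuming a binomial sample with a uniform Beta(1, 1) prior (the counts are made up for the example):

```python
# Binomial example: k successes observed in n Bernoulli trials.
n, k = 50, 18

# MLE of the success probability.
p_mle = k / n

# With a uniform Beta(1, 1) prior, the posterior is Beta(k + 1, n - k + 1);
# its mode (the MAP estimate) is (k + 1 - 1) / (n + 2 - 2) = k / n.
a_post, b_post = k + 1, n - k + 1
p_map = (a_post - 1) / (a_post + b_post - 2)

print(p_mle, p_map)  # both equal 0.36
```

With any non-uniform prior the posterior mode shifts away from k/n, which is where the two approaches start to differ.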
Question
Dear Colleagues,
I'd be grateful if you could provide the solution or give an idea of how to solve the integration.
Huda
Mx(0) is solvable by applying a direct elementary substitution.
Similarly, you can differentiate the integral up to any order and then evaluate it at t = 0.
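As an illustration of this moment route (with an assumed exponential integrand, since the original integral is not shown), the MGF evaluates by direct substitution, and differentiating and setting t = 0 yields the moments:

```latex
M_X(t) = \int_0^\infty e^{tx}\,\lambda e^{-\lambda x}\,dx
       = \frac{\lambda}{\lambda - t}, \quad t < \lambda,
\qquad
M_X(0) = 1,\quad
M_X'(0) = \frac{1}{\lambda} = \mathbb{E}[X],\quad
M_X''(0) = \frac{2}{\lambda^2} = \mathbb{E}[X^2].
```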
Question
Hi!
I have a panel dataset of N productive units over T periods of time. I estimate a model by maximum likelihood with clustered standard errors, given the non-independent nature of the sample, which is the normal case in panel datasets.
Next, I want to test for the best specification among several nested models. Initially I was using the likelihood-ratio (LR) test in Stata with the "force" option. I came to realize that was wrong, because one of the assumptions of the test is that the observations are independent, which is not my case.
My question is: is there another test for comparing several nested models? Is there any modification of the LR test for the clustered-errors case?
Many thanks in advance to anyone who can shed light in this regard.
Question
Dear community,
I am struggling with statistics for price comparison.
I would like to check if the mean market price of a given product A differs over two consecutive time periods, namely from December till January and from February till March.
My H_0 would be that means are equal and H_1 that from Feb till March the mean is lower.
For this I have all necessary data as time series, sampled at the same frequency.
I thought of using the paired t-test, yet the price distribution is not normal (extremely low p-value on the Shapiro-Wilk test).
I also suspect that the two samples cannot be treated as independent, as my intuition is that the price in February depends on the price in January.
Do you know of any test that would fit here, given the nature of the problem?
Two-sample t-Test.
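A minimal sketch of the suggested two-sample comparison (Welch's form, which drops the equal-variance assumption; the price data here are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily prices for the two periods (placeholder data).
dec_jan = rng.gamma(shape=4.0, scale=25.0, size=60)
feb_mar = rng.gamma(shape=4.0, scale=22.0, size=59)

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# H1 here is one-sided (Feb-Mar mean lower), so a large positive t
# speaks against the hypothesis of equal means.
t = welch_t(dec_jan, feb_mar)
print(round(float(t), 3))
```

Note that this sketch still assumes independent samples; if the serial dependence the questioner suspects is strong, the nominal p-value for this statistic will be unreliable.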
Question
Dear Colleagues
Could you please help me by explaining how to generate samples from the bivariate exponential distribution (Downton's bivariate exponential distribution) in detail? I'd be grateful if you could provide related references and sources. Thanks in advance.
I wish you all the best
Dear Huda,
In the R statistical software, you can generate n random numbers from an exponential distribution with the function rexp(n, rate), where rate is the reciprocal of the mean of the generated numbers.
I'm not sure what you mean by 'binary exponential distribution', but if you need only 0s and 1s derived from an exponential distribution, you can choose one of the following rescalings (depending on the problem you want to solve):
1) ]0;1[ -> 0; [1;Inf[ ->1
2) ]0;1[ -> 0; [1;2[ ->1; [2;Inf[ -> NA
3) ]0;0.5[ -> 0; [0.5;Inf[ ->1
4) ]0;0.5[ -> 0; [0.5;1.5[ ->1; [1.5;Inf[ -> NA
HTH,
Ákos
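A Python analogue of the R advice above (the rate value is arbitrary, and rescaling 1) is used for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Analogue of R's rexp(n, rate): the mean of the draws is 1/rate.
rate = 2.0
x = rng.exponential(scale=1.0 / rate, size=10_000)

# Rescaling 1) from above: values in ]0;1[ map to 0, values in [1;Inf[ to 1.
binary = (x >= 1.0).astype(int)

print(round(float(x.mean()), 2))  # close to 1/rate = 0.5
```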
Question
Are there reasons to use a biased estimator for a parameter when there is an unbiased estimator for it?
An estimator should basically satisfy four properties/criteria: consistency, unbiasedness, efficiency, and sufficiency. Other properties include completeness, compactness, etc.
However, consistency is an essential property, while unbiasedness is a desirable one.
Consistency cannot be sacrificed. Unbiasedness is also not usually sacrificed; it may be given up only in unavoidable situations, and even then consistency must remain intact.
Thus, it is scientifically not correct to use a biased estimator for a parameter when an unbiased estimator for it exists.
Only in unavoidable situations is it suggested to use a biased estimator (which must be consistent) for a parameter, when there is no unbiased estimator for it.
Question
As I recall, I saw an estimator which combined a ratio estimator and a product estimator using one independent variable, x, where the same x was used in each part.  Here is my concern: A ratio estimator is based on a positive correlation of x with y.  A product estimator is based on a negative correlation of x with y.  (See page 186 in Cochran, W.G(1977), Sampling Techniques, 3rd ed., John Wiley & Sons.)  So how can the same x variable be both positively and negatively correlated with y at the same time?
Can anyone explain how this is supposed to work?
I was referring to only one independent variable, when there are no others.
Question
In statistics, there are formulas for the sample size needed to estimate population parameters (mean, proportion, ...) with given confidence intervals (see the link). But if we add bootstrapping to this process, is there an exact formula? I notice there are books and discussions saying that the sample size cannot be too small, but I wonder whether there is any direct formula to calculate the sample size.
This may be useful:
Computer-Intensive Statistics, c. 2009-11, B. D. Ripley.
See the section on how many bootstrap resamples; crucially, it will depend on what you are trying to estimate, e.g., you need more for a standard error than for a mean.
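A quick sketch of Ripley's point that the required number of resamples depends on the target quantity: here the bootstrap standard error of a mean settles as B grows (the data and B values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(size=200)   # a skewed placeholder sample

def boot_se(sample, B):
    """Bootstrap standard error of the sample mean with B resamples."""
    n = len(sample)
    means = np.array([sample[rng.integers(0, n, size=n)].mean()
                      for _ in range(B)])
    return means.std(ddof=1)

# The estimate settles as B grows; quantities further out in the tails
# (a standard error vs. a mean, or percentile endpoints) need larger B.
for B in (50, 200, 1000):
    print(B, round(float(boot_se(data, B)), 4))
```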
Question
At the US Energy Information Administration (EIA), for various establishment surveys, Official Statistics have been generated using model-based ratio estimation, particularly the model-based classical ratio estimator.  Other uses of ratios have been considered at the EIA and elsewhere as well.  Please see
At the bottom of page 19 there it says "... on page 104 of Brewer(2002) [Ken Brewer's book on combining design-based and model-based inferences, published under Arnold], he states that 'The classical ratio estimator … is a very simple case of a cosmetically calibrated estimator.'"
Here I would like to hear of any and all uses made of design-based or model-based ratio or regression estimation, including calibration, for any sample surveys, but especially establishment surveys used for official statistics.
Examples of the use of design-based methods, model-based methods, and model-assisted design-based methods are all invited. (How much actual use is the GREG getting, for example?)  This is just to see what applications are being made.  It may be a good repository of such information for future reference.
Thank you.  -  Cheers.
So, to start it off, here is a presentation which covers some implementations of model-based ratio estimation, generally model-based classical ratio estimation, for some energy establishment surveys:
But there are many other estimators to consider, even just among ratio estimators.  Consider the following:

......
......
So, examples please.  Which methods are being used where to do what?
Thank you.
Question
I would like to find the probability distribution of log[U/(1-U)] when U ~ U(0,1). How can I derive this?
I do not quite agree with the answer of Wulf Rehder. He has computed only the cumulative distribution function (CDF). The question asks for the probability density function (PDF), so we need to differentiate the CDF.
In the first case, PDF = 1/(1+x)^2 = (1-u)^2
In the second case, PDF = e^x/(1+e^x)^2 = u(1-u)
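The density e^x/(1+e^x)^2 derived in the second case is that of the standard logistic distribution; a quick simulation check of the corresponding CDF (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
x = np.log(u / (1 - u))                # the transform in question

# The derived CDF is e^x / (1 + e^x); compare against the empirical CDF.
grid = np.linspace(-3.0, 3.0, 13)
ecdf = np.array([(x <= g).mean() for g in grid])
cdf = np.exp(grid) / (1 + np.exp(grid))
print(round(float(np.abs(ecdf - cdf).max()), 4))  # small: close agreement
```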
Question
I have seen several references to "impure heteroscedasticity" online as heteroscedasticity caused by omitted variable bias.  However, I once saw an Internet reference, as I recall, which reminds me of a phenomenon where data that should be modeled separately are modeled together, causing an appearance of increased heteroscedasticity.  I think there was a youtube video.  That seems like another example of "impure" heteroscedasticity to me. Think of a simple linear regression, say with zero intercept, where the slope, b, for one group/subpopulation/population is slightly larger than another, but those two populations are erroneously modeled together, with a compromise b. The increase in variance of y for larger cases of x would be at least partially due to this modeling problem.  (I'm not sure that "model specification error" covers this case where one model is used instead of the two - or more - models needed.)
I have not found that reference online again.  Has anyone seen it?
I am interested in any reference to heteroscedasticity mimicry.  I'd like to include such a reference in the background/introduction to a paper on analysis of heteroscedasticity which, in contrast, is only from the error structure for an appropriate model, with attention to unequal 'size' members of a population.  This would then delineate what my paper is about, in contrast to 'heteroscedasticity' caused by other factors.
Thank you.
Multilevel random coefficient models are particularly suitable when between-group variances are heteroscedastic like in the example you mention. There are plenty of good references on these models in the statistical literature.
Question
I know there are different tests like Cronbach's alpha, the Kuder-Richardson formulas, etc., but I am not sure about choosing the right test. The objectives of my research are to evaluate current techniques used in investment decisions (capital budgeting) in Pakistan, and also to evaluate the impact of different factors (i.e., age of the CFO/CEO, industry type, education of the CFO, experience of the CFO, etc.) on the selection of a particular capital budgeting technique.
I recommend using factor analysis and confirmatory factor analysis. Using SEM analysis you can check the construct reliability and validity (convergent and discriminant) of your questionnaire.
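Since Cronbach's alpha was mentioned in the question, here is a minimal sketch of computing it directly (the Likert-style data are simulated placeholders):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical Likert-style responses: 5 items driven by one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
scores = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(100, 5))), 1, 5)
print(round(float(cronbach_alpha(scores)), 2))
```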
Question
Hi!
I am preparing a review on the subject of Design of Experiment.
The standard literature [1,2] describes one-factor-at-a-time methods and factorial experiments without any mathematical proof of them.
I understand the reasoning behind these methods, but I would like a citation to refer to for the mathematical proof.
Any help will be highly appreciated
 Box, George EP, J. Stuart Hunter, and William Gordon Hunter. Statistics for experimenters: design, innovation, and discovery. Vol. 2. New York: Wiley-Interscience, 2005.
 Jobson, John. Applied multivariate data analysis: volume I and II: Springer Science & Business Media, 2012.
While waiting for a paper copy of Design of Experiments by Montgomery, I reviewed the online version (books.google.com).
I could not locate a mathematical proof of the importance of interaction. I understand it, but what is missing is a paper that proves it.
Question
The Bayesian approach offers enormous advantages with respect to the classical frequentist method when there is genuine a priori information on the parameters of the model to be estimated. For the problem at hand, is there a priori information?
In my experience the added complication of the Bayesian approach vs. tractable solutions is most beneficial and justified when the model contains nuisance parameters (not of primary interest and which may be difficult to get data for). In this case the Bayesian approach propagates uncertainty due to lack of information in these parameters into the posterior.
Question
My opinion is that it is impossible based on a finite random sample. If anybody disagrees, would you kindly propose an estimator for the case of the family of distributions given in the attached PDF file? I am interested in both things: a) the construction of such an estimator (if possible); b) why it is impossible to construct such an estimator (if impossible).
Thank you for consideration!
Question
My DV is measured two times per observations, once pre- and once post-treatment (interval scale). The IV has multiple levels and is categorical.
I am interested in how the different levels of the IV differ in their effect on changes from pre- to post-treatment.
First possibility:
An OLS regression of IV with change scores (post minus pre measurements) as DV.
Second possibility:
An OLS regression of IV and pre-treatment measurement of the DV with post-treatment measurement as the DV.
Third possibility:
An OLS regression of IV interacted with pre-treatment measurement of the DV with post-treatment measurement as the DV
What are the up- and downsides of these three approaches? Do they only differ with respect to the hypotheses they test, or are there technical/statistical cons to some of these approaches? Especially concerning the second possibility, I came across criticisms because of endogeneity, i.e., correlation of unobserved effects in the error term with the first-round measurement of the DV used as an IV. Also, the second and third possibilities, in my specific case, have very high R-squared due to the inclusion of the pre-treatment measurement, compared to the first possibility. I feel that this comes at some cost, though.
Hi,
Firstly, let us focus on the first choice and the second choice.
In the first choice, you actually model the change in score (y2 - y1, where y2 is the post-treatment measurement and y1 is the pre-treatment measurement) with respect to a covariate (say x, where x is a categorical variable).
So the question this model can answer (after testing all the assumptions of a regression model and finding them satisfied) is what the relation is between x and the change in score. You cannot capture the relation, if any, between the pre-treatment and post-treatment measurements.
The second option has the advantage that you can see how the pre-treatment measurement affects the estimate of the post-treatment measurement. I am not sure which field you are working in, but in many fields the baseline (the starting point) is very important for predicting a later measurement.
So personally, I favor the second option over the first one.
Now, comparing the second and the third options: it is good to start with a model that includes an interaction. In this case there are only two covariates (pre-measurement = y1, and x), so it is quite straightforward to fit the model with the interaction and compare it with the model without, to see which one better answers the question at hand.
I hope it helps.
Thao.
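The three specifications can be put side by side in a minimal sketch (simulated data and plain least squares; all names and effect sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
group = rng.integers(0, 3, size=n)             # categorical IV, 3 levels
pre = 50 + 10 * rng.normal(size=n)             # pre-treatment measurement
effect = np.array([0.0, 2.0, 5.0])[group]      # assumed level effects
post = 0.8 * pre + effect + rng.normal(size=n)

one = np.ones((n, 1))
d = np.column_stack([(group == g).astype(float) for g in (1, 2)])  # dummies

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b1 = ols(np.hstack([one, d]), post - pre)                 # 1) change score on IV
b2 = ols(np.hstack([one, d, pre[:, None]]), post)         # 2) post on IV + pre
b3 = ols(np.hstack([one, d, pre[:, None], d * pre[:, None]]), post)  # 3) interaction
print(b2.round(2))  # recovers the pre slope (~0.8) and the level effects
```

In this simulated setup the second model recovers both the baseline slope and the group effects, which is the behaviour the answer above describes.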
Question
Hello,
So, I'm using gene ontology enrichment analysis on a list of proteins, specifically for cellular components.
The tool is available on this site: http://geneontology.org/
Now, my question is: if I want to compare the different sets, is it better to look at the p-values or at the fold enrichment? Bear in mind that all of these sets have passed the "significance" threshold.
I also noticed that the bigger the set, the more significant the result, I suppose because you have a higher "n". So perhaps looking at fold enrichment is more interesting? I'm not sure.
Thank you.
Hello Matteo,
Thank you for your answer. It was very informative. I will try using the revigo tool!
I will try to explain in more detail what I'm doing, because I think you could then offer better help.
The main thing I did was use a prediction framework for virus-host protein interactions.
This framework gave me a list of host proteins predicted to interact with my viral protein.
Since this is an in-silico study, I'm more interested in checking for enrichment of certain sets than in checking for interactions with individual proteins. In more biological terms, I just want to get an idea of which cellular components my protein will interact with.
Also, I have another question, since we are going into details.
Currently, the enrichment data show that my protein interacts with everything that is considered cytoskeletal: I have 5-8-fold enrichment in sets that contain such related genes.
What is usually the cut-off for enrichment? I have a set with an enrichment of 8 and a set with an enrichment of 2. The set with an enrichment of 2 has a lower p-value than the set with an enrichment of 8, although both are significant. In that case, which value should I look at? (I will also check the corrected p-values.)
In addition, I also have enrichment in the gene set for nuclear proteins. My viral protein is not present in the nucleus. Could that be problematic for my analysis?
The protein interaction prediction could surely still be valid, but my viral protein doesn't enter the nucleus to allow that interaction to occur.
I will check your question, and try to help.
Thank you Matteo!
Question
"Survey" is a very broad term, having widely different meanings to a variety of people, and applies well where many may not fully realize, or perhaps even consider, that their scientific data may constitute a survey, so please interpret this question broadly across disciplines.
It is to the rigorous, scientific principles of survey/mathematical statistics that this particular question is addressed, especially in the use of continuous data.  Applications include official statistics, such as energy industry data, soil science, forestry, mining, and related uses in agriculture, econometrics, biostatistics, etc.
Good references would include
Cochran, W.G(1977), Sampling Techniques, 3rd ed., John Wiley & Sons.
Lohr, S.L(2010), Sampling: Design and Analysis, 2nd ed., Brooks/Cole.
and
Särndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag.
For any scientific data collection, one should consider the overall impact of all types of errors when determining the best methods for sampling and estimation of aggregate measures and measures of their uncertainty.  Some historical considerations are given in Ken Brewer's Waksberg Award article:
Brewer, K.R.W. (2014), “Three controversies in the history of survey sampling,” Survey Methodology,
(December 2013/January 2014), Vol 39, No 2, pp. 249-262. Statistics Canada, Catalogue No. 12-001-X.

In practice, however, it seems that often only certain aspects are emphasized, and others virtually ignored.  A common example is that variance may be considered, but bias not-so-much.  Or even more common, sampling error may be measured with great attention, but nonsampling error such as measurement and frame errors may be given short shrift.  Measures for one concept of accuracy may incidentally capture partial information on another, but a balanced thought of the cumulative impact of all areas on the uncertainty of any aggregate measure, such as a total, and overall measure of that uncertainty, may not get enough attention/thought.

What are your thoughts on promoting a balanced attention to TSE?
Thank you.
Promote increased replication. Every experiment is really two experiments. The first is the planned experiment that is conducted. We hope it has sufficient replication. However, I could go back and take another sample from the population that you sampled and I would get a different answer (at least quantitatively). So the experiment is one replicate of all possible experiments that could have happened. If you increased replication there would be room to do other types of analysis that could start to address this concern.
The place to start this would be an introductory statistics course so that more people consider experimenting on broader issues. However, this might be a problem for people that struggle with statistics.
I got to the second controversy in the Brewer article. Point 1 gave me trouble: "Because randomization had not been used, the investigators had not been able to invoke the Central Limit Theorem. Consequently they had been unable to use the normality of the estimates." I think this statement is nonsense. The distribution of the individual sample will converge to the underlying distribution of the population being sampled. If I sample selectively, then the techniques used to choose my sample points also define my population, so random sampling doesn't seem to enter into this.

On the other hand, my experiment is one of many possible similar experiments unless the technique for sampling is somehow deterministic. This is not allowed: I will sample the first person named Sally who walks through the hospital door at 8:00 on Tuesday, and I know that the department head always arrives at 8:00 on Tuesday and her name is Sally (yes, she could have a cold on my sampling day, or get stuck in traffic, thereby resulting in a small probability that I would get someone else). However, as long as there is some random process that is sampled, the distribution of all possible experiments that I could do will be Gaussian by the CLT. Randomization, in terms of sampling design, doesn't enter into this.

If I have an available population of 5000 people, assign each of them a number, have a random number generator spit out 50 numbers, and use those people, the CLT will apply to my sample. If I select every 100th person, the CLT will still apply, unless someone is making sure that all the people go through the door in exactly the same order every time I run the experiment, no matter which door I use. Randomization may help ensure that the sample is not deterministic, but it is not a requirement for the CLT to function.
Question
I'm looking at the National Diet and Nutrition Survey data. I'm interested in studying the association between some variables. My question is: although the survey is a two-stage cluster survey, can I use the unweighted data, as I'm not interested in the population parameters as much as in the relations between variables? Must I use complex survey analysis, or can I use normal analysis procedures? Again, I'm just interested in the relation between some variables. The data set has several weighting variables to make results representative of the whole UK population.
Dear Ahmed M. Kamel,
Thank you
Question
I am grading exams in an introductory cognition course, and I am again annoyed by answers that I consider simplified to the point of being misleading, or just plain misleading, and then finding that it is mostly a fair interpretation of the book.
Bottom-up versus top-down processing is presented as an important debate with the supposedly radical solution that both might happen.  Students come away with a notion that it is possible to have pure top-down processing, and it is an exceptional student who notices that this would be hallucination entirely detached from reality.
Categorical perception is presented as evidence for speech perception being special, with a two sentence disclaimer that most students miss.  It is not presented as explainable by Bayesian cue combination involving remembered prototypes, but then the concept of perception as statistical inference doesn't come up anywhere in the book.  The next piece of evidence for speech perception being special is the McGurk effect, but there is no attempt to explain that as top-down input from multisensory cue combination feeding back on categorical perception.
Heuristics in reasoning are presented as simply a fact of life.  Concepts such a computational and opportunity costs don't get a look in.
The general approach is to present a lot of mostly disconnected facts, heavy on the history of discovery in the field. Last time I checked, there was not a lot of difference between introductory textbooks. They occasionally selected different examples, or covered topics to different degrees, but I haven't found anything that abandons the history of science for the science itself, and that tries to present a more coherent and integrated view. Does anyone know of a book that does?
Andy Clark's Mindware (OUP, 2001), while billed as an introduction to the 'Philosophy of Cognitive Science,' is worth a look. It isn't bogged down in history (although he's certainly aware of it, and mentions it when it helps explain things), and favours trying to give a sense of current (as of 2001) theoretical issues and research trends.
Question
After reading papers on power analysis (e.g. Colegrave & Ruxton 2003 in Behavioral Ecology; Goodman & Berlin 1994 in Annals of Internal Medicine) I got the following impressions:
a) Ex-ante power analysis is informative as long as the experiment is not yet conducted. After having conducted the experiment and after having done the statistical analysis, ex-ante power analysis is not more informative than p-values.
b) Reporting confidence intervals (e.g., for estimated regression coefficients) in the case of insignificant coefficients provides information about the reliability of the results. By that, I mean that the confidence intervals can inform about what many researchers want to know when results are not significant (i.e., p-values are higher than some magic threshold): how reliable, considering the Type II error, the results are. By 'reliable' I mean that some treatment effects are within the confidence interval and are therefore likely to represent the true treatment effect.
Are both my impressions correct? Specifically, is it valid to say something like "We conducted a power analysis prior to the experiment. We report confidence intervals on our estimated regression coefficients in order for the reader to assess how 'strong' our support for the null hypothesis is (instead of ex-post power analysis, which was demanded by some scientists)"
a) is correct, b) is not.
That is just the key issue of frequentist statistics: an individual p-value or an individual confidence interval (CI) is meaningless. It is only a benchmark used to control error rates in (hypothetical and infinite) repetitions of "similar" experiments. P-values and CIs are NOT interpreted with respect to a single outcome. They do not show you how reliable a particular result is (how well the result represents a "true" effect). The reliability is introduced as part of the strategy (not as part of the results), that is, when you define the alternative hypotheses and the desired error rates. The formalism of the data analysis will lead you to accept one of the two alternatives, and the confidence associated with your decision is given not by the actual outcome but by the error rates you have chosen and for which you designed the experiment.
In your proposed statement you mix incompatible concepts. With power analysis and CIs you are in the realm of Neyman/Pearson hypothesis tests. There is no "null hypothesis", only two distinctly different alternative hypotheses (usually one of them indicates a "zero effect" and is often confused with the null hypothesis, and the other is a specific non-zero value [and not just "anything else but zero"!]). The alternatives are chosen in relation to the problem, and it would not make sense to allow a reader to interpret the data differently (because the data were generated in an experiment that was specifically designed to test these two alternatives, with the given model and with the given confidence). Simply choosing a different confidence (or error rates) after the experiment would render the experimental design useless; it would be an out-of-context interpretation of the data. If an interpretation of the results in a different context is wanted, the entire power-and-hypothesis-testing machinery is quite meaningless.
In contrast, Fisher's significance testing does not know the concept of error rates, confidence intervals or power. Here, the compliance of some observed data with a given model is analyzed. The model should be rich enough to give a good description of the data, but for the test it is restricted in a parameter, and this restriction is called the null hypothesis. A miserable compliance (measured as a "p-value") under this restriction is used as an indication that the restriction of the model (i.e. the null hypothesis) is to be rejected. A good compliance may not simply be interpreted as support for the restriction, because both the model and the restriction are theoretical concepts that will be logically false anyway. The aim is only to demonstrate that the available data is already sufficient to show this.
Neither of these two philosophies, the Neymanian hypothesis testing and the Fisherian significance testing, addresses the question that is asked by the researcher (and this is what is so terribly confusing!). What we actually want is a statement about how much the data makes us believe in a particular model (more specifically: in a particular value of a parameter or effect). This is a Bayesian approach. So the price we would have to pay to get answers to our questions is to accept a subjective view of probability. This leads us to two possible solutions: either we present "objective" conclusions, but then these conclusions are about questions we did not ask - or we answer the questions we actually have, which is possible only on a "subjective" basis.
But to come back to your statement: you can report confidence intervals, but they can only be interpreted as a "statistical signal-to-noise ratio" (just like p-values, which are the same thing, viewed from a different angle). There is per se no information about the reliability of these particular results or conclusions. It is simply not there, no matter how much we wish it were.
Question
Cochran (1953, 1963, 1977), Sampling Techniques, is generally considered a classic; it is very well and compactly written, and very useful, but it is from an era when probability-of-selection/design-based methods were used virtually exclusively.  However, even so, Cochran did use modeling in his book (my copy is the 3rd ed., 1977) to explain variance under cluster sampling in section 9.5, and to compare some estimation techniques for population totals, when one uses clusters of unequal size, in section 9A.5.  He used "g" as a coefficient of heteroscedasticity.  He also showed, in section 6.7, when the classical ratio estimator is a best linear unbiased estimator (BLUE), giving the model in equation 6.19 and saying when this is "...hard to beat."    --   Ken Brewer noted that if you count the use of the word "model" or "models" in Cochran's Sampling Techniques, first edition, 1953, there are substantially more instances in the second half (22) than in the first (1).   In section 3.3, "The appearance of relevant textbooks," of his Waksberg Award article linked in the references below, Ken Brewer says that "... I had the distinct impression that the more [Cochran] wrote, the more he was at ease in using population models."
Cochran, W.G. (1953), Sampling Techniques, 1st ed., Oxford, England: John Wiley
Cochran, W.G. (1977), Sampling Techniques,  3rd ed., John Wiley & Sons.
Brewer, K.R.W. (2014), “Three controversies in the history of survey sampling,” Survey Methodology,
(December 2013/January 2014), Vol 39, No 2, pp. 249-262. Statistics Canada, Catalogue No. 12-001-X.
..........
What other survey sampling text of that period made use of models?
Thank you Prof. Subramani.  I appreciate your contribution.  This is very helpful.
- (no Dr., Prof.) Jim Knaub
Question
I have one bulk soil sample which I will extract by two different methods. Both methods have around 6-7 steps before the purified protein is ready for mass spectrometry identification of the different proteins extracted by each method. The bulk sample is split in two, A and B. Sample A is subjected to a boiling treatment and some sonication, and is then split into 5 parts for overnight protein precipitation. The five subsamples are cleaned and precipitated with methanol etc. Three of these samples are separated on an SDS-PAGE gel. Each lane is in-gel digested and run through a mass spectrometer.
The second sample, B, is treated in a different manner for two processes and is then split into 8-9 parts for the overnight protein precipitation, using a different chemistry. The precipitated protein is then centrifuged the next day, cleaned with solvent and prepared for mass spectrometry. Three different samples from B are then run on an SDS-PAGE gel, in-gel digested and run on a mass spectrometer, giving MS and MS/MS profiles.
I want to compare the groups of proteins I have extracted by both extraction methods A and B and make inferences from the differences - perhaps to say that method A extracts proteins which belong more to membrane-type protein families than method B does. To do this I think I need triplicate observations so that I can make some statistical inferences. Can I call these triplicate observations biological replicates rather than technical replicates?
So you start from a sample, which is then subdivided in two: A goes through one purification path, gets split again and the split samples are analyzed; B goes through a different purification path, gets split again and analyzed.
Now a technical replicate is, in this case, simply the sample split after the two different procedures, so you have 3 technical replicates of A and 3 technical replicates of B. Generally a technical replicate is defined as a sample that, at the end of a given procedure, gets split before the actual analysis to account for internal variability (like measurement errors), whereas a biological replicate is a different sample from the same origin, sampled in the same (or as close as possible) condition, and this accounts for biological variability.
So I don't think you can call the three split samples from each procedure biological replicates.
Question
This question relates to one I asked previously (https://www.researchgate.net/post/How_to_deal_with_large_numbers_of_replicate_data_points_per_subject). I have performed a generalized estimating equations model (binary logistic regression) with SPSS on categorical repeated measures data. As factors my model has condition (3 levels) and cell diameter (3 levels, cells grouped by diameter), plus their interaction. In my results, the main effect of diameter and the interaction term are significant, and looking at pairwise comparisons I can see exactly which groups differ from which as the effect only concerns some pairs, which is the expected result.
My question concerns reporting: I would like to use odds ratio in addition to p-values since it gives a better idea of which differences are really notable, but I can’t find a way to get the odds ratio for only some specific pairwise comparisons. The parameter estimates includes B for each main effect and interaction, which when exponentiated gives the odds ratio, but I’m not sure if it’s possible and how to extrapolate the odds ratio for specific pairs (since each of my factors has 3 levels)? Or can it be calculated using any of the information in the pairwise comparisons EM means output (it provides mean difference and standard error for each contrast, but since I have categorical data these seem to be inapplicable anyway)?
Thank you very much for any help!
Ah, okay.  Given that you want pair-wise contrasts of the different phenotypes within each size, the problem is much easier than I first thought, and you don't need to use my macro.  You need to change COMPARE=Phenotype*Size to COMPARE=Phenotype on your EMMEANS sub-command, and you need to change SCALE=ORIGINAL to SCALE=TRANSFORMED.  Then you need to exponentiate the mean pair-wise differences (and their CIs) to show the results as odds ratios.  (You are currently seeing differences in log-odds.)
Re SCALE=TRANSFORMED, note the following from the Command Syntax Reference manual entry for GENLIN > EMMEANS:
• TRANSFORMED. Estimated marginal means are based on the link function transformation. Estimated marginal means are computed for the linear predictor.
If you use TRANSFORMED, you are using the logit link function. Thus, the estimated marginal means will show the log-odds, and the pair-wise contrasts will show differences in log-odds.  When you exponentiate those differences, you get odds ratios.
You can use OMS to write the table of pair-wise contrasts to a new dataset, and then use a few simple COMPUTE commands to do all of this.  Your syntax will look something like the following.
****** START OF SYNTAX *****.
* Use OMS to write the table of pair-wise contrasts from
* the EMMEANS sub-command to a new dataset called 'pwcon'.
DATASET DECLARE pwcon.
OMS
/SELECT TABLES
/IF COMMANDS=['Generalized Linear Models'] SUBTYPES=['Pairwise Comparisons']
/DESTINATION FORMAT=SAV NUMBERED=TableNumber_
OUTFILE='pwcon' VIEWER=YES
/TAG = "pwcon".
* Use EMMEANS to get pair-wise contrasts between different phenotypes
* within the same cell size group (Size).
*******************************************************************.
* NOTICE that I changed COMPARE=Phenotype*Size to COMPARE=Phenotype.
*******************************************************************.
GENLIN UCM (REFERENCE=LAST) BY Phenotype Size (ORDER=ASCENDING)
/MODEL Phenotype Size Phenotype*Size INTERCEPT=YES
/CRITERIA METHOD=FISHER(1) SCALE=1 MAXITERATIONS=100 MAXSTEPHALVING=5 PCONVERGE=1E-006(ABSOLUTE)
SINGULAR=1E-012 ANALYSISTYPE=3(WALD) CILEVEL=95 LIKELIHOOD=FULL
/EMMEANS TABLES=Phenotype*Size SCALE=TRANSFORMED COMPARE=Phenotype CONTRAST=PAIRWISE PADJUST=LSD
/REPEATED SUBJECT=Animal SORT=YES CORRTYPE=INDEPENDENT ADJUSTCORR=YES COVB=ROBUST
MAXITERATIONS=100 PCONVERGE=1e-006(ABSOLUTE) UPDATECORR=1
/MISSING CLASSMISSING=EXCLUDE
/PRINT CPS DESCRIPTIVES MODELINFO FIT SUMMARY SOLUTION COVB.
OMSEND TAG ="pwcon".
DATASET ACTIVATE pwcon.
COMPUTE OddsRatio = EXP(MeanDifferenceIJ).
COMPUTE ORlower = EXP(Lower).
COMPUTE ORupper = EXP(Upper).
FORMATS OddsRatio to ORupper (F8.3).
LIST MeanDifferenceIJ to ORupper.
****** END OF SYNTAX *****.
HTH.
Question
Suppose that we have observed data (x_1, x_2, ..., x_n) that may or may not fit a particular distribution better than some given distributions. In many scholarly articles it is found that researchers multiply/divide, or say scale, the given data by a small/large value, say m. The scaled data have then been used to show that the data fit the particular distribution better than those given distributions, based on criteria like negative log-likelihood, AIC, BIC, the KS test, etc. Further, often citing computational convenience, statistical inference such as maximum likelihood estimates, asymptotic confidence intervals, and Bayes, Bayes credible and HPD interval estimates have been obtained for the scaled data instead of the given data. It may happen that after scaling the data a particular distribution fits better or worse than the others, and this may also depend upon the value of m.
I would like to thank all the concerned persons in advance for their views. One may also provide Scholarly articles relevant to this discussion.
I would like to insist in the comment by J.J. Egozcue: the key to understanding which transformation makes sense is to understand which is the sample space and its structure. A better fit to a model can be just wishful thinking if the approach can lead to nonsensical results.
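One concrete check of the scaling issue, under the assumption that all candidate models are pure scale families (location fixed at zero; the data and the factor m below are made up): rescaling the data by m shifts every such model's maximized log-likelihood by the same amount, -n·log m, so AIC differences between the candidates are unchanged, and a change in ranking after scaling would have to come from location/support effects rather than a genuinely better fit. A quick sketch with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 3.0, size=200)   # made-up positive data
m = 100.0                           # scaling factor

def aic(dist, data):
    # location fixed at 0 so each candidate is a pure scale family;
    # the fixed loc is counted in len(params) for every candidate,
    # so it cancels in AIC differences
    params = dist.fit(data, floc=0)
    return 2 * len(params) - 2 * dist.logpdf(data, *params).sum()

for dist in (stats.gamma, stats.lognorm):
    print(dist.name, aic(dist, x), aic(dist, x * m))
```

Both AICs rise by the same 2n·log m under scaling, so the gamma-versus-lognormal AIC gap is (numerically) the same on x and on m·x.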
Question
OBSERVATIONS
NO.     S1      d²      S2      d²      T1      d²      T2      d²
1       14.65   0.69    23.67   0.02    12.76   2.72    23.85   0.19
2       16.97   9.94    26.98   11.97   18.04   13.19   23.71   0.09
3       10.54   10.74   22.67   0.72    12.75   2.75    25.45   4.16
4       16.23   5.82    20.12   11.56   12.73   2.82    21.65   3.10
5       13.19   0.39    25.84   5.38    14.31   0.01    20.15   10.63
6       11.32   6.23    21.84   2.82    15.86   2.11    25.65   5.02
Total   82.90   33.82   141.12  32.48   86.45   23.59   140.46  23.19
Mean    13.82           23.52           14.41           23.41
Potency (Units/ml.) = Potency Ratio x Dilution Factor x Std. Dilution.
If you need an LD50/LC50 calculator based on probit analysis, it is freely available at the following site: This calculator also provides estimation of 95% fiducial confidence limits for LD/LC values
Question
I am interested in historical references such as the examples below.

This question is with regard to inference from all surveys, but especially for establishment surveys (businesses, farms, and organizations, as noted by the International Conference[s] on Establishment Surveys, ICES).
Ken Brewer (formerly of the Australian National University, and elsewhere) has not only contributed greatly to the development of inference from surveys and the foundation of statistical inference, he also has been a recorder of this history.  We often learn a great deal from history, and the cycles perhaps contained within it.  Often one might see a claim that an approach is new, when it may be simply a rediscovery of something old.  Synthesizing knowledge and devising the best/optimum approach for a given situation takes both innovation, and a basic knowledge of all that has been learned.
A prime example is Ken Brewer's Waksberg article:
Brewer, K.R.W. (2014), “Three controversies in the history of survey sampling,” Survey Methodology,
(December 2013/January 2014), Vol 39, No 2, pp. 249-262. Statistics Canada, Catalogue No. 12-001-X.

Ken believed in using probability sampling and models together, but he explains the different approaches.
And there is this amusing account:
Brewer, K. (2005). Anomalies, probing, insights: Ken Foreman’s role in the sampling inference controversy of the late 20th century. Aust. & New Zealand J. Statist., 47(4), 385-399.
There is also the following:
Hanif, M. (2011),
Brewer, K.R.W. (1994). Survey sampling inference: some past perspectives and present prospects.
Pak. J. Statist., 10A, 213-233.
Brewer, K.R.W. (1999). Design-based or prediction-based inference stratified random vs stratified balanced sampling.
Int. Statist. Rev., 67(1), 35-47.
there is a very short account which I included, as recommended by Ken Brewer.  Note that the model example that Ken gave in his Waksberg Award paper was for the case of the classical ratio estimator.
There is also this:
Brewer, K. and Gregoire, T.G. (2009). Introduction to survey sampling. Sample surveys: design, methods, and applications, handbook of statistics, 29A, eds. Pfeffermann, D. and Rao, C.R., Elsevier, Amsterdam, 9-37.
But I especially like the humorous aspect of Ken Brewer's account of the trials and tribulations of Ken Foreman, referenced above.

Other characters who may loom large would include William Cochran, Morris Hansen, Leslie Kish, Carl-Erik Sarndal, Fritz Scheuren, and many others.  A good current example would be Sharon Lohr.

Do you have other references to the history of survey inference?
Question
I have a dependent variable of "rational decision making" with two independent variables, "leadership" and "self-efficacy". When I use only the one independent variable of leadership, it shows a positive relation with rational decision making, which is consistent with the theory. However, when I added the variable of self-efficacy as another independent variable, leadership showed a negative relation with rational decision making, which is inconsistent with the theory. Why does this happen?
It can happen due to multicollinearity, so first you need to check the relationship between the two independent variables. If they are related, the model can't give you correct estimated parameters, because the effect of one independent variable may already be hidden by the other one. Therefore, check the correlation between them, and try to detect the problem by using Principal Component Analysis or Factor Analysis.
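As a quick numerical check of this suggestion, one can compute the correlation between the predictors and their variance inflation factors (VIF); a minimal sketch in Python, with simulated stand-ins for the two variables (all names and numbers are made up):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X: 1/(1 - R^2) from
    regressing that column on the remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        factors.append(1.0 / (1.0 - r2))
    return factors

rng = np.random.default_rng(0)
leadership = rng.normal(size=200)
self_efficacy = 0.9 * leadership + 0.3 * rng.normal(size=200)  # strongly related
print(np.corrcoef(leadership, self_efficacy)[0, 1])            # high correlation
print(vif(np.column_stack([leadership, self_efficacy])))       # VIF >> 1 signals trouble
```

A common rule of thumb is that VIF values above roughly 5-10 indicate that the coefficient estimates are unstable, which is exactly the sign-flip behaviour described in the question.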
Question
I am building a statistical distribution model for my data. And I am working with several data sets, each has different data range.
In order to compare the results of my model I needed to unify the range of my data sets, so I thought about normalizing them. A technique called feature scaling is used, and it is performed as follows:
(xi - min(X)) / (max(X) - min(X)), where xi is the data sample and X is the data set.
However, by checking the mean and standard deviation of the normalized data, it is found to be almost zero. Does that affect the statistical distribution of my original data?
.
This will happen when you have very large rare events (I would not call them "outliers" before having looked at them in detail!): max(X) may be very large but related to very rare events; most of the xi are much smaller, and the resulting distribution will be concentrated near zero.
- log transformation may help
- Z transform may help
depending on the shape of your distribution ... and your ultimate analysis goal !
Another rough trick is to replace min(X) and max(X) by appropriate quantiles (say 1% and 99%), thus keeping the same kind of simple normalization while avoiding the effects of the rarest and largest events.
.
Oh ... and this kind of normalization is just a translation followed by a multiplicative scaling, so it does not really change your distribution (as would be the case with log or Z transforms): a translated and scaled Gaussian is still a Gaussian, but with a different mean and variance ...
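To make the rare-event point concrete: with one very large rare value in the data, min-max scaling squashes nearly all observations toward zero, while the quantile variant keeps the bulk of the data spread out. The numbers below are simulated, not the asker's data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.append(rng.exponential(1.0, 999), 1e4)   # one very large rare event

# plain min-max feature scaling: the single extreme max dominates the range
minmax = (x - x.min()) / (x.max() - x.min())
print(np.median(minmax))            # bulk of the data is squashed near zero

# rough quantile-based variant: clip the influence of the rarest events
lo, hi = np.quantile(x, [0.01, 0.99])
robust = (np.clip(x, lo, hi) - lo) / (hi - lo)
print(np.median(robust))            # bulk of the data now spreads over [0, 1]
```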
.
Question
I'm going to try to make this kind of general as I tend to be a bit wordy:
I am dealing with 3 data sets (18 treated w/ X, 18 treated w/ X2, 18 controls) at 3 time points (3×3 = 9 sets total).
Each of my 9 individual data sets are nonparametric or not normal:(
To test for any sig. differences between control (untreated) groups across the 3 time points, I used:
Kruskal-Wallis test -> result: no sig. difference in control group medians over time.
I also used the Minitab Express software to perform a test for equal variances [I specified that the data was not normal (apparently "Multiple Comparisons and Levene's methods" were used)] -> result: no significant differences in control group variance over time.
Now, My question:
• Since I have established that there is no significant difference in control groups w/ respect to both median and variance over time...does this give me carte blanche to compare treated data sets from different time points to each other using the same tests?
• In other words, does finding no sig. difference in median & variance over time in controls effectively rule out any significant chance of effects related to time alone, and not to treatments (these were mussels kept in tanks), having a significant effect on data?
Also: Are there any other tests I could perform to identify significant differences in distribution of (nonparametric) data with dose/duration?
I know next to nothing about stats and would appreciate any help I can get! Thanks!!!
As I understand your research question more I see that Friedman's test isn't what you're looking for, as the groups are independent and did not receive all treatments/conditions. What might be more appropriate than my previous suggestion is a mixed ANOVA, but I'm not sure what a nonparametric form of this would be. I guess that's why you are opting for multiple comparisons. This post might be of some help: https://www.researchgate.net/post/Is_there_a_non-parametric_equivalent_of_a_2-way_ANOVA
Sorry, I hope someone is able to better answer this question.
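For what it's worth, the two checks described in the question (Kruskal-Wallis on the control-group medians, a Levene-type test on the spreads) can also be run directly in scipy; the data below are simulated placeholders, not the mussel measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# made-up control-group measurements at three time points (n = 18 each)
t1, t2, t3 = (rng.lognormal(0.0, 0.5, 18) for _ in range(3))

h, p_kw = stats.kruskal(t1, t2, t3)                   # compare medians/distributions
w, p_lev = stats.levene(t1, t2, t3, center='median')  # compare spreads (Brown-Forsythe)
print(p_kw, p_lev)
```

`center='median'` gives the Brown-Forsythe variant of Levene's test, which is the usual choice for non-normal data.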
Question
In the first model: a= b0 + b1x + b2y + b3z.
b1 is insignificant.
Second model: a= b0 + b1x + b2y + b3z + b4yz.
b1 becomes significant.
I get what it means when a regular control suppresses a relationship, but what does it mean when an interaction variable suppresses a relationship? What is the interaction coefficient supposed to mean by itself?
Thank you for any help in advance!!
The problem is twofold. First, the coefficients b2 and b3 mean different things in model 2 and model 1. Exactly what they mean depends on how you have parameterized the model and whether y and z are centred. In a model without an interaction the effects are additive, so b2 and b3 are not impacted by the value of y or z. In the interaction model the effects of b2 and b3 vary with y or z.
Second, and probably most important in this case, x is correlated with y, z and yz. This means that the estimate of b1 is impacted by the additional shared variance with yz.
Centering will help with one of these problems but probably not with the collinearity between yz and x. Centering in this case is not standardization (you should not generally standardize if dealing with logged data IMO) and may not be necessary or sensible.
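The collinearity between a product term and its components, and what centering does to it, is easy to see numerically; a sketch with made-up predictors that have nonzero means, mirroring y and z in the question:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 1.0, 500)   # two predictors with nonzero means
z = rng.normal(5.0, 1.0, 500)

print(np.corrcoef(y, y * z)[0, 1])     # raw product term: strongly correlated with y

yc, zc = y - y.mean(), z - z.mean()
print(np.corrcoef(yc, yc * zc)[0, 1])  # centered product: correlation near zero
```

This is why centering often tames the collinearity between yz and its own components, even though (as noted above) it does nothing about a correlation between yz and a third variable x.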
Question
I'd be grateful if you  provide me the solution or give an idea where I can find the solution of the integration. Thanks in advance.
Huda
Dear Huda,
Paul Chiou is correct. I wrote out the details for you. The change of variable is the trick. After that it's really simple.
Question
When fitting a non-linear trend, how do I judge whether the function used is overfitting or underfitting? Is there a hypothesis test for this? For example, model 1): LogRR = Bi·X, where Bi = B1 when X < 3 and Bi = B2 when X >= 3; model 2): LogRR = B1·X + B2·X². We assume X = 3 is the cut point of the curve. How do I judge which model is the best one?
The subscript "-i" seems to be used to show the out-of-sample predicted value calculated for the omitted observation.
Without the subscript "-i" it would just be estimated value based on all available data points.
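In that spirit, the two candidate trends can be compared on out-of-sample error: for each left-out point, predict it from a fit to the remaining points. The sketch below uses simulated data and a continuous two-slope variant of model 1 (the break is fixed at X = 3, as in the question):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 6.0, 40)
y = 0.5 * x + 0.1 * x**2 + rng.normal(0.0, 0.3, x.size)  # simulated LogRR

def design_piecewise(x, cut=3.0):
    # slope B1 below the cut and B2 above it (continuous at the cut)
    return np.column_stack([np.minimum(x, cut), np.maximum(x - cut, 0.0)])

def design_quadratic(x):
    return np.column_stack([x, x**2])   # B1*X + B2*X^2

def loo_mse(design, x, y):
    # leave-one-out mean squared prediction error (the "-i" idea)
    err = []
    for i in range(x.size):
        keep = np.arange(x.size) != i
        beta, *_ = np.linalg.lstsq(design(x[keep]), y[keep], rcond=None)
        err.append((y[i] - design(x[i:i + 1]) @ beta).item() ** 2)
    return float(np.mean(err))

print(loo_mse(design_piecewise, x, y), loo_mse(design_quadratic, x, y))
```

Whichever model has the lower leave-one-out error is the one that generalizes better; a model that underfits or overfits will lose on this criterion even if its in-sample fit looks good.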
Question
Please, does anyone have some references regarding the influence of sampling on inference when using Bayesian statistics?
I am just beginning to use Bayesian methods, and I am trying to better understand some results on personal data with very heterogeneous sample sizes.
Guillaume
The paper by Siu and Kelly from 1998 for example explains this nicely in my opinion.
Question
At the following link, on the first page, you will see a categorization of heteroscedasticity into that which naturally should often be expected due to population member size differences, and that which may indicate an omitted variable:
posted under LungFei Lee - Ohio State:
This is nicely related to the following YouTube video - about five minutes long:
Anonymous (? ):
There are a number of very nice presentations by the following author, which may be found on the internet.  Here I supply two such links:
Walter Sosa-Escudero,
University of Illinois
Though those presentations are excellent, in my experience I think it better to account for heteroscedasticity in the error structure, using a coefficient of heteroscedasticity, than to use the OLS estimate and adjust the variance estimate.  At least in a great deal of work that I did, though the expected slope should be no problem, in practice the OLS and WLS slopes for a single regressor, for linear regression through the origin, can vary substantially for highly skewed establishment survey data.  (I would expect this would also have impact on some nonlinear and multiple regression applications as well.)
Finally, here is one more good posting that starts with omitted variables, though the file is named after the last topic in the file:
posted under Christine Zulehner
Goethe University Frankfurt
University of Vienna:
My question is, why adjust OLS for omitted variables in the analysis, rather than start with WLS, when any heteroscedasticity may be largely from that which is naturally found?  Shouldn't one start with perhaps sigma_i-squared proportionate to x, as a size measure (or in multiple regression, an important regressor, or preliminary predictions of y as the size measure), and see if residual analysis plots show something substantially larger for heteroscedasticity, before seeking an omitted variable, unless subject matter theory argues that something in particular, and relevant, has been ignored?    --   Further, if there is omitted variable bias, might a WLS estimator be less biased than an OLS one???
PS -  The video example, however, does seem somewhat contrived, as one might just use per capita funding, rather than total funding.
The estimated residuals will capture some heteroscedastic variance of an omitted variable, therefore the natural heteroscedasticity won't be correctly estimated. Even if correctly estimated, the subsequent GLS estimation won't solve the estimation bias problem stemming from the omission. Naturally, I believe, the omitted variable problem should be taken care of first. Otherwise, it would be a waste of time, getting nowhere.
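The OLS-versus-WLS contrast drawn in the question can be sketched numerically: with a skewed regressor and sigma_i² proportional to x, the WLS slope through the origin (weights 1/x) is exactly the classical ratio estimator mentioned elsewhere on this page. The simulation below is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(3.0, 1.0, 200)                        # skewed "size" regressor
y = 2.0 * x + np.sqrt(x) * rng.normal(0.0, 1.0, 200)    # sigma_i^2 proportional to x

b_ols = (x * y).sum() / (x * x).sum()            # OLS slope through the origin
w = 1.0 / x                                      # weights proportional to 1/sigma_i^2
b_wls = (w * x * y).sum() / (w * x * x).sum()    # = y.sum()/x.sum(), the ratio estimator
print(b_ols, b_wls)
```

Both estimators are consistent for the true slope here, but for highly skewed data the WLS estimator has the smaller variance, which is the practical point of starting from a heteroscedastic error structure.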
Question
In Galit Shmueli's "To Explain or Predict," https://www.researchgate.net/publication/48178170_To_Explain_or_to_Predict, on pages 5 and 6, there is a reference to Hastie, Tibshirani, and Friedman (2009), for statistical learning, which decomposes the expected squared prediction error into two parts: the variance of the prediction error and the squared bias due to model misspecification.  (The variance-bias tradeoff is discussed in Hastie et al. and other sources.)
An example of another kind of variance-bias tradeoff that comes to mind would be the use of cutoff or quasi-cutoff sampling for highly skewed establishment surveys using model-based estimation (i.e., prediction from regression in such a cross-sectional survey of a finite population).  The much smaller variance obtained is partially traded for a higher bias applied to small members of the population, which should not contribute very much to the population totals (as may be studied by cross-validation and other means).  Thus some model misspecification will often not be crucial, especially if applied to carefully grouped (stratified) data.
[Note that if a BLUE (best linear unbiased estimator) is considered desirable, it is the estimator with the best variance, so bias must be considered under control, or you have to do something about it.]
Other means to tradeoff variance and bias seem apparent:  General examples include various small area estimation (SAE) methods.  -
Shrinkage estimators trade off increased bias for lower variance.
Are there other general categories of applications that come to mind?
Do you have any specific applications that you might share?
Perhaps you may have a paper on ResearchGate that relates to this.
Any example of any kind of bias variance tradeoff would be of possible interest.
Thank you.
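The shrinkage tradeoff mentioned above can be illustrated with a toy simulation (all numbers are made up): shrinking a sample mean toward zero adds bias but cuts variance, and for a suitable shrinkage factor the total mean squared error drops below that of the unbiased estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 1.0, 10, 20000
xbar = rng.normal(mu, 1.0, (reps, n)).mean(axis=1)  # usual unbiased estimator of mu

def mse_of_shrunken(c):
    est = c * xbar                        # shrink the estimate toward zero
    bias2 = (est.mean() - mu) ** 2
    return bias2, est.var(), bias2 + est.var()   # MSE = bias^2 + variance

for c in (1.0, 0.9, 0.8):
    print(c, mse_of_shrunken(c))
```

With these particular values, c = 0.9 beats the unbiased c = 1.0 on total MSE; how far one can profitably shrink depends on the unknown mu, which is the usual catch with shrinkage.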
Question
Teaching based on intuition/graphics/applications, or teaching based on demonstrations/technical language/exercises?
I think no single teaching method/technique would help you. Actually, I tried both methods with one of my colleagues who has difficulties in understanding statistics, and I failed. To me, the graphical method works fine; I can cover the black areas and move on. However, this technique does not work with my colleagues.
In general, I may share some points of my own experience:
• Don't assume that students understand anything about statistics, even if they had a basic course or something like that before. In my experience, most of the students forget the basic rules of statistics all the time. So, it is better to quickly review the basics before presenting any new idea.
• Let students explain the same concepts you have just said. In other words, don't assume that students understand what you are saying. I always see students understanding a completely different thing than what I say.
• Let students think of the problem and the solution before you present the rule. This will save your time repeating the same idea over and over again. Give students one moment of silence to think. Let them infer by themselves; or at least try.
• Repeat or re-display the same concept with different methods. As I said, no one technique will serve all students.
This is my own experience. Please search for scientific papers that may give you more trusted results.
Best Regards,
Question
I have to create an R package for my research topic. I don't know how to develop an R package. Can anyone help me develop one?
Dear Manoj,
In that case, I'm afraid I'm not able to be of much help. Developing an R package can take considerable time, which I cannot spare at the moment. Do note that it can be very beneficial for yourself to learn the R language properly. There is plenty of free material out there for self-education. All it takes is time (if you have some).
In any way, I wish you the best in finding the help you need.
Kind regards,
Pepijn
Question
How do I get the statistical inference for the difference of two coefficients of variation (CV)?
We know some methods, such as Levene's test, can be used to compare two standard deviations. But the standard deviation is often proportional to the mean. Is there any method/tool that can be used to conduct a comparison between CVs?
Here's an R function that will compare the CV values for two batches, both as simple differences in CV and as ratios of CV values. Call it as: resampling_cov(dataframe, ntrials).  Maybe it will help.
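The R function itself is not shown above; a comparable resampling sketch in Python (the function name echoes the R one, and all data below are simulated):

```python
import numpy as np

def resampling_cv(a, b, ntrials=5000, seed=0):
    """Bootstrap the difference in coefficients of variation (CV = sd/mean)
    between two batches; returns the observed difference and a 95% percentile
    interval (an interval excluding 0 suggests the CVs differ)."""
    rng = np.random.default_rng(seed)
    cv = lambda x: np.std(x, ddof=1) / np.mean(x)
    obs = cv(a) - cv(b)
    diffs = [cv(rng.choice(a, a.size)) - cv(rng.choice(b, b.size))
             for _ in range(ntrials)]
    return obs, tuple(np.percentile(diffs, [2.5, 97.5]))

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, 50)   # CV around 0.2
b = rng.normal(10.0, 4.0, 50)   # CV around 0.4
print(resampling_cv(a, b))
```

The same resampling scheme works for the ratio of CVs: replace the difference `cv(...) - cv(...)` with a quotient and check whether the interval excludes 1.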
Question
Sometimes researchers apply inference to samples that were selected intentionally; for many authors this is a mistake. Must statistical inference always be applied to probabilistic sample selection?
Dear Alain Manuel Chaple Gil,
I agree with 'YES, in most cases' [Frank B. K. Twenefour]. I would say 'in general, yes".
Let us first define Statistical Inference:
"The theory, methods, and practice of forming judgments about the parameters of a population, usually on the basis of random sampling".
"Statistical inference means drawing conclusions based on data" [Duke University]
Another way of looking at this is: One cannot 'infer' that which is already 'known'. Therefore if the samples are not selected at random, meaning they were 'intentionally selected', then there is no 'inference', but rather knowledge about them to begin with; which caused someone to specifically select the samples in the first place. If this is true then the objectivity of randomness is negated and the inference is less reliable.
Random sampling allows for the deduction and assumption used in inference.
Without random samples one gets poor deductions and poor assumptions.
I hope I have offered a 'reflective' perspective on statistical inference.
Thank you Alain Manuel Chaple Gil, for your question!
Respectfully,
Jeanetta Mastron
On another note: if anyone is looking for the 'argument' against the YES answers to the question, they may find their "NO" answers in the blog post by Allen Downey,
"Statistical inference is only mostly wrong", March 2, 2015 - see the link below
Question
Dear colleagues
Good Day;
Let T be a random variable with Gamma distribution:
T~Gamma (n,θ)
What is the probability density function of θ̂, where
θ̂ = v1 / v2,
v1 = a0*T/(n-1) + a1*T²/((n-1)(n-2)) + a2*T³/((n-1)(n-2)(n-3)),
v2 = a0 + a1*T/(n-1) + a2*T²/((n-1)(n-2)), where a0, a1, a2 are constants.
With Best Regards
Huda
Hi Huda,
The ratio of two random variables does not in general have a well-defined variance. You can try to re-normalize θ̂ or use approximations or Taylor series expansions about the expected values of v1 and v2. In terms of expected values, the delta-method approximation is var(x/y) ≈ [⟨x²⟩⟨y⟩² − 2⟨xy⟩⟨x⟩⟨y⟩ + ⟨y²⟩⟨x⟩²] / ⟨y⟩⁴. However, in your case, v1 and v2 are not independent...
Good luck!
Slava
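Whether or not a closed form exists, the distribution of θ̂ can at least be examined by simulation; a Monte Carlo sketch with made-up values for n, θ and the constants a0, a1, a2:

```python
import numpy as np

# Monte Carlo sketch of the distribution of theta-hat = v1/v2 when
# T ~ Gamma(n, theta); n, theta and a0, a1, a2 below are made up.
rng = np.random.default_rng(0)
n, theta = 10, 2.0
a0, a1, a2 = 1.0, 0.5, 0.25

T = rng.gamma(n, theta, size=100_000)
v1 = a0*T/(n-1) + a1*T**2/((n-1)*(n-2)) + a2*T**3/((n-1)*(n-2)*(n-3))
v2 = a0 + a1*T/(n-1) + a2*T**2/((n-1)*(n-2))
theta_hat = v1 / v2

# a histogram of theta_hat approximates the density being asked about
print(theta_hat.mean(), theta_hat.std())
```

This also gives an empirical check on any Taylor-series variance approximation one derives analytically.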
Question
Most of the time I read the fundamentals of Statistical Inference in a mathematical way, but I want to know how, when, and why to use the different elements, like sufficiency, efficiency, and the Rao-Blackwell theorem.
Some of my teachers and my elder brother said to read the books by Mood and Graybill, Hogg and Craig, or Casella and Berger, but I want to learn easily, like a person from a non-statistical background.
I would be really happy if you could give me some references to books, video lectures, or lecture notes.
Dear friends,
In my view, the best way to learn statistical methods is to solve problems related to data analysis, with some practical purpose in mind.
Ilya
Question
I use Fisher's exact test a lot. Here is an example in http://en.wikipedia.org/wiki/Fisher's_exact_test
Men    Women
Dieting         1         9
Non-dieting  11       3
Using Fisher's exact test I can know whether the proportion of dieting in women is significantly higher than in men.
The problem is that there might exist other factors affecting dieting besides gender, such as age, current weight, etc. For example, I also have age data for each individual, and I want to control for the factor "age". That is, I want to know whether the proportion of dieting in women is significantly higher than in men while excluding the influence of age. Which statistical method should I use?
I also do not know any exact test when controlling for covariables.
The best alternative that I know is to use a generalized linear model: logistic regression for a binary outcome such as dieting, or a Poisson-family model ("Poisson regression") for counts. Here you can add whatever covariables you have and obtain effect estimates "adjusted for age" (odds ratios from the logistic model, rate ratios from the Poisson model).
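As an illustration of the regression approach (a minimal sketch on simulated data, not the dieting table above; the coefficients, sample size, and variable names are invented), a logistic regression fit by maximum likelihood recovers the gender effect as an odds ratio adjusted for age:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated (invented) data: dieting depends on gender and age
n = 500
female = rng.integers(0, 2, n).astype(float)
age = rng.normal(40.0, 10.0, n)
eta_true = -3.0 + 1.5 * female + 0.05 * age          # assumed true model
diet = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta_true))).astype(float)

X = np.column_stack([np.ones(n), female, age])

def negloglik(beta):
    # Negative Bernoulli log-likelihood under a logistic link
    eta = X @ beta
    return -np.sum(diet * eta - np.logaddexp(0.0, eta))

beta_hat = minimize(negloglik, np.zeros(3)).x
adjusted_or = np.exp(beta_hat[1])   # odds ratio for gender, adjusted for age
```

In practice one would use a statistics package (e.g., a GLM routine) rather than hand-rolled optimization; for very sparse tables like the one above, exact logistic regression or the Cochran-Mantel-Haenszel test are worth considering.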
Question
Fisher introduced the concept of fiducial inference in his paper on Inverse probability (1930), as a new mode of reasoning from observation to the hypothetical causes without any a priori probability. Unfortunately as Zabell said in 1992: “Unlike Fisher’s many original and important contributions to statistical methodology and theory, it had never gained widespread acceptance, despite the importance that Fisher himself attached to the idea. Instead, it was the subject of a long, bitter and acrimonious debate within the statistical community, and while Fisher’s impassioned advocacy gave it viability during his own lifetime, it quickly exited the theoretical mainstream after his death”.
However during the 20th century, Fraser (1961, 1968) proposed a structural approach which follows the fiducial closely, but avoids some of its complications. Similarly Dempster proposed direct probability statements (1963), which may be considered as fiducial statements, and he believed that Fisher’s arguments can be made more consistent through modification into a direct probability argument. And Efron in his lecture on Fisher (1998) said about the fiducial distribution: “May be Fisher’s biggest blunder will become a big hit in the 21st century!”
And it was mainly during the 21st century that the statistical community began to recognise its importance. In his 2009 paper, Hannig, extended Fisher’s fiducial argument and obtained a generalised fiducial recipe that greatly expands the applicability of fiducial ideas. In their 2013 paper, Xie and Singh propose a confidence distribution function to estimate a parameter in frequentist inference in the style of a Bayesian posterior. They said that this approach may provide a potential conciliation point for the Bayesian-fiducial-frequentist controversies of the past.
I already discussed these points with some other researchers and I think that a more general discussion seems to be of interest for Research Gate members.
References
Dempster, A.P. (1963). On direct probabilities. Journal of the Royal Statistical Society. Series B, 25 (1), 100-110.
Fisher, R.A. (1930).Inverse probability. Proceedings of the Cambridge Philosophical Society, xxvi, 528-535.
Fraser, D. (1961). The fiducial method and invariance. Biometrika, 48, 261-280.
Fraser, D. (1968). The structure of inference. John Wiley & Sons, New York-London-Sidney.
Hannig, J. (2009). On generalized fiducial inference. Statistica Sinica, 19, 491-544.
Xie, M., Singh, K. (2013). Confidence distribution, the frequentist distribution estimator of a parameter: A review. International Statistical Review, 81 (1), 3-77.
Zabell, S.L. (1992). R.A. Fisher and the fiducial argument. Statistical Science, 7 (3), 369-387.
Dear George,
Thank you for your reference to the psychologist Michael Bradley’s works on inferential statistics, and I entirely agree with him when he writes in 2014: “It is a fallacious and misleading exercise to imply measurement accuracy by assuming set errors rates from the specific statistical samples presented in any particular exploratory study”. This paper, however, was investigating only part of a wider cleft among frequentists, as Savage put it in 1961, and this cleft was linked to Fisher’s fiducial inference among other points. For example, Neyman said in 1941: “the theory of fiducial inference is simply non-existent in the same way as, for example, a theory of numbers defined by mutually contradictory definitions”, while Fisher accused Neyman of using his own work without reference. But their disagreements were also about refutation and confirmation, which was the topic of Bradley’s work, and about experiments.
So while Bradley’s point of view is interesting, I don’t think it entirely answers my more general question.
References
Bradley, M.T., Brand, A. (2014). The Neyman and Pearson versus Fisher controversy is informative in current practice in inferential statistics. Conference paper to the Canadian Psychological Association.
Neyman, J. (1941). Fiducial Arguments and the theory of confidence intervals. Biometrika, 32, 128-150.
Savage, L. (1961). The foundations of statistics reconsidered. In Neyman J., ed., Proceedings of the fourth symposium on mathematical statistics and probability, Berkeley, vol. 1, 575-586.
Question
I am studying seasonal changes in abundance of a fish species along a disturbance gradient. I sampled three locations at four seasons. My sampling sites at each location were very heterogeneous and the overall data was overdispersed . I am planning to analyze data using a GLMM with zero inflated model, considering LOCATION as a fixed factor and sampling site as a random factor. Should I also consider SEASON as a random factor (due to probable autocorrelation) or just nest it within LOCATION?
Thank you. I am actually interested in differences among seasons but the data are highly correlated across seasons and I am not sure about using season as a fixed effect.
Question
Null: Unit root (assumes individual unit root process)
Im, Pesaran and Shin W-stat -2.56786
PP - Fisher Chi-square 41.2785
It definitely has, thanks. I just uploaded a new question; could you please have a look?
Question
Have you used this estimator, or know of it being used? Are you familiar with its properties? Do you agree or disagree with the analysis given in the attached notes, and why or why not? When would you use a chain ratio-type estimator or related estimator?
(Note that this is a replacement for a question submitted a couple of days ago, which received a good reply regarding use of the chain ratio-type estimator for estimating crop yield with biometric auxiliary data, but ResearchGate dropped the question and Naveen's reply. The crop yield reply seemed to me to reinforce the idea of a nonlinear model, as noted in the attached paper.)
Many modifications of the classical ratio estimator have been made in the past quarter century. The alternative ratios discussed in Sarndal, Swensson, and Wretman, in Brewer's work, and elsewhere rely on differing degrees of heteroscedasticity. But I refer here to the chain ratio-type estimator, the exponential estimator, statistics added to terms in numerators and denominators, combinations of these ratio and product estimators, and the like, which have been produced in many papers; as far as I have seen, little thought appears to have been given to the interpretation and logic behind these modifications of the classical ratio estimator. The 'exponential' ratio-type estimator seems to have first appeared in 1991, where it was justified in terms of a transformation and its MSE, but the logic of the estimator is not clear to me.
As an example, note that in STATISTICS IN TRANSITION-new series, June 2008, Vol. 9, No. 1,
there is an estimator on page 106 which is said to be a linear combination (Stein/shrinkage type) of a ratio and a product estimator (exponential type). However, note that the same auxiliary variable is being used in each term. A ratio estimator requires a positive correlation between x and y and a product estimator requires a negative correlation, here for the same x and y. How can the same two data sets be both positively and negatively correlated?
Many estimators seem to be justified by optimizing the variance estimate, but I suspect that this may be at the expense of bias. Also note that just because a calculated variance estimate is low, does not mean that the estimator was correct in the first place. (Many use an OLS estimator when the data are demonstrably and logically WLS.)
When more than one auxiliary variable is used in a multiplicative manner, it seems that often, not enough attention might be given to interaction between these variables.
Perhaps I am missing something, but the logic escapes me for these modified classical ratio estimators. In the notes attached through the link from my question above, I show that a 'repeated substitution' justification for the chain ratio-type estimator does not make sense, but perhaps considering it as based on nonlinear regression instead of linear regression does make sense. Are there other justifications?
Any thoughts on any of these modifications of the classical ratio estimator, and how they should be interpreted?
Question
Let X distributed as Laplace distribution with (a,b) where a , b are the location and scale parameter respectively. If we want to estimate the scale parameter (b) we can assume that one of:
1. The location parameter is a constant and known
2. The location parameter is unknown so, we can use median as an estimator of the location (median is the Maximum-likelihood estimator for location of Laplace distribution) then, derive the scale estimator for laplace distribution.
Do you have another approach or suggestion please.?
Dear Huda
The maximum likelihood estimator of the scale is the average of the absolute deviations of the observations from the sample median (the median being the maximum likelihood estimator of the location parameter). See http://en.wikipedia.org/wiki/Laplace_distribution.
I hope this helps.
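The closed-form MLE described above is easy to check numerically; a brief sketch with assumed parameter values and simulated data:

```python
import numpy as np

rng = np.random.default_rng(42)
a_true, b_true = 2.0, 3.0            # assumed location and scale
x = rng.laplace(a_true, b_true, 100_000)

# MLE of the location is the sample median; plugging it in, the MLE of the
# scale is the mean absolute deviation about the median
a_hat = np.median(x)
b_hat = np.mean(np.abs(x - a_hat))
```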
Question
In the article „Does climate change affect period, available field time and required capacities for grain harvesting in Brandenburg, Germany?“ by A. Prochnow et al. (Agricultural and Forest Meteorology 203, 2015, 43–53), the authors calculate (simple) moving averages of the considered quantities („number of hours per year available for harvesting grain with defined moisture content“; coefficient of variation; total sunshine duration; etc.) for time periods of 30 years in steps of one year (i.e., 1961-1990, 1962-1991, …, 1984-2013) first. After that, they derive the trend of these averaged values and use these values to estimate the significance with the help of the „t-test for the regression slope“ (see their section 2.4 and Table 5 and Table 6). This way most of these trends are proofed to be significant with p<0.01.
I am convinced that this procedure is wrong. I learned that the values y(i) and y(j) (or the residuals) entering the regression procedure (especially if the significance is to be tested with a t-test) must be statistically independent (see, e.g., http://en.wikipedia.org/wiki/Student%27s_t-test#Slope_of_a_regression_line). But the moving-averaged values are highly correlated and not at all independent. This violation of the precondition results in a much too small estimate for the standard deviation s of the y(i). The reduction factor is even smaller than 1/sqrt(30)=1/5.5 (because the computation interval is shorter than the correlation length). My own tests (see attached figures) showed that the standard deviation is reduced by a factor of about 1/15. Computing the significance level with this very small standard deviation gives highly significant results (i.e., very small p-values). But the truth of the matter is that there is no significance at all, and all trends could have emerged by pure accident.
Does anybody agree with my belief? (I can not believe that four authors and two or three reviewers/referees did not recognize this pitfall!) I also appreciate hints of the authors in case that I understood anything wrong.
I have written a review of the above-mentioned article because there are more shortcomings in this paper (e.g., no consideration of multiplicity at p=0.05 despite 12 or 16 tests in one table (see their Tables 3 and 4) with only a few significant results // very sophisticated regression functions (their Table 1) for estimating the „hours within classes for grain moisture contents“ with r² up to 0.99 (the given references are not downloadable); I assume that these regressions were derived by stepwise multiple regression but are overfitted and would not withstand an external validation // the probabilities of the results of Figure 4 („inclusive all more severe cases“) can be derived by means of the Binomial distribution and are not small enough (all greater than 0.1) to indicate any significance // et cetera).
To the best of my knowledge, the significance of the slope of moving averages (mavs) is NOT the significance of a trend in the data. The mavs will be correlated and this correlation will vastly over-estimate the significance of a trend in the data. I am not sure if there exists a method to infere the uncertainty of the trend in the data based on given mavs. But even if such thing exists it remains unclear why the authors did not use the available original data straight away. That they did not looks very suspicious.
I have no formal proof that their method is wrong, but I can show by simulation that the p-values obtained from linear regressions on mavs will NOT keep the type-I error rate. The simulation (see R code below) calculates the regression of the mavs for different window widths (based on 100 data points). The original data is only noise; there is no trend. Therefore, the simulation studies the results obtained under the TRUE null hypothesis. The results are plotted (see attachment). The plots show:
1) the standard errors of the slope of the mavs decrease with increasing window width.
2) the distribution of the t-values gets wider with increasing window width (the "correct" distribution of t-values is plotted in red).
3) Correspondingly, there are increasingly many tiny p-values with increasing window width.
4) And finally, the rate of p-values below the nominal 5% (the effective rate of rejecting H0) is clearly above 5% for any width > 1. This rate approaches 1 for width -> Inf (hence, you can make SURE to get a "significant" trend in the mavs simply by choosing a sufficiently wide window).
# R CODE ---------------------------------------------------
# Centered moving average of width n
mav = function(x, n = 5) { filter(x, rep(1/n, n), sides = 2) }
x = 1:100
nx = length(x)
window.widths = c(1, 3, 5, 7, 9, 11, 13, 15, 20, 30)
nww = length(window.widths)
coefs = list()
# For each window width: regress the moving average of pure noise on x,
# 1000 times, and keep the coefficient tables
for (i in 1:nww) coefs[[i]] = replicate(1000, summary(lm(mav(rnorm(nx), window.widths[i]) ~ x))$coef)
se.values = lapply(coefs, function(tab) apply(tab, 3, "[", 2, 2))   # SE of the slope
t.values  = lapply(coefs, function(tab) apply(tab, 3, "[", 2, 3))   # t-value of the slope
p.values  = lapply(coefs, function(tab) apply(tab, 3, "[", 2, 4))   # p-value of the slope
sig.values = lapply(p.values, function(p) -log10(p))
par(mfrow = c(3, 1), mar = c(0, 3, 1, 1), oma = c(5, 0, 0, 0))
boxplot(se.values, xaxt = "n")
title(main = "Standard Errors of the Slope", line = -2)
boxplot(t.values, xaxt = "n")
title(main = "t-values for the Slope", line = -2)
# Reference t distributions with the nominal degrees of freedom
t = lapply(window.widths, function(n) rt(1000, df = nx - n - 1))
boxplot(t, xaxt = "n", add = TRUE, col = 2, border = 2, outline = FALSE, boxwex = 0.25, at = 1:10 + 0.2)
boxplot(sig.values)
title(main = "-log(p) for the Slope", line = -2)
mtext(1, adj = 0.5, line = 3, text = "window width")
Question
Scatterplots are a useful tool here.
Terry -
Thank you for your very informative answer. I am also retired, but this should be useful to people I know where I used to work, as well as interesting to me and others. Regarding where I worked, they are going through some development now, hopefully, so the timing of this discussion is particularly good, I think.
" This even allows domains with no data which some might regard as black magic."
I have run into that problem in cases covered by the paper in the first attached link, which has been used since 1999 to produce results for perhaps a thousand or more estimated totals each month for official energy statistics. There have been many categories/cells that were empty or nearly empty for n, and sometimes N. There was pressure to increase the sample to cover those with very small or no n and small N, but they were for categories of little importance, and the overall increase in sample size would have very likely made for a bigger nonsampling error in the form of measurement error. The sample size at one time became particularly unwieldy and had to be managed better, so more attention came back to this methodology. Total survey error is the overall consideration.
For cases where n is zero, there are still estimated totals and estimated standard errors for the prediction errors for those totals in all cases, when using this small area approach. That is good for being able to obtain higher aggregates, but I did not want management tempted to report the estimates for the empty sample cells (categories). The variance is measurable, but not the bias due to any level of model misspecification. However, in such cases, the estimated variance was generally huge - even without separately considering bias. Thus I was able to convince management not to publish data for those cells, because of the very large estimated standard errors for those estimated totals, and they just figured into the higher levels of aggregation.
So the point is that one can avoid publishing the "black magic" estimated totals and standard errors, because the estimated relative standard errors are generally hundreds or thousands of percent - what I said was effectively 'infinite.' The concern still is that managers often want to publish data with relative standard errors that are too high, and with small n, using small area estimation (SAE), there is also bias to consider. Even though we speak of model-unbiased results, that does not account for model failure, and when it comes to SAE, that is generally more of a concern. That is why a good deal of testing and monitoring can be helpful - but a couple of decades or so of successful use has showed this to be worthwhile.
Jim
Question
Hi !
I would like to know if there are drawbacks in using aAUC to study tumor growth in xenograft mice. I would also like to know whether the bootstrap method is good for estimating a confidence interval, given that the samples are small (100 mice per group).
Yael
You may want to read the following paper for some additional context and ideas
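For the bootstrap part of the question, a minimal percentile-bootstrap sketch (the per-mouse AUC-like values below are invented; for small samples, the bias-corrected-and-accelerated (BCa) variant, e.g. scipy.stats.bootstrap, is usually preferred over the plain percentile interval):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical per-mouse AUC-like summaries (lognormal, as tumor data are often skewed)
auc = rng.lognormal(mean=1.0, sigma=0.4, size=30)

def bootstrap_ci(data, stat=np.mean, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic."""
    boots = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci(auc)
```

Because the resampling only reuses observed values, very small samples can understate tail uncertainty; reporting the interval alongside the sample size is advisable.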
Question
In 2007 I did an Internet search for others using cutoff sampling, and found a number of examples, noted at the first link below. However, it was not clear that many used regressor data to estimate model-based variance. Even if a cutoff sample has nearly complete 'coverage' for a given attribute, it is best to estimate the remainder and have some measure of accuracy. Coverage could change. (Some definitions are found at the second link.)
Please provide any examples of work in this area that may be of interest to researchers.
Question
Many might first think of Bayesian statistics.
"Synthetic estimates" may come to mind. (Ken Brewer included a chapter on synthetic estimation: Brewer, KRW (2002), Combined survey sampling inference: Weighing Basu's elephants, Arnold: London and Oxford University Press.)
My first thought is for "borrowing strength." I would say that if you do that, then you are using small area estimation (SAE). That is, I define SAE as any estimation technique for finite population sampling which "borrows strength."
Some references are given below.
What do you think?
Dear Jim, if I may,
Roughly, small area estimation comprises statistical techniques for estimating parameters of sub-populations, used when the sub-population in question is embedded in a larger context.
Question
Hi! I'm Betzy, Mathematics student from University of Indonesia. I want to ask something related to Jackknife method. The question is:
could anyone explain to me the idea of how Quenouille defines the estimate of bias? Why did he define the estimate of bias like that? I attach a file by Bradley Efron. The Quenouille estimate of bias is stated in equation (2.8). Could you please explain the general idea? Also, do you know how to expand the bias of an estimator as a Taylor expansion? In the attached pdf it is stated in equation (2.9). Thanks in advance.
Quenouille's bias estimate is stated in (2.7), not (2.8).
theta-hat is the 'regular' estimate for theta.
theta-hat(.) is the mean of the leave-one-out estimates theta-hat(i).
Bias is always defined as the difference between "what you measure" and "what you expect to measure". If you expect that the leave-one-out estimation procedure should be close to the regular estimation procedure, then (2.7) would be a good estimate for the bias.
Eq. (2.9) is nothing more than the 'definition' of a Taylor expansion. "Everything" is Taylor-expandable, thus also the expected value. For computing the functions a_i(theta), you need to have a specific model: in a general context, you can't elaborate further on these functions. But if you, for instance, work with a simple linear regression with OLS estimation, then it is possible to work out the formulae.
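The points above can be illustrated numerically. The sketch below applies Quenouille's bias estimate (2.7) to the plug-in variance estimator (divisor n), whose bias is known to be −σ²/n; for this particular estimator the jackknife correction is exact:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 2.0, 50)
n = len(x)

def theta(sample):
    # Deliberately biased plug-in variance (divisor n); its bias is -sigma^2/n
    return np.mean((sample - sample.mean()) ** 2)

theta_full = theta(x)
# Leave-one-out estimates theta-hat(i) and their mean theta-hat(.)
loo = np.array([theta(np.delete(x, i)) for i in range(n)])
bias_hat = (n - 1) * (loo.mean() - theta_full)   # Quenouille's bias estimate
theta_jack = theta_full - bias_hat               # bias-corrected estimate
```

Here theta_jack coincides with the usual unbiased sample variance (divisor n − 1); in general the jackknife removes only the O(1/n) term of the bias.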
Question
To my knowledge, you can obtain more than 2 cut-offs if you use a scatterplot. I don't think the same can be achieved using a ROC analysis, since it works on a binary outcome. Any suggestions?
Many thanks Paul, I will look into that as well.
Rabin
Question
I have a scenario where I get negative R2 values for a linear fit. My algorithm relies on a goodness of fit measure to improve the model. But in the specific cases that I am working on,  the R2 values are never becoming positive, let alone close to 1. What are the other measures by which I can quantify the goodness of fit?
PS: If it helps, then I am getting negative values because the data is non-stationary. I am trying to break the data to stationary chunks by looking at R2 values. But they seem to never become positive.
ISSUE: How to resolve negative R2?
BASIC DEFINITIONS: The following definitions apply to the R² calculation for a time series:
(1)   SST = Sum(yt - y*)²
... where t = 1 to T; T = sample size; yt = observed value at period t; y* = mean value. No possibility of a negative number here because of the square.
(2)   SSE = Sum(yt - yt^)²
... where yt^ = fitted value. No possibility of a negative value here either.
(3)   MSE = [Sum(yt - yt^)²] / (T - k)          or          SSE / (T - k)
... where T = sample size and k = number of parameters. A negative value is possible when the sample is too small and there are many parameters, i.e. T < k; the problem comes from the denominator term (T - k).
(4)   R² = 1 - (SSE / SST)
Here, we see that R² is negative if (SSE / SST) is larger than 1. This can arise in two ways: (i) SSE is larger than SST, i.e. the model predicts worse than simply using the sample mean; or (ii) MSE is used instead of SSE in equation (4) and MSE is negative because the sample size is smaller than the number of parameters (k > T). SST and SSE themselves cannot be negative because they are sums of squared differences: (yt - y*)² and (yt - yt^)².
AMEMIYA ADJUSTED R²: The adjusted R² proposed by Amemiya is given by:
(5)   R²adjust = 1 - [[(T - 1) / (T - k)](1 - R²)]
Note, however, that since (T - 1)/(T - k) ≥ 1 for k ≥ 1, this adjustment can only lower R²; it cannot turn a negative R² positive. Suppose the current problem is R² = -0.80 (negative 0.8). Plug this into Amemiya's adjusted R² equation, assuming T = 20 observations and k = 5 parameters; then ...
R²adjust = 1 - [(20 - 1) / (20 - 5)](1 - (-0.80))
= 1 - (19 / 15)(1.8)
= 1 - 2.28
= -1.28 ....... still a negative number.
So a negative R² signals that the model fits worse than simply predicting the mean. Rather than adjusting R², consider a scale-dependent goodness-of-fit measure such as RMSE, or revisit the model (e.g., fit on approximately stationary segments).
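A short demonstration (assumed random-walk data and a deliberately bad constant predictor, both invented) of how R² goes negative whenever SSE exceeds SST, and of an always-defined alternative measure:

```python
import numpy as np

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(size=100))        # non-stationary random walk (assumed data)

# A deliberately bad "model": a constant prediction far from the data
y_hat = np.full_like(y, y.max() + 5.0)

sse = np.sum((y - y_hat) ** 2)
sst = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - sse / sst                       # negative: worse than predicting the mean
rmse = np.sqrt(sse / len(y))               # scale-dependent but always well defined
```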
Question
Anyone have the reference for logarithmic transformation for both dependent and independent variables on child growth (mixed effect models)?
If you need to log-transform both the outcome and predictor variables, you certainly have a nonlinear relationship between the two, most probably a power relationship, i.e.:
Y=aX^b
Where "a" is a coefficient and "b" the exponent of the predictor X. When you log transform both sides of the equation, you obtain log(Y) = log(a) + b*log(X), a linear equation.
The best way to handle this is to analyze the data with a nonlinear mixed model, where the above power equation is specified as the structural component of the model. This analysis can be done with a specific software for nonlinear mixed models such as NONMEM, or with advanced statistical software such as SAS, S-plus, R, etc.
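The log-log linearization above can be sketched as follows (the coefficient values and multiplicative noise level are invented for illustration; a full nonlinear mixed model would need NONMEM, SAS, S-plus, or R, as noted):

```python
import numpy as np

rng = np.random.default_rng(11)
a_true, b_true = 2.0, 0.75                       # invented allometric coefficients
x = rng.uniform(1.0, 100.0, 200)
y = a_true * x**b_true * np.exp(rng.normal(0.0, 0.05, 200))  # multiplicative noise

# log y = log a + b log x: an ordinary straight-line fit on the log scale
b_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
a_hat = np.exp(log_a_hat)
```

The log-log fit is appropriate when the error is multiplicative (constant on the log scale); with additive error, a nonlinear fit of Y = aX^b directly is preferable.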
As for references, you may find lots of them in the field of interspecies allometric scaling. I suggest the following, which comes from my field of work:
Martín-Jiménez T, Riviere JE. Mixed-effects modeling of the interspecies
pharmacokinetic scaling of oxytetracycline. J Pharm Sci. 2002 Feb;91(2):331-41.
PubMed PMID: 11835193.
Question
I have the formula Y = (A-B)/m where A and B are averages from samples with sizes nA and nB, and m is a "slope" determined from a linear regression from q points. There are standard errors given for A, B and m (sA,sB,sm). I can calculate the standard error of Y by error-propagation as
sY = 1/m * sqrt( (sm)²*(A-B)/m² + (sA)² + (sB)² )
Now I want to get a confidence interval for Y, so I need the degrees of freedom for the t-quantile. A rough guess would be nA + nB + q - 3.
However, I somehow doubt this: if m were known exactly, sY would simply be sqrt((sA)² + (sB)²) with nA + nB - 2 d.f. But if m became known because q -> Infinity, then sm -> 0 and sY -> sqrt((sA)² + (sB)²), yet, following the guess above, with infinitely many d.f. (df = nA + nB + Infinity - 3). Both cannot be correct at the same time.
So what is the correct way to get the d.f. and, hence, the confidence interval for Y?
(please assume that the errors of A, B and m are all normally distributed; please do not discuss alternatives to or applicabilities and problems of confidence intervals. You may well assume that this is a stupid question, because I may have overlooked some simple fact or made a wrong derivation... this can easily be the case, and I still would be thankful for any help)
Thanks!
Dear Jochen and Fabrice,
I do not know if you are still interested in this question, but some clarifications seem motivated.
(1) The error propagation formula for sy contains an error, presumably just a misprint (A-B should be squared).
(2) Inverting the regression probably does not solve any problem, because if m is Gaussian with the right expected value, the inverse regression will yield a biased estimator that is not Gaussian. You must consider what your regression is about.
(3) The error propagation corresponds to approximating the ratio by a linear function of A-B and m. When you talk about degrees of freedom it appears that you think of the t distribution as adequate for a normalized version of this linear function, in particular scaled by the standard error sy as given. However the t distribution only appears exactly if all your three estimated variances, sA^2, sB^2 and sm^2 are proportional to a common chi-2 type variance estimate with given degrees of freedom (which is then the t df). This does not seem to be the case here, but think about it, for this description is often adequate in regression calibration type problems.
(4) If you want to be on the safe side you could use the smallest of the three degrees of freedom of the three standard errors. If you want something less conservative you must derive an approximate formula for the degrees of freedom analogous to the formula in Welch's test, that is the approximate t-test replacing the t-test in the two-sample problem when sample variances are known not to be the same under the null hypothesis.
(5) If you assume you know all three error variances involved, an exact confidence region can be derived by Fieller's method. But I think you know this, already. And it does not help you with the degrees of freedom, unless (again - see (4) above) you have a situation where all three estimated variances are proportional to one and the same estimated variance of chi-2 type. In this case you can introduce the t distribution with the adequate degrees of freedom in the derivation of Fieller's region.
Best wishes
RS
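The Welch-type approximation mentioned in RS's point (4) can be sketched as follows; the numeric variance components and degrees of freedom below are invented for illustration:

```python
# Welch-Satterthwaite effective degrees of freedom for a sum of independent
# variance components v_i = c_i * s_i^2, each with df_i degrees of freedom:
#   df_eff = (sum v_i)^2 / sum(v_i^2 / df_i)
def welch_satterthwaite(components):
    """components: iterable of (variance_component, df) pairs."""
    num = sum(v for v, _ in components) ** 2
    den = sum(v ** 2 / df for v, df in components)
    return num / den

# Invented values standing in for the sA^2, sB^2 and slope-variance terms
df_eff = welch_satterthwaite([(0.04, 19), (0.09, 14), (0.02, 8)])
```

The result always lies between the smallest individual df and the sum of the dfs, which matches RS's conservative suggestion of using the smallest df as a safe bound.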
Question
There are variables increasing vulnerability while others decrease it. What are the best pre-processing methods to account for this relationship to ensure adequate results? Is it always necessarily a max-normalization or another rescaling procedure? How can we interpret negative and positive signs in eigenvectors?
You can use varimax as a rotation method. Negative and positive signs in eigenvectors depend on the type of relationship: a positive sign means the variable is directly correlated with the component, and vice versa. To facilitate interpretation of the results, you can sort the coefficients by size.
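A small illustration of the sign interpretation (simulated data with one variable increasing and one decreasing vulnerability; note that the overall sign of an eigenvector is arbitrary, so only the relative signs of the loadings are interpretable):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
# One driver increases vulnerability, the other decreases it (simulated)
latent = rng.normal(size=n)
increases = latent + 0.1 * rng.normal(size=n)
decreases = -latent + 0.1 * rng.normal(size=n)
X = np.column_stack([increases, decreases])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize before PCA

# Leading eigenvector of the correlation matrix
evals, evecs = np.linalg.eigh(np.corrcoef(Xs.T))
pc1 = evecs[:, np.argmax(evals)]
# The two loadings have opposite signs, mirroring the opposite relationships
```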
Question
I am looking for some references on the truncated Gamma density function that include its formula and moments with their proofs.
Dear Mahmoud
I found an old paper that may be useful: "Estimating the Parameters of a Truncated Gamma Distribution" by Chapman.
It has an interesting bibliography too.
I hope this helps.
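In addition to the paper, the moments of a truncated Gamma are easy to obtain numerically and to check against any closed-form expression; a sketch with assumed shape/scale and truncation bounds:

```python
import numpy as np
from scipy import stats, integrate

k, theta = 3.0, 2.0                 # assumed shape and scale of the parent Gamma
a, b = 1.0, 10.0                    # assumed truncation interval

g = stats.gamma(k, scale=theta)
mass = g.cdf(b) - g.cdf(a)          # normalizing constant of the truncated density

def truncated_moment(r):
    """r-th raw moment of the Gamma truncated to [a, b], by quadrature."""
    val, _ = integrate.quad(lambda x: x ** r * g.pdf(x) / mass, a, b)
    return val

mean_trunc = truncated_moment(1)
var_trunc = truncated_moment(2) - mean_trunc ** 2
```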
Question
When we fit a distribution to a sample in a classification problem with multiple groups, we get more than one peak in the distribution plot, so it is difficult to identify which distribution function really fits the sample.
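A multimodal sample is usually better described by a finite mixture than by any single distribution. Below is a minimal hand-rolled EM sketch for a two-component Gaussian mixture on simulated two-group data (in practice a library implementation such as scikit-learn's GaussianMixture, with model selection over the number of components, would be used):

```python
import numpy as np

rng = np.random.default_rng(9)
# Bimodal sample: two groups pooled together (simulated)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])

# Minimal EM for a two-component 1-D Gaussian mixture (a sketch, not production code)
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
sigma = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

The fitted component means land near the two group centers, and the responsibilities give each point a soft group assignment, which is exactly what a single-distribution fit cannot provide.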