Statistics

  • Sharad S Malavade added an answer:
    How to test multicollinearity in logistic regression?
    I want to check multicollinearity in a logistic regression model in which all independent variables are dichotomous.
    Given that I cannot use the VIF, is the correlation matrix the only possible procedure? If so, what is the threshold at which a correlation becomes unacceptable? 0.60? 0.70?...
    Sharad S Malavade · Brandon Regional Hospital, Brandon, FL

    I am also testing for multicollinearity using logistic regression. I have all outcomes and predictors as categorical variables. I am using a method described by Paul Allison in his book Logistic Regression Using the SAS System, which also gives SAS code that you can adapt for your own use. As Adrian mentioned in his post, this method applies weights. The interpretation is then exactly like in linear regression.
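
    For readers working in Python rather than SAS, here is a minimal sketch of the same weighted-diagnostics idea (the data, variable names and the statsmodels route are my own illustration, not Allison's code): fit the logistic model, form weights p(1-p) from the fitted probabilities, and compute VIFs from the weighted design matrix.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data: three dichotomous predictors and a binary outcome
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.integers(0, 2, size=(200, 3)), columns=["x1", "x2", "x3"])
    y = rng.integers(0, 2, size=200)

    # 1) Fit the logistic model and get fitted probabilities
    logit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    p = logit.predict()

    # 2) Collinearity lives in X'WX with W = diag(p*(1-p)), so run the
    #    usual VIF computation on the weighted design matrix
    w = np.sqrt(p * (1 - p))
    Xw = sm.add_constant(X).mul(w, axis=0)
    for j, name in enumerate(X.columns, start=1):
        others = [c for c in range(Xw.shape[1]) if c != j]
        r2 = sm.OLS(Xw.iloc[:, j], Xw.iloc[:, others]).fit().rsquared
        print(f"VIF({name}) = {1 / (1 - r2):.2f}")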

  • Debra Sharon Ferdinand added an answer:
    What are the best data mining tools for health care data?

    We have moved into the era of "big data", and tools that have traditionally been applied to other industries are now being considered in health care. Data sources include claims data, survey data, data derived from biometric monitoring, etc.

    Given the wealth of data that is being made available, what are the best data-mining tools to apply to these data? This means not only the type of algorithms (e.g., regression trees), but also the best software available for conducting these analyses.

    Debra Sharon Ferdinand · The University of the West Indies, Trinidad and Tobago

    Hi Ariel:

    Data mining is an area I am interested in but have not studied or researched, so I looked up what is available on ResearchGate:

    https://www.researchgate.net/publicliterature.PublicLiterature.search.html?type=keyword&search-keyword=data+mining+tools+for+health+care+data&search-abstract=&search=Search

    Best regards,

    Debra

  • Larisa Alagić-Džambić added an answer:
    How can the accuracy profile integrate three validation parameters, namely linearity, trueness and precision?

    Validation of an analytical method following the total-error approach uses the accuracy profile. This tool is considered by some researchers to be a statistic that integrates several validation parameters: trueness, precision and linearity.

    Could someone explain this point in more detail?

    Larisa Alagić-Džambić · Bosnalijek

    The accuracy point can be seen as part of both linearity and precision.
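
    To make the integration concrete, here is a rough single-series sketch of an accuracy profile in Python (the recovery values and the +/- 5 % acceptance limits are made up; a full treatment would use a beta-expectation tolerance interval that includes between-series variance): at each concentration level, the mean recovery gives the trueness (bias), the spread gives the precision, and together they bound where future results are expected to fall.

    import numpy as np
    from scipy import stats

    def accuracy_profile(recoveries, beta=0.95):
        # Relative bias and a simple beta-expectation tolerance interval (%)
        # for one concentration level (single-series approximation)
        x = np.asarray(recoveries, dtype=float)
        n = x.size
        bias = x.mean() - 100.0              # trueness: mean recovery vs. 100 %
        s = x.std(ddof=1)                    # precision: repeatability SD
        k = stats.t.ppf((1 + beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
        return bias, bias - k * s, bias + k * s

    # Hypothetical recoveries (%) at three levels; the method is accepted at a
    # level if the tolerance interval stays inside the +/- 5 % acceptance limits
    for level, rec in {"low": [98.2, 101.5, 99.0, 100.8],
                       "mid": [99.5, 100.2, 98.8, 100.9],
                       "high": [101.1, 100.4, 99.7, 100.2]}.items():
        bias, lo, hi = accuracy_profile(rec)
        print(f"{level:>4}: bias {bias:+.2f} %, tolerance interval [{lo:+.2f}, {hi:+.2f}] %")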

  • Geoffrey Chin-hung Chu added an answer:
    Is there any software available for circular statistics?
    I am looking for software for analyzing the orientations of astigmatism in different treatment groups.
    Geoffrey Chin-hung Chu

    Thank you! The e-version is available in our library.
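
    Besides dedicated packages (e.g. Oriana, or the R package 'circular'), basic circular statistics are available in SciPy. One caveat specific to astigmatism: axes are axial data with period 180 degrees, so the usual trick is to double the angles first. A small sketch with made-up axes:

    import numpy as np
    from scipy.stats import circmean, circstd

    # Hypothetical astigmatism axes in degrees (axial data, period 180 deg)
    axes = np.array([10, 175, 20, 5, 170, 15], dtype=float)

    # Double the angles to map the 180-deg axial scale onto the full circle,
    # compute circular statistics, then halve the results back
    doubled = np.deg2rad(2 * axes)
    mean_axis = (np.rad2deg(circmean(doubled)) / 2) % 180
    spread = np.rad2deg(circstd(doubled)) / 2
    print(f"mean axis = {mean_axis:.1f} deg, circular SD ~ {spread:.1f} deg")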

  • Subhash Chandra added an answer:
    Why do we measure 30 pollen grains per species?

    Good evening, I'm doing my thesis in palynology and I was told that we always measure 30 pollen grains per species, but I can't find the reason for this sample size anywhere. Can someone help me with this question? Thank you.

    Subhash Chandra · Agriculture Research Division

    Interesting discussion. The key criterion for choosing a sample size n (= 30 here) is the inherent variability in the population. In many situations, as probably here, this knowledge is not available or is difficult to get. The rule of 30 is then often used, for the reasons already highlighted in the previous posts.
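
    A small illustration of that criterion: if a pilot estimate of the variability is available, the sample size follows from the precision you want for the mean; n = 30 is the fallback when no such estimate exists. (The grain-size numbers below are invented.)

    import math
    from scipy import stats

    # Hypothetical pilot values: pollen grain length SD ~ 2.5 um, desired
    # margin of error E = 1 um at 95 % confidence: n = (z * sigma / E)^2
    sigma, E = 2.5, 1.0
    z = stats.norm.ppf(0.975)
    n = math.ceil((z * sigma / E) ** 2)
    print(n)   # 25 here; with no pilot data, n = 30 is the usual fallback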

  • Haitham Hmoud Alshibly added an answer:
    Which numpy syntax can be used to select specific elements in a numpy array?

    How can I select some specific elements from a numpy array?

    Say I have imported numpy as np

    y = np.random.uniform(0, 6, 20)

    I then want to select all elements (i.e. y <= 1) from y. Thanks in advance.

    Haitham Hmoud Alshibly · Al-Balqa' Applied University

    Thank you for bringing this matter to our attention! It is still a great help and a real pleasure to read your posts.
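
    For the record, the selection itself is a one-line boolean mask; np.where gives the indices instead of the values, if those are needed:

    import numpy as np

    y = np.random.uniform(0, 6, 20)

    # Boolean masking: the comparison yields a boolean array of the same
    # shape, and indexing with it keeps only the True positions
    selected = y[y <= 1]

    # Indices of the selected elements, if needed instead of the values
    idx = np.where(y <= 1)[0]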

  • George Stoica added an answer:
    Do we need a new definition of fractals for big data? Or must fractals be based on power laws?

    So far, definitions of fractals have mainly been formulated from a mathematical point of view, for the purpose of generating fractal sets or patterns, either strictly or statistically; see the illustrations below (Figure 1 for strict fractals, Figure 2 for statistical fractals, and Figure 4 for fractals emerging from big data):

    Big data are likely to show fractal structure because of the underlying heterogeneity and diversity. I re-defined a fractal as a set or pattern in which the scaling pattern of far more small things than large ones recurs multiple times, at least twice, i.e. with an ht-index of at least 3. I show below how geographic forms or patterns generated from Twitter geolocation data bear the same scaling property as the generative fractal snowflake.

    Jiang B. and Yin J. (2014), Ht-index for quantifying the fractal or scaling structure of geographic features, Annals of the Association of American Geographers, 104(3), 530–541, Preprint: http://arxiv.org/ftp/arxiv/papers/1305/1305.0883.pdf
    Jiang B. (2015), Head/tail breaks for visualization of city structure and dynamics, Cities, 43, 69-77, Preprint: http://arxiv.org/ftp/arxiv/papers/1501/1501.03046.pdf

    The new definition of fractals enables us to see the fractals emerging from big data. The answer to the question seems obvious: yes, we need the new definition. BUT, some of my colleagues argued that the newly defined fractals are not fractals anymore, because they do not follow power laws.

    George Stoica · Canada

    I understand, so a new definition is in order. The metric aspect is less important. 
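
    For readers who want to try the re-definition on their own data, here is a rough sketch of the head/tail-breaks computation of the ht-index described in the papers cited above (the 40 % head limit and the Pareto test data are my own choices):

    import numpy as np

    def ht_index(values, head_limit=0.4):
        # Recursively split at the mean while the head (values above the
        # mean) stays a minority; the number of levels is the ht-index
        x = np.asarray(values, dtype=float)
        levels = 1
        while x.size > 1:
            head = x[x > x.mean()]
            if head.size == 0 or head.size / x.size > head_limit:
                break
            levels += 1
            x = head
        return levels

    # Heavy-tailed data (far more small things than large ones)
    rng = np.random.default_rng(1)
    print(ht_index(rng.pareto(1.2, 10_000)))   # typically >= 3: fractal by the new definition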

  • Jochen Wilhelm added an answer:
    Is it possible to statistically compare two values if we do not have information on replicates?

    E.g., is it possible if we want to check the statistical significance of the difference between two separately published LC50s for a particular chemical in two separate species? To elaborate, let's take the example of Cu. Suppose Cu has an LC50 of 2 mg/L in species A and 4 mg/L in species B. Can we compare these to say which species is more sensitive to Cu?

    Please reply!

    Jochen Wilhelm · Justus-Liebig-Universität Gießen

    @Liviu:

    "Justification: Statistics starts at two numbers up." - I do not agree here. Statistics starts where we must consider uncertainty, and this has noting to do with the amount of values or sample sizes (n) one has. You are correct that some methods or calculations do not work when n=1, but this is not the whole of statistics. Meaningful and constructive statistics is possible even for n=1. Comparing 2 and 4 (to pick up your example) while assuming stochastic processes resulting in these observations is at the very heart of statistics and not just algebra. It is algebra to calculate the difference, but statistics provides tools to estimate the uncertainties associated with these observations and with the difference.

    To make the example with your values: given these are counts from a Poisson process, the log-likelihood contour for the model E(y) = exp(b0 + b1*x) with y ~ Pois(lambda) is attached below. You can see the profile and select likelihood or confidence ellipses. Don't tell me that this is "just algebra". There is a way to infer reasonable (and unreasonable) model parameters based on the data, and this involves the whole core of inferential statistics, which is, in my opinion, way beyond "just algebra".

    This example may or may not be sensible (it may simply not be reasonable to assume a non-overdispersed Poisson process, who knows?) - but at least it shows that even for n=1 a lot of statistics and inference can be done!

    I would even be more provocative and say that statistics does not need data at all. Statistics can be based only on sensible assumptions, from which sensible conclusions can still be derived. Statistics is about thinking and knowledge. Data just adjust this, so to say, to empirical findings, so that we can use empirical data to modify our knowledge.
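
    A runnable variant of that 2-versus-4 example (my own transcription, not Jochen's attachment): under the Poisson assumption the two single counts can be compared exactly by conditioning on their total, and each rate gets an exact (Garwood) confidence interval even with n=1.

    from scipy import stats

    # Single observations 2 and 4, assumed to come from Poisson processes.
    # Conditional on the total (6), the first count is Binomial(6, 0.5)
    # under the null hypothesis lambda1 == lambda2
    result = stats.binomtest(2, n=6, p=0.5)
    print(f"exact p-value = {result.pvalue:.3f}")   # ~0.69: no evidence of a difference

    # Exact 95 % CI for the rate behind the single count of 2
    lo, hi = stats.chi2.ppf([0.025, 0.975], [2 * 2, 2 * (2 + 1)]) / 2
    print(f"95% CI for lambda: [{lo:.2f}, {hi:.2f}]")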

  • Allen G Harbaugh added an answer:
    How do I calculate statistical significance for ChIP fold enrichment?

    I have some mean amount of DNA from my ChIP assay (A), and some mean amount from the IgG (B).

    Call fold enrichment C. C=A/B

    I have standard deviations for both A and B. However, if I calculate the standard deviation for C according to the basic propagation-of-error rules for sample quotients, my standard deviation becomes impossibly large.

    I'm looking for a different way to calculate a standard deviation, standard error, relative error or confidence interval for fold enrichment (C). I've heard that jackknife resampling or bootstrapping a CI may be my best bet, but both of those methods seem silly given that I have only 3 technical replicates (n). 

    Many thanks for your help!

    Allen G Harbaugh · Boston University

    I would suggest the following Taylor series approximation for the variance of C:

    Numerator = E(A^2)(E(B))^2 - 2*E(A*B)*E(A)*E(B) + E(B^2)(E(A))^2
    Denominator = [E(B)]^4

    where E(x) = the average for the variable x.

    Then use a VERY conservative critical value:  alpha=0.05 with df=1:  t_(alpha/2) = 12.71 (that seems really high, but with df = 3 - 2, you don't have a lot of wiggle room)

    then you could produce a CI as:

    E(A/B) ± t_(alpha/2) * sqrt(numer/denom)

    This won't be the most elegant solution, but it would be mildly defensible with a very small sample size.

    Hope this helps.
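
    A direct transcription of this recipe in Python, with made-up triplicates (the numbers and variable names are illustrative only):

    import numpy as np
    from scipy import stats

    # Hypothetical paired technical replicates (n = 3)
    A = np.array([12.1, 13.4, 11.8])   # ChIP signal
    B = np.array([2.0, 2.3, 1.9])      # IgG background

    EA, EB = A.mean(), B.mean()
    C = EA / EB                        # fold enrichment

    # Taylor-series (delta-method) variance of A/B, as in the formula above;
    # E(A*B) is estimated from the paired replicates
    num = (A**2).mean() * EB**2 - 2 * (A * B).mean() * EA * EB + (B**2).mean() * EA**2
    var_C = num / EB**4

    t = stats.t.ppf(0.975, df=1)       # the very conservative df = 3 - 2 = 1
    half = t * np.sqrt(var_C)
    print(f"C = {C:.2f}, 95% CI ~ ({C - half:.2f}, {C + half:.2f})")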

  • Pilhoon Jang added an answer:
    How many bootstraps can I perform on a sample of N elements?
    I am using bootstrapping analysis for a set of data that I obtained from Monte Carlo simulations.
    Bootstrapping (statistics) allows random sampling with replacement from the original data set that I obtain from a Monte Carlo simulation. Thanks!
    Pilhoon Jang · Seoul National University
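
    A quick orientation on the numbers involved (a sketch, not the answer above): the count of distinct resamples of N elements is C(2N-1, N), which dwarfs any practical number of bootstrap iterations, so a few thousand resamples is the usual choice.

    from math import comb

    import numpy as np

    N = 20
    print(comb(2 * N - 1, N))   # ~6.9e10 distinct resamples for N = 20

    # In practice a few thousand resamples suffice for SEs and CIs
    rng = np.random.default_rng(0)
    data = rng.normal(size=N)
    boot_means = [rng.choice(data, size=N, replace=True).mean() for _ in range(5000)]
    print(np.std(boot_means))   # bootstrap SE of the mean
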
  • Ahmad Bazzi added an answer:
    Antenna Calibration for DoA estimation in the presence of multipath?

    Dear All,

    Assume I have a uniform linear array of 3 antennas. Distance uncertainties and other imperfections might perturb the steering vector away from the true one; thus, DoA estimation using ML or subspace techniques would fail.

    I would like to know if it is possible to calibrate when the number of received signals is more than 3 (due to severe multipath)?

    Thank you in advance.

    Ahmad Bazzi · Institut Mines-Télécom

    Dear Mr. Shahriar,

    Thank you for this,
    Could you please share a document on this?

    Best regards
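
    As background to why the multipath count matters, here is a generic MUSIC sketch (not the calibration method under discussion; geometry, angles and SNR are invented): with M = 3 elements the noise subspace supports at most M - 1 = 2 resolvable sources, which is exactly what severe multipath breaks.

    import numpy as np
    from scipy.signal import find_peaks

    M, d = 3, 0.5                          # 3 elements, half-wavelength spacing
    angles_deg = np.array([-20.0, 10.0])   # two assumed true DoAs

    def steering(theta_deg):
        theta = np.deg2rad(np.atleast_1d(theta_deg))
        return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

    rng = np.random.default_rng(0)
    S = rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500))
    N = 0.1 * (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500)))
    X = steering(angles_deg) @ S + N       # snapshots from the array

    R = X @ X.conj().T / X.shape[1]        # sample covariance
    w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, :M - 2]                      # noise subspace (a single vector here)

    grid = np.linspace(-90, 90, 721)
    P = 1 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
    peaks, _ = find_peaks(P)
    print(np.sort(grid[peaks[np.argsort(P[peaks])[-2:]]]))   # near -20 and 10 deg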

  • Glory Enaruvbe added an answer:
    What steps are involved in the statistical analysis and variogram modelling of soil samples collected at three layers using gstat in R?

    What are the steps involved in the statistical analysis and variogram modelling of soils using gstat? I have collected samples at three layers and want to examine the spatial variation of 18 soil parameters, determine the semivariogram model for each so as to characterize their spatial variability, and predict their values at unsampled locations using kriging in R.

    I am having issues with database preparation, as values have to be repeated for each parameter at each point in a CSV file. Kindly suggest the most efficient solution around this issue.

    Glory Enaruvbe · Regional Centre for Training in Aerospace Surveys (RECTAS), Obafemi Awolowo University, Ile-Ife, Nigeria

    Thank you, Prof. Myers. Your answers are always very insightful and thought-provoking. I sincerely appreciate your response.

  • Chitra Baniya added an answer:
    What is the difference between random (probability) sampling and simple random sampling?
    Sampling procedure
    Chitra Baniya · Tribhuvan University

    This is a nice and informative discussion, even for a non-statistician like me.

    Chitra

  • André François Plante added an answer:
    What is the meaning of a negative coefficient of kurtosis obtained for my specific AFM sample?

    Kurtosis is the fourth moment of the profile amplitude probability function and corresponds to a measure of surface sharpness. How can it take a negative value, then?

    Table is attached with this question.

    André François Plante · Université du Québec à Montréal

     I am using R. A. Fisher's definition, not Karl Pearson's.
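
    That distinction resolves the puzzle: Fisher's definition is the excess kurtosis (Pearson's value minus 3), so flat, platykurtic surfaces legitimately come out negative. A quick check with an arbitrary uniform sample:

    import numpy as np
    from scipy.stats import kurtosis

    x = np.random.default_rng(0).uniform(size=100_000)
    print(kurtosis(x, fisher=True))    # ~ -1.2: negative = flatter than Gaussian
    print(kurtosis(x, fisher=False))   # ~  1.8: Pearson's definition, no -3 shift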

  • M. A. Aghajani added an answer:
    Free Software for Curve fitting or best fit equation
    We are using TableCurve2D for fitting our data. The problem with this software is that it is Windows-based and commercial. We need free software equivalent to TableCurve2D (I mean with similar functions) which can be run in command mode.

    I would highly appreciate it if someone could suggest free software that takes my data and fits it to a large number of equations by regression or otherwise, and finally gives me the equation that fits my data best.
    M. A. Aghajani · Agriculture and Natural Resources Research Center of Golestan Province

    SigmaPlot 13 is now working well. Its model library is very complete, and it is possible to add and edit models.
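
    For a free, scriptable route, SciPy's curve_fit covers the core of this workflow; looping such fits over a list of candidate models and ranking them by R^2 or AIC approximates what TableCurve2D does (the model and data below are invented):

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic data and one candidate model
    x = np.linspace(0, 10, 50)
    y = 2.5 * np.exp(-0.7 * x) + np.random.default_rng(0).normal(0, 0.05, x.size)

    def model(x, a, b):
        return a * np.exp(-b * x)

    params, cov = curve_fit(model, x, y, p0=(1.0, 1.0))
    resid = y - model(x, *params)
    r2 = 1 - resid.var() / y.var()
    print(params, r2)   # repeat over a model library and keep the best fit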

  • Patrick A Green added an answer:
    Alternatives to Fisher's exact test for more than 2 groups?
    I am doing a chi-square test on a 3x3 contingency table. However, there are some cells with expected values <5. I know Fisher's exact test is used for 2x2 tables only. Is there any alternative test in my case? Thanks.
    Patrick A Green · Royal Liverpool and Broadgreen University Hospitals NHS Trust

    You can do it in SPSS: go to Crosstabs -> Exact, then tick the Exact box, and you get the Fisher's exact result (the Freeman-Halton extension to r x c tables) in the statistics output.
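
    If SPSS is not at hand, a Monte Carlo version of the exact test is easy to run in Python (SciPy's fisher_exact handles 2x2 only; R's fisher.test does r x c natively). The table below is invented; the idea is to permute one margin and compare the chi-square statistic against its permutation distribution:

    import numpy as np
    from scipy import stats

    table = np.array([[5, 2, 1],
                      [3, 4, 2],
                      [1, 2, 6]])
    obs_chi2 = stats.chi2_contingency(table, correction=False)[0]

    # Expand the table into per-case row/column labels, then permute the
    # column labels to generate tables with both margins fixed
    rows = np.repeat(np.arange(3), table.sum(axis=1))
    cols = np.repeat(np.arange(3), table.sum(axis=0))
    rng = np.random.default_rng(0)
    n_sim, count = 10_000, 0
    for _ in range(n_sim):
        perm = np.zeros_like(table)
        np.add.at(perm, (rows, rng.permutation(cols)), 1)
        if stats.chi2_contingency(perm, correction=False)[0] >= obs_chi2:
            count += 1
    print(count / n_sim)   # Monte Carlo exact p-value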

  • James R Knaub added an answer:
    How can we use scatterplots to solve a problem in data analysis?

    Specific examples are of interest, as well as general approaches.

    As examples regarding data editing for continuous data from establishment surveys, the first link below leads to some slides for a seminar presentation, which are illustrative. It was found on several occasions over a number of years that data analysts accustomed to poring over data tables became very enthusiastic about the clarity that selected scatterplots gave them in seeing relationships between data sets.

    The second link is to a procedure and spreadsheet for graphing confidence intervals about predicted y-values when points from a scatterplot are considered, using the model-based classical ratio estimator.

    Your comments, other uses, and in particular, links to other specific instances of uses or general uses of scatterplots for problem solving of any kind are solicited.

    Data editing and analyses are illustrated through the attached links, but model selection and statistical analyses in general would also be among the potential uses. Others on ResearchGate have noted that for the selection of regressors, theory must be considered to avoid the use of spurious associations, but scatterplots may help to explore possibilities. Further, scatterplots may be used to explore whether or not a relationship may be considered linear.

    Any examples you can show are welcomed. Anecdotes may be instructive and entertaining. - Thank you.

    James R Knaub · N/A

    There are two links I attached to this question at the outset. One contains some slides showing scatterplots useful for editing of continuous data from establishment surveys, and the second shows how to create confidence intervals about predicted y-values for a simple regression through the origin.

    Does anyone have a comment, and/or similar graphics to share?  
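
    In that spirit, here is a rough sketch of the second use case: a scatterplot with approximate prediction bounds for a regression through the origin with variance proportional to x (the model-based classical ratio estimator). The data and the 2-sigma bands are my own simplification, not the linked spreadsheet's procedure.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, 40)
    y = 3 * x + rng.normal(0, np.sqrt(x))      # Var(e_i) proportional to x_i

    b = y.sum() / x.sum()                      # classical ratio estimator of the slope
    sigma2 = ((y - b * x) ** 2 / x).sum() / (x.size - 1)

    xs = np.linspace(x.min(), x.max(), 100)
    se = np.sqrt(sigma2 * xs * (1 + xs / x.sum()))   # SE for a new y at xs
    plt.scatter(x, y, s=12)
    plt.plot(xs, b * xs)
    plt.fill_between(xs, b * xs - 2 * se, b * xs + 2 * se, alpha=0.2)
    plt.show()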

  • Hao Chen added an answer:
    Is it necessary to repeat grip strength tests on subsequent days?

    I am preparing a grip strength test for the first time. According to previous papers, people give three or five trials to each mouse and then average the results into one value. Seemingly these results are acquired on a single day. But there is a suggestion that the experimenter needs to repeat all of these tests over the next two to three days, maybe to assure accuracy, I guess. So could anybody give any advice? Is it really necessary to test over several days? Under what conditions should we do that?

    Thank you very much.

    Hao Chen · South University of Science and Technology of China

    Thanks so much for all of your kind suggestions and informative explanations.

    Because our assessed mice were injected with some drug before the test, in addition to the reliability problem, which is important, a researcher needs to check whether there is any fluctuation of motor function during recovery after injection. So I recently realized that this might be part of the reason why a series of tests over days was suggested. Based on your responses and the literature, most people may not test for that long.

    It will take me some time to digest the kindly provided references.

    Hao

  • Manuel Nepveu added an answer:
    From a Bayesian perspective, what is the equivalent of Akaike weights (AICw)?
    When comparing models, the Akaike information criterion (AIC), the Schwarz Bayesian information criterion (BIC), or the deviance information criterion (DIC), which is the Bayesian generalization of AIC and BIC, are used. AICw is found by calculating the AIC differences and, from these, the model likelihoods, which are normalized across the set of candidate models to sum to one. Hence, AICws are interpreted as probabilities and thus provide a relative weight of evidence for each model. I have not come across literature on how the equivalent of AICw can be calculated, for example, from DIC. Your advice or literature will be highly appreciated.
    Manuel Nepveu · TNO

    Bayesian model comparison possesses an in-built Occam's razor. I can refer to Jaynes' Probability Theory: The Logic of Science (2003) or Gregory's book on Bayesian logical data analysis. The Akaike stuff is, hence, not needed as a separate device to judge two models on their relative simplicity. I have always found it strange that some Bayesians invoke it. On top of that, it assumes Gaussianity (lots of data).
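
    For reference, the AICw computation described in the question is a few lines; the same normalization is sometimes applied to DIC differences as a heuristic analogue, although (as argued above) the fully Bayesian route is posterior model probabilities or Bayes factors. Values below are invented:

    import numpy as np

    aic = np.array([102.3, 100.1, 105.8])   # hypothetical AIC (or DIC) values
    delta = aic - aic.min()                 # differences from the best model
    w = np.exp(-delta / 2)
    w /= w.sum()                            # Akaike weights: sum to one
    print(w)                                # relative weight of evidence per model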

  • Franco Pavese added an answer:
    Can Fisher’s controversial idea of fiducial inference, in the 20th century, be accepted by the statistical community in the 21st century?

    Fisher introduced the concept of fiducial inference in his paper on Inverse probability (1930), as a new mode of reasoning from observation to the hypothetical causes without any a priori probability. Unfortunately as Zabell said in 1992: “Unlike Fisher’s many original and important contributions to statistical methodology and theory, it had never gained widespread acceptance, despite the importance that Fisher himself attached to the idea. Instead, it was the subject of a long, bitter and acrimonious debate within the statistical community, and while Fisher’s impassioned advocacy gave it viability during his own lifetime, it quickly exited the theoretical mainstream after his death”.

    However, during the 20th century, Fraser (1961, 1968) proposed a structural approach which follows the fiducial one closely but avoids some of its complications. Similarly, Dempster proposed direct probability statements (1963), which may be considered fiducial statements, and he believed that Fisher’s arguments can be made more consistent through modification into a direct probability argument. And Efron, in his lecture on Fisher (1998), said about the fiducial distribution: “Maybe Fisher’s biggest blunder will become a big hit in the 21st century!”

    And it was mainly during the 21st century that the statistical community began to recognise its importance. In his 2009 paper, Hannig, extended Fisher’s fiducial argument and obtained a generalised fiducial recipe that greatly expands the applicability of fiducial ideas. In their 2013 paper, Xie and Singh propose a confidence distribution function to estimate a parameter in frequentist inference in the style of a Bayesian posterior. They said that this approach may provide a potential conciliation point for the Bayesian-fiducial-frequentist controversies of the past.

    I have already discussed these points with some other researchers, and I think a more general discussion would be of interest to ResearchGate members.

    References

    Dempster, A.P. (1963). On direct probabilities. Journal of the Royal Statistical Society. Series B, 25 (1), 100-110.

    Fisher, R.A. (1930). Inverse probability. Proceedings of the Cambridge Philosophical Society, xxvi, 528-535.

    Fraser, D. (1961). The fiducial method and invariance. Biometrika, 48, 261-280.

    Fraser, D. (1968). The structure of inference. John Wiley & Sons, New York-London-Sydney.

    Hannig, J. (2009). On generalized fiducial inference. Statistica Sinica, 19, 491-544.

    Xie, M., Singh, K. (2013). Confidence distribution, the frequentist distribution estimator of a parameter: A review. International Statistical Review, 81 (1), 3-77.

    Zabell, S.L. (1992). R.A. Fisher and the fiducial argument. Statistical Science, 7 (3), 369-387.

    Franco Pavese

    I have been quite uncertain about joining this discussion among highly competent professional statisticians, because I am not one. I am (or have been) a metrologist, now retired, for whom statistics (in the broader sense) is vital for high-precision data treatment.

    However, since I am presently assembling a paper about the different meanings of probability in exercises like throwing dice (or playing cards) versus experimental science, I hope to have more light shed by this debate.

    First, be aware that a comparison between the frequentist, Bayesian and fiducial approaches was done a few years ago by NIST (USA), in a chapter of the multi-author book that you will find in the references below. It was later also turned into an ISO Technical Report.

    The issue is that many people are uncomfortable with the dispute between frequentists and Bayesians, where each tries to prevail as the single framework, good for all seasons.

    Personally, from my long experience in a field where we try to get the maximum information out of a relatively small number of (very costly) experiments, but “without torturing measurement results until they confess” (see De Bièvre in the references), I have formed an opinion against the possibility of true objectivity (see Pavese & De Bièvre below) - you may also see my most popular paper on ResearchGate in the references below.

    This does not mean that there is no difference between data and opinions (Jochen): however, data not only are obviously all uncertain, but are ‘occasional’ and not necessarily ‘representative’ in many respects.

    One feature that I can state, as basically an experimentalist, is that data can be biased in many respects, and that they are almost never directly the instrumental indication; before analysis, the data undergo many steps of ‘torturing’ that are basically subjective, even in non-Bayesian treatments. Good measurement skill is still an art.

    So data are not necessarily a fair sample of anything: we use them out of necessity, as the only available evidence. This is why not omitting any piece of information is important. In this respect, I certainly do not agree with frequentists, should they object to also taking into account previous knowledge of the same or a similar issue. This is vital at least in metrology, where the back history of any standard is valuable, and the population is assumed to be the same until evidence of the contrary is gained. However, I do not consider this as necessarily being Bayesian behaviour, nor as needing a Bayesian treatment: in most cases, a mixed-effects analysis is sufficient for the purpose of analysing pooled data. Analysing pooled data is different from assuming an a priori probability distribution (incidentally, why only the probability frame?).

    Saying that the simple modelling of the experiment is already a Bayesian feature is, in my opinion, an abuse. I may admit that Bayes first indicated the need to take all current knowledge into account. However, after the choice of the prior, the full Bayesian method is a standard engine leading to the posterior, ‘too simple to be always true’ (“there ain't no such thing as a free lunch” - Jochen).

    We really need to come out of a biased dispute in which there are no winners.

    Fisher was supposed to have proposed one of these new routes: others would be welcome, not only about the stochastic part of measurement uncertainty. Even more, we need, in my opinion, a rethinking of that part, often prevailing, concerning systematic effects leading to systematic errors. That part is sometimes labelled epistemic uncertainty, or better, epistemic and ontologic uncertainty. In my opinion, this is only partially sufficient. There is still another part of systematic effects: the one where the expectations of random variables are what is called ‘bias’, which in experimental science must be ‘corrected’ according to a universally adopted procedure dating back to Gauss - not necessarily the best way out.

    W. F. Guthrie, H-K Liu, A. L. Rukhin, B. Toman, J. C. M. Wang, Nien-fan Zhang, Three Statistical Paradigms for the Assessment and Interpretation of Measurement Uncertainty, in Advances in data modeling for measurements in metrology and testing (F. Pavese and A.B. Forbes, Editors), Series Modeling and Simulation in Science, Editor N. Bellomo, Birkhauser-Springer, Boston, 2009. ISBN: 978-0-8176-4592-2 (print) 978-0-8176-4804-6 (ebook), with 1,5 GB additional material in DVD, pp. 71–116

    P. De Bièvre, Measurement results should not be tortured until they confess, Accred Qual Assur (2010) 15:601–602

    F. Pavese and P. De Bièvre: “Fostering diversity of thought in measurement”, in Advanced Mathematical and Computational Tools in Metrology and Testing X, vol.10 (F. Pavese, W. Bremser, A.G. Chunovkina, N. Fischer, A.B. Forbes, Eds.), Series on Advances in Mathematics for Applied Sciences vol 86, World Scientific, Singapore, 2015, pp 1–8.  ISBN: 978-981-4678-61-2, ISBN: 978-981-4678-62-9(ebook) (slides available for downloading)

    F. Pavese, Subjectively vs objectively-based uncertainty evaluation in metrology and testing, IMEKO TC1-TC7 Workshop, 2008 (available for downloading)

  • Rachel Licker added an answer:
    Can someone answer some questions regarding survival models and assumption of proportionality?
    There are a number of published studies that utilize the Cox proportional hazards model to estimate the hazard rate of developing the outcome of interest given an exposure of interest, after adjusting for known confounders. The problem I have been noticing is that the Kaplan-Meier curves provided in these studies (for the exposed and non-exposed groups) overlap significantly (the survival functions are not parallel, i.e. the curves cross each other). It is my understanding that when the survival functions are not parallel, the assumption of proportionality is likely violated. How important is this when estimating the hazard rate of the outcome of interest? Should I believe the results of articles that report overlapping Kaplan-Meier curves for their exposed and non-exposed groups?
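
    Crossing Kaplan-Meier curves are indeed a red flag for proportional hazards. Beyond eyeballing the curves, the standard formal check uses scaled Schoenfeld residuals; a minimal sketch with the lifelines package and its bundled example dataset (not the studies in question):

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi
    from lifelines.statistics import proportional_hazard_test

    df = load_rossi()
    cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

    # Schoenfeld-residual test of the PH assumption, per covariate;
    # small p-values flag covariates whose effect drifts over time
    results = proportional_hazard_test(cph, df, time_transform="rank")
    print(results.summary)
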
  • Edgars Alksnis added an answer:
    How can earthquakes be predicted? Which statistical models work best?

    I would like to forecast and predict earthquakes in Nepal. A devastating earthquake (7.9 on the Richter scale) hit Nepal in the area near Barpak, a mountain village between the capital Kathmandu and the tourist town of Pokhara. Around 10,000 deaths were reported. Historic buildings, temples and houses collapsed. The earthquake was followed by many powerful aftershocks, and a new earthquake (7.4) hit Nepal on Tuesday, May 12.

    Edgars Alksnis · World Institute for Scientific Exploration (WISE)

    If you understand that so-called electromagnetic earthquake precursors are not really electromagnetic, that might be half of the victory. It should be the best method, once elaborated. Good luck!
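
    On the statistical-models side of the question: individual earthquakes remain unpredictable, but aftershock sequences do follow well-established statistical laws. A sketch fitting the modified Omori law n(t) = K / (c + t)^p to synthetic daily aftershock counts (not the Nepal catalogue):

    import numpy as np
    from scipy.optimize import curve_fit

    def omori(t, K, c, p):
        return K / (c + t) ** p

    # Synthetic daily aftershock counts for illustration
    t = np.arange(1, 31, dtype=float)
    rng = np.random.default_rng(0)
    counts = rng.poisson(omori(t, K=120, c=2.0, p=1.1))

    params, _ = curve_fit(omori, t, counts, p0=(100, 1, 1), bounds=(0, np.inf))
    print(dict(zip(["K", "c", "p"], params.round(2))))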

  • Zohre Parvane added an answer:
    What are the priors for parameters of truncated normal distribution at zero?

    In a Bayesian analysis problem, I need to find (if they exist) informative priors for the parameters (location and scale) of a normal distribution truncated at zero. I have read some papers on this topic, but they could not solve it (in the MCMC algorithm, it was difficult).

    Zohre Parvane · Ferdowsi University Of Mashhad

    Who has the full-text article titled "Approximate formulas for the confidence intervals of process capability indices", written by Nagata, Y. and Nagahata, H. (1992)?
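
    Returning to the question itself: whatever priors one chooses, the likelihood of the zero-truncated normal is the usual building block for MCMC or MAP estimation. A small maximum-likelihood sketch (simulated data; priors would be added on top of negloglik):

    import numpy as np
    from scipy import stats, optimize

    # Negative log-likelihood of N(mu, sigma) truncated to x > 0:
    # density = pdf(x) / P(X > 0)
    def negloglik(params, x):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)   # log-parameterized to keep sigma > 0
        ll = stats.norm.logpdf(x, mu, sigma) - stats.norm.logsf(0, mu, sigma)
        return -ll.sum()

    # Simulated truncated-normal data with mu = 2, sigma = 2
    x = stats.truncnorm.rvs(-1.0, np.inf, loc=2, scale=2, size=500, random_state=0)
    res = optimize.minimize(negloglik, x0=(1.0, 0.0), args=(x,))
    print(res.x[0], np.exp(res.x[1]))   # estimates near 2 and 2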

  • A.K. Singh added an answer:
    Can someone help with a difficulty in selecting correct stats tests?

    Hello

    I'm conducting a study looking at the use of computer imaging to predict the outcome of facial surgery. 

    I have basically 3 sets of pictures - i) Pre op Pictures ii) Computer Imaged Pictures that predict outcome iii) Actual Post Op Pictures of the outcome.

    I'm doing a study that basically looks at how accurately the Computer Images predict the outcome. My study has 2 parts:

    Part I - 3 blinded surgeons familiar with the procedure are given 2 sets of the photos - Either the Computer Imaged Pictures or the Actual Post op Pictures - They are randomized to side in advance.

    The 3 surgeons are asked:

    1) Which set is the Actual Post Op Images?

    Proposed Test: McNemar - based on Binary Paired Data.

    2) Which set looks better?

    Proposed Test: McNemar - based on Binary Paired Data.

    3) How similar are the two sets of images (on a 5-point Likert scale)?

    Proposed Test : Wilcoxon Rank Sum - based on Non Parametric Ordinal Data

    Part II

    The same 3 surgeons are given all 3 sets of photos (Pre Op, Computer Imaged, and Post Op), each clearly labeled.

    The 3 Surgeons are asked - 

    1) How well does the Computer Imaged photo predict the Post Op photo outcome (10-point Likert scale)?

    Proposed Test : Wilcoxon Rank Sum - based on Non Parametric Ordinal Data

    2) Which image set is the most useful overall (Preop vs Computer Imaged vs Post Op)

    Proposed Test: ??

    Any help with which tests I should use would be appreciated. Stats are not my strong suit. Specifically, I'm assuming that the Likert-scale questions should be treated as non-parametric data? And I'm assuming I should use paired-data tests because I'm using the same set of photos for each question, correct? I have no idea how to compare the last question of Part II of the study.

    Any and all comments welcomed! Thanks in advance.

    A.K. Singh · University of Nevada, Las Vegas

    I am not sure I am understanding your experiment correctly, but I think you don't have enough samples. I would be willing to take a look at your data and give you some advice. My email is aksingh@unlv.nevada.edu
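
    For the McNemar parts of the design, statsmodels has a direct implementation; a sketch with an invented 2x2 table of the surgeons' paired yes/no judgements (the exact binomial version suits small counts):

    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Rows: judgement on photo set 1; columns: judgement on photo set 2
    table = np.array([[12, 5],
                      [2, 11]])
    print(mcnemar(table, exact=True))   # statistic and exact p-value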

  • Stephan C. Kaiser added an answer:
    Is there a way to calculate the variance of a measured Sauter diameter?

    From the definition of Sauter mean diameter in the below link:

    http://www.thermopedia.com/content/1108/

    Can we assume the variance to be of the following form?

    variance = (d_64)^2 - (d_32)^2

    Or, if you are not too familiar with the notation, see the gif file below.

    THANKS IN ADVANCE!

    Stephan C. Kaiser · Zurich University of Applied Sciences

    When comparing your measurements with a literature model, the R^2 should be at least a good starting point for assessing the goodness of fit. If you compare the model predictions with the measurements in a parity plot (i.e. measured values vs. predictions), the plot should also give you some idea about systematic deviations - for example, whether your Sauter diameter is well predicted for one surface tension but the prediction fails for a second surface tension. Furthermore, I recommend looking at the deviations between replicate measurements, if you have some (e.g. expressed by the standard deviation), and comparing them with the deviations from the model predictions (i.e. is the deviation/agreement statistically significant relative to the measurement reproducibility?).

    Good luck.

    Regards,

    Stephan
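
    A minimal sketch of the parity-plot-plus-R^2 check described above (the measured and predicted Sauter diameters are invented):

    import numpy as np
    import matplotlib.pyplot as plt

    measured = np.array([105.0, 118.0, 132.0, 150.0, 171.0])    # um
    predicted = np.array([100.0, 121.0, 128.0, 156.0, 169.0])   # um

    # R^2 of the predictions against the measurements
    ss_res = ((measured - predicted) ** 2).sum()
    ss_tot = ((measured - measured.mean()) ** 2).sum()
    print(f"R^2 = {1 - ss_res / ss_tot:.3f}")

    # Parity plot: points off the 45-degree line indicate systematic deviation
    lims = [90, 185]
    plt.scatter(predicted, measured)
    plt.plot(lims, lims, "k--")
    plt.xlabel("predicted d32 (um)")
    plt.ylabel("measured d32 (um)")
    plt.show()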

  • Ken Andrew Sikaris added an answer:
    What is the correct way to convert HbA1c unit in calculating standard deviation (SD) from standard error (SE)?

    Is the unit conversion formula applicable for converting the standard error of HbA1c (given in mmol/mol) to the standard deviation of HbA1c (in %)?

    The unit conversion formula is HbA1c_IFCC = (HbA1c_DCCT - 2.15) x 10.929

    (The unit of HbA1c_DCCT is %; the unit of HbA1c_IFCC is mmol/mol.)

    Reference: http://www.al-nasir.com/www/PharmCalc/mob_exec_calc.php?ID=HbA1c

    A randomized trial gave the standard error value for HbA1c as zero, mean 62 mmol/mol, number of participants=15.

    Is it correct to convert the standard error "0" (mmol/mol) to a standard deviation of 2.15 (%) using this conversion formula?

     Kindly advise please. Thanks.

    Ken Andrew Sikaris · Melbourne Pathology

    If they did give a reasonable SD, e.g. 62 mmol/mol +/- 2.0 mmol/mol, you would just divide the SD by 10.929 (2.0/10.929 = 0.18). Therefore this would become 7.8% +/- 0.18%.

    [Note that the intercept (2.15) has no effect on the SD calculation, but the CV% calculation is different (2.0/62 = 3.2% whereas 0.18/7.82=2.3%).]
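
    In code form (a trivial sketch of the same arithmetic): only the mean is shifted by the intercept; an SD or SE is purely rescaled, so a reported SE of 0 stays 0 in either unit.

    # Inverse of HbA1c_IFCC = (HbA1c_DCCT - 2.15) * 10.929
    def ifcc_to_dcct(mmol_per_mol):
        return mmol_per_mol / 10.929 + 2.15

    mean_ifcc, sd_ifcc = 62.0, 2.0          # hypothetical SD (the trial reported 0)
    mean_dcct = ifcc_to_dcct(mean_ifcc)     # 7.82 % -- intercept applies to the mean
    sd_dcct = sd_ifcc / 10.929              # 0.18 % -- no intercept for a spread
    print(mean_dcct, sd_dcct)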

  • Daniela E. Winkler added an answer:
    Is Watson’s U² test applicable for both dependent and independent samples?
    I am analysing angular distributions of several samples, using Watson’s U² test to compare if angle distribution patterns differ. Is this test appropriate for both independent and dependent samples? If not, which would be the analogous test for dependent data?
    Daniela E. Winkler · University of Hamburg

    Thank you, Robin, for that extensive answer! That sums it all up very comprehensively and I think I am now prepared to analyse the data. Thanks again!

  • Shree Nanguneri added an answer:
    Is it necessary to learn statistics for writing original research articles for publishing in high impact factor journals?
    Sometimes BMJ reviewers reject articles because of poor statistical analysis. If the data of a submitted manuscript could be analyzed with more advanced statistical techniques, the editor will say that the manuscript should first be shown to a statistician. Is it necessary for the researcher to know the basic concepts of the analysis as well as the statistician does?
    Shree Nanguneri · University of Southern Mississippi


    Dear Dr. Shree and other colleagues, hello.

    How can we operationalize the definition of due credit and acknowledgement? - One cannot operationalize it, as it is purely voluntary and, to some extent, a matter of integrity. If it were plagiarism, that would be an entirely different issue; however, this is more a matter of acknowledgement.

    On sample size estimation being required to get a significant result, and approaching a statistician to comment on the observations raised by the peer reviewers - this is irrelevant, as sample size determination is part of the problem-solving strategy and should be designed in when one structures the project and/or research charter. Most folks don't even realize this and approach the statistician for sample size calculations toward the end of their work (thesis or dissertation). At that point, if they find that the sample size is too large, they compromise by increasing the risk and settling for a smaller sample size. They then say that the statistician recommended it, without passing on the entire story.

    "First of all, we need to understand that statistics is also a professional discipline which demands a lot of expertise." - I am not sure I would agree with that. Your statement is not statistically valid, as this is the case with any field; they all require a certain level of devotion. I have coached candidates in applied statistics (as a non-statistician myself) across different backgrounds and found that they just need to know a few basic tools to begin with. Once they appreciate the value, they are drawn to learn the next level of tools and techniques. More importantly, the assimilation of applied statistics should be a direct function of the complexity of the problem they are trying to solve or the research they are trying to conduct. The objective of the project and/or research effort should be the prerequisite, and not the other way around, which is usually the case. Usually, statistics seems boring, dull or complex because of the way it is taught or published. An effective instructor can make it fun, like any other subject.

    Industry-specific terminology is applicable to all industries, and one shouldn't get hung up on such terminologies.

    In the competitive world involving economic, social, health, and other related dimensions, will the journals think along these lines? - The time has come when people will take publishing into their own hands and initiate self-published scientific journals, wresting power away from a few power-hungry editors and into the hands of the public or the professional community. We have seen articles vetted and published by professional and knowledgeable editors, and then found that the information published was falsified. Not all of them are falsified; however, the ones that lacked evidence are enough to believe that the current system is brittle.

    I was speaking with a group of postgraduate students, and most of them didn't know why the traditional term 'thesis' is used for a Master's-level degree. There is a hypothesis involved, and that means something is being rejected or failing to be rejected. There is probably a mathematical model predicting something, and in many cases even the supervisor is unaware of such tools and techniques. One of my former Green Belts from 17 years ago didn't know when to perform a normality test on a given distribution, and had drifted far from applying those tools in his profession until a customer demanded them. If the customer or industry research sponsor doesn't ask for it, the chances that they will be used are dismal.

    The time has arrived to start independent self-publishing of technical articles in journals. Thoughts and comments would be welcome!

  • Sven Preusser added an answer:
    What are the best resources for learning regression analysis in SPSS?

    While an internet search on Google shows millions of resources, it is difficult to tell which would be helpful for learning a specific topic. I am interested in your experiences of looking for the best available resources for learning statistical data analysis for health.

    Sven Preusser · Max Planck Institute for Human Cognitive and Brain Sciences

    Andy Field is a really good writer. Start with him!
    But you can also watch his tutorials/lectures on youtube.com.

  • Jairo Raúl Chacón Vargas added an answer:
    Calculating a weighted kappa for multiple raters?
    I have a dataset comprised of risk scores from four different healthcare providers. The risk scores are indicative of a risk category of low, medium, high or extreme. I've been able to calculate agreement between the four risk scorers (in the category assigned) based on Fleiss' kappa, but unsurprisingly it has come out very low - actually, I managed to achieve a negative kappa value. I've looked back at the data and there are many cases where, for example, three of the scorers have said 'extreme' and one has said 'high'. Based on normal kappa, this comes out as disagreement, but of course the categories are adjacent, so whilst it's not agreement, it is an awful lot better than, say, two scorers saying 'extreme' and two scorers saying 'low', where the categories are not adjacent.

    I understand the basic principles of weighted kappa and I think this is the approach I need to take but I'm struggling a little with weighted kappa given its multiple raters. Does anyone have any experience on this and advice how is it best to tackle this?
    Jairo Raúl Chacón Vargas · Escuela Colombiana de Ingeniería

    Hi Claire:

    I suggest you see the following web site:

    "Cohen’s kappa is a measure of the agreement between two raters, where agreement due to chance is factored out. We now extend Cohen’s kappa to the case where the number of raters can be more than two. This extension is called Fleiss’ kappa. As for Cohen’s kappa no weighting is used and the categories are considered to be unordered"  (taken from http://www.real-statistics.com/reliability/fleiss-kappa/)

About Statistics

Statistical theory and its application.
