Science topic

Robustness - Science topic

Explore the latest questions and answers in Robustness, and find Robustness experts.
Questions related to Robustness
  • asked a question related to Robustness
Question
1 answer
I have noticed that there are single microscopic slide/slip chambers (Cytodyne, Flexflow, IBIDI), and many studies have used these chambers. I wonder how it is possible to obtain robust data using a single fluid-flow chamber (one replicate) and a control?
Relevant answer
Answer
Hi Mustafa,
our tech support team is happy to help with your question but would need a bit more info on your research question, experimental setup, etc. Please get in touch via Email: techsupport@ibidi.com
  • asked a question related to Robustness
Question
1 answer
Dear Researchers,
I am looking for a research paper that is published in a good journal and confirms the reliability of using NASA-POWER data in hydro-climatic studies.
Best wishes,
Mohammed
  • asked a question related to Robustness
Question
5 answers
Hello all! As part of my master's thesis, I ran an experiment. I now have 5 groups with n = 45 participants each. When I look at the data for the manipulation checks, they are not normally distributed. In theory, however, an ANOVA needs normally distributed data. I know that an ANOVA is a robust instrument and that I don't have to worry about it with my group size. But now to my question: do I in fact need normally distributed data at all for a manipulation check measure, or is it not in the nature of the question that the data are skewed? E.g., if I want to know whether a gain manipulation worked, do I not want the data skewed to the left (or right, depending on the scale)?
Would be great if somebody could give me feedback on that!
Best
Carina
Relevant answer
Answer
I want to emphasize Daniel Wright's comment about assumptions applying to the populations from which you sampled. Textbooks often present the F-test for one-way ANOVA as if it is an exact test. But in order for it to be an exact test, you would need to have random samples from k populations that are perfectly normally distributed with exactly equal variances. (In addition to that, each observation would have to be perfectly independent of all other observations.) Even if it was possible to meet those conditions (which it is not if you are working with real data), the samples would not be perfectly normal, and would not have exactly equal sample variances.
Because it is not possible to meet the conditions described above (at least when you are working with real data, not simulated data), the F-test for ANOVA is really an approximate test. And when you are using an approximate test, the real question is whether the approximation is good enough to be useful.* That's how I see it. YMMV. ;-)
* Yes, I am borrowing "useful" from George Box's famous statement(s) about all models being wrong, but some being "useful". Several variations on that statement can be found here:
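The "good enough approximation" point above is easy to check by simulation. A minimal sketch (assuming SciPy is available, and using illustrative numbers that mirror the 5 groups of n = 45): draw all groups from the same skewed exponential population, so the null hypothesis is true, and count how often the F-test rejects at the 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, k, n = 2000, 5, 45   # mirrors 5 groups of n = 45
alpha = 0.05

rejections = 0
for _ in range(n_sims):
    # identical skewed (exponential) populations -> H0 is true
    groups = [rng.exponential(scale=1.0, size=n) for _ in range(k)]
    _, p = stats.f_oneway(*groups)
    if p < alpha:
        rejections += 1

print(f"empirical Type I error: {rejections / n_sims:.3f}")
```

With equal, reasonably large group sizes the empirical rejection rate stays close to the nominal 5%, which is the sense in which the approximate F-test is "useful" despite the skewness.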
  • asked a question related to Robustness
Question
10 answers
I'm trying to do a robust one-way ANOVA to compare whether there's an effect of my vignette on my dependent variable (learn_c). I need to use the robust version because I don't have equal variances across groups.
When I run a Tukey test on my regular ANOVA, the contrasts seem to make sense.
The "growth" and "placebo" conditions are not significantly different, but both of them are significantly different from "fixed".
However, when I run it using the robust method (using WRS2 package in R), it seems to "misread" the labels and runs the contrasts differently. Now it insists that "growth" and "fixed" are not different but "placebo" and "growth" are.
Does the WRS2 order something differently? Or am I misunderstanding what it does?
Relevant answer
Answer
Not only are the SDs for the 3 groups fairly similar, the sample sizes do not vary all that much either. I mention that, because ANOVA is very robust to heterogeneity of variance when all sample sizes are the same, or nearly so. In any case, given those descriptive stats, I would not be at all uncomfortable with Welch's F-test and a multiple comparison method that is designed for unequal variances--e.g., Games-Howell. You may find this simulation study helpful:
Sauder, D. C., & DeMars, C. E. (2019). An updated recommendation for multiple comparisons. Advances in Methods and Practices in Psychological Science, 2(1), 26-44.
HTH.
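For readers without R/WRS2 at hand, Welch's F-test is simple enough to compute directly from the group summaries. Below is a hedged NumPy/SciPy sketch of the standard formula, run on purely illustrative data (the group names echo the question; the numbers are invented):

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's F-test for k means under unequal variances (sketch)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                           # precision weights
    mw = np.sum(w * m) / np.sum(w)      # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * lam
    f = a / b
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    p = stats.f.sf(f, df1, df2)
    return f, df1, df2, p

rng = np.random.default_rng(1)
growth  = rng.normal(5.0, 1.0, 60)   # illustrative data only
fixed   = rng.normal(3.5, 2.5, 50)   # larger spread, shifted mean
placebo = rng.normal(5.0, 1.5, 55)
f, df1, df2, p = welch_anova(growth, fixed, placebo)
print(f"F = {f:.2f}, df = ({df1}, {df2:.1f}), p = {p:.4f}")
```

The error degrees of freedom are adjusted downward relative to the classic F-test, which is what buys the robustness to unequal variances.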
  • asked a question related to Robustness
Question
3 answers
I am working on time series of COVID-19 data and am interested in the subject
"Robust Forecasting with Exponential Smoothing and Holt-Winters", computing all my results in R. I therefore need help with:
1- any new paper in this field
2- code for Holt-Winters smoothing in R
Thank you for any help
With Best Wishes
Relevant answer
Answer
1) Is there any reason why you chose exponential smoothing and Holt-Winters forecasting?
2) What are the pros and cons of using exponential smoothing and Holt-Winters forecasting for a COVID-19 time series dataset?
3) Are there any computational problems?
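On the code part of the original question: in R, base `HoltWinters()` (stats package) and the `forecast` package already cover this. For the mechanics, here is a hedged NumPy sketch of the additive Holt-Winters recursions, with a simple first-two-seasons initialisation and illustrative (not optimised) smoothing constants:

```python
import numpy as np

def holt_winters_additive(x, m, alpha=0.5, beta=0.3, gamma=0.3, h=8):
    """Additive Holt-Winters smoothing (plain recursions, simple init)."""
    x = np.asarray(x, dtype=float)
    # initialise level, trend and seasonals from the first two seasons
    level = x[:m].mean()
    trend = (x[m:2 * m].mean() - x[:m].mean()) / m
    season = list(x[:m] - level)
    for t in range(m, len(x)):
        s_old = season[t - m]
        new_level = alpha * (x[t] - s_old) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
        season.append(gamma * (x[t] - level) + (1 - gamma) * s_old)
    # h-step-ahead forecasts reuse the most recent seasonal cycle
    last = season[-m:]
    return np.array([level + (j + 1) * trend + last[j % m] for j in range(h)])

# toy series: linear trend plus a clean period-4 seasonal pattern
m = 4
t = np.arange(48)
x = 0.5 * t + np.tile([1.0, -1.0, 0.5, -0.5], 12)
fcst = holt_winters_additive(x, m, h=8)
print(np.round(fcst, 2))
```

In practice one would estimate the smoothing constants by minimising one-step forecast error (which is what `HoltWinters()` does), rather than fixing them as above.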
  • asked a question related to Robustness
Question
3 answers
Currently I am studying VAR methodologies in the hope of constructing a model for a future project, and have a fairly limited understanding of the necessary criteria to be met to generate robust results. My readings of recent literature make few mentions of residual diagnostics, specifically the joint normality of the residuals.
Through trial and error, and visual inspection of the residuals, I have found that a small number of exogenous spike (blip) dummy variables at key dates, such as financial crises or policy changes, corrects the non-normality issue. However, I have found almost no evidence of similar studies doing the same, which leads me to wonder whether such measures are in fact misspecifications.
Tests for co-integration, in my case the Johansen test, give warnings (Eviews 12) against adding any exogenous variables, so as not to invalidate critical values. However, lag selection criteria for a VAR model differs when correcting residual non-normality with exogenous dummies. My current understanding of creating a VEC model is that, as a preliminary measure, both lag length selection and co-integration tests are performed on the model in levels, assuming all series are I(1) processes. Therefore, my question is whether one should:
1) Perform a cointegration test without dummies and select the lag length with dummies.
2) Abandon the dummies altogether, and subsequently violate the normality assumption.
3) Perform both Lag length selection and co-integration testing on the VAR in levels, then add dummies to the VECM.
My intention is to follow the common empirical approach in analysing both the IRF and VDC of the VECM model (assuming there is cointegration), should this have any bearing on the matter. My understanding is that normality matters only for the validity of hypothesis testing; however, in my reading, I have found no evidence to suggest that IRF and VDC standard errors are robust to non-normality.
many thanks,
Andrew Slaven
(Undergraduate Student at Aberystwyth University)
Relevant answer
Answer
I don't think there is an easy answer to your question. The problems you discuss are covered in Juselius (2006), The Cointegrated VAR Model, Oxford. When I last used EViews, its Johansen routines did not cover all the procedures in this book. That was some time ago and things may have changed. CATS in RATS (estima.com) or the equivalent in OxMetrics were better. For a more modern survey, you might look at Kilian and Lütkepohl, Structural Vector Autoregressive Analysis. These are graduate-level texts and are not easy going.
  • asked a question related to Robustness
Question
3 answers
We know that in space, compact size and light weight are key design features. Compact size can sometimes be a constraint on high antenna performance. Deployable origami antennas can be a good candidate to solve this problem. But are they robust enough to work in the space environment?
Relevant answer
Answer
You might read my paper:
Origami based ultraviolet C device for low cost portable disinfection- using a parametric approach to design
Thanks.
  • asked a question related to Robustness
Question
4 answers
the small-signal stability of a two-area power system with and without DFIG
  • asked a question related to Robustness
Question
2 answers
In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set and the decision-maker protects the system against the worst case within that set.
In the context of nonlinear multi-stage max-min robust optimization problems:
What are the best robustness models such as Strict robustness, Cardinality constrained robustness, Adjustable robustness, Light robustness, Regret robustness, and Recoverable robustness?
How to solve max-min robust optimization problems without linearization/approximations efficiently? Algorithms?
How to approach nested robust optimization problems?
For example, the problem can be security-constrained AC optimal power flow.
Relevant answer
Answer
To tractably reformulate robust nonlinear constraints, you can use the Fenchel duality scheme proposed by Ben-Tal, den Hertog, and Vial in
"Deriving Robust Counterparts of Nonlinear Uncertain Inequalities"
Also, you can use Affine Decision Rules to deal with the multi-stage decision making structure. Check for example: "Optimality of Affine Policies in Multistage Robust Optimization" by Bertsimas, Iancu and Parrilo.
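To see what "protecting against the worst case" means in the simplest (strict-robustness) setting, here is a toy linear sketch with box uncertainty, using SciPy and purely illustrative numbers. For x >= 0, the worst case of a·x <= b over a in [a_nom - delta, a_nom + delta] is attained at a_nom + delta, so the robust counterpart is again an LP. A real security-constrained AC-OPF is of course nonlinear and far beyond this.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])        # profits to maximise (illustrative)
a_nom = np.array([2.0, 1.0])    # nominal constraint coefficients
delta = np.array([0.5, 0.25])   # box uncertainty: a in [a_nom - delta, a_nom + delta]
b = 10.0

# nominal problem: max c.x  s.t.  a_nom.x <= b, x >= 0  (linprog minimises)
nominal = linprog(-c, A_ub=[a_nom], b_ub=[b], bounds=[(0, None)] * 2)

# strict robust counterpart: for x >= 0 the worst case in the box is a_nom + delta
robust = linprog(-c, A_ub=[a_nom + delta], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal x:", nominal.x, "objective:", -nominal.fun)
print("robust  x:", robust.x, "objective:", -robust.fun)
```

The nominal optimum violates the worst-case constraint, while the robust optimum gives up some objective value to remain feasible for every coefficient vector in the box.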
  • asked a question related to Robustness
Question
1 answer
Exosomes are described as robust in nature: they can withstand pH or temperature changes and remain stable in various buffers. So can we suspend exosomes in pure or distilled water? If we do, does it affect the markers present on them, and if so, what changes occur?
Relevant answer
Answer
Structural stability at different pHs doesn't mean that exosomes will remain stable and intact under hypotonic conditions (e.g., pure distilled water). Various reactions and changes would be expected.
  • asked a question related to Robustness
Question
2 answers
Hi, today I came across a strange problem. I found a panel data set online (for the period 1980-1987). I first estimated a model with no individual fixed effects, but with time fixed effects (year dummies). As expected, one of the dummy variables was dropped from the model (1980). I then used the fixed-effects estimator and observed something odd: in addition to the 1980 dummy, the 1987 dummy was also dropped, as was the education variable. Education was dropped because it is time-invariant, but I can't explain the removal of the 1987 dummy. I had initially suspected multicollinearity, but then the 1987 dummy should have been dropped in the first regression too, right? Also, the VIF values do not indicate a multicollinearity problem. So what could be the reason? Could it have something to do with the -xtreg- command or the fixed-effects transformation in general?
These are my commands:
reg lnwage union educ exp i.year, robust
xtreg lnwage union educ exp i.year, fe robust
Relevant answer
Answer
Dear Lorenz
The best way to understand this problem is to program it (in MATLAB, for instance) and then observe what happens when you use all the created dummy variables in the regression. Brajesh's answer is near the core of the problem, but not quite there. The problem comes from including ALL of those dummy independent variables. As the picture shows, you are not including a column of ones for the intercept.
You mentioned that, for the time-fixed-effects case, one of the dummy variables was removed from the model "as expected". Do you know exactly why? And why was 1980 the removed variable? First, whenever you include a full set of dummy variables, you cannot also include a column of ones as an independent variable, because you then get PERFECT COLLINEARITY and inv(X'X) does not exist (i.e., any results the routine may return are completely wrong). Second, the routine drops one of those dummies (in this case, 1980) precisely so that it can provide correct results without a column of ones. This implies that the routine measures the degree of multicollinearity, somehow detects the specific variable generating the problem, and drops it.
Now consider the second case, with both entity fixed effects and time fixed effects. You now have two sets of dummy variables, and the problem gets worse: there is PERFECT COLLINEARITY again, because one set of dummy variables acts as a column of ones with respect to the other set. To tell you the truth, the detection routine effectively runs twice, which is why two variables are dropped.
My suggestion: code a MATLAB program and treat those two sets of dummy variables as "the whole set of dummy variables" (excluding "union", "educ" and "exp"). Then drop the FIRST dummy variable in the whole set, estimate, and save the results. Then do it again dropping the SECOND dummy variable, and so on, until you have dropped the LAST dummy variable. Finally, compare all the saved results in terms of SSR, AIC, or any other econometric criterion in order to choose the best estimated model. You should NOT exclude any variable in the set {"union", "educ", "exp"}.
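The rank mechanics behind the answer can be checked without MATLAB. A short NumPy sketch (hypothetical balanced panel, 8 years standing in for 1980-1987) shows both drops: with an intercept plus a full set of year dummies the design matrix is rank-deficient once, and after the within (entity-demeaning) transformation the demeaned year dummies sum to zero, so a second one must go.

```python
import numpy as np

N, T = 30, 8                              # 30 entities, 8 years (illustrative)
year = np.tile(np.arange(T), N)
entity = np.repeat(np.arange(N), T)

D = (year[:, None] == np.arange(T)).astype(float)   # full set of T year dummies
const = np.ones((N * T, 1))

X = np.hstack([const, D])                 # intercept + all 8 dummies
print(np.linalg.matrix_rank(X))           # rank 8, not 9 -> drop one dummy

# within transformation: subtract each entity's mean from every column
def within(M):
    out = M.copy()
    for i in range(N):
        rows = entity == i
        out[rows] -= out[rows].mean(axis=0)
    return out

D_w = within(D)                           # the constant demeans to exactly zero
print(np.linalg.matrix_rank(D_w))         # rank 7 -> a second dummy must go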
  • asked a question related to Robustness
Question
8 answers
Hi, I'm currently searching for a rigorous approach to show that my estimated regression coefficients are robust to sampling procedures.
I have performed a fixed-effect IV regression on a full sample and obtained coefficients. I need to show that my regression coefficients are invariant or robust to different subsets of the sample. How can I test to show that my coefficients are invariant to say the coefficients from a regression after dropping 5, 10, 15, 20%...% of the sample?
Relevant answer
Answer
Moonwon Chung Thank you for providing additional context. This sounds like you would be interested in comparisons across fixed/specific groups of companies rather than a random selection of companies. That is, do you want to formally compare the regression coefficients across companies with, for example, three versus four products (or different ranges, say 3 to 5 versus 6 to 8 products etc.)? If that is the case, then multigroup regression analysis would allow you to formally test whether the regression coefficients differ significantly across those (independent) groups. You don't need to turn your model into an SEM with latent variables for that. All you need is a program for SEM that allows you to run multigroup analysis (most SEM programs do). You can then specify your regression model as a multigroup model and test whether the coefficients differ significantly across groups.
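If, instead of fixed groups, the mechanical "drop x% at random and re-estimate" exercise from the question is wanted, the loop itself is short. A hedged sketch (plain OLS on simulated data stands in for the fixed-effects IV model; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

def ols_slope(Xs, ys):
    return np.linalg.lstsq(Xs, ys, rcond=None)[0][1]

full = ols_slope(X, y)
for frac in (0.05, 0.10, 0.15, 0.20):
    draws = []
    for _ in range(200):                  # 200 random subsamples per drop rate
        keep = rng.random(n) > frac
        draws.append(ols_slope(X[keep], y[keep]))
    draws = np.array(draws)
    print(f"drop {frac:.0%}: mean={draws.mean():.3f}, sd={draws.std():.3f}, "
          f"full-sample={full:.3f}")
```

Plotting the spread of the subsample estimates against the full-sample estimate (and its confidence interval) gives an informal stability check; the multigroup test described above remains the formal comparison across pre-specified groups.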
  • asked a question related to Robustness
Question
4 answers
I am conducting a research on quality in higher education by using system dynamics approach.
How can I determine the number and components of improvement scenarios in the future.
Are there robust selection criteria of number or components of scenarios ?
Are there references for identifying the number and components of scenarios?
Thank you so much
Relevant answer
Answer
System dynamics models help you to understand the interrelationships between variables. System dynamics can be an effective tool in a variety of settings. It lets you develop a fairly complex system model. You can also, for example, look at the interactions of feedback loops to help see how a system reacts over time. You asked about how to determine the number and components of improvement scenarios, robust selection criteria, and identifying the number and components of scenarios. Your questions are good, but might be premature.
Since you are doing a study, you need to first start with a research question/hypothesis. That will suggest the best methodology to use. Your method and research design will help you to answer the questions you asked. The professional literature can help you gain a better focus while showing you what has been done and what research is needed.
  • asked a question related to Robustness
Question
1 answer
Natural Language Processing
Relevant answer
Answer
Sketch Engine is quite robust
  • asked a question related to Robustness
Question
3 answers
If not, what non-parametric test could be used?
Relevant answer
Answer
Try the Aligned Rank Transform (ART) method via the R package ARTool, proposed by Wobbrock et al. (2011), to prepare data for a non-parametric ANOVA. There is also a non-R-based version.
Wobbrock, J. O., Findlater, L., Gergle, D., & Higgins, J. J. (2011, May). The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 143-146).
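For a single factor, the rank-transform idea underlying ART reduces to ranking all observations and running the usual ANOVA on the ranks, which is closely related to Kruskal-Wallis. A SciPy sketch on skewed illustrative data (full ART for factorial designs needs the alignment step that ARTool implements; this only shows the one-factor special case):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# three skewed (log-normal) groups, one with a shifted location (illustrative)
g1 = rng.lognormal(0.0, 0.5, 40)
g2 = rng.lognormal(0.0, 0.5, 40)
g3 = rng.lognormal(0.6, 0.5, 40)

# single-factor rank transform: rank everything, then run the usual ANOVA
ranks = stats.rankdata(np.concatenate([g1, g2, g3]))
r1, r2, r3 = ranks[:40], ranks[40:80], ranks[80:]
f_ranks, p_ranks = stats.f_oneway(r1, r2, r3)

# Kruskal-Wallis gives a closely related rank-based answer
h, p_kw = stats.kruskal(g1, g2, g3)
print(f"ANOVA on ranks: p = {p_ranks:.4f};  Kruskal-Wallis: p = {p_kw:.4f}")
```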
  • asked a question related to Robustness
Question
1 answer
How do I check endogeneity in AMOS if I have one IV (knowledge sharing) and one DV (performance)? And how do I check robustness if I have knowledge sharing as the IV, performance as the DV, and gender as a moderator?
Relevant answer
Answer
There are various methods to do this in AMOS or SPSS.
Check this link:
Also, this video explains the concept very well, which might be helpful for you:
Then test whether the coefficient in your regression is significant. If it is, conclude that X and the error term are indeed correlated; there is endogeneity.
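The residual-based procedure described in this answer is the control-function (Durbin-Wu-Hausman) idea: regress the suspect variable on an instrument, then test the first-stage residual in the outcome equation. A NumPy sketch on simulated data (all numbers illustrative) makes the two stages explicit:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.6 * u + rng.normal(size=n)    # x is endogenous (correlated with u)
y = 1.0 + 2.0 * x + u

def ols(X, y):
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return b, se, resid

# first stage: x on the instrument; keep the residual
Z = np.column_stack([np.ones(n), z])
_, _, v_hat = ols(Z, x)

# second stage: include the first-stage residual as an extra regressor
X2 = np.column_stack([np.ones(n), x, v_hat])
b, se, _ = ols(X2, y)
t_resid = b[2] / se[2]
print(f"coef on residual = {b[2]:.3f}, t = {t_resid:.2f}")
```

A large |t| on the residual term signals endogeneity; as a side effect, the coefficient on x in this augmented regression is a consistent estimate of the structural effect.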
  • asked a question related to Robustness
Question
4 answers
This is a very important question, because I have not found a validated scale so far that can be considered as the most robust/reliable
Relevant answer
Answer
Congratulations colleagues. Thanks for the question. An interesting topic. But we have a war... First the victory, then the assessment of losses and reconstruction… And branding later.
  • asked a question related to Robustness
Question
3 answers
I mean something that could do work equivalent to what MAXQDA or ATLAS.ti do?
Relevant answer
Answer
Versions of this question have been asked here several times, so I suggest that use the Search function (at the top of the page) to locate those answers.
  • asked a question related to Robustness
Question
1 answer
I read some articles about statistical robustness of SmartPLS. However, I am not sure about the appropriateness of SmartPLS in the case of survey study involving a representative sample with adequate sample size. Any suggestions?
Thank you!
Relevant answer
Answer
It's my understanding that PLS works better with smaller samples. But, let's hear it from the experts.
  • asked a question related to Robustness
Question
2 answers
Dears,
Do we need to further exploit the genetic robustness arising from distant hybridization? The mule is an excellent example in this regard.
Relevant answer
Answer
Good luck.
  • asked a question related to Robustness
Question
13 answers
I intend to integrate system dynamics simulation with machine learning to enable it to react quickly to changes and withstand their impact. However, if there is a better ML type that is robust, please suggest it to me. From my preliminary study, the following algorithms seem good to me: 1. adaptive machine learning; 2. deep reinforcement learning; 3. RO model and Pyomo robust optimization.
To sum up, I would be delighted if you could suggest a tool that can easily integrate with ML to make it more robust.
Thank you in advance!
Relevant answer
Answer
Well, I would not be so overconfident.
  • asked a question related to Robustness
Question
1 answer
Hi, my dynamic model is
Gender Inequality Index (GII)_t = a + GII_{t-1} + b*FDI_t + (Controls_t) + u_t
I have 7 control variables. I have used all the control variables and my main explanatory variable as strictly exogenous IV-style instruments. Is this correct? I have read somewhere that we can treat all the regressors as IV-style, but I still don't understand why.
xtabond2 GII lag_GII log_FDIinflowreal NaturalresourceRent Generalgovernmentexpenditure GDPGrowth Schoolsecondaryfemale UrbanPopulationControl polity2 Fertilityrate Y*, gmm(GII, lag (0 5) collapse) iv( log_FDIinflowreal NaturalresourceRent Generalgovernmentexpenditure GDPGrowth Schoolsecondaryfemale UrbanPopulationControl polity2 Fertilityrate Y*, equation(level)) nodiffsargan two
> step robust orthogonal small
Dynamic panel-data estimation, two-step system GMM
------------------------------------------------------------------------------
Group variable: countrycode Number of obs = 239
Time variable : Year Number of groups = 49
Number of instruments = 24 Obs per group: min = 1
F(17, 48) = 22.88 avg = 4.88
Prob > F = 0.000 max = 9
----------------------------------------------------------------------------------------------
| Corrected
GII | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-----------------------------+----------------------------------------------------------------
lag_GII | .4063319 .1660706 2.45 0.018 .0724247 .7402392
log_FDIinflowreal | .0052016 .004571 1.14 0.261 -.0039891 .0143923
NaturalresourceRent | .0001336 .0007056 0.19 0.851 -.0012852 .0015523
Generalgovernmentexpenditure | -.0011517 .0027406 -0.42 0.676 -.0066621 .0043588
GDPGrowth | .0000538 .0011326 0.05 0.962 -.0022235 .0023311
Schoolsecondaryfemale | -.0015661 .0005599 -2.80 0.007 -.0026918 -.0004405
UrbanPopulationControl | .0002386 .000501 0.48 0.636 -.0007687 .0012459
polity2 | .0029176 .00107 2.73 0.009 .0007662 .005069
Fertilityrate | .0172748 .0121555 1.42 0.162 -.0071655 .0417151
Year | -.0002603 .0066672 -0.04 0.969 -.0136656 .013145
Yeardummy1 | .1047832 .1552767 0.67 0.503 -.2074215 .4169879
Yeardummy17 | -.006658 .0432925 -0.15 0.878 -.0937034 .0803874
Yeardummy18 | -.0006796 .0359611 -0.02 0.985 -.0729842 .071625
Yeardummy19 | -.0071339 .0330241 -0.22 0.830 -.0735332 .0592655
Yeardummy20 | -.0066488 .0261336 -0.25 0.800 -.0591938 .0458963
Yeardummy21 | .0021421 .0180578 0.12 0.906 -.0341655 .0384498
Yeardummy22 | .0005937 .0097345 0.06 0.952 -.0189789 .0201663
_cons | .8149224 13.41294 0.06 0.952 -26.15361 27.78345
----------------------------------------------------------------------------------------------
Instruments for orthogonal deviations equation
GMM-type (missing=0, separate instruments for each period unless collapsed)
L(0/5).GII collapsed
Instruments for levels equation
Standard
log_FDIinflowreal NaturalresourceRent Generalgovernmentexpenditure
GDPGrowth Schoolsecondaryfemale UrbanPopulationControl polity2
Fertilityrate Year Yeardummy1 Yeardummy2 Yeardummy3 Yeardummy4 Yeardummy5
Yeardummy6 Yeardummy7 Yeardummy8 Yeardummy9 Yeardummy10 Yeardummy11
Yeardummy12 Yeardummy13 Yeardummy14 Yeardummy15 Yeardummy16 Yeardummy17
Yeardummy18 Yeardummy19 Yeardummy20 Yeardummy21 Yeardummy22 Yeardummy23
Yeardummy24
_cons
GMM-type (missing=0, separate instruments for each period unless collapsed)
DL.GII collapsed
------------------------------------------------------------------------------
Arellano-Bond test for AR(1) in first differences: z = -1.70 Pr > z = 0.090
Arellano-Bond test for AR(2) in first differences: z = 0.43 Pr > z = 0.669
------------------------------------------------------------------------------
Sargan test of overid. restrictions: chi2(6) = 18.25 Prob > chi2 = 0.006
(Not robust, but not weakened by many instruments.)
Hansen test of overid. restrictions: chi2(6) = 5.55 Prob > chi2 = 0.475
(Robust, but weakened by many instruments.)
.
Relevant answer
Answer
You should think about how plausible the exogeneity of each of the variables is in your framework. That is something related to economic theory.
What happens in dynamic models is that the lagged dependent variable on the right-hand side of the equation is extremely likely to be endogenous (specifically, correlated with the error term). That is the reason you apply Arellano-Bond.
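That endogeneity of the lagged dependent variable can be seen in a short simulation: under the within (fixed-effects) estimator the autoregressive coefficient is biased downward when T is small (the Nickell bias), which is why GMM estimators like Arellano-Bond are used instead. A NumPy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(11)
N, T, rho = 500, 5, 0.5

# simulate a dynamic panel: y_it = rho * y_i,t-1 + a_i + e_it
a = rng.normal(size=N)
y = np.zeros((N, T + 1))
y[:, 0] = a / (1 - rho) + rng.normal(size=N)   # start near the stationary mean
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + a + rng.normal(size=N)

# within (fixed-effects) estimator of rho: demean by entity, then regress
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_dm = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_dm = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_fe = (y_lag_dm * y_cur_dm).sum() / (y_lag_dm ** 2).sum()
print(f"true rho = {rho}, within estimate = {rho_fe:.3f}")
```

With T this small the within estimate falls well below the true value (the bias is roughly -(1 + rho)/(T - 1)), and it does not vanish as N grows, only as T grows.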
  • asked a question related to Robustness
Question
8 answers
When using Wasserstein balls to describe the uncertainty set in distributionally robust optimization, can multiple sources of uncertainty be considered at the same time, such as wind power and solar power forecast error?
  • asked a question related to Robustness
Question
6 answers
I am implementing Distributionally robust optimization using bender decomposition in GAMS. Can anyone provide me with any helping material or source code for the implementation of expansion planning problems?
  • asked a question related to Robustness
Question
6 answers
I am looking for a robust method to aggregate the 8-day MOD16A2 (or MOD16A2GF) ET_500m values to monthly ET, since ET_500m and PET_500m are 8-day totals of water loss (0.1 kg/m2/8day). I tried summing all values within a month, but when I looked at the dates (i.e. 01/25/2000 02/02/2000 02/10/2000 02/18/2000 02/26/2000 03/05/2000),
it seems that I'm summing more values at the beginning of the month and fewer at the end, so the monthly sum loses reliability.
Because of that, this method doesn't seem appropriate. Is there any other method for this task, or a way to improve this one?
The monthly MODIS ET dataset will be used to validate SWAT ET.
Relevant answer
Answer
Hi Albert,
I don't know if the question is still relevant, but I wanted to share my solution. I had the same question and could not find code.
So here is my code for GEE (Google Earth Engine), which works quite alright. Anyway, always open to better solutions!
//define start and end dates
//!! must run from the 1st day of the first month to the last day of the last month
var start = ee.Date('2020-05-01');
var end = ee.Date('2020-07-31');
var end_ET = end.advance(7, 'days'); // 7 extra days in case the end of month is not covered
//import data
var geometry = 'YOUR GEOMETRY';
var ET = ee.ImageCollection('MODIS/006/MOD16A2')
  .filter(ee.Filter.date(start, end_ET))
  .filter(ee.Filter.bounds(geometry))
  .select('ET');
//number of months to calculate
var nMonths = ee.Number(end.difference(start, 'month')).round();
//tag each image with its day of month
ET = ET.map(function(img) {
  return img.set('system:day', img.date().format('dd'));
});
//convert to list for easier indexing
var ET_list = ET.toList(ET.size());
//Correction of the ET data:
//each image is an 8-day sum, so images that straddle a month boundary
//are rescaled to keep only the portion falling inside the month
var ET_new = ee.ImageCollection(
  ee.List.sequence(0, ET.size().subtract(2)).map(function(n) {
    var ET1 = ee.Image(ET_list.get(n));
    var ET2 = ee.Image(ET_list.get(ee.Number(n).add(1)));
    var ET_corr = ET1.expression(
      'day1 < 8 ? (ET1 / 8) * day1 : (day2 < 8 ? ET1 + ((ET2 / 8) * (8 - day2)) : ET1)', {
        'day1': ee.Number.parse(ET1.get('system:day')),
        'day2': ee.Number.parse(ET2.get('system:day')),
        'ET1': ET1.select('ET'),
        'ET2': ET2.select('ET')
      }).rename('ET_corr');
    // return the corrected image, keeping its timestamp
    // (an earlier version of this code returned the uncorrected ET1 here by mistake)
    return ET_corr.toFloat().copyProperties(ET1, ['system:time_start']);
  }));
//finally, the monthly sums (over the corrected collection ET_new, not the raw one)
var ET_monthly = ee.ImageCollection(
  ee.List.sequence(0, nMonths.subtract(1)).map(function(n) {
    var ini = start.advance(n, 'month');
    var end = ini.advance(1, 'month');
    return ET_new.filterDate(ini, end)
      .select(0).sum().multiply(0.1) // scale factor: values are stored in 0.1 kg/m2
      .set('system:time_start', ini.format('YYYY-MM'));
  }));
  • asked a question related to Robustness
Question
1 answer
Here are my Matlab files for the paper we recently published in IET Control Theory and Applications. The paper is about designing robust controllers for networked systems. It is hoped that these codes will help students to understand how a robust approach would be coded.
Code Matlab a Robust Controller +LMI+Multi-agent systems
The YALMIP toolbox must be added to MATLAB.
How to Install a MATLAB toolbox?
After that, you can run the attached codes.
Relevant answer
Answer
Farshad Rahimi Great to have collaboration with you.
  • asked a question related to Robustness
Question
4 answers
A reviewer suggested there are "more advanced and robust methods" to compare groups in my study that involves three groups comparison - two with Augmented Reality tools and one with a conventional marketing tool, same brand and same product type though.
The reviewer said "ANOVA is a useful and robust analysis tool if we compare directly measurable items. Unfortunately, this is not the case for this manuscript. Therefore, the findings of this comparison are questionable."
Relevant answer
Answer
Ye Chen Thanks for the clarification. Researchers have been using ANOVA on psychological and behavioral factors all the time, and so this should not be a problem. Please do investigate your data distributions, though. My collaborator used Welch's ANOVA for one of our studies where the DV data had unequal variances across three groups (i.e. when the homogeneity of variances assumption is violated). And there are other parametric tests for different use cases. Good luck!
  • asked a question related to Robustness
Question
4 answers
Are weights derived from a PCA analysis used to calculate the water quality index?
Is a robust PCA always required to derive the parameter weights, or is a classical PCA also possible in the same way?
Thanks.
Relevant answer
Answer
Robust PCA is not always required, but it provides a justification for assigning weights to the selected parameters in WQI models. Weights are assigned based on parameter importance and are calculated using different methods.
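One common variance-explained weighting scheme can be sketched in a few lines of NumPy. This is only one of the methods alluded to above, on hypothetical data: eigen-decompose the correlation matrix, retain components by the eigenvalue-greater-than-one rule, and weight each parameter by its squared loadings times the retained components' explained variance.

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical standardised water-quality parameters (pH, DO, BOD, turbidity, ...)
data = rng.normal(size=(200, 6))
data[:, 1] += 0.7 * data[:, 0]           # induce some correlation structure
z = (data - data.mean(axis=0)) / data.std(axis=0)

corr = np.corrcoef(z, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]         # sort components by eigenvalue
eigval, eigvec = eigval[order], eigvec[:, order]

keep = eigval > 1.0                      # Kaiser criterion
expl = eigval[keep] / eigval.sum()       # variance explained by retained PCs
load2 = eigvec[:, keep] ** 2             # squared loadings per parameter

weights = load2 @ expl
weights /= weights.sum()                 # normalise so the weights sum to 1
print(np.round(weights, 3))
```

The same recipe works with a robust correlation matrix in place of `np.corrcoef` when outliers are a concern, which is where robust PCA enters.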
  • asked a question related to Robustness
Question
8 answers
What are the assumptions of robust least-squares regression, and which scholars support them?
  • asked a question related to Robustness
Question
6 answers
Hi,
I'm looking for a small (less than 0.5m of max length) underwater sound source to do some experimental measurements, any recommendations?
I'm looking for something robust and reliable but with prices ranging from lowcost to lab equipment.
All the best
Relevant answer
Answer
Hello Rolf,
I plan to use it in an aquarium/lab over days, a couple of weeks maximum.
We were using this (https://dnhloudspeakers.com/loudspeakers/underwater/aqua-30-2/) for a while, but it didn't work very well and broke easily when we used more than 100 dB. I'm looking for something similar but more durable.
Thanks for your help!
  • asked a question related to Robustness
Question
4 answers
Can we use the NPCR and UACI to test the robustness of audio encryption against differential attacks? What is the range of these two for a good encryption system?
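NPCR and UACI carry over directly to audio if the samples are treated like 8-bit pixel values; the commonly cited ideal benchmarks for 8-bit data are NPCR around 99.61% and UACI around 33.46%. A NumPy sketch (uniform random "ciphertexts" stand in for two encryptions of plaintexts differing in one sample, which is what a good cipher should produce):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI between two cipher signals (8-bit samples assumed)."""
    c1 = np.asarray(c1, dtype=np.int64)
    c2 = np.asarray(c2, dtype=np.int64)
    npcr = 100.0 * np.mean(c1 != c2)                 # % of samples that differ
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)  # mean intensity change
    return npcr, uaci

# two independent uniform 8-bit "ciphertexts" approximate the ideal case
rng = np.random.default_rng(9)
c1 = rng.integers(0, 256, 100_000)
c2 = rng.integers(0, 256, 100_000)
npcr, uaci = npcr_uaci(c1, c2)
print(f"NPCR = {npcr:.2f}%  (ideal ~ 99.61%)")
print(f"UACI = {uaci:.2f}%  (ideal ~ 33.46%)")
```

For audio with a different sample depth (e.g., 16-bit), the 255 denominator and the ideal values change accordingly.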
  • asked a question related to Robustness
Question
4 answers
Give your suggestions for robust framework for a water sensitive policy.
Relevant answer
Answer
References on water resources policies in the arid region are provided within the projects :
NATIONAL WATER SECURITY | Jamel Chahed | Research Project (researchgate.net)
THE HOLISTIC WATER BALANCE: BLUE, GREEN & VIRTUAL WATER | Jamel Chahed | Research Project (researchgate.net)
WATER MANAGEMENT AND GOVERNANCE | Jamel Chahed | 6 publications | Research Project (researchgate.net)
  • asked a question related to Robustness
Question
12 answers
The models that are used to check the robustness of the main econometric model may not always provide 100% parallel outcomes. Does it mean that there are flaws in the main estimation outcomes?
Relevant answer
Answer
I think how similar your robustness check should be to the main method depends on the properties of both estimators. For instance, using augmented ARDL as a robustness test for bootstrap ARDL could yield similar results. The same might go for LSDV and GMM, because both are dynamic-model methods and can also correct for endogeneity problems. So the similarity depends on the similarity of the estimators. The exact coefficients don't really matter, but the similarity of the signs of the coefficients is very important.
Regards
  • asked a question related to Robustness
Question
5 answers
Dear colleagues,
Currently I'm doing research using the ARDL-ECM method, processed in EViews 10. Can somebody list or explain the robustness tests used for ARDL-ECM analysis?
Thank you
Relevant answer
Answer
Except for normality, I am in agreement with Kehinde Mary Bello. I am not arguing that normality isn't important, but its importance is minimal according to Monte Carlo studies.
  • asked a question related to Robustness
Question
4 answers
Recently, I have been studying robust MPC based on LMIs. A number of papers use Lyapunov-Krasovskii functionals to study time-delay systems. Can I use a Lyapunov-Krasovskii functional to study a system without time delay?
Relevant answer
Answer
Usually a Lyapunov-Krasovskii functional, together with a Razumikhin-type theorem, is used to obtain stability conditions for systems with delay. For delay = 0 you obtain results (but not new results) for systems without delay.
  • asked a question related to Robustness
Question
7 answers
What sorts of robust quality-measuring tools exist to support institutional self-evaluation in a university context?
Relevant answer
Answer
These things are usually paid for
  • asked a question related to Robustness
Question
6 answers
I have been searching but haven't found any robust material on this topic. Can anyone point me to a paper or book that explains this?
Relevant answer
Answer
Tomasz Fraczyk , you got me curious enough to plot some data. I think Mohamed Ali's question stands irrespective of the plotting method. Here are two plots of the same fluorescence data for Acridine Yellow I took from this website: https://omlc.org/spectra/PhotochemCAD/html/035.html
I just clicked on a random compound, but Acridine Yellow gives a typical spectrum to my eyes. Plotting the data against wavelength (low to high) and wavenumber (high to low), and superimposing the symmetric normal distribution, the fluorescence spectrum is pretty asymmetric on both scales.
I agree with you the asymmetry is a little more exaggerated in the wavelength plot, and plotting in wavenumber makes more physical sense, but I don't want people to get the impression the asymmetry is only an artifact of plotting against wavelength. It is a common feature of fluorescence spectra with a physical explanation.
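For anyone wanting to replot data this way, converting a spectrum from a per-wavelength to a per-wavenumber representation is not just a relabeling of the axis: the intensity must be rescaled by the Jacobian |dλ/dν̃| = λ² so that integrated areas are preserved. A minimal sketch (hypothetical helper, assuming NumPy):

```python
import numpy as np

def to_wavenumber(wavelength_nm, intensity_per_nm):
    """Convert a per-wavelength spectrum to a per-wavenumber one,
    applying the Jacobian |dlambda/dnu| = lambda**2."""
    wl_cm = np.asarray(wavelength_nm, dtype=float) * 1e-7  # nm -> cm
    nu = 1.0 / wl_cm                                       # wavenumber, cm^-1
    inten = np.asarray(intensity_per_nm, dtype=float) * wl_cm ** 2
    order = np.argsort(nu)                                 # ascending wavenumber
    return nu[order], inten[order]
```

Skipping the λ² factor changes the apparent shape (and asymmetry) of the band, which is part of why wavelength and wavenumber plots look different.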
  • asked a question related to Robustness
Question
4 answers
Dear all, I have some real data (about 32 equidistant points), and I fitted a Fourier series to it using the FFT. I get 32 complex Fourier coefficients, which correspond to 16 positive frequencies. I want to apply a low-pass filter to smooth the fitted function. Currently I take the fifth frequency as the low-pass threshold (so I keep only the first five frequencies, which correspond to about 30% of the total). I chose this threshold based on a visual inspection of the fitted curve. Can anyone suggest a more robust or efficient method to choose the threshold frequency for the low-pass filter?
Relevant answer
Answer
I suggest the MATLAB Filter Designer tool.
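Besides dedicated filter-design tools, one data-driven way to choose the cutoff is to keep the smallest set of low frequencies whose cumulative power reaches a fixed fraction of the total spectral power, rather than picking the fifth harmonic by eye. A minimal sketch (hypothetical function name; the 95% energy fraction is an arbitrary tuning choice):

```python
import numpy as np

def lowpass_cutoff_by_energy(x, energy_frac=0.95):
    """Keep the smallest number of leading (lowest) frequencies whose
    cumulative power reaches `energy_frac` of the total, then
    reconstruct the smoothed signal by inverse FFT."""
    X = np.fft.rfft(x)
    power = np.abs(X) ** 2
    cum = np.cumsum(power) / power.sum()          # cumulative power fraction
    k = int(np.searchsorted(cum, energy_frac)) + 1  # number of bins to keep
    X_filt = np.zeros_like(X)
    X_filt[:k] = X[:k]
    return k, np.fft.irfft(X_filt, n=len(x))
```

For a signal dominated by slow oscillations this selects a small `k` automatically, and the choice of energy fraction is explicit and reportable instead of visual.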
  • asked a question related to Robustness
Question
4 answers
Can the distributionally robust optimization (DRO) method deal with problems involving multiple uncertain variables? Are there any related literature recommendations? Thanks for your kind help.
  • asked a question related to Robustness
Question
7 answers
Hi everyone.
I have non-normal data and want to do a confirmatory factor analysis, so I'm using a robust estimator (MLM) in R. When I use the MLM estimator without deleting anything, it results in poor fit indices. If I first delete influential outliers according to Mahalanobis distance (p < 0.001) and then use MLM, I get acceptable fit indices. But I am confused about whether this is a valid method or overfitting.
On the other hand, as I understand it, I should report results both with and without deletion of outliers. If I report both, can I say the study provides validation?
I'll be glad to hear your advice.
Best regards!
Relevant answer
Answer
I would recommend looking at this book, which is available in the Z-Library; the second author is a world-class expert on such things. The software is available in the R cluster package. BTW, Peter will probably answer questions by email; he is on ResearchGate. Best wishes, David Booth
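For reference, the Mahalanobis-distance screening described in the question can be sketched as follows (a hypothetical helper: squared distances compared against a chi-square cutoff with p degrees of freedom; note that the classical mean and covariance estimates are themselves sensitive to outliers, which is why robust alternatives such as MCD are often preferred):

```python
import numpy as np
from scipy import stats

def mahalanobis_outliers(X, alpha=0.001):
    """Flag multivariate outliers: squared Mahalanobis distance from the
    sample mean, compared against a chi-square(p) upper quantile."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    d2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)   # squared distances
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff
```

Reporting results with and without the flagged cases, as suggested above, lets readers judge how much the fit depends on those observations.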
  • asked a question related to Robustness
Question
3 answers
Many empirical papers in economics have a separate section for robustness analysis after the main estimation, which might use another estimator or perhaps new sets of variables. The questions that follow are: how do researchers choose the appropriate robustness estimator after the main analysis, and what are the implications of differing results between the main estimator and the robustness estimator?
Relevant answer
Answer
Dear Hamid,
I would like to suggest below articles
Best regards,
Kasun
  • asked a question related to Robustness
Question
6 answers
I have results from two different techniques, and now I want to compare them. I have a huge amount of data to compare, so I need a clearer and more robust method than simply plotting the result data of both techniques. If anyone knows such a method, I would be thankful if they could share it with me; I am going to add the comparison to my thesis.
Thanks
Relevant answer
Answer
You must highlight the nature of your research.
Is it qualitative, quantitative, or mixed-method?
  • asked a question related to Robustness
Question
4 answers
Many empirical papers in economics have a separate section for robustness analysis after the main estimation, which might use another estimator or perhaps new sets of variables. The questions that follow are: how do researchers choose the appropriate robustness estimator after the main analysis, and what are the implications of differing results between the main estimator and the robustness estimator?
Relevant answer
Answer
Dear Hamid Muili,
Perhaps it would be good if you read about Dr. G. Taguchi's "robust optimization". A robust design shall work efficiently without failures. It is critical to define the "noise conditions" and validate the design against those conditions to confirm robustness.
Krzysztof (Chris) Michalowski
  • asked a question related to Robustness
Question
5 answers
I want to evaluate the robustness of the clustering algorithm to noise. How can I add noise to the data? Is there a well-known method (such as salt-and-pepper noise for image data)?
Relevant answer
Answer
Simply, you may set some values to missing (empty).
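A common alternative to deleting values is additive Gaussian noise scaled to each feature's spread, so that the perturbation level is comparable across features. A minimal sketch (the 5% default is an arbitrary, hypothetical choice; in practice you would rerun the clustering at several noise levels and track a stability index such as the adjusted Rand index):

```python
import numpy as np

def add_gaussian_noise(X, level=0.05, rng=None):
    """Perturb each feature with zero-mean Gaussian noise whose standard
    deviation is `level` times that feature's own standard deviation."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    sigma = X.std(axis=0)                 # per-feature spread
    return X + rng.normal(0.0, level * sigma, size=X.shape)
```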
  • asked a question related to Robustness
Question
3 answers
Hello everyone.
I am a PhD candidate in finance using this command in Stata. The sample comprises N = 88 countries and T = 37 years:
xtabond2 Index L1.Index L2.Index Savings private Value FDI MO invest Governst PowerD Individ mas Uncert Longter , gmm(L1.Index L2.Index, laglimits(2 .) collapse) iv( PowerD Individ mas Uncert Longter, equation(level)) twostep orthogonal small
Arellano-Bond test for AR(1) in first differences: z = -0.99 Pr > z = 0.323
Arellano-Bond test for AR(2) in first differences: z = -1.03 Pr > z = 0.304
------------------------------------------------------------------------------
Sargan test of overid. restrictions: chi2(27) = 2.83 Prob > chi2 = 1.000
(Not robust, but not weakened by many instruments.)
Hansen test of overid. restrictions: chi2(27) = 36.62 Prob > chi2 = 0.102
(Robust, but weakened by many instruments.)
Difference-in-Hansen tests of exogeneity of instrument subsets:
GMM instruments for levels
Hansen test excluding group: chi2(25) = 27.78 Prob > chi2 = 0.318
Difference (null H = exogenous): chi2(2) = 8.84 Prob > chi2 = 0.012
Waiting for useful help.
Relevant answer
Answer
Yes, exactly; you can apply the PV and FV formulas.
  • asked a question related to Robustness
Question
1 answer
The structural and functional behaviour of metabolites
Relevant answer
Answer
  • asked a question related to Robustness
Question
19 answers
I have many groups (from 10 to 30, with 200-500 observations in each; not normally distributed).
After Friedman's ANOVA test I have significance P < 0.0000.
Which post-hoc test is more robust and appropriate for pairwise comparisons?
Do I need to use a Bonferroni correction in this situation?
Or is the Wilcoxon rank-sum test the best choice?
Relevant answer
Answer
I am trying to think of a hypothesis that would require pairwise testing of 10 to 30 groups!
First, note that the Friedman test is a pretty low-power test, and is not, as people think, an extension of the Wilcoxon-Mann-Whitney test; it's at most a sort of sign test. Thom Baguley has some useful remarks here: https://seriousstats.wordpress.com/2012/02/14/friedman/
He points out that you are better off doing an ANOVA on ranks than going with the Friedman.
However, for ten groups you have 45 possible pairwise comparisons, while for thirty you have over four hundred. Step back a bit and ask yourself if these are hypothesis tests. My general principle is no hypothesis – no test.
If the groups can be simplified by merging categories that are a) similar to each other and b) different from the other categories, then you can reduce the number of groups, which will reduce the post hoc testing, but it still won't give you a hypothesis. If the groups were, for example, geographical districts you could pool them into regions that would be recognisable as meaningful entities and different one from the other. If they were diagnoses, you could group them by body system or by procedure type etc etc.
If, on the other hand, the groups are of actual interest, for example if the grouping variable is a school, then multilevel modelling is more appropriate because you can then test hypotheses about the effect of context (group) on individual observations.
I would comment that Béatrice Marianne Ewalds-Kvist's surprise that your outcome variable is not normally distributed assumes that it ought to have been. In real life there are many other distributions; if you are counting things, for example, Poisson or negative binomial distributions make more sense, and many distributions are zero-inflated because the variable doesn't apply to some people. So I'm not convinced that you need to run from the data in Friedman's direction just yet.
Tell us more about your study and about your hypothesis, and critically about the outcome variable.
  • asked a question related to Robustness
Question
15 answers
I have collected data on autistic and non-autistic participants on various measures. The number of participants is around 2,000, approximately 1,000 in each group. I have removed extreme outliers, transformed the data (log, square root, reciprocal) and tried Winsorizing, but the data, both for the scales as a whole and for the individual groups, are not normally distributed.
Visually the histograms look normally distributed for some of the variables, but the KS and Shapiro-Wilk statistics suggest otherwise.
I want to run MANCOVA, ANCOVA, mediation and some other parametric tests. I know I am violating assumptions and it should be non-parametric, but can it be justified (they are robust tests, it is a large data set, etc.) that all of this can be overlooked?
Is there something else I could try to get the data to be normally distributed?
Relevant answer
Answer
1. Why do you want the data to be either normal or close to normal? Those stats that you mentioned make assumptions about the residuals, not the variables.
2. Why are you removing outliers? Do you not want your inference to generalize to this group? There are ways to lessen their impact like robust methods. There are also methods like bootstrapping that are less dependent on distributional assumptions.
3. Ranking and applying the inverse normal function will get continuous data (i.e., no ties) to approximate the normal distribution (or another distribution if you use another function). Here is an under-review paper on this, and it cites some of the older work on this:
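The rank-based inverse normal transform mentioned in point 3 can be sketched like this (a hypothetical helper; c = 0.5 gives the common "rankit" variant and c = 3/8 the Blom variant):

```python
import numpy as np
from scipy import stats

def rank_inverse_normal(x, c=0.5):
    """Rank-based inverse normal transform: map ranks to normal
    quantiles via (rank - c) / (n - 2c + 1)."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)        # average ranks if ties occur
    n = len(x)
    return stats.norm.ppf((ranks - c) / (n - 2 * c + 1))
```

As the answer notes, this only works cleanly for continuous data; heavy ties leave lumps that no monotone transform can remove.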
  • asked a question related to Robustness
Question
22 answers
I want to know which model works well with a small sample size of say 100 and under what condition will each of the models perform better
Relevant answer
Answer
The purpose and research questions, measurement scale, type and distribution of the data being analysed, and nature of the observations all influence which statistical method or test should be employed. It is important to consider the assumptions of the model and assessment of suitability of the data set for various analysis: Probit Model Analysis: https://www.scirp.org/journal/paperinformation.aspx?paperid=107292
  • asked a question related to Robustness
Question
3 answers
I have a VAR(2) model whose residuals show autocorrelation (mostly up to lag 8), even when the number of lags in the model is increased. I was advised that robust estimators of the covariance matrix would help with this, so that there should be no autocorrelation problem for inference. How can I replace the standard covariance matrix estimators with robust ones for this model in Python?
Relevant answer
  • asked a question related to Robustness
Question
6 answers
hello,
I work on a controlled microgrid and I want to test the robustness of my controller against white noise that may be added to the output or the input. Is there any specific condition to follow in order to make a good choice of noise power, or is it something arbitrary?
- Actually I tried taking it at about 3% of the nominal measurement value; is this a good choice?
- In addition, I tried both types of noise, and I noticed that the one applied to the output affects the system much more than the one applied to the input (my system loses its stability with output noise, but gives acceptable performance with input noise). Is this reasonable? If yes, why?
thank you in advance
Relevant answer
Answer
Hi Sarah
The design of a microgrid controller and its robustness testing differ from those of a communication or classical control system, so the white-noise concept will not work here. Testing a controlled microgrid depends on operational scenarios, and researchers have proposed several robustness metrics for those scenarios.
One testing protocol is published as an IEEE standard:
2030.8-2018 - IEEE Standard for the Testing of Microgrid Controllers
DOI: 10.1109/IEEESTD.2018.8444947
It is useful for simulating operational scenarios and testing the designed microgrid controller.
  • asked a question related to Robustness
Question
4 answers
Dear colleagues,
Does anyone know a method for predicting (with more or less uncertainty) the contribution of organisms (community, species, functional group...) to a function/process?
I'm not a mathematical modeler and I don't pretend to create something robust; rather, I am in an exploratory process to enable a better understanding among stakeholders.
For the moment I have found a few publications on plants that can help me (below), but I wonder whether there are other publications, on other organisms or ecosystems?
Best regards,
Kevin Hoeffner
References:
-Garnier, E., Cortez, J., Billès, G., Navas, M. L., Roumet, C., Debussche, M., ... & Toussaint, J. P. (2004). Plant functional markers capture ecosystem properties during secondary succession. Ecology, 85(9), 2630-2637.
-Suding, K. N., Lavorel, S., Chapin III, F. S., Cornelissen, J. H., Díaz, S., Garnier, E., ... & Navas, M. L. (2008). Scaling environmental change through the community-level: A trait-based response-and-effect framework for plants. Global Change Biology, 14(5), 1125-1140.
-Zwart, J. A., Solomon, C. T., & Jones, S. E. (2015). Phytoplankton traits predict ecosystem function in a global set of lakes. Ecology, 96(8), 2257-2264.
Relevant answer
Answer
Hello Kevin,
I think you can use a trait-based approach to understand the contribution of organisms to function.
  • asked a question related to Robustness
Question
6 answers
I am looking for a robust LC MS/MS method for steroids.
Relevant answer
Answer
Dear Georgia Charkoftaki
Thanks for your interest in analyzing steroids. When searching for an analytical method using LC-MS/MS, it is also necessary to specify the matrix in which you want to analyze the steroids. Anyway, you may visit the following link:
  • asked a question related to Robustness
Question
4 answers
Dear Community,
I am doing a fractional outcome regression (logit) for my thesis and can't find information on what assumptions the model makes. I suppose I would need to verify those assumptions are met in my sample in order to conduct the analysis. Furthermore, I wanted to know whether there is any way of doing a robustness test on such a model.
Additionally, in a fractional outcome regression, do my independent variables have to be between 0 and 1 as well?
Thank you very much,
Best,
Jan
Relevant answer
Answer
Fractional responses concern outcomes between zero and one.
The most natural way fractional responses arise is from averaged 0/1 outcomes. In such cases, if you know the denominator, you want to estimate such models using standard probit or logistic regression. For instance, the fractional response might be 0.25, but if the data also include that 4 out of 36 had a positive outcome, you can use the standard estimation commands.
Fractional response models are for use when the denominator is unknown. That can include averaged 0/1 outcomes such as participation rates, but can also include variables that are naturally on a 0 to 1 scale such as pollution levels, patient oxygen saturation, and Gini coefficients (inequality measures).
Fractional response estimators fit models on continuous zero to one data using probit, logit, heteroskedastic probit, and beta regression. Beta regression can be used only when the endpoints zero and one are excluded.
This is copied and cited from https://www.stata.com/features/overview/fractional-outcome-models/. Hope this helps. Thanks.
  • asked a question related to Robustness
Question
1 answer
I have carried out a robustness test by adjusting the calibration threshold, and the consistency of some configurations in the test results is slightly lower than 0.75 (e.g., 0.73/0.72). Can this result be regarded as robust?
Relevant answer
Answer
Dear Chao Zhou,
I suggest you to see links and attached files on topic.
  • asked a question related to Robustness
Question
7 answers
Long story short:
I use a long unbalanced panel data set.
All tests indicate that 'fixed effects' is more appropriate than 'random effects' or 'pooled OLS'.
No serial correlation.
BUT, heteroskedasticity is present, even with robust White standard errors.
Can someone suggest a way to either 'remove' or just 'deal with' heteroskedasticity in a panel data model?
Relevant answer
Answer
Using GLS (rather than OLS) is a solution for your heteroskedasticity; Gujarati and Porter also suggest this option in their econometrics textbook.
FYI, if you are using Stata, the syntax "xtgls depvar indepvars, igls" will be useful. For more, you may visit this link
  • asked a question related to Robustness
Question
5 answers
Hello! I'm running a Friedman two-way analysis because my sample is not normally distributed.
I've performed the analyses on different groups, paired over two periods. One of them, although showing a considerable difference between the two periods, is not significant (Friedman 1.3 on 1 degree of freedom). I wonder if this is because this group is smaller (n=22) than the others.
I've been looking for evidence on the Friedman test's robustness as a function of sample size, but I haven't found anything substantial.
Thanks!
Relevant answer
Answer
First David Morse is correct in everything he said. I would like to add that robustness of nonparametric tests to similar parametric tests is oftentimes just not true. You might want to look at the attached google search for some studies on the topic. Best wishes, David Booth
  • asked a question related to Robustness
Question
5 answers
In one of my papers, I applied the Newey-West standard error model to panel data for robustness purposes. I want to differentiate this model from the FMOLS and DOLS models. On what grounds can we justify this model over FMOLS and DOLS?
Relevant answer
Answer
This estimator allows you to control for heteroskedasticity and serial correlation of the AR form, not every type of serial correlation. Clustered standard errors are robust to heteroskedasticity and any form of serial correlation over time.
  • asked a question related to Robustness
Question
8 answers
Hi,
For my master's thesis, I am running an OLS regression (sample size 75 with 4 independent variables). There seems to be a heteroskedasticity problem; I tried to fix it with robust standard errors, but afterwards my F-statistic was no longer significant, whereas before it was.
Does anyone have tips to fix this?
Relevant answer
Answer
Hi,
Here is a reference for the resolution of Heteroskedasticity
This particular video might be helpful.
  • asked a question related to Robustness
Question
3 answers
It is my first time working with IV regression and I need some help with understanding the process. Specifically, I am looking at the effect of female presidents/prime ministers on health/education expenditure. Following a paper by Chen (2020), I use the electoral rule as an instrumental variable for the gender of the leader. The problem is, when I run the baseline regression in Stata:
xtivreg2 educationexpenditurelag (male=majoritarianrule) i.year, fe robust first
the p-value of the F-test of excluded instruments is not significant (first stage), but once I include my control variables it becomes significant.
Does that mean that my instrument is not appropriate, since the baseline regression shows no significant relationship between the instrumental and the instrumented variable?
Relevant answer
Answer
I am not sure. If, for instance, the problem appears without the fixed effects, you can show the results directly with them. In any case, you should show results without the IV.
You could show the results of several specifications with different sets of covariates without the IV, and then a complete specification with the IV.
Technically, my view is that there is no problem in what you mention. A weak instrument when controlling for more variables would be a more serious concern.
  • asked a question related to Robustness
Question
2 answers
Actually, I want to compare robustness between ADRC and H-infinity controllers. ADRC is a sort of model-free control, whereas H-infinity is model-based. Suppose a system is prone to cyber attacks. Is ADRC enough to prevent a cyber attack, or do we need to add an H-infinity controller to provide sufficient robustness against it? Please provide an explanation along with clear justification.
Relevant answer
Answer
Yes. But can we achieve the same robustness in ADRC as with H-infinity through proper tuning of the ADRC controller parameters? However, H-infinity demands an explicit mathematical model of the plant, which is sometimes difficult to derive for nonlinear complex dynamical systems. Is there any way to avoid deriving such a detailed mathematical model?
  • asked a question related to Robustness
Question
25 answers
We have a traditional medicinal product with robust evidence of immune-boosting capability. We have isolated, elucidated and characterized two novel compounds, which are derivatives of 3-deoxyanthocyanidins.
We are trying to assess whether this product would have potential for treating coronavirus disease.
We attach a few of the peer-reviewed articles on this product and would be grateful for any advice on how to go further.
Relevant answer
Answer
If the natural product inhibits the ACE2 receptor and other receptors that allow the coronavirus to enter lung cells, then it could be an important treatment.
  • asked a question related to Robustness
Question
3 answers
I am trying to estimate the population of small burrowing mammals using camera traps, but I lack a robust method for it. Many studies use mark-recapture, but I am looking for methods other than this. I am also looking for the availability of models.
Relevant answer
Answer
Welcome!
I think you need to consult books or experts on the behavior of such animals and where they prefer to live. This is important for distributing the surveillance cameras to cover the intended areas.
The other issue, I think, is choosing the most appropriate camera locations.
Then you can build a wireless sensor network (WSN), where every camera is a node in the network.
In this way you can cover the area with sensors. The image-capture rate is chosen so as to fully observe the animals.
The images taken by the cameras are sent to a control center, where the collected pictures are processed to identify the animals.
In summary, my proposal is to establish a WSN in the areas where such animals live.
Best wishes
  • asked a question related to Robustness
Question
6 answers
Hi,
My experimental data (2x2x2 between-subjects design) violate multiple assumptions (normality, homogeneity of variance, too many outliers) of a standard ANOVA, so I'm conducting a robust three-way ANOVA (t3way) in R using the WRS2 package.
I can't figure out how I'm supposed to do a post hoc test on this robust three-way anova with trimmed means.
Can someone help me out?
Relevant answer
Answer
Salvatore S. Mangiafico Thank you for these extensive answers, they are really helpful!
I will check whether a generalized linear model is more appropriate for my data, thank you for the tip. Indeed, I also prefer not to cut any data if not absolutely necessary. I think I'll be able to figure it out from here on; thanks again for your help!
For other people reading this with similar questions: I sent an email to the creator of the WRS2 package (Patrick Mair) and he also redirected me to the older WRS package using the mcp3atm post hoc function. He also let me know that a three-way posthoc might be on the roadmap for WRS2 in the future.
  • asked a question related to Robustness
Question
5 answers
There are various works on robust performance and on mixed H-2/H-infinity robust control. What is the difference between them?
Relevant answer
Answer
Mixed ℋ2 and ℋ∞ performance objectives. II: Optimal control
DOI: 10.1109/9.310031
  • asked a question related to Robustness
Question
1 answer
Asking to confirm my knowledge of R: when conducting an SEM analysis, is it enough to report robust fit indices to account for possible outliers, or does running the Mahalanobis distance method (for example) still appear essential? Thanks in advance
Relevant answer
Answer
In conducting SEM, you need to make sure that the assumptions of parametric tests are met.
  • asked a question related to Robustness
Question
12 answers
When performing non-linear correlation, I have been using AIC to perform a preliminary selection of what models could be a potential good fit for my correlations.
I was toying with the idea of statistically removing outliers from the data based on robust non-linear regression (I have been using GraphPad Prism for this) for each model independently. Essentially, a point could be an outlier in model X but not model Y, so it would only be removed for model X, creating a new model (Xo).
However, once you remove the outlier and refit, the AIC value changes both because the model "fits better" and because the underlying data changed, making it impossible to distinguish the two components and preventing comparison with the AIC of data where no outliers were removed.
Is there some sort of mathematical way that I could determine whether model X, Y or Xo is the best fit for my data?
Essentially, some of my data might be correlated with an unknown model, and some might not. Some of the data might have outliers that would disqualify one model, but I cannot be sure of that.
I know people are just going to say to plot the data and see, but I am trying to do a lot of correlations (thousands) and I cannot plot every single graph, so having some sort of value that I could use as a first selection criterion prior to checking the residuals would be much appreciated!
Edit:
I have included a very crude drawing of what I am trying to understand. Model 1 works best if no outliers are removed, but if the one outlier is removed, the best-fitting model changes completely (model 2). How do I show which of these two choices is the best fit?
Relevant answer
Answer
I'd say AICs are somewhat analogous to p-values: one alone tells you little to nothing. (See https://towardsdatascience.com/introduction-to-aic-akaike-information-criterion-9c9ba1c96ced.) But you can compare them, provided it is on the same data set. The penalty for more independent variables does not substitute for thorough cross-validation, but it helps. The bigger problem here is comparing models that were not fit to the same data.
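To make the "same data" point concrete, here is a minimal sketch of a Gaussian AIC comparison of two polynomial fits to one dataset, using the standard formula AIC = n·ln(RSS/n) + 2k with constants dropped (all names and the simulated data are illustrative). The moment you delete an outlier, both n and the data change, and these numbers stop being comparable across the two datasets:

```python
import numpy as np

def aic_from_rss(rss, n, k):
    """Gaussian AIC (additive constants dropped): n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, 50)

aics = {}
for deg in (1, 3):  # linear vs cubic, fit on the SAME data
    coef = np.polyfit(x, y, deg)
    rss = np.sum((y - np.polyval(coef, x)) ** 2)
    aics[deg] = aic_from_rss(rss, len(x), deg + 1)
print(aics)
```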
  • asked a question related to Robustness
Question
8 answers
Hello,
using the Cholesky decomposition in the UKF introduces the possibility that the UKF fails if the covariance matrix P is not positive definite.
Is this an unavoidable fact, or is there any method to completely bypass the problem?
I know there are some computationally more stable algorithms, like the square-root UKF, but even they can fail.
Can I say that the problem of a failing Cholesky decomposition occurs only for bad estimates during filtering, i.e., when even an EKF would fail/diverge?
I want to understand whether the UKF is advantageous over the EKF not only in terms of accuracy, but also in terms of stability/robustness.
Best regards,
Max
Relevant answer
Answer
If I understand your question correctly, it concerns not the initial covariance matrix but rather the updated covariance matrix you get at the end of each Kalman iteration.
If such a condition arises, you may use Higham's method to find the nearest positive-semidefinite approximation of the covariance matrix.
Reference:
Computing a nearest symmetric positive semidefinite matrix - ScienceDirect
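A minimal sketch of that repair step (for a symmetric matrix, clipping negative eigenvalues gives exactly Higham's nearest positive semidefinite matrix in the Frobenius norm; a small `eps` keeps the result strictly positive definite so the Cholesky factorization succeeds):

```python
import numpy as np

def nearest_psd(A, eps=0.0):
    """Nearest symmetric positive semidefinite matrix in the Frobenius
    norm (Higham, 1988): symmetrize, then clip negative eigenvalues.
    With eps > 0 the result is strictly positive definite."""
    B = (A + A.T) / 2.0                 # symmetric part
    w, V = np.linalg.eigh(B)
    w_clipped = np.clip(w, eps, None)   # remove negative eigenvalues
    return V @ np.diag(w_clipped) @ V.T
```

In a UKF loop you would apply this to the updated covariance just before forming its Cholesky factor whenever the factorization fails.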
  • asked a question related to Robustness
Question
4 answers
Is FAO's Kc = Kcb + Ke robust?
After independently estimating reference ET (ET0) and crop ET (ETc) and partitioning them (T0, E0 and Tc, Ec), both under standard and non-standard conditions, I found that 2Kc = Kcb + Ke under both conditions. Is this result reasonable, or impossible?
Relevant answer
Answer
ET0 is the reference (potential) ET under controlled conditions, while ETc is the crop ET derived from crop coefficients. ETa is the actual ET under uncontrolled conditions and may vary with crop stage. Kindly refer to the attached De Jonge paper for your consideration.
  • asked a question related to Robustness
Question
2 answers
Dear all,
We want to purchase a new cytometer. However, we need a robust cytometer that won't break down all the time: something easy to maintain, with reliable results. Can you propose something from experience?
Thank you in advance
Cordially
Houda
Relevant answer
Answer
There are a few cytometers with minimal maintenance and highly reliable results, such as the Beckman Coulter Navios (a clinical instrument), the BD Accuri (small footprint, no voltage settings needed) and the Thermo Fisher Attune (based on acoustic focusing; it can run samples at high speed without increasing CV).
All of these have their own benefits.
If you briefly describe your purpose, it will be easier for someone to make a proper suggestion.
Hope this is helpful.