Science topic

# Robustness - Science topic

Explore the latest questions and answers in Robustness, and find Robustness experts.

Questions related to Robustness

I have noticed that there are single microscope slide/slip flow chambers (Cytodyne, Flexflow, IBIDI), and many studies have used these chambers. I wonder how it is possible to obtain robust data using a single fluid flow chamber (one replicate) and a control?

Dear Researchers,

I am looking for a research paper that is published in a good journal and confirms the reliability of using NASA-POWER data in hydro-climatic studies.

Best wishes,

Mohammed

Hello all! As part of my master's thesis, I ran an experiment. I now have 5 groups with n=45 participants each. The data for the manipulation checks are not normally distributed, while in theory an ANOVA needs normally distributed data. I know that ANOVA is a robust instrument and that, with my group sizes, I don't have to worry much about this. But now to my question: do I in fact need normally distributed data at all for a manipulation check measure, or is it in the nature of the question that the data are skewed? E.g., if I want to know whether a gain manipulation worked, don't I actually want the data skewed to the left (or right, depending on the scale)?

Would be great if somebody could give me feedback on that!

Best

Carina

I'm trying to do a robust one-way ANOVA to compare whether there's an effect of my vignette on my dependent variable (learn_c). I need to use the robust version because I don't have equal variances across groups.

When I run a Tukey test on my regular ANOVA, the contrasts make sense.

The "growth" and "placebo" conditions are not significantly different, but both of them are significantly different from "fixed".

However, when I run it using the robust method (the WRS2 package in R), it seems to "misread" the labels and runs the contrasts differently: now it insists that "growth" and "fixed" are not different, but that "placebo" and "growth" are.

Does WRS2 order the groups differently, or am I misunderstanding what it does?
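For what it's worth, the heteroscedasticity-robust omnibus statistic itself is easy to sanity-check outside WRS2 (and printing each group's label next to its mean before contrasting is a quick way to catch label mix-ups). Below is a pure-Python sketch of Welch's one-way ANOVA on raw means; note this is not WRS2's trimmed-means `t1way`, just the classical heteroscedasticity-robust analogue:

```python
def welch_anova(groups):
    """Welch's heteroscedasticity-robust one-way ANOVA.

    Returns (F, df1, df2). Unlike the classical F test, group
    variances are not assumed equal.
    """
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [sum(g) / len(g) for g in groups]
    variances = [sum((x - m) ** 2 for x in g) / (n - 1)
                 for g, m, n in zip(groups, means, ns)]
    w = [n / v for n, v in zip(ns, variances)]   # precision weights
    W = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, means)) / W
    A = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, means)) / (k - 1)
    h = sum((1 - wi / W) ** 2 / (ni - 1) for wi, ni in zip(w, ns))
    B = 1 + 2 * (k - 2) / (k ** 2 - 1) * h
    df2 = (k ** 2 - 1) / (3 * h)
    return A / B, k - 1, df2
```

If the robust and classical analyses disagree on which pair differs, comparing each group's label, mean, and trimmed mean side by side usually reveals whether the labels were reordered or the trimming changed the story.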

I am working on a time series of COVID-19 data and am interested in the subject

"Robust Forecasting with Exponential and Holt-Winters Smoothing". I compute all my results with R packages, therefore I need help with:

1- any new papers in this field

2- code for Holt-Winters smoothing in R
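On point 2: in R, `stats::HoltWinters()` and `forecast::hw()` implement this directly. To make the recursions concrete, here is a minimal pure-Python sketch of additive Holt-Winters smoothing (the smoothing parameters below are arbitrary defaults, not tuned values):

```python
def holt_winters_additive(x, m, alpha=0.5, beta=0.5, gamma=0.5, horizon=1):
    """Additive Holt-Winters smoothing with season length m.

    Returns a list of `horizon` out-of-sample forecasts.
    """
    if len(x) < 2 * m:
        raise ValueError("need at least two full seasons")
    # Initialise level and trend from the first two seasons.
    season1 = sum(x[:m]) / m
    season2 = sum(x[m:2 * m]) / m
    level = season1
    trend = (season2 - season1) / m
    seasonal = [x[i] - season1 for i in range(m)]
    for t in range(m, len(x)):
        s_old = seasonal[t % m]
        prev_level = level
        level = alpha * (x[t] - s_old) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (x[t] - level) + (1 - gamma) * s_old
    n = len(x)
    return [level + h * trend + seasonal[(n + h - 1) % m]
            for h in range(1, horizon + 1)]
```

In practice the parameters are chosen by minimising one-step forecast error, which is what `HoltWinters()` does for you in R.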

Thank you for any help

With Best Wishes

Currently I am studying VAR methodologies in the hope of constructing a model for a future project, and I have a fairly limited understanding of the criteria that must be met to generate robust results. My reading of recent literature makes few mentions of residual diagnostics, specifically the joint normality of the residuals.

I have found through trial and error that a small number of exogenous spike (blip) dummy variables at key dates, such as financial crises or policy changes, corrects the non-normality issue (judged by visual inspection of the residuals). However, I have found almost no evidence of similar studies doing the same, which leads me to wonder whether such measures are in fact misspecifications.

Tests for co-integration, in my case the Johansen test, give warnings (Eviews 12) against adding any exogenous variables, so as not to invalidate critical values. However, lag selection criteria for a VAR model differs when correcting residual non-normality with exogenous dummies. My current understanding of creating a VEC model is that, as a preliminary measure, both lag length selection and co-integration tests are performed on the model in levels, assuming all series are I(1) processes. Therefore, my question is whether one should:

1) Perform a cointegration test without dummies and select the lag length with dummies.

2) Abandon the dummies altogether, and subsequently violate the normality assumption.

3) Perform both Lag length selection and co-integration testing on the VAR in levels, then add dummies to the VECM.

My intention is to follow the common empirical approach of analysing both the IRF and VDC of the VECM (assuming there is cointegration), should this have any bearing on the matter. My understanding is that non-normality impacts only the validity of hypothesis testing; however, in my reading, I have found no evidence to suggest that IRF and VDC standard errors are robust to non-normality.

many thanks,

Andrew Slaven

(Undergraduate Student at Aberystwyth University)

We know that in space, compact size and light weight are key design features. Compact size can sometimes be a constraint on high antenna performance. Deployable origami antennas can be a good candidate to solve this problem, but are they robust enough to work in the space environment?

The small-signal stability of a two-area power system with and without DFIG.

In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set and the decision-maker protects the system against the worst case within that set.

In the context of nonlinear multi-stage max-min robust optimization problems:

Which robustness models are best suited here: strict robustness, cardinality-constrained robustness, adjustable robustness, light robustness, regret robustness, or recoverable robustness?

How can max-min robust optimization problems be solved efficiently without linearization or approximation? Which algorithms?

How to approach nested robust optimization problems?

For example, the problem can be security-constrained AC optimal power flow.

It is explained that exosomes are robust in nature: they can withstand pH or temperature changes and are stable in various buffers. So can we suspend exosomes in pure or distilled water? If we do, does this affect the markers present on them, and if so, what changes occur?

Hi, today I came across a strange problem. I found a panel data set online (for the period 1980-1987). I first estimated a model with no individual fixed effects but with time fixed effects (year dummies). As expected, one of the dummy variables (1980) was dropped from the model. I then used the fixed-effects estimator and observed something odd: in addition to the 1980 dummy, the 1987 dummy was also dropped, as was the education variable. Education was dropped because it is time-invariant, but I can't explain the removal of the 1987 dummy. I had initially suspected multicollinearity, but then the 1987 dummy should have been dropped in the first regression too, right? Also, the VIF values do not indicate a multicollinearity problem. So what could be the reason? Could it have something to do with the -xtreg- command or the fixed-effects transformation in general?

These are my commands:

reg lnwage union educ exp i.year, robust

xtreg lnwage union educ exp i.year, fe robust

Hi, I'm currently searching for a rigorous approach to show that my estimated regression coefficients are robust to sampling procedures.

I have performed a fixed-effect IV regression on the full sample and obtained coefficients. I need to show that these coefficients are invariant, or robust, across different subsets of the sample. How can I test whether my full-sample coefficients are invariant to, say, the coefficients from regressions after dropping 5, 10, 15, 20%... of the sample?
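A transparent way to operationalize this is repeated subsampling: re-estimate on many random subsamples that drop 5/10/15/20% of observations and check whether the full-sample coefficient sits well inside the distribution of subsample coefficients (e.g. within their 2.5-97.5 percentile band). The sketch below does this for a single-regressor OLS in pure Python; the same loop can wrap a fixed-effect IV estimator, and the function names here are illustrative, not from any package:

```python
import random

def ols_slope(xs, ys):
    """Closed-form slope of a simple (one-regressor) OLS fit."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def subsample_slopes(xs, ys, drop_frac, n_rep=200, seed=0):
    """Re-estimate the slope after randomly dropping drop_frac of the sample."""
    rng = random.Random(seed)
    n = len(xs)
    keep = n - int(drop_frac * n)
    slopes = []
    for _ in range(n_rep):
        idx = rng.sample(range(n), keep)
        slopes.append(ols_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return slopes
```

Reporting the full-sample estimate together with the spread of subsample estimates (per drop fraction) is a compact way to document stability.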

I am conducting research on quality in higher education using a system dynamics approach.

How can I determine the number and components of future improvement scenarios?

Are there robust selection criteria for the number or components of scenarios?

Are there references for identifying the number and components of scenarios?

Thank you so much

If not, which non-parametric test could be used?

How do I check for endogeneity in AMOS if I have one IV (knowledge sharing) and one DV (performance)? And how do I check robustness with knowledge sharing as the IV, performance as the DV, and gender as a moderator?

This is a very important question, because I have not yet found a validated scale that can be considered the most robust/reliable.

I mean something that could do work equivalent to what MAXQDA or ATLAS.ti do?

I read some articles about the statistical robustness of SmartPLS. However, I am not sure about its appropriateness for a survey study involving a representative sample of adequate size. Any suggestions?

Thank you!

Dear all,

Do we need to further exploit the genetic robustness arising from distant hybridization? The mule is an excellent example in this regard.

I intend to integrate system dynamics simulation with machine learning to enable it to react quickly to changes and withstand their impact. However, if there is a better ML type that is robust, please suggest it to me. In addition, from my preliminary study, the following algorithms seem good to me: 1. adaptive machine learning; 2. deep reinforcement learning; 3. robust optimization (RO) models and Pyomo robust optimization.

To sum up, I would be delighted if you could suggest a tool that can easily integrate with ML to make it more robust.

Thank you in advance!

Hi, my dynamic model is

Gender Inequality Index: GII_t = a + GII_(t-1) + b·FDI + (control variables) + u

I have seven control variables. I have used all of them, plus my main explanatory variable, as strictly exogenous iv-style instruments. Is this correct? I have read somewhere that we can treat all the regressors as iv-style, but I still don't understand why.

xtabond2 GII lag_GII log_FDIinflowreal NaturalresourceRent Generalgovernmentexpenditure GDPGrowth Schoolsecondaryfemale UrbanPopulationControl polity2 Fertilityrate Y*, gmm(GII, lag(0 5) collapse) iv(log_FDIinflowreal NaturalresourceRent Generalgovernmentexpenditure GDPGrowth Schoolsecondaryfemale UrbanPopulationControl polity2 Fertilityrate Y*, equation(level)) nodiffsargan twostep robust orthogonal small

Dynamic panel-data estimation, two-step system GMM

------------------------------------------------------------------------------

Group variable: countrycode Number of obs = 239

Time variable : Year Number of groups = 49

Number of instruments = 24 Obs per group: min = 1

F(17, 48) = 22.88 avg = 4.88

Prob > F = 0.000 max = 9

----------------------------------------------------------------------------------------------

| Corrected

GII | Coef. Std. Err. t P>|t| [95% Conf. Interval]

-----------------------------+----------------------------------------------------------------

lag_GII | .4063319 .1660706 2.45 0.018 .0724247 .7402392

log_FDIinflowreal | .0052016 .004571 1.14 0.261 -.0039891 .0143923

NaturalresourceRent | .0001336 .0007056 0.19 0.851 -.0012852 .0015523

Generalgovernmentexpenditure | -.0011517 .0027406 -0.42 0.676 -.0066621 .0043588

GDPGrowth | .0000538 .0011326 0.05 0.962 -.0022235 .0023311

Schoolsecondaryfemale | -.0015661 .0005599 -2.80 0.007 -.0026918 -.0004405

UrbanPopulationControl | .0002386 .000501 0.48 0.636 -.0007687 .0012459

polity2 | .0029176 .00107 2.73 0.009 .0007662 .005069

Fertilityrate | .0172748 .0121555 1.42 0.162 -.0071655 .0417151

Year | -.0002603 .0066672 -0.04 0.969 -.0136656 .013145

Yeardummy1 | .1047832 .1552767 0.67 0.503 -.2074215 .4169879

Yeardummy17 | -.006658 .0432925 -0.15 0.878 -.0937034 .0803874

Yeardummy18 | -.0006796 .0359611 -0.02 0.985 -.0729842 .071625

Yeardummy19 | -.0071339 .0330241 -0.22 0.830 -.0735332 .0592655

Yeardummy20 | -.0066488 .0261336 -0.25 0.800 -.0591938 .0458963

Yeardummy21 | .0021421 .0180578 0.12 0.906 -.0341655 .0384498

Yeardummy22 | .0005937 .0097345 0.06 0.952 -.0189789 .0201663

_cons | .8149224 13.41294 0.06 0.952 -26.15361 27.78345

----------------------------------------------------------------------------------------------

Instruments for orthogonal deviations equation

GMM-type (missing=0, separate instruments for each period unless collapsed)

L(0/5).GII collapsed

Instruments for levels equation

Standard

log_FDIinflowreal NaturalresourceRent Generalgovernmentexpenditure

GDPGrowth Schoolsecondaryfemale UrbanPopulationControl polity2

Fertilityrate Year Yeardummy1 Yeardummy2 Yeardummy3 Yeardummy4 Yeardummy5

Yeardummy6 Yeardummy7 Yeardummy8 Yeardummy9 Yeardummy10 Yeardummy11

Yeardummy12 Yeardummy13 Yeardummy14 Yeardummy15 Yeardummy16 Yeardummy17

Yeardummy18 Yeardummy19 Yeardummy20 Yeardummy21 Yeardummy22 Yeardummy23

Yeardummy24

_cons

GMM-type (missing=0, separate instruments for each period unless collapsed)

DL.GII collapsed

------------------------------------------------------------------------------

Arellano-Bond test for AR(1) in first differences: z = -1.70 Pr > z = 0.090

Arellano-Bond test for AR(2) in first differences: z = 0.43 Pr > z = 0.669

------------------------------------------------------------------------------

Sargan test of overid. restrictions: chi2(6) = 18.25 Prob > chi2 = 0.006

(Not robust, but not weakened by many instruments.)

Hansen test of overid. restrictions: chi2(6) = 5.55 Prob > chi2 = 0.475

(Robust, but weakened by many instruments.)


When using Wasserstein balls to describe the uncertainty set in distributionally robust optimization, can multiple sources of uncertainty be considered at the same time, such as wind power and solar power forecast errors?

I am implementing distributionally robust optimization using Benders decomposition in GAMS. Can anyone provide me with helpful material or source code for implementing expansion planning problems?

I am looking for a robust method to aggregate the 8-day MOD16A2 (or MOD16A2GF) ET_500m values to monthly ET. ET_500m and PET_500m are sums of 8-day total water loss (0.1 kg/m2/8day). I tried summing all values within a month, but looking at the composite start dates (i.e. 01/25/2000, 02/02/2000, 02/10/2000, 02/18/2000, 02/26/2000, 03/05/2000),

it seems that I am summing more values at the beginning of the month and fewer at the end, so the monthly sum loses reliability.

Because of that, this method doesn't seem appropriate. Is there another method for this task, or a way to improve this one?

The monthly MODIS ET dataset will be used to validate SWAT ET.
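One approach that avoids the begin/end-of-month bias is day-weighted aggregation: let each 8-day composite contribute in proportion to how many of its days fall inside the target month, instead of assigning the whole total to the month of its start date. A pure-Python sketch, assuming full 8-day composites (the last composite of each year is shorter, 5-6 days, and the 0.1 scale factor still applies):

```python
from datetime import date, timedelta

def monthly_et(composites, year, month):
    """Aggregate 8-day MOD16A2 totals to a monthly total.

    `composites` is a list of (start_date, et_total) pairs, where et_total
    is the 8-day accumulated ET. Each composite contributes in proportion
    to how many of its days fall inside the target month, which removes
    the begin/end-of-month bias of a plain sum.
    """
    total = 0.0
    for start, et in composites:
        days = [start + timedelta(d) for d in range(8)]
        overlap = sum(1 for d in days if d.year == year and d.month == month)
        total += et * overlap / 8.0
    return total
```

The implicit assumption is that ET is roughly uniform within each 8-day window; given the coarse temporal resolution, that is usually acceptable for SWAT validation at monthly scale.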

Here are my Matlab files for the paper we recently published in IET Control Theory and Applications. The paper is about designing robust controllers for networked systems. It is hoped that these codes will help students to understand how a robust approach would be coded.

Code Matlab a Robust Controller +LMI+Multi-agent systems

Yalmip Toolbox must be added to Matlab.

How to Install a MATLAB toolbox?

After that, you can run the attached codes.

A reviewer suggested there are "more advanced and robust methods" to compare groups in my study, which involves a three-group comparison: two groups with augmented reality tools and one with a conventional marketing tool (same brand and same product type, though).

The reviewer said "ANOVA is a useful and robust analysis tool if we compare directly measurable items. Unfortunately, this is not the case for this manuscript. Therefore, the findings of this comparison are questionable."

Weights derived from PCA to calculate a water quality index?

Is robust PCA always required to derive the parameter weights, or can classical PCA be used in the same way?

Thanks.

What are the assumptions of robust least squares regression, and which scholars support them?

Hi,

I'm looking for a small (less than 0.5m of max length) underwater sound source to do some experimental measurements, any recommendations?

I'm looking for something robust and reliable, with prices ranging from low-cost to lab equipment.

All the best

Can we use the NPCR and UACI to test the robustness of audio encryption against differential attacks? What is the range of these two for a good encryption system?
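NPCR and UACI were defined for image ciphers, but they apply to any sample sequence, including audio treated as 8-bit (or 16-bit) values: NPCR is the percentage of positions that change between the two ciphertexts, UACI the mean absolute change relative to the value range. For an ideal 8-bit cipher the commonly cited benchmarks are NPCR ≈ 99.61% (that is, 100·(1 − 2⁻⁸)) and UACI ≈ 33.46%. A minimal sketch:

```python
def npcr_uaci(c1, c2, max_val=255):
    """NPCR / UACI between two equal-length encrypted sample sequences.

    NPCR: percentage of positions whose values differ.
    UACI: mean absolute difference as a percentage of the value range.
    """
    n = len(c1)
    diff = sum(1 for a, b in zip(c1, c2) if a != b)
    npcr = 100.0 * diff / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (max_val * n)
    return npcr, uaci
```

For 16-bit PCM audio, pass `max_val=65535`; the ideal NPCR rises accordingly (100·(1 − 2⁻¹⁶)), while the ideal UACI stays near one third of the range.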

Give your suggestions for robust framework for a water sensitive policy.

The models that are used to check the robustness of the main econometric model may not always provide 100% parallel outcomes. Does it mean that there are flaws in the main estimation outcomes?

Dear colleagues,

Currently I'm doing research using the ARDL-ECM method, processed through Eviews 10. Can somebody mention or explain which robustness tests are used for ARDL-ECM analysis?

Thank you

Recently, I have been studying robust MPC based on LMIs. A number of papers use Lyapunov-Krasovskii functionals to study time-delay systems. Can I use a Lyapunov-Krasovskii functional to study a system without time delays?

What sorts of robust quality-measurement tools exist to support institutional self-evaluation in a university context?

I have been searching but haven't found any robust material on this topic. Can anyone suggest a paper or book that explains it?

Dear all, I have some real data (about 32 equidistant points) to which I fitted a Fourier series using the FFT. I obtained 32 complex Fourier coefficients, corresponding to 16 positive frequencies. I want to apply a low-pass filter to smooth the fitted function. Currently I take the fifth frequency as the low-pass threshold (so I keep only the first five frequencies, about 30% of the total). I chose this threshold based on a visual interpretation of the fitted curve. Can anyone suggest a more robust or efficient method for choosing the threshold frequency of the low-pass filter?
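One common, less subjective heuristic is a cumulative-energy criterion: keep the smallest number of leading harmonics whose power reaches, say, 95% of the total spectral energy (the fraction remains a judgment call, but an explicit and reproducible one). A pure-Python sketch:

```python
def energy_cutoff(coeffs, fraction=0.95):
    """Smallest number of leading frequencies whose cumulative spectral
    power reaches `fraction` of the total.

    `coeffs` are the one-sided complex Fourier coefficients, lowest
    frequency first (DC term excluded by the caller if desired).
    """
    powers = [abs(c) ** 2 for c in coeffs]
    total = sum(powers)
    acc = 0.0
    for k, p in enumerate(powers, start=1):
        acc += p
        if acc >= fraction * total:
            return k
    return len(powers)
```

With only 32 points it is worth checking that the chosen cutoff is stable when the fraction is varied slightly (e.g. 0.90-0.99); a cutoff that jumps around indicates no clear separation between signal and noise bands.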

Can the distributionally robust optimization (DRO) method deal with problems involving multiple uncertain variables? Are there any related literature recommendations? Thanks for your kind help.

Hi everyone.

I have non-normal data and want to do a confirmatory factor analysis, so I am using a robust estimator (MLM) in R. When I use the MLM estimator without deleting anything, I get poor fit indices. If I first delete influential outliers according to Mahalanobis distance (p < 0.001) and then use MLM, I get acceptable fit indices. But I am unsure whether this is a sound method or overfitting.

On the other hand, as I understand it, I should report results both with and without deleting outliers. If I report both, can I say the study provides validation?

I'll be glad to hear your advice.

Best regards!

Many empirical papers in economics have a separate robustness section after the main estimation, which might use another estimator or new sets of variables. The questions that follow are: how do researchers choose the appropriate robustness estimator after the main analysis, and what are the implications of differing results between the main estimator and the robustness estimator?

I have results from two different techniques and now want to compare them. I have a huge amount of data to compare, so I need a clearer, more robust method of comparison than simply plotting the result data from both techniques. If anyone knows such a method, I would be thankful if they shared it with me. I am going to add the comparison to my thesis.

Thanks

I want to evaluate the robustness of a clustering algorithm to noise. How can I add noise to the data? Is there a well-known method (such as salt-and-pepper noise for image data)?
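For numeric features, the usual analogue of salt-and-pepper is additive Gaussian noise scaled to each feature's own spread; you then re-cluster the noisy data and compare the partitions (e.g. with the adjusted Rand index). A pure-Python sketch of the perturbation step (the 10% default is arbitrary):

```python
import random

def add_gaussian_noise(data, noise_frac=0.1, seed=0):
    """Perturb each feature with zero-mean Gaussian noise whose standard
    deviation is `noise_frac` times that feature's own standard deviation.

    `data` is a list of equal-length feature vectors (rows); the input
    is left unmodified and a noisy copy is returned.
    """
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    noisy = [row[:] for row in data]
    for j in range(d):
        col = [row[j] for row in data]
        mean = sum(col) / n
        std = (sum((v - mean) ** 2 for v in col) / n) ** 0.5
        for i in range(n):
            noisy[i][j] += rng.gauss(0.0, noise_frac * std)
    return noisy
```

A salt-and-pepper-style alternative is to replace a small fraction of randomly chosen entries with each feature's min or max; sweeping `noise_frac` (or the replacement fraction) and plotting cluster agreement against it gives a robustness curve rather than a single number.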

Hello everyone.

I am a PhD candidate in finance, using this command in Stata. The sample comprises N = 88 countries and T = 37 years:

xtabond2 Index L1.Index L2.Index Savings private Value FDI MO invest Governst PowerD Individ mas Uncert Longter , gmm(L1.Index L2.Index, laglimits(2 .) collapse) iv( PowerD Individ mas Uncert Longter, equation(level)) twostep orthogonal small

Arellano-Bond test for AR(1) in first differences: z = -0.99 Pr > z = 0.323

Arellano-Bond test for AR(2) in first differences: z = -1.03 Pr > z = 0.304

------------------------------------------------------------------------------

Sargan test of overid. restrictions: chi2(27) = 2.83 Prob > chi2 = 1.000

(Not robust, but not weakened by many instruments.)

Hansen test of overid. restrictions: chi2(27) = 36.62 Prob > chi2 = 0.102

(Robust, but weakened by many instruments.)

Difference-in-Hansen tests of exogeneity of instrument subsets:

GMM instruments for levels

Hansen test excluding group: chi2(25) = 27.78 Prob > chi2 = 0.318

Difference (null H = exogenous): chi2(2) = 8.84 Prob > chi2 = 0.012

I look forward to any useful help.

The structural and functional behaviour of metabolites

I have many groups - from 10 to 30, with 200-500 observations in each (not normally distributed).

After Friedman's ANOVA test I have significance at p < 0.0001.

Which post-hoc test is most robust and appropriate for pairwise comparisons?

Do I need to use a Bonferroni correction in this situation?

Or is the Wilcoxon rank-sum test the best choice?
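One common route after a significant Friedman test is pairwise Wilcoxon tests with a family-wise correction. Holm's step-down procedure controls the same family-wise error rate as plain Bonferroni but is uniformly more powerful, so there is rarely a reason to prefer plain Bonferroni. A sketch of the correction applied to a vector of pairwise p-values:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down multiple-comparison correction.

    Returns a list of booleans (reject / don't reject) in input order,
    controlling the family-wise error rate at `alpha`.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject
```

With 10-30 groups the number of pairwise comparisons grows quickly (up to 435 for 30 groups), so a step-down method rather than a flat Bonferroni divisor makes a practical difference in power.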

I have collected data on autistic and non-autistic people on various measures. There are around 2,000 participants, approximately 1,000 in each group. I have removed extreme outliers, transformed the data (log, square root, reciprocal) and tried winsorizing, but the data - for the scales as a whole and for the individual groups - are not normally distributed.

Visually, the histograms look normally distributed for some of the variables, but the KS and Shapiro-Wilk statistics suggest otherwise.

I want to run MANCOVA, ANCOVA, mediation and some other parametric tests. I know I am violating assumptions and it should be non-parametric, but can it be justified that these are robust tests, the data set is large, etc., so that all of this can be overlooked?

Is there something else I could try to get the data to be normally distributed?

I want to know which model works well with a small sample size of, say, 100, and under what conditions each of the models performs better.

I have a VAR(2) model whose residuals show autocorrelation (mostly from lag 8 onward), even when the number of lags in the model is increased. I was advised that robust estimators of the covariance matrix would help with this, so that the remaining autocorrelation would no longer be a problem. How can I change the normal covariance matrix estimators to robust ones in this model using Python?
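In Python, statsmodels covers this: fitting each equation by OLS with `sm.OLS(y, X).fit(cov_type='HAC', cov_kwds={'maxlags': L})` gives Newey-West (HAC) standard errors. Keep in mind that robust standard errors only fix inference under autocorrelation; they do not remove the autocorrelation itself. To show what is being computed, here is a pure-Python sketch for the slope of a single-regressor regression:

```python
def newey_west_se(x, y, max_lag):
    """HAC (Newey-West, Bartlett kernel) standard error for the slope of a
    simple OLS regression of y on x with intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    xd = [xi - mx for xi in x]
    sxx = sum(v * v for v in xd)
    beta = sum(xd[i] * (y[i] - my) for i in range(n)) / sxx
    alpha = my - beta * mx
    resid = [y[i] - alpha - beta * x[i] for i in range(n)]
    u = [xd[i] * resid[i] for i in range(n)]       # scores
    s = sum(v * v for v in u)                      # lag-0 term
    for lag in range(1, max_lag + 1):
        w = 1.0 - lag / (max_lag + 1.0)            # Bartlett weight
        gamma = sum(u[t] * u[t - lag] for t in range(lag, n))
        s += 2.0 * w * gamma
    s = max(s, 0.0)  # guard against tiny negative rounding
    return (s / sxx ** 2) ** 0.5
```

With `max_lag=0` this reduces to the heteroskedasticity-only (White/HC0) standard error; a common default for the lag truncation is on the order of n^(1/3).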

Hello,

I work on a controlled microgrid and I want to test the robustness of my controller against white noise that may be added to the output or the input. Is there any specific condition to follow in order to make a good choice of noise power, or is it arbitrary?

- I tried taking it as about 3% of the nominal measurement value; is this enough to be a good choice?

- In addition, I tried both types of noise, but I noticed that noise applied to the output affects the system much more than noise applied to the input (my system loses stability with output noise but gives acceptable performance with input noise). Is this reasonable? If yes, why?
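There is no universal rule; one defensible practice is to express the noise level as a target signal-to-noise ratio and sweep it. A standard deviation of 3% of the nominal value corresponds to roughly 30.5 dB SNR, a fairly benign sensor-noise level; sweeping from, say, 40 dB down to 10 dB shows where performance degrades. A small helper, assuming zero-mean white noise:

```python
import math

def snr_db(signal_rms, noise_std):
    """SNR in dB for a signal of given RMS and zero-mean noise of given std."""
    return 20.0 * math.log10(signal_rms / noise_std)

def noise_std_for_snr(signal_rms, target_snr_db):
    """Noise standard deviation that yields the requested SNR."""
    return signal_rms / 10.0 ** (target_snr_db / 20.0)
```

For example, `snr_db(1.0, 0.03)` is about 30.46 dB. Reporting robustness as "stable down to X dB SNR at the output / Y dB at the input" is more informative than a single arbitrary noise power.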

thank you in advance

Dear colleagues,

**Does anyone know a method for predicting (with more or less uncertainty) the contribution of organisms (community, species, functional group...) to a function/process?**

I'm not a mathematical modeler and I don't pretend to create something robust. Rather, I am in an exploratory process to enable a better understanding among stakeholders.

For the moment I have found a few publications on plants that can help me (below), but I wonder if there are other publications, on other organisms or ecosystems?

Best regards,

Kevin Hoeffner

References:

-Garnier, E., Cortez, J., Billès, G., Navas, M. L., Roumet, C., Debussche, M., ... & Toussaint, J. P. (2004). Plant functional markers capture ecosystem properties during secondary succession. *Ecology*, *85*(9), 2630-2637.

-Suding, K. N., Lavorel, S., Chapin III, F. S., Cornelissen, J. H., Díaz, S., Garnier, E., ... & Navas, M. L. (2008). Scaling environmental change through the community-level: A trait-based response-and-effect framework for plants. *Global Change Biology*, *14*(5), 1125-1140.

-Zwart, J. A., Solomon, C. T., & Jones, S. E. (2015). Phytoplankton traits predict ecosystem function in a global set of lakes. *Ecology*, *96*(8), 2257-2264.

I am looking for a robust LC-MS/MS method for steroids.

Dear Community,

I am doing a fractional outcome regression (logit) for my thesis and can't find information on what assumptions the model makes. I suppose I would need to test that those assumptions are met in my sample in order to conduct the analysis. Furthermore, I wanted to know whether there is any possibility of doing a robustness test on such a model.

Additionally, in a fractional outcome regression, do my independent variables have to be between 0 and 1 as well?

Thank you very much,

Best,

Jan

I have carried out robustness test by adjusting the calibration threshold, and the consistency of some configurations in the test results is slightly lower than 0.75 (such as 0.73/0.72). Can this result be regarded as robust?

Long story short:

I use a long unbalanced panel data set.

All tests indicate that 'fixed effects' is more appropriate than 'random effects' or 'pooled OLS'.

No serial correlation.

BUT, heteroskedasticity is present, even with robust White standard errors.

Can someone suggest a way to either 'remove' or just 'deal' with heteroskedasticity in panel data model?

Hello! I'm running a Friedman two-way analysis because my sample is not normally distributed.

I've performed the analyses on different groups, paired over two periods. One of them, although showing a considerable difference between the two periods, is not significant (Friedman 1.3 on 1 degree of freedom). I wonder if this is because this group is smaller (n=22) than the others.

I've been looking for evidence on the robustness of the Friedman test with respect to sample size, but I haven't found anything substantial.

Thanks!

In one of my papers, I applied the Newey-West standard error model to panel data for robustness purposes. I want to differentiate this model from the FMOLS and DOLS models. On what grounds can we justify this model over FMOLS and DOLS?

Hi,

For my master's thesis, I am running an OLS regression (sample size 75, with 4 independent variables). There seems to be a heteroskedasticity problem, and I tried to fix it with robust standard errors, but afterwards my F-statistic was no longer significant, whereas before it was.

Anyone tips to fix this?

It is my first time working with IV regression and I need some help understanding the process. Specifically, I am looking at the effect of female presidents/prime ministers on health/education expenditure. Following a paper by Chen (2020), I use the electoral rule as an instrumental variable for the gender of the leader. The problem is, when I run the baseline regression in Stata:

xtivreg2 educationexpenditurelag (male=majoritarianrule) i.year, fe robust first

the p-value of the F test of excluded instruments is not significant (first stage), but once I include my control variables it becomes significant.

Does that mean my instrument is not appropriate, since the baseline regression shows no significant relationship between the instrument and the instrumented variable?

Actually, I want to compare robustness between an ADRC and an H-infinity controller. ADRC is a sort of model-free control, whereas H-infinity is model-based. Suppose a system is prone to cyber attack. Is ADRC enough to prevent a cyber attack, or do we need to add an H-infinity controller to provide sufficient robustness against it? Please provide an explanation with clear justification.

We have a traditional medicinal product with robust evidence of immune-boosting capability. We have isolated, elucidated and characterized two novel compounds, which are derivatives of 3-deoxyanthocyanidins.

We are trying to assess whether this product has potential for treating coronavirus disease.

We attach a few of the peer-reviewed articles on this product and would be grateful for any advice on how we can go further.

I am trying to estimate populations of small burrowing mammals using camera traps, but I lack a robust method for it. Many studies use mark-recapture, but I am looking for other methods. I am also looking for the availability of models.

Hi,

My experimental data (2x2x2 between-subjects design) violates multiple assumptions (normality, homogeneity, too many outliers) of a 'normal' anova, so I'm conducting a robust three-way anova (t3way) in R using package WRS2.

I can't figure out how I'm supposed to do a post hoc test on this robust three-way anova with trimmed means.

Can someone help me out?

There are various works on robust performance and on mixed H-2/H-infinity robust control. What is the difference between them?

Asking to confirm my knowledge of R: when conducting an SEM analysis, is it enough to report robust indices to account for possible outliers, or does running the Mahalanobis distance method (e.g.) still appear essential? Thanks in advance.

When performing non-linear correlation, I have been using AIC to perform a preliminary selection of what models could be a potential good fit for my correlations.

I was toying with the idea of statistically removing outliers from the data based on robust non-linear regression (I've been using GraphPad Prism for this) for each model independently. Essentially, a point could be an outlier in model X but not in model Y, so it would only be removed for model X, creating a new model (Xo).

However, once you remove the outlier and refit the data, the value of the AIC changes both because the model "fits better" and because the underlying data changed, making it impossible to distinguish the two components and preventing comparison with the AIC of fits where no outliers were removed.

Is there some sort of mathematical way that I could determine whether model X, Y or Xo is the best fit for my data?

Essentially, some of my data might be correlated with an unknown model, and some might not. Some of the data might have outliers that would disqualify one model, but I cannot be sure of that.

I know people will just say to plot the data and see, but I am trying to do a lot of correlations (thousands of them) and cannot plot every single graph, so having some sort of value I could use as a first selection criterion before checking the residuals would be much appreciated!

Edit:

I have included a very crude drawing of what I am trying to understand. Model 1 works best if no outliers are removed, but if the one outlier is removed, the best-fitting model changes completely (model 2). How can I show which of these two choices is the best fit?
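Since AIC values are only comparable when computed on the identical point set, one way out is to refit every candidate model on the same data before comparing: compare X vs Y on the full data, and X vs Y again on the outlier-removed data, rather than X vs Xo across different data sets. A sketch of the least-squares AIC (the additive constant omitted here is shared by all models fitted to the same points, so it cancels in comparisons; conventions differ on whether the error variance counts as a parameter, but any consistent choice also cancels):

```python
import math

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares fit, up to an additive constant that is
    identical for every model fitted to the SAME data points."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * n_params
```

An alternative that sidesteps the deletion problem entirely is to fit all candidate models with the same robust loss (as Prism's robust regression does) on the full data, so no points are removed and the fits stay comparable.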

Hello,

Using the Cholesky decomposition in the UKF introduces the possibility that the UKF fails if the covariance matrix P is not positive definite.

Is this an unavoidable fact? Or is there any method to **completely** bypass the problem? I know there are computationally more stable algorithms, like the square-root UKF, but even they can fail.

Can I say that the Cholesky decomposition fails only for bad estimates during filtering, when even an EKF would fail or diverge?

I want to understand whether the UKF is advantageous over the EKF not only in terms of accuracy, but also in terms of stability/robustness.
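Nothing can make Cholesky itself succeed on an indefinite matrix, but practical UKF implementations guard against the failure: symmetrize P each step (P ← (P + Pᵀ)/2) and add a small diagonal "jitter" that grows until the factorization succeeds; a Higham-style nearest-positive-definite projection is the heavier alternative. A pure-Python sketch of the jitter guard:

```python
def cholesky(a):
    """Plain Cholesky; raises ValueError if `a` is not positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    raise ValueError("not positive definite")
                L[i][j] = d ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def cholesky_with_jitter(a, max_tries=10):
    """Retry Cholesky with exponentially growing diagonal loading, a common
    guard in UKF implementations when P drifts toward indefiniteness."""
    jitter = 1e-10
    for _ in range(max_tries):
        try:
            return cholesky(a)
        except ValueError:
            a = [[a[i][j] + (jitter if i == j else 0.0)
                  for j in range(len(a))] for i in range(len(a))]
            jitter *= 10.0
    raise ValueError("matrix could not be regularised")
```

The jitter slightly inflates the covariance, which is conservative rather than harmful for a filter; if large jitter is needed routinely, that usually signals a modeling problem (too-small process noise, poorly scaled states) rather than a numerical one.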

Best regards,

Max

Is FAO's Kc = Kcb + Ke robust? After independently estimating reference ET (ET0) and crop ET (ETc) and partitioning them (T0, E0 and Tc, Ec), both at standard and non-standard conditions, I found that 2Kc = Kcb + Ke under standard and non-standard conditions. Is this result reasonable? Impossible?

Dear all,

We want to purchase a new cytometer. However, we need a robust cytometer that won't break down all the time. Something easy to maintain but with reliable results. Can you propose something out of experience?

Thank you in advance

Cordially

Houda