Questions related to Variability
I am trying to downscale GCM (CanCM4) data to the Bhima basin catchment (a finer scale) for projecting future scenarios. I have used the following variables: ta, ua, zg, and va (each at the 925, 850, 700, and 200 hPa pressure levels), plus pr and psl (six variables in total). I am attaching an image from my work on the GCM; considering the mid-points of the GCM grid cells, only two stations lie on the periphery (the + marks) for downscaling. Can I downscale these GCM points to the 0.5° grid points? If yes, how should the weights be chosen?
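A common, simple way to assign such weights is inverse-distance weighting from the surrounding GCM grid points to each 0.5° target point. A minimal sketch (the coordinates below are made up, and great-circle distance would be preferable to plain Euclidean distance on lon/lat):

```python
import numpy as np

def idw_weights(gcm_points, target, power=2.0):
    """Inverse-distance weights for interpolating GCM grid-point values
    to one target (e.g. 0.5 deg) point.
    gcm_points: (n, 2) array of lon/lat; target: (2,) array."""
    d = np.linalg.norm(gcm_points - target, axis=1)
    if np.any(d == 0):                     # target coincides with a GCM point
        w = (d == 0).astype(float)
    else:
        w = 1.0 / d**power
    return w / w.sum()                     # weights sum to 1

# Hypothetical example: four surrounding GCM grid points, one target point
gcm = np.array([[73.0, 18.0], [75.0, 18.0], [73.0, 20.0], [75.0, 20.0]])
w = idw_weights(gcm, np.array([73.5, 18.5]))
# downscaled value at the target = w @ (values at the GCM points)
```

The nearest GCM point gets the largest weight; this is only the deterministic part of statistical downscaling, before any bias correction.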
I would highly appreciate it if fellow ecologists (biologists) would give their opinion on the thoughts below [especially: tell us briefly which path may be more effective, whether you know another way, and whether there has been a recent breakthrough toward this goal].
To consider the effects of Acclimation and Directional Selection on populations' thermal sensitivity in the (mechanistic or phenomenological) modeling of ecological impacts of temperature variability (and climate change), we can follow two general paths:
(1) Produce enough empirical data to define simple indices of warm-adaptation capacity (based on exposure temperature and duration) for at least some keystone species [a simple example: ARR; Morley et al., 2019]. Such indices can only be applied to models' outputs.
(2) Understand the GENERAL mechanisms (principal functional components) defining the heat sensitivity of various taxa [e.g., OCLTT; Pörtner, 2010], define how each component quantitatively relates to the capacity for rapid warm adaptation [no ref.], and set (adaptive) feedback loops in existing models [a simple example: Kingsolver et al., 2016].
I am conducting an interobserver variability study where we have 12 raters who are going to rate samples of lesions. They will rate up to 10 variables per sample. Although the size of the population is quite limited due to the nature of the lesions, it is a bit of a headache to find a well-written method to estimate the sample size needed for this study. Several searches on the internet have overwhelmed me with many possibilities, most of them being quite complicated.
Is there any statistical method you can recommend? Any help is appreciated!
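One pragmatic option, if the closed-form formulas feel opaque, is simulation: assume a plausible reliability (e.g., an ICC), generate ratings under a one-way random-effects model for candidate sample sizes, and see how precisely the ICC is recovered. A sketch under those assumptions (the true ICC of 0.7 and n = 30 lesions are illustrative, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_icc(n_subjects, n_raters, icc, n_sim=200):
    """Simulate one-way random-effects ratings with a target ICC and
    return the ~95% range of the estimated ICC(1) across simulations."""
    sb2, sw2 = icc, 1.0 - icc          # between/within variances (total = 1)
    est = []
    for _ in range(n_sim):
        subj = rng.normal(0, np.sqrt(sb2), (n_subjects, 1))
        x = subj + rng.normal(0, np.sqrt(sw2), (n_subjects, n_raters))
        msb = n_raters * x.mean(axis=1).var(ddof=1)   # between-subject MS
        msw = x.var(axis=1, ddof=1).mean()            # within-subject MS
        est.append((msb - msw) / (msb + (n_raters - 1) * msw))
    return np.percentile(est, [2.5, 97.5])

# Hypothetical question: is n = 30 lesions enough with 12 raters if true ICC = 0.7?
lo, hi = simulate_icc(30, 12, 0.7)
```

If the resulting interval (lo, hi) is narrower than the precision you need, the sample size is adequate under the assumed model; otherwise increase n and rerun.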
I want to import 2-dimensional data into COMSOL from a CSV file. The CSV file is an n×3 matrix whose first column contains the data values and whose second and third columns contain the x- and y-coordinates.
I want to use these values as the initial values of a variable in COMSOL (the variable is a 2D field variable).
I tried to use an interpolation function to import the data, but that did not work. I know how to import a variable with a single value, but not how to import a variable with many values at different points in the field, i.e., a field variable.
Kindly help me out; I will be very thankful.
How do you conduct a regression analysis in SPSS using 1 predictor variable and 2 dependent variables?
We are trying to simulate a 3D structure (an optical wave) in COMSOL, but this error occurs:
"variables property does not include all variables in xmesh."
Can anyone help with this matter?
Variability of pesticide residues is inevitable. A default variability factor (Vf) of 3 is used, but in the case of medium- and large-sized commodity crops, a wide range of Vfs has been estimated by several researchers. Therefore, I think the default should be reconsidered as more data become available. There is an emerging need to conduct research on medium- and large-sized commodity crops, especially those that have not been studied to date.
I mean, when using change from baseline, is it valid to include a baseline measure as a control variable when testing the effect of an independent variable on change scores?
I used optimal scaling analysis and set 2 dimensions for the variables, obtaining a VAF of 36.6%.
With 3 dimensions, it is around 46%.
I don't know how much VAF is acceptable.
To me, 2 dimensions are easier to explain and to define.
I have several categorical variables (binary or with more levels), and several multiple response variables as well. I mean those multiple choice questions in questionnaire (not a test). I'd like to classify the data or reduce the dimension, but I'm not sure how these multiple responses should enter the analysis.
I should specify the variables, they are, for example:
categorical: gender, education level, ethnicity, etc.
multiple response: Where do you purchase clothes? A. Online B. In shopping malls C. In markets or fairs D. In boutiques E. In the tailor's
I have searched and found that the articles and books I came across do not give a clear explanation of how to determine the level of streamflow variation based on the CV.
I am trying to classify the variation level of my mean annual flow data series, whose CV values range from 0.31 to 1.49 (plus one outlier: 3.59). I was going to consider <0.5 as low, 0.5-1.0 as moderate, and >1.0 as high, but I found a paper that, as a general rule, uses <0.1 as the low level and >1.0 as the high level, and another source that uses <0.3 as the low level.
I hope someone can help me find references for handling variation / CV in water resources studies related to my problem.
I am also wondering whether the solution is actually a relative determination based on the stream characteristics or region. In my case, it is a tropical region, which is known to have higher variability than other regions. So I am thinking that the thresholds (especially the low-level threshold) could be adjusted upward, considering it is only a regional study.
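Whichever thresholds you settle on, applying them consistently is easy to script. A sketch using the cut-offs proposed above (these are the values from this question, not an established standard):

```python
def classify_cv(cv, low=0.5, high=1.0):
    """Classify streamflow variability from the coefficient of variation.
    The 0.5/1.0 cut-offs are the ones proposed above, not a fixed standard."""
    if cv < low:
        return "low"
    return "moderate" if cv <= high else "high"

flows_cv = [0.31, 0.75, 1.49, 3.59]           # example CV values from the series
labels = [classify_cv(v) for v in flows_cv]   # ['low', 'moderate', 'high', 'high']
```

Parameterizing the thresholds makes it trivial to redo the classification with the stricter literature cut-offs (0.1/1.0 or 0.3/1.0) for comparison.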
In phase space reconstruction, with the aim of unfolding a (nonlinear) system's dynamics, we must first determine two parameters: the time lag and the embedding dimension. My main question: if the embedding dimension is more than 3, how can these dimensions be plotted in MATLAB?
I need to conduct a pilot study for my PhD thesis questionnaire on workplace spirituality (dependent variable) and personality, emotional intelligence, and OCB (independent variables), along with a demographic profile. I am seeking experts who can review my questionnaire and advise me with their comments and opinions. Can some experts help me in this regard?
I have long-term point data on the occurrence of a spatial event. I want to analyze the long-term spatial variability of these events.
Can you suggest any spatio-statistical method for such a variability analysis?
Your suggestions will be appreciated.
I am beginning an experiment assessing timing-related behavior in adults with ADHD and the perceptual measures I plan to use are adaptive, and determine perceptual thresholds using standard adaptive algorithm procedures (e.g. staircase method). However, I'm concerned about the inevitable impact of attentional lapses on thresholds. I am interested in suggestions for how best to tune the staircase parameters and/or suggestions for other adaptive algorithms that may be more resilient to lapses of attention. Any thoughts?
Is it possible to obtain a positive correlation between the dependent and an independent variable, and then obtain a negative coefficient for that same variable in the fitted regression model?
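Yes, this can happen when predictors are correlated (a suppression effect): the marginal correlation reflects variance shared with another predictor, while the partial coefficient is negative. A sketch constructing such a case with simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Construct data where x1 correlates positively with y, yet its
# multiple-regression coefficient is negative (suppression).
x2 = rng.normal(size=n)
x1 = x2 + 0.5 * rng.normal(size=n)          # x1 strongly correlated with x2
y = -1.0 * x1 + 2.0 * x2 + 0.1 * rng.normal(size=n)

r = np.corrcoef(x1, y)[0, 1]                # marginal correlation: positive
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0] # beta[1] (coefficient of x1): negative
```

Here cov(x1, y) = 1 - var(noise) > 0, so the simple correlation is positive, while the true partial coefficient of x1 is exactly -1.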
Does anyone have SPSS syntax (or suggestions) for running a nonparametric analysis of covariance? My dependent variable is not normally distributed, my independent variables are categorical, and I have 2 covariates I would like to include in the analysis. The drop down nonparametric options in SPSS do not allow for this analysis.
I have an SPSS file that has several variables with some missing data for a number of cases. I have a second file that contains some of that missing data but not a complete set of data for those respective variables. If I merge the files by "adding cases", it creates duplicate cases. Given that there are over 1000 cases in the overall data set, it would take way too long to delete the duplicate cases. Is there a way to insert the missing values from one file to the other without creating a duplication?
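If I recall correctly, SPSS's UPDATE command (a master file plus a transaction file matched /BY a key variable) fills missing values in the master without duplicating cases, so that may be worth checking in the syntax reference. Equivalently, if both files can be exported with a case ID, pandas does the same fill; a sketch with hypothetical column names and values:

```python
import pandas as pd

# Hypothetical files: both exported from SPSS with a shared case ID column.
master = pd.DataFrame({"id": [1, 2, 3], "age": [34, None, 51], "score": [None, 7, 9]})
extra = pd.DataFrame({"id": [2, 3], "age": [42, 99], "score": [5, None]})

# Fill master's missing values from `extra`, matched on id, without
# duplicating cases; non-missing values already in master are kept.
merged = (master.set_index("id")
                .combine_first(extra.set_index("id"))
                .reset_index())
```

`combine_first` gives the master file precedence, so only genuinely missing cells are filled (e.g., id 3's age stays 51 even though `extra` says 99).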
Thank you. Long-run cointegration was not significant when I used the log of the dependent variable, but it was very significant for the non-logged dependent variable in the ARDL bounds test. The estimation model I am using is the Nonlinear Autoregressive Distributed Lag (NARDL) model developed by Shin et al.
In NARDL, we first take the positive and negative partial sums of the independent variables after taking their first difference. For instance, GDP as an independent variable becomes GDPP for GDP plus and GDPM for GDP minus.
My models with log of the dependent variable (LOIP) are:
1) LOIP = GDPP + GDPM + XCR + INFL + M2
2) LOIP = GDP + XCRP + XCRM + INFL + M2
3) LOIP = GDP + XCR + INFLP + INFLM + M2
4) LOIP = GDP + XCR + INFL + M2P + M2M
Using these models in ARDL, the long-run cointegration is not significant.
The models without the log of the dependent variable (OIP) are:
1) OIP = GDPP + GDPM + XCR + INFL + M2
2) OIP = GDP + XCRP + XCRM + INFL + M2
3) OIP = GDP + XCR + INFLP + INFLM + M2
4) OIP = GDP + XCR + INFL + M2P + M2M
Using these models now yielded significant long-run cointegration.
Is it acceptable to use the non-logged dependent variable data?
I am relatively new to Vensim and seem unable to find a simple function that calculates the difference between an auxiliary variable's value at the current time and its value at a certain time in the past (previous steps),
something like y = (Variable at time t) - (Variable at time t-n)?
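If I recall correctly, Vensim's built-in DELAY FIXED(Variable, n, initial) returns the variable's value n time units ago, so Variable - DELAY FIXED(Variable, n, initial) should give exactly this difference (please verify in your version's function reference). The idea, sketched in Python:

```python
def lagged_difference(series, n):
    """y[t] = series[t] - series[t-n]; undefined for the first n steps.
    Mirrors Variable - DELAY FIXED(Variable, n, initial) in Vensim."""
    return [None if t < n else series[t] - series[t - n]
            for t in range(len(series))]

y = lagged_difference([10, 12, 15, 19, 24], 2)   # [None, None, 5, 7, 9]
```

The first n time steps have no defined difference; in Vensim that role is played by the `initial` argument.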
Any suggestions on how to handle and solve the endogeneity problem in a multivariate probit model? The choice variables are, for example, Y1, Y2, and Y3, which are binary. The independent variables include X1, X2, X3, and X4; however, X3 is also affected by X2 and X5.
I have panel data and am using a fixed-effects model that contains a dummy variable specific to the region of the countries [for instance, for Latin American countries I set the dummy variable LA to 1]. Since this dummy is time-invariant, when I estimated the fixed-effects model, Stata dropped the dummy due to collinearity. Can I still estimate the coefficient of this dummy in some other way? Please guide me.
I found that there is an inverted-U-shaped relationship between my moderator variable and the dependent variable (y = b0 + b1*m + b2*m^2). Now, I would like to investigate the moderating role of m in the relationship between x and y.
So, which of the following equations should I use for moderation?
i. y = b0 + b1*x + b2*m + b3*x*m
ii. y = b0 + b1*x + b2*m^2 + b3*x*m^2
iii. or any other suggestions?
I am using the complex sampling analysis method within SPSS. I would like to use the cox regression for my variable under complex sample, as my variable has a prevalence rate of greater than 10%, thus logistic regression should not be used. When using cox regression under the complex sampling analysis - is robust variance already controlled for?
Hello everyone, I am currently working on my thesis, where I am analyzing whether certain factors have an impact on purchasing behavior. Fifteen of the questions in my survey were Likert-scale (1-5), and I would like to use them as independent variables. Total money spent would be the dependent variable, and I would also like to use income, gender, and some other variables as independent variables. What statistical analysis would you recommend for these data? I converted my Likert-scale data into factor data with 5 levels, and now I am lost.
Can I use OLS or would you recommend something else?
Any help would be really appreciated
I have two variables that mediate the relationship between the IV and the DV. I would like to put them into a multiple mediation model. How should I decide whether it should be a serial or a parallel mediation model? The mediators are correlated.
Thank you in advance for any answers.
I am designing a voltage source converter (VSC) for variable-speed constant-frequency operation, i.e., a grid-connected DFIG. What should the rating of the converter be, and which parameters of the grid, together with the generator, will influence the rating of the VSC?
Waiting for your kind response.
It is well established that the EM signals from AGNs vary within days. What is the best model to explain this intriguing result?
While assessing intra-assay coefficient of variation (CV), it is generally advised that CVs should be calculated from the calculated concentrations rather than the raw optical densities.
If the raw optical densities of my samples are just in the middle of calibration curve (see attached illustration - situation A), the curve is quite steep here, and low variation of absorbance reflects low variation of concentrations, and reported intra-assay CV is satisfactory.
But in my case, all my samples are on the lower edge of calibration curve (see attached illustration - situation B). Although I have pretty low variations in raw optical densities of the duplicates (CVs about 5%), variations of the calculated concentrations are much worse (CVs about 20%), due to gentle slope of my calibration curve in this area.
If I report such high CV, it will be perceived as unacceptable, however, my pipetting technique is satisfactory.
What should I do? How should I report my intra-assay CV so that it is not misleading for reviewers and readers of the article?
Suppose we have 2 variance breaks in the first variable and 3 variance breaks in the second variable. How do you estimate the model now?
I am running multiple ELISAs (>150), analysing 3 factors within human plasma, which requires ~52 plates per factor.
I have a quality control on each plate and run it in duplicate. This control is always the same. I don't have positive controls with a known high and low concentration.
For 2 of the factors with a 1/30 dilution, there is less than 10% variance across the plates. For one of the factors with a 1/3 dilution, the variance is ~30%. This is very high and the accepted cut off is generally <10%.
I don't have time to repeat the experiments at this stage. I am looking for a way to normalise the data so an increase in the quality control from one plate doesn't lead to an observed increase in the factors I'm testing.
Currently, I have been dividing the plate samples with the respective plate controls. This seems to be working at the moment, but is this a correct method?
Thank you in advance.
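Dividing each plate's samples by that plate's control (or, equivalently, rescaling by the ratio of the grand control mean to the plate control mean) is a common plate-effect correction, though it assumes the plate effect is multiplicative. A sketch with made-up numbers:

```python
import numpy as np

def normalise_plate(samples, plate_control, reference_control):
    """Scale a plate's sample values by the ratio of the overall
    (reference) control mean to this plate's control mean, so a high
    control reading on one plate does not inflate its samples."""
    return np.asarray(samples) * (reference_control / plate_control)

# Hypothetical: averaged control duplicates from three plates
plate_controls = np.array([1.0, 1.3, 0.8])
reference = plate_controls.mean()        # grand control mean across plates

# Plate 2 read high (control 1.3), so its samples are scaled down
plate2 = normalise_plate([13.0, 2.6], plate_controls[1], reference)
```

Scaling toward the grand control mean keeps the corrected values on the original concentration scale, which is easier to interpret than raw sample/control ratios; either way, the correction is only as good as the multiplicative-plate-effect assumption.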
A perennial question from my students is whether or not they should normalize (say, to 0-1) a numerical target variable and/or the selected explanatory variables when using artificial neural networks. There seem to be two camps: those who say yes, and those who say it is unnecessary. But what is your opinion, and importantly, why?
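My own view is yes for gradient-trained networks, because inputs on comparable scales make optimization better conditioned; either way, the mechanics are cheap. One detail worth enforcing with students: fit the scaling on the training data only, then apply it to the test data. A sketch:

```python
import numpy as np

def minmax_fit(X):
    """Fit 0-1 scaling parameters on the TRAINING data only,
    to avoid leaking test-set information into the scaler."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return lo, span

def minmax_apply(X, lo, span):
    return (X - lo) / span

# Two features on very different scales
X_train = np.array([[1.0, 100.0], [3.0, 300.0], [2.0, 200.0]])
lo, span = minmax_fit(X_train)
X_scaled = minmax_apply(X_train, lo, span)   # each column now in [0, 1]
```

For the target variable, scaling is optional for regression but remember to invert the transform on predictions before reporting them.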
I'm interested in determining the effect size (ES) for Friedman's test. Let's say that there are k levels of the independent variable and that the sample size is m. I could not find references for ES for this test, so any help will be very appreciated. :)
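A commonly reported effect size for Friedman's test is Kendall's W = chi2_F / (m(k - 1)), which ranges from 0 (no consistency across conditions) to 1 (perfect consistency). A sketch computing it from scipy's Friedman statistic (the data are made up):

```python
from scipy import stats

def kendalls_w(*conditions):
    """Effect size for Friedman's test: W = chi2 / (m * (k - 1)),
    where m = sample size and k = number of conditions; 0 <= W <= 1."""
    chi2, p = stats.friedmanchisquare(*conditions)
    m, k = len(conditions[0]), len(conditions)
    return chi2 / (m * (k - 1)), p

# Hypothetical: k = 3 conditions measured on the same m = 5 subjects;
# every subject ranks the conditions identically, so W should be 1.
w, p = kendalls_w([1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7])
```

Rough conventional interpretations of W (small/medium/large around 0.1/0.3/0.5) exist in the literature, but check a source before citing specific cut-offs.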
What is the best statistical test to evaluate the effect of an independent variable of interest among other independent variables that have a partially similar effect (effect modification), when the tested outcome is an ordinal dependent variable?
A polynomial can have constants, variables and exponents,
but never division by a variable.
IRR/NPV equation involves division by a variable. That being the case, how can it be a polynomial?
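One way to reconcile this: the NPV expression is not a polynomial in the rate r, but the standard substitution x = 1/(1+r) turns it into one:

```latex
\mathrm{NPV}(r)=\sum_{t=0}^{T}\frac{CF_t}{(1+r)^{t}}
\qquad\xrightarrow{\;x=1/(1+r)\;}\qquad
P(x)=\sum_{t=0}^{T} CF_t\,x^{t}
```

so finding the IRR (the rate with NPV = 0) is equivalent to finding a positive root of the polynomial P(x), which is why polynomial language and root-counting arguments appear in IRR discussions.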
The independent variable is parents' occupation (father's occupation and mother's occupation), with five response options for each: freelancer, business owner, employee, pensioner, unemployed.
The dependent variable is the student's entrepreneurial intentions on a 5-point scale.
I want to find the relationship between parents' occupation (both father's and mother's) and entrepreneurial intentions; I hope that makes sense ;). Can anyone please advise how I can do this in SPSS?
In a multiple regression, we have three dependent variables (A, B and C) and we have five independent variables (V, W, X, Y and Z).
When checking the collinearity statistics for A, B, and C, the tolerance and VIF values of the five independent variables (V, W, X, Y, and Z) are the same for all three dependent variables.
As an example
tolerance and VIF of (V, W ,X,Y and Z) with A are (0.716, 0.964, 0.645, 0.741,0.862) and (1.397,1.037,1.551,1.349,1.160).
tolerance and VIF of (V, W ,X,Y and Z) with B are (0.716, 0.964, 0.645, 0.741,0.862) and (1.397,1.037,1.551,1.349,1.160).
tolerance and VIF of (V, W ,X,Y and Z) with C are (0.716, 0.964, 0.645, 0.741,0.862) and (1.397,1.037,1.551,1.349,1.160).
Five-point Likert scales were used to collect data for all the dependent and independent variables.
From the above result, what interpretation can I make about A, B, and C?
Should I use a Pearson correlation matrix, or would it be better to use multiple regression analysis to measure the relationship between the dependent and independent variables?
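This outcome is expected: tolerance and VIF are computed from the independent variables alone (each IV regressed on the other IVs), so they cannot depend on which dependent variable you pair them with, and identical values across A, B, and C are guaranteed. A sketch illustrating the computation with simulated predictors:

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200

# Two correlated predictors stand in for V, W, ...; no dependent
# variable appears anywhere in the VIF computation.
v = rng.normal(size=n)
w = 0.5 * v + rng.normal(size=n)
design = np.column_stack([np.ones(n), v, w])   # intercept + predictors

# Skip column 0 (the intercept): one VIF per predictor
vifs = [variance_inflation_factor(design, i) for i in range(1, design.shape[1])]
```

Since your VIFs are all well below the usual rule-of-thumb thresholds (5 or 10), multicollinearity is not a concern for any of the three regressions.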
Let's say I have modeled variable x with forcing data from the period 1990-2000. The observed x I have spans only the period 2000-2010. How can I make an objective comparison of the modeled and observed x? Or should I stick to visual comparison, which is not only inefficient but also lacks objectivity?
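With non-overlapping periods, a point-by-point comparison is impossible, but the two samples' distributions can still be compared objectively, for example with a two-sample Kolmogorov-Smirnov test (this checks distributional similarity only, not timing, and assumes the underlying climate is comparable across the two decades). A sketch with simulated series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical daily series for the two non-overlapping periods
x_modeled = rng.gamma(shape=2.0, scale=3.0, size=3650)    # stands in for 1990-2000
x_observed = rng.gamma(shape=2.0, scale=3.0, size=3650)   # stands in for 2000-2010

# Two-sample Kolmogorov-Smirnov test: are the samples drawn from the
# same distribution?  Small statistic means similar distributions.
stat, p = stats.ks_2samp(x_modeled, x_observed)
```

Comparing summary statistics (mean, variance, quantiles, autocorrelation) alongside the KS result gives a fuller picture than either visual inspection or a single test.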
I have a dataset covering several years, and I want to find how the percentages of women and men choosing yes or no differ across years. Is it necessary to calculate this year by year in SPSS, or is there a simpler way? Can I cross-tabulate?
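Yes, a single cross-tabulation with year as an extra dimension avoids year-by-year runs (in SPSS, CROSSTABS supports layer variables, if I recall correctly). In pandas the same one-pass table looks like this (column names and data are hypothetical):

```python
import pandas as pd

# Hypothetical long-format data: one row per respondent
df = pd.DataFrame({
    "year":   [2019, 2019, 2019, 2020, 2020, 2020],
    "gender": ["F", "M", "F", "F", "M", "M"],
    "answer": ["yes", "no", "no", "yes", "yes", "no"],
})

# Percentage of yes/no within each year-by-gender cell, all years at once
pct = (pd.crosstab([df["year"], df["gender"]], df["answer"],
                   normalize="index") * 100)
```

`normalize="index"` makes each (year, gender) row sum to 100%, which is exactly the "percentage choosing yes or no" per group per year.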
The form of the function is
f=(convex in variable x1)+(convex in variable x2)-(convex in variable x3)
I am looking for a concise example of three pairwise independent (non-degenerate) random variables that are not mutually independent.
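The classic example: let X and Y be independent fair coin flips and Z = X XOR Y. Each pair is independent, but the triple is not, since any two of the variables determine the third. This can be verified exactly by enumerating the joint distribution:

```python
from itertools import product

# X, Y independent fair bits; Z = X XOR Y
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]
p = 1 / len(outcomes)                      # each of the 4 outcomes has prob 1/4

def prob(cond):
    return sum(p for o in outcomes if cond(o))

# Pairwise independence: P(A=a, B=b) = P(A=a) P(B=b) for every pair
pairs_ok = all(
    abs(prob(lambda o: o[i] == a and o[j] == b)
        - prob(lambda o: o[i] == a) * prob(lambda o: o[j] == b)) < 1e-12
    for i, j in [(0, 1), (0, 2), (1, 2)] for a in (0, 1) for b in (0, 1)
)

# Not mutually independent: P(X=1, Y=1, Z=1) = 0, not 1/8
triple = prob(lambda o: o == (1, 1, 1))
```

Equivalently stated without code: X, Y uniform on {0, 1}, Z = X + Y (mod 2); any two are independent, all three are not.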
I have three time series that should NOT be classified as independent of each other. I would like to examine the relationships among the time series; however, a method such as regression assumes independence. How can I analyse this with dependent variables?
Good evening, dear group members. Could anyone please answer my question about calculating the value of exogenous/endogenous variables in the measurement model?
We can calculate the beta coefficient if we know the values of both the independent variable (the exogenous/endogenous variable) and the dependent variable (the item).
But here we do not know the value of the exogenous/endogenous variable. So how do we determine the beta coefficient between an exogenous/endogenous variable and its item?
Everything I've measured was based on a 5 point Likert scale.
Now I want to address the effect of variable X(trust) on the different dependent variables Y(privacy), which is an average of four dependent variables, Y1, Y2, Y3, Y4.
X -> Y1,Y2,Y3,Y4
Which tests are appropriate to do so? Hope you can clarify.
While performing EFA, if two variables load on the same dimension in the pattern matrix, what does that mean?
I have run a repeated-measures ANOVA and got strange output for sphericity: all values are either zero or one, for all of the variables and interactions. What does that mean? (Does sphericity not hold, or might something else be wrong? All the other output tables seem fine.) Thanks!
I have three biomarkers that are used to predict postoperative death. One of the reviewers commented that I should adjust the biomarkers for the CIRS-G comorbidity score. I tried hierarchical binary logistic regression, but I am not sure that this is the right way.
Is there a way to adjust the biomarker values for every patient individually with the (continuous) CIRS score through the Compute Variable option? And what would the syntax be?
Thank you in advance.
Hi all, I am looking for datasets containing variables of environmental behavior on a country level. Specifically, on any kind of behavior which can affect the environment, both positively and negatively.
For example, recycling rates (per country), percentage of people who commute by bicycle, amount of plastic used, % of renewable energies, deforestation in the last decades etc. Ideally, data should be available for at least 30 countries. It is easy to find data on carbon emissions, but I am struggling to find other variables. Any suggestions?
My independent variables are transactional and transformational leadership styles, and my dependent variables are task performance and organizational citizenship behavior. Many scholars have used the typical variables such as organizational commitment, job satisfaction, and motivation, so I want to use something apart from these.
I am currently examining the influence of speculation (I use index investment position as a proxy) on the stability of the commodity market. The structure of my analysis is as follows:
- I use a GARCH model to estimate the volatility of the crude oil price
- Next, I estimate a VAR model with the following variables: crude oil volatility, speculation, and the following control variables: gold price, interest rate, inventory, and a global economic activity index
I use first differences as not all variables are stationary in levels
Do you see any limitations or problems in this set up?
Now my first question: I have conducted all the usual checks to test the robustness of my VAR. They show that my residuals are non-normally distributed. What could be the reason for that? And does it affect the significance of further analysis?
Additionally I would like to do some granger causality testing. In particular I want to test whether speculation causes volatility or vice versa. However as I have a multivariate VAR model I haven't found a method to just test this bi-variate case (and at the same time considering the control variables). Any suggestions?
FYI: I am using R software
Thank you very much for your help.
I have never seen this notation, but I know E(X|Y). At this link I encountered the notation X|Y, and it was said that X|Y has mean E(X|Y). It was also given as a property that if X and Y are independent then X|Y = X. I only know that if X and Y are independent then E(X|Y) = E(X), and that if X is measurable with respect to \sigma(Y) (the algebra generated by Y) then E(X|Y) = X.
In fact, the notation X|Y has made me very confused! What exactly is the definition of X|Y?
Thanks so much!
I have one continuous dependent variable and three categorical independent variables (having 2, 2, and 3 categories). My questions are:
1. Would multiple linear regression be good for explaining the relationship?
2. If yes, how can I test the linearity of the categorical variables? Or should I not be bothered about linearity in the case of categorical data?
3. If multiple regression is not suitable, is there any alternative test?
4. Please suggest any guide/tutorial/paper/video about multiple regression with only categorical predictors.
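On questions 1 and 2: with dummy-coded categorical predictors there is no linearity assumption to test, because each category gets its own coefficient relative to a reference level. A sketch of such a regression with statsmodels (variable names and data are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120

# Hypothetical data: continuous outcome, three categorical predictors
# with 2, 2, and 3 categories (mirroring the question)
df = pd.DataFrame({
    "y":  rng.normal(size=n),
    "g1": rng.choice(["a", "b"], size=n),
    "g2": rng.choice(["m", "f"], size=n),
    "g3": rng.choice(["low", "mid", "high"], size=n),
})

# C() dummy-codes each categorical predictor; one level per variable is
# the reference, so no linearity is imposed on the categories themselves.
model = smf.ols("y ~ C(g1) + C(g2) + C(g3)", data=df).fit()
```

This is numerically equivalent to a multi-way ANOVA without interactions; the remaining assumptions to check concern the residuals (normality, equal variance), not linearity of the predictors.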
I am using IBM-SPSS V24. I am experiencing a weird problem. When I copy any variable from 1 data set to another, the copying seems to work fine; but it reduces (and in some variables increases) the number of valid cases. For instance, in the original data set, I have 1028 valid cases. But when I copy it to a new data set the number of cases reduces to 978.
The data set is coded in this form, i.e., 1 = Yes and 2 = No.
Any help in this regard would be highly appreciated.
I need to mesh a cuboid with a variable mesh size. In the centre region (2×8×1 mm), I need hexahedral elements of 0.05 mm, with a coarser mesh in the outer region.
For this, I have made two separate volumes, but how do I combine the two bodies so that there is a gradual change in element size at the boundary of the inner cuboid?
In most voltage control loop applications, the input variables to a PI controller are the reference voltage and the actual voltage, and the output is usually a current signal. How does the controller convert the difference of two voltage signals into a current signal?
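The controller itself is unit-agnostic: it computes i = Kp*e + Ki*integral(e dt) from the voltage error e, and the gains Kp (in A/V) and Ki (in A/(V*s)) carry the conversion from volts to amperes. A discrete-time sketch on a toy first-order plant (the gains and plant are illustrative assumptions, not a tuned design):

```python
# Discrete PI controller: current command from voltage error.
# The gains carry the unit conversion; values here are illustrative.
Kp, Ki = 2.0, 5.0                     # A/V and A/(V*s)
dt, v_ref, v = 0.01, 1.0, 0.0
integral = 0.0

for _ in range(1000):                 # simulate 10 s
    e = v_ref - v                     # voltage error (V)
    integral += e * dt                # integral of the error (V*s)
    i_cmd = Kp * e + Ki * integral    # current command (A)
    v += dt * (i_cmd - v)             # toy first-order plant: dv/dt = i - v
```

The integral term drives the steady-state voltage error to zero, so the actual voltage settles at the reference while the current command settles at whatever the plant needs to hold it there.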
I understand how passive imputation works. However, I want to condition one variable on another so that it takes plausible values.
Here is my problem. I have variables for the number of HHM, the number of HHM going to school, and the number of HHM below 3 years old (just part of my other covariates). When I run mice, I get implausible values (e.g., No. HHM going to school > No. HHM).
HHM = Household members
Thank you very much in advance.