Science topics: Variability
Science topic

Variability - Science topic

Explore the latest questions and answers in Variability, and find Variability experts.
Questions related to Variability
  • asked a question related to Variability
Question
8 answers
What is the most efficient method of measuring the impact of the volatility of one variable on another variable?
Relevant answer
Answer
Standard deviation is a measurement of investment volatility. It measures the performance variation from the average.
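As a minimal illustration of the answer above (the prices are invented, and returns are defined as simple period-over-period changes):

```python
import statistics

# Hypothetical daily closing prices of an investment
prices = [100.0, 102.0, 101.0, 105.0, 104.0, 108.0]

# Period-over-period returns
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

# Volatility as the sample standard deviation of returns,
# i.e. the typical deviation of performance from its average
volatility = statistics.stdev(returns)
mean_return = statistics.mean(returns)
print(f"mean return: {mean_return:.4f}, volatility: {volatility:.4f}")
```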
  • asked a question related to Variability
Question
2 answers
I am trying to downscale GCM (CanCM4) data to the Bhima basin catchment (finer scale) for projecting future scenarios. I have used the following variables: ta, ua, zg, va (all at the 925, 850, 700, and 200 hPa pressure levels) plus pr and psl (six variables in total). I am attaching an image from my work on the GCM; considering the midpoints of these GCM grid points, only two stations lie on the periphery (+ marks) for downscaling. Can I downscale these GCM points to the 0.5 deg grid points? If yes, how should I determine the weights?
Relevant answer
Answer
This is a good question.
  • asked a question related to Variability
Question
6 answers
I would highly appreciate it if my fellow ecologists (biologists) would share their opinions on the thoughts below [in particular: which path may be more effective, whether they know another way, and whether there has been a recent breakthrough toward this goal].
To consider the effects of Acclimation and Directional Selection on populations' thermal sensitivity in the (mechanistic or phenomenological) modeling of ecological impacts of temperature variability (and climate change), we can follow two general paths:
(1) To produce enough empirical data to define simplistic indices of warm adaptation capacity (based on exposure temperature and duration) for at least some keystone species [a simple e.g., ARR; Morley et al., 2019]. Such indices can only be applied to models' outputs.
(2) To understand the GENERAL mechanisms (principal functional components) defining the heat sensitivity of various taxa [e.g., OCLTT, Pörtner, 2010], define how each component quantitatively relates to the capacity for rapid warm adaptation [no Ref.], and set (adaptive) feedback loops in existing models [a simple e.g., Kingsolver et al., 2016].
Relevant answer
Answer
Hello Jahangir; You will want to see this paper.
Riddell, E. A., et al. 2021. Exposure to climate change drives stability or collapse of desert mammal and bird communities. Science 371(6529): 633-635.
It makes comparisons over a 100-year span in the Mojave Desert in California. Thermoregulation!
Best regards, Jim Des Lauriers
  • asked a question related to Variability
Question
3 answers
Hi
I am conducting an interobserver variability study where we have 12 raters who are going to rate samples of lesions. They will rate up to 10 variables per sample. Although the size of the population is quite limited due to the nature of the lesions, it is a bit of a headache to find a well-written method to estimate the sample size needed for this study. Several searches on the internet have overwhelmed me with many possibilities, most of them being quite complicated.
Is there any statistical method you can recommend? Any help is appreciated!
Kind regards,
Roger
Relevant answer
Answer
I think it is calculated by dividing the smaller total count observed (from one observer, relative to the other) by the larger total count (from the other observer).
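A toy sketch of the calculation described in that answer (the function name and counts are invented for illustration):

```python
def count_agreement_ratio(count_a: int, count_b: int) -> float:
    """Simple agreement index for two observers' total counts:
    the smaller total divided by the larger total (1.0 = identical)."""
    if max(count_a, count_b) == 0:
        return 1.0  # both observers counted nothing
    return min(count_a, count_b) / max(count_a, count_b)

# Hypothetical totals from two raters counting the same lesions
print(count_agreement_ratio(48, 60))  # smaller/larger = 0.8
```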
  • asked a question related to Variability
Question
3 answers
Hi everyone, 
I want to import 2-dimensional data into COMSOL from a CSV file. The data format of the CSV file is an n x 3 matrix, with the first column as data values and the second and third as x- and y-coordinates.
I want to use these values as the initial values of a variable in COMSOL (the variable is a 2D field variable).
I tried to use an interpolation function to import this data, but that did not work. I do know how to import a variable with a single value, but not how to import a variable with many values at different points in the field, i.e. a field variable.
Kindly help me out; I will be very thankful.
Kind regards,
Saad Pasha
Relevant answer
Answer
The file named abc.txt contains temperature measurements at nine points in the plane:
10 3 310
20 3 309
30 3 314
10 6 302
20 6 307
30 6 311
10 9 307
20 9 308
30 9 314
The data columns contain x-coordinates, y-coordinates, and temperature values, respectively. To use this file as an interpolation function called testplot, perform the following steps.
1 Add an Interpolation node in Component 1 or Global Definitions.
2 Select File from the Data source list.
3 Enter a Filename (the complete network path) or Browse to locate a file to import.
4 From the Data format list, select Spreadsheet.
5 Enter a Number of arguments. In this example, enter 2.
6 Enter the Function name testplot.
7 Enter its Position in file as 1. The first function in the file has position 1, the following has position 2, and so on. The position in file for a function is the column after the spatial coordinates (or other function arguments) where it is defined. In this example with two arguments (spatial coordinates), the third column is position 1 in the file.
  • asked a question related to Variability
Question
9 answers
Dear members,
How do you conduct a regression analysis in SPSS using 1 predictor variable and 2 dependent variables?
Relevant answer
Answer
You want multivariate multiple regression; see Applied Multivariate Statistical Analysis (6th Edition) by Richard A. Johnson and Dean W. Wichern, and search for multivariate multiple regression software.
Best, David Booth
  • asked a question related to Variability
Question
5 answers
We are trying to simulate a 3D structure in COMSOL (optic wave), but this error occurs:
"variables property does not include all variables in xmesh."
Can anyone help with this matter?
Relevant answer
Answer
Hello,
You are using the wrong mesh type.
This error appeared in my simulation when I used a triangular mesh for a 3D structure. I changed it to tetrahedral, and it worked out well.
Good luck,
  • asked a question related to Variability
Question
6 answers
Variability of pesticide residues is inevitable. A default variability factor (Vf) of 3 is used, but in the case of medium- and large-sized commodity crops, a wide range of Vfs has been estimated by several researchers. Therefore, I think the default should be reconsidered when more data are available. So I think there is an emerging need to conduct research on medium- and large-sized commodity crops, especially those that have not been selected to date.
Relevant answer
Answer
Yes, of course.
  • asked a question related to Variability
Question
5 answers
Are there theoretical lenses to analyze variability (identifying causes of variations) qualitatively? Not sure if e.g. complexity theory, organization theory, contingency theory may be helpful.
Relevant answer
Answer
Roseana Moura, Thank you very much. That was helpful.
  • asked a question related to Variability
Question
5 answers
I mean, when using change from baseline, is it valid to include a baseline measure as a control variable when testing the effect of an independent variable on change scores?
Relevant answer
Answer
Yes, in both cases - post values and gain scores - you can adjust for the baseline. If you work in the Clinical Research industry, it's covered by appropriate regulatory guidelines.
Note: you will find a lot of articles, full of simulations, special cases and general considerations, both advising pro or against the use of change from baseline, depending on the context, type of a trial (especially in observational trials), there is no agreement even across statistical reviewers.
POINTS TO CONSIDER ON ADJUSTMENT FOR BASELINE COVARIATES
issued by The European Agency for the Evaluation of Medicinal Products Page 5
II.7. ‘Change from baseline’ analyses
When the analysis is based on a continuous outcome there is commonly the choice of whether to use the raw outcome variable or the change from baseline as the primary endpoint. Whichever of these endpoints is chosen, the baseline value should be included as a covariate in the primary analysis. The use of change from baseline without adjusting for baseline does not generally constitute an appropriate covariate adjustment. Note that when the baseline is included as a covariate in the model, the estimated treatment effects are identical for both ‘change from baseline’ and the ‘raw outcome’ analysis. Consequently if the appropriate adjustment is done, then the choice of endpoint becomes solely an issue of interpretability.
Also: Adjusting for Covariates in Randomized Clinical Trials for Drugs and Biologics with Continuous Outcomes Guidance for Industry This is draft, but describes a common practice.
Many clinical trials use a change from baseline as the primary outcome measure. Even when the outcome is measured as a change from baseline, the baseline value can still be used advantageously as a covariate
Issued by the FDA
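The guideline's claim that the two analyses give identical treatment effects is easy to verify numerically. Here is a self-contained sketch with invented trial data and a hand-rolled OLS helper (no statistics package assumed):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting. X is a list of rows."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    v = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):  # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            v[r] -= f * v[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (v[r] - sum(A[r][j] * beta[j] for j in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical trial data: treatment indicator, baseline, post-treatment outcome
treat    = [0, 0, 0, 0, 1, 1, 1, 1]
baseline = [10.0, 12.0, 11.0, 13.0, 10.0, 13.0, 12.0, 11.0]
post     = [11.0, 14.0, 12.0, 15.0, 15.0, 19.0, 17.0, 16.0]
change   = [p - b for p, b in zip(post, baseline)]

X = [[1.0, t, b] for t, b in zip(treat, baseline)]  # intercept, treatment, baseline
beta_raw    = ols(X, post)    # raw outcome, adjusted for baseline
beta_change = ols(X, change)  # change score, adjusted for baseline

# The treatment coefficients coincide, as the EMA guideline states
print(beta_raw[1], beta_change[1])
```

The equivalence is algebraic: regressing (post - baseline) on the same design simply shifts the baseline coefficient down by one, leaving the treatment coefficient unchanged.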
  • asked a question related to Variability
Question
3 answers
I used optimal scaling analysis and set 2 dimensions for the variables, obtaining a VAF of 36.6%.
When I set 3 dimensions, it is around 46%.
I don't know how much VAF is acceptable.
To me, 2 dimensions are easier to explain and define.
Thanks.
Relevant answer
Answer
The strength of the mediation can be determined from the value of Variance Accounted For (VAF). The VAF value represents the ratio of the beta coefficient of the indirect effect to the total effect. A VAF value bigger than 80% represents full mediation, a VAF value between 20% and 80% means partial mediation, while a value below 20% means no mediation (Hair, Ringle & Sarstedt, 2011).
  • asked a question related to Variability
Question
7 answers
I have several categorical variables (binary or with more levels) and several multiple-response variables as well; I mean the multiple-choice questions in a questionnaire (not a test). I'd like to classify the data or reduce its dimensionality, but I'm not sure how these multiple responses should enter the analysis.
Thanks.
I should specify the variables, they are, for example:
categorical:  gender, education level, ethnicity, etc.
multiple response:  Where do you purchase clothes?  A. Online   B. In shopping malls   C. In markets or fairs   D. In boutiques   E. In the tailor's
Relevant answer
Nice Contribution Giovanna Menardi
  • asked a question related to Variability
Question
9 answers
I have tried searching and found that the articles and books I came across do not give a clear explanation of how to determine the level of streamflow variation based on the CV.
I am trying to distinguish the variation level of my mean annual flow data series, whose values range from 0.31 to 1.49 (plus one outlier: 3.59). I was going to consider <0.5 as low, 0.5-1.0 as moderate, and >1.0 as high, but I found a paper that said the general rule uses <0.1 as the low level and >1.0 as the high level, and another source that said <0.3 is the low level.
I hope someone can help me to find some references for variation / CV handling in the water resources study related to my problem.
Additional discussion:
I am also wondering whether the solution is actually a relative determination based on the stream characteristics or region. My case is in the tropics, which are known to have higher variability than other regions. So I am thinking that the variability-level thresholds could be adjusted upward (especially the low-level threshold), considering it is only a regional study.
Relevant answer
Answer
I am not prepared to give you references. CV is a standard statistical calculation. My concern relates to outliers, which are often important in long-term studies for characterizing droughts and floods and extremities of wet or dry years, but obviously will not plot well in short-term data sets, or if one desires estimates within the normal range, disregarding extremes. Whether the data set is 20 years or 100 years, it is possible to have embedded a 500- or 1000-year event that seems to throw standard curves off from what we want to see. We must realize that although our projections may suggest the outlier's return period, we are really only making an educated guess. And I remember wet year 2004, when the nearest stream gauge to our bank stabilization project recorded 8 events with bankfull or larger flow.
In my estimation, it is very important to disclose the presence of what may be outliers relative to the data duration, and how these outliers were included or excluded. Presenting results both including and excluding outliers might be useful to others. Discarding outliers in data sets needs to be disclosed. Dr. Luna Leopold identified in one of his papers about the SW USA that droughts can be cyclic, lasting about 20 years. Amazingly, due to the loss in ground cover, these periods can have the highest erosion.
Hydrology studies typically use water years, with the separation of each year in the dry season, such as October 1 to September 30. It is possible in some years to have an overload of rainfall near the end of September, resulting in abnormal soil moisture storage in one year that discharges into the next year. So separating water years at those dates can sometimes separate a severe rainfall event, at least partially, from its discharge event. For some types of studies, the outlier becomes an important part of the results. But variable conditions are not unusual in many areas.
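For what it is worth, here is a small sketch of the CV classification discussed in the question. The thresholds are deliberately parameters rather than constants, since (as the question notes) sources disagree on the cut-offs, and the flow values are invented:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean."""
    return statistics.stdev(values) / statistics.mean(values)

def variability_level(cv, low=0.5, high=1.0):
    """Classify a CV against chosen thresholds. The defaults follow one
    convention mentioned in the question (<0.5 low, 0.5-1.0 moderate,
    >1.0 high); other sources use different cut-offs, so pass your own."""
    if cv < low:
        return "low"
    if cv <= high:
        return "moderate"
    return "high"

# Hypothetical mean annual flows for one station
flows = [120.0, 95.0, 140.0, 80.0, 160.0]
cv = coefficient_of_variation(flows)
print(round(cv, 3), variability_level(cv))
```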
  • asked a question related to Variability
Question
3 answers
Hi All
In phase space reconstruction, with the aim of unfolding a (nonlinear) system's dynamics, we must first determine two parameters: the time lag and the number of dimensions. My main question is: if our dimension is more than 3, how can these dimensions be drawn in MATLAB?
Relevant answer
Answer
According to its quality and gender
  • asked a question related to Variability
Question
5 answers
I am using one variable as a moderator. The results are significant, but I don't know the procedure for slope testing.
Relevant answer
Answer
How do you check the significance of the higher slope and lower slope in a moderation graph?
  • asked a question related to Variability
Question
5 answers
I need to conduct a pilot study for my PhD thesis questionnaire on workplace spirituality (dependent variable) and personality, emotional intelligence, and OCB (independent variables), along with a demographic profile. I seek experts who can review my questionnaire and advise me with their comments and opinions. Can some experts help me in this regard?
Relevant answer
Answer
Would be happy to provide input. By all means send what you have and I'll have a look to provide suggestions from my experience in these domains. Good luck!
  • asked a question related to Variability
Question
21 answers
I am interested in both temperature and rainfall variability at a continental scale.
Relevant answer
Answer
Dear
I am sharing with you some work that addresses the issue in Senegal and in the Senegal River basin.
Best Regards
  • asked a question related to Variability
Question
5 answers
Hi,
I have long-term point data on occurrences of a spatial event. I want to analyze the long-term spatial variability of these events.
Can you suggest any spatio-statistical method for such a variability analysis?
Your suggestion will be appreciated.
Thank You
Relevant answer
Answer
Dear Somnath,
I suggest checking the space-time pattern mining toolbox in the ArcMap software.
Best,
Behzad
  • asked a question related to Variability
Question
3 answers
I am beginning an experiment assessing timing-related behavior in adults with ADHD and the perceptual measures I plan to use are adaptive, and determine perceptual thresholds using standard adaptive algorithm procedures (e.g. staircase method). However, I'm concerned about the inevitable impact of attentional lapses on thresholds. I am interested in suggestions for how best to tune the staircase parameters and/or suggestions for other adaptive algorithms that may be more resilient to lapses of attention. Any thoughts?
Relevant answer
Answer
Regarding the adaptive procedure: I think the staircase methods are indeed the most suitable.
  • asked a question related to Variability
Question
24 answers
Is it possible to obtain a positive correlation between a dependent and an independent variable, and then obtain a negative coefficient for that variable in the fitted regression model?
Relevant answer
Answer
Dear Seema ,
The case that you describe has been categorized by Falk and Miller (1992) in the Partial Least Squares (which is based on OLS algorithm, as MLR) field as a situation of suppressor effect. I copy the main idea:
"When the path coefficient [regression coefficient] and the correlation between latent constructs do not have the same sing, the original relationship between the two has been suppressed. In general, there are three reasons for suppressor effects. The first is due to the fact that the original relationship between the two variables is so close to zero that the difference in the signs simply reflects random variation around zero. The second reason for suppression is that there are two or more variables que contain the same information and are therefore redundant. The switching of signs in this case is due to the order of the variables in the equation. In this situation the redundancy is artificially changing the signs, and one or more of the redundant variables must be eliminated.
The third reason is what some call "real suppression". In this case suppression occurs because an important predictor variable, necessary in understanding the true relationship between the latent variables, suppresses the effect of another predictor variable. In the case of real suppression, if the necessary predictor is eliminated, a specification error occurs, i.e. all relevant variables are not in the model. With real suppressor effects the correct sign interpretation is that given by the path coefficient" (pp. 75-76).
The authors then continue deepening in the differences between the three effects and how to tackle them.
Best regards,
José L. Roldán
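The suppressor effect Falk and Miller describe can be reproduced numerically. In this invented example, x1 correlates strongly and positively with y, yet its regression coefficient is negative once the correlated predictor x2 enters the model:

```python
import statistics

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    den = (sum((p - ma) ** 2 for p in a) * sum((q - mb) ** 2 for q in b)) ** 0.5
    return num / den

def two_predictor_slopes(x1, x2, y):
    """OLS slopes for y ~ x1 + x2 (with intercept), from demeaned cross-products."""
    m1, m2, my = statistics.mean(x1), statistics.mean(x2), statistics.mean(y)
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

# x1 and x2 are highly correlated; y depends positively on x2, negatively on x1
x2 = [1.0, 2.0, 3.0, 4.0, 5.0]
x1 = [1.1, 1.9, 3.1, 3.9, 5.0]
y  = [2 * b - a for a, b in zip(x1, x2)]

print("corr(x1, y):", round(pearson_r(x1, y), 3))  # strongly positive
b1, b2 = two_predictor_slopes(x1, x2, y)
print("slope of x1:", round(b1, 3))                # negative: the sign flips
```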
  • asked a question related to Variability
Question
6 answers
Does anyone have SPSS syntax (or suggestions) for running a nonparametric analysis of covariance? My dependent variable is not normally distributed, my independent variables are categorical, and I have 2 covariates I would like to include in the analysis. The drop down nonparametric options in SPSS do not allow for this analysis.
Thank you! 
Relevant answer
Answer
Jennifer,
I know this is long past the original question date, however in the event others come to this page for information on this question, I paste a link here for your use on how this can be accomplished in SPSS.
Kind regards,
Janet
  • asked a question related to Variability
Question
6 answers
How does one use a bandpass filter (say, 30-60 days) with daily time series data (365 days), especially in MATLAB? How many data points will be lost after filtering? Any suggestion would be greatly appreciated.
Relevant answer
Answer
Hi,
I am not sure if I understood your question correctly, but if you want to bandpass your data for frequencies where one cycle corresponds to 30 and 60 days, and you collected data for 365 days, the following MATLAB code is what you need. Note that, depending on your task, a Butterworth filter might not be the right/best one.
highcut = (365/30)/(365/2); %highcut frequency (30-day period), normalized to the Nyquist frequency
lowcut = (365/60)/(365/2); %lowcut frequency (60-day period), normalized to the Nyquist frequency
[b,a] = butter(6,[lowcut highcut]); %create 6th-order Butterworth bandpass filter
freqz(b,a); %look at your filter characteristics
FilteredData = filtfilt(b,a,DataIn); %filter Data without phase shift
This will not reduce the data points however be careful interpreting filtered signals close to the beginning and ending of the dataset.
I hope this helps!
  • asked a question related to Variability
Question
6 answers
 I have an SPSS file that has several variables with some missing data for a number of cases.  I have a second file that contains some of that missing data but not a complete set of data for those respective variables.  If I merge the files by "adding cases",  it creates duplicate cases.  Given that there are over 1000 cases in the overall data set, it would take way too long to delete the duplicate cases. Is there a way to insert the missing values from one file to the other without creating a duplication? 
Relevant answer
Answer
To follow up on Julia B. Smith's response, here is how I did this sort of merge.
I renamed all of the variables in the secondary dataset by appending the names with an additional letter at the beginning. So, if a variable is "var1" I renamed it to "Tvar1" in the secondary dataset.
Then I completed a merge by adding variables. Select the key variable, which in my case was "ID". (Data>Merge Files>Add Variables) When you do the merge, make sure that you only add the renamed variables from the secondary dataset. Exclude the existing variables from the primary dataset during the merge; this way they remain unchanged in the primary dataset and you just add the appended ones.
Then, once you have your datasets merged, you can use IF statements to merge the data one variable at a time. For example, using the "var1" example:
IF MISS(Tvar1)=0 var1=Tvar1.
EXECUTE.
This command will copy the values from "Tvar1" to "var1" for any cases that do not have missing data for "Tvar1".
I hope this helps someone!
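The same fill-from-secondary logic, sketched outside SPSS in plain Python (the variable and ID names are illustrative):

```python
# Each dataset maps case ID -> {variable: value}; None marks missing data
primary = {
    1: {"var1": 5.0, "var2": None},
    2: {"var1": None, "var2": 3.0},
    3: {"var1": 7.0, "var2": 9.0},
}
secondary = {  # partial data for some of the same cases
    1: {"var2": 4.0},
    2: {"var1": 6.0},
}

# Fill a missing value in the primary dataset only when the secondary
# dataset has a non-missing value for the same case and variable
for case_id, extra in secondary.items():
    row = primary.get(case_id)
    if row is None:
        continue  # no matching case: nothing to merge, no duplicates created
    for var, value in extra.items():
        if row.get(var) is None and value is not None:
            row[var] = value

print(primary[1]["var2"], primary[2]["var1"])  # 4.0 6.0
```

Matching on the key instead of appending rows is what avoids the duplicate cases described in the question.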
  • asked a question related to Variability
Question
7 answers
Thank you. The long-run cointegration was not significant when I used the log of the dependent variable, but it was very significant for the non-log dependent variable in the ARDL bounds test. The estimation model I am using is the Nonlinear Autoregressive Distributed Lag (NARDL) model developed by Shin et al.
In NARDL, we first take the positive and negative values of the independent variables after taking their first difference. For instance, my GDP as an independent variable becomes GDPP for GDP plus and GDPM for GDP minus.
My models with log of the dependent variable (LOIP) are:
1) LOIP =  GDPP + GDPM + XCR + INFL+ M2
2) LOIP =  GDP + XCRP+ XCRM +INFL + M2
3) LOIP =  GDP + XCR + INFLP + INFLM + M2
4) LOIP =  GDP + XCR + INFL + M2P + M2M
Using these models in ARDL, the long-run cointegration is not significant.
The models without Log of the dependent variable (OIP) are
1) OIP =  GDPP + GDPM + XCR + INFL+ M2
2) OIP =  GDP + XCRP+ XCRM +INFL + M2
3) OIP =  GDP + XCR + INFLP + INFLM + M2
4) OIP =  GDP + XCR + INFL + M2P + M2M
Using these models now yielded significant long-run cointegration.
Advice
Is it allowed to use non-log dependent variable data?
Relevant answer
Answer
I did not read what has been written above. When I saw that this is from Chief Obaka, I thought I would give an answer, since this is my area of expertise.
As long as your dependent variable is not I(2) or I(0), you can, but you have to be careful in your interpretation of the results.
  • asked a question related to Variability
Question
3 answers
I am relatively new to Vensim and seem unable to find a simple function that calculates the difference between the value of an auxiliary variable at the current time and the value of that same variable at a certain time in the past (previous steps);
some function like this, perhaps: y = (Variable at time t - Variable at time t-n)?
Relevant answer
Answer
Hello! I am also stuck on same question. Dear Niloofar Safavian did you find the answer?
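If I recall the Vensim built-ins correctly, this kind of lagged comparison is usually written with DELAY FIXED, e.g. `Variable - DELAY FIXED(Variable, n, initial value)`; please verify against the Vensim documentation. A generic sketch of the same computation in Python, for reference:

```python
def lagged_difference(series, n, initial=0.0):
    """y[t] = x[t] - x[t-n]; before n steps have elapsed, compare
    against the supplied initial value (mirroring a fixed delay)."""
    return [x - (series[t - n] if t >= n else initial)
            for t, x in enumerate(series)]

values = [10.0, 12.0, 15.0, 14.0, 18.0]
print(lagged_difference(values, 2, initial=10.0))  # [0.0, 2.0, 5.0, 2.0, 3.0]
```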
  • asked a question related to Variability
Question
2 answers
How do I use entropy to find the variability in electricity consumption shapes (time series data)?
Relevant answer
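One common approach (my suggestion, not taken from this thread) is to treat each interval's share of the total consumption as a probability and compute the Shannon entropy of the resulting shape; a flat profile scores the maximum, while concentrated consumption scores lower:

```python
import math

def shape_entropy(profile):
    """Shannon entropy (bits) of a consumption profile, treating each
    interval's share of the total as a probability. A flat profile gives
    the maximum, log2(len(profile)); concentrated consumption gives less."""
    total = sum(profile)
    shares = [v / total for v in profile if v > 0]
    return -sum(p * math.log2(p) for p in shares)

flat  = [1.0] * 8            # uniform use across 8 intervals
peaky = [7.0] + [1.0] * 7    # most consumption in one interval

print(round(shape_entropy(flat), 3))   # log2(8) = 3.0
print(round(shape_entropy(peaky), 3))  # lower for the peaky shape
```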
  • asked a question related to Variability
Question
6 answers
Any suggestions on how to handle and solve the endogeneity problem in a multivariate probit model? The choice variables are, for example, Y1, Y2, and Y3, which are binary variables. The independent variables include X1, X2, X3, and X4; however, X3 is also affected by X2 and X5.
Relevant answer
Answer
thanks
  • asked a question related to Variability
Question
14 answers
I have panel data and am using a fixed-effects model which contains a dummy variable specific to the region of the countries [for instance, for Latin American countries I set this dummy variable LA to 1]. Since this dummy is time-invariant, when I estimated the fixed-effects model, Stata dropped the dummy due to a collinearity issue. Can I still estimate the coefficient of this dummy in some other way? Please guide me.
Relevant answer
Answer
Yes, you can estimate them. You can do a two-stage regression. In the first stage, obtain the within estimation and compute y_{it} - x_{it}*Beta_within = r_{it}. In the second stage, take the individual means of these residuals r_{it} and regress them on the time-invariant variables and the individual means of the time-variant regressors.
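A noise-free toy version of that two-stage procedure (simplified to a single time-varying regressor, so the estimates come out exact; with real data, the usual caveat applies that the time-invariant regressor must be uncorrelated with the fixed effects):

```python
import statistics

# Toy panel: 3 individuals, 3 periods, generated WITHOUT noise so the
# two-stage estimates come out exactly: y = 2*x + 3*z_i + alpha_i, where
# alpha (the fixed effect) is constructed to be uncorrelated with z.
alpha = {0: 1.0, 1: -4.0, 2: 0.0}   # unobserved individual effects
z     = {0: 1.0, 1: 2.0, 2: 4.0}    # time-invariant regressor
x     = {(i, t): float(i + 2 * t + 1) for i in range(3) for t in range(3)}
y     = {k: 2.0 * x[k] + 3.0 * z[k[0]] + alpha[k[0]] for k in x}

# Stage 1: within (fixed-effects) estimator for the time-varying regressor
def demean(d):
    means = {i: statistics.mean(d[(i, t)] for t in range(3)) for i in range(3)}
    return {k: v - means[k[0]] for k, v in d.items()}

xd, yd = demean(x), demean(y)
beta_within = sum(xd[k] * yd[k] for k in x) / sum(xd[k] ** 2 for k in x)

# Stage 2: regress the individual means of the residuals y - x*beta on z
r_mean = {i: statistics.mean(y[(i, t)] - beta_within * x[(i, t)] for t in range(3))
          for i in range(3)}
zs = [z[i] for i in range(3)]
rs = [r_mean[i] for i in range(3)]
mz, mr = statistics.mean(zs), statistics.mean(rs)
gamma = (sum((a - mz) * (b - mr) for a, b in zip(zs, rs))
         / sum((a - mz) ** 2 for a in zs))

print(beta_within, gamma)  # recovers the coefficients 2.0 and 3.0
```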
  • asked a question related to Variability
Question
10 answers
I found that there is an inverted-U-shaped relationship between my moderating variable and the dependent variable (y = b0 + b1*m + b2*m^2). Now I would like to investigate the moderating role of m in the relationship between x and y.
So, which of the following equations should I use for moderation?
i. y = b0 + b1*x + b2*m + b3*x*m
ii. y = b0 + b1*x + b2*m^2 + b3*x*m^2
iii. or any other suggestions?
Thank you.
Relevant answer
Answer
Did you see Haans, Pieters and He (2016 SMJ)?
  • asked a question related to Variability
Question
9 answers
I am using the complex sampling analysis method within SPSS. I would like to use Cox regression for my variable under complex samples, as my variable has a prevalence rate greater than 10%, so logistic regression should not be used. When using Cox regression under complex sampling analysis, is robust variance already controlled for?
Relevant answer
Answer
Thank you for your comment.
In fact, since 2015, the treatment of heteroskedasticity has changed, and it has now become almost mandatory (which, I think, is why the peer reviewer asked you about it). Happily, the latest versions of SPSS integrate it into Cox regression through sandwich estimators and, more importantly, HC estimators in general linear models.
Hope it helps,
Kind regards,
  • asked a question related to Variability
Question
4 answers
Dear Researcher Friends,
How can I perform a variability test of a research instrument with R or the RStudio software package?
Relevant answer
Answer
I assume you mean: how to test the "reliability". There are 3 kinds:
* Internal consistency reliability: this is measured with Cronbach's alpha.
* Test-retest reliability measures the correlation between scores from one administration of an instrument to another, usually within an interval of 2 to 3 weeks.
* Inter-rater reliability checks the degree of agreement among raters (i.e., those completing items on an instrument).
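For the first kind, Cronbach's alpha can be computed directly from the item scores. A minimal sketch with invented data (shown in plain Python rather than R, purely to make the formula explicit; in R the `psych::alpha` function does the same job):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. `items` is a list of
    columns, one per item, each holding one score per respondent."""
    k = len(items)
    item_variances = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_variances / statistics.variance(totals))

# Hypothetical scores: 3 items, 5 respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 3))  # about 0.886 for this toy data
```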
  • asked a question related to Variability
Question
12 answers
Hello everyone, I am currently working on my thesis, where I am analyzing whether some factors have an impact on purchasing behavior. Fifteen of the questions in my survey were Likert scale (1-5), and I would like to use them as independent variables. Total money spent would be the dependent variable, and I would also like to use income, gender, and some other variables as independent variables. What statistical analysis would you recommend to analyze my data? I converted my Likert-scale data into factor data with 5 levels, and now I am lost.
Can I use OLS, or would you recommend something else?
Any help would be really appreciated
Relevant answer
Answer
Hi Daniel:
You might begin with an exploratory factor analysis. If you get a two (or more) factor solution, you need to do some serious thinking about the items that form factors regarding whether or not you want to include them as variables. This is a theoretical problem. You should not combine items that belong to different factors in the same scale.
Assuming you are now analyzing your first, or only, factor, you can use either an index or a latent variable. If you use an index, a Likert-type scale, you should look at the distribution of scores and transform your index, especially if it is a dependent variable (meaning it has one or more arrows coming into it), so that it is roughly normal. If you dichotomize all items, you can use a Guttman-scale or cumulative-scale measure.
If you prefer to use a latent variable, use EQS, SAS, or some other structural equations modeling program to assess the fit of your items to a single latent variable. Statistics such as the root mean square residual and the goodness of fit index will give you insight into whether your items measure a single dimension. If you do this, use maximum likelihood estimation rather than least squares so you get t-tests for the significance of the item coefficients. Delete non-significant items, one by one, until all left are significant.
Good luck with your project, Best regards, Warren
  • asked a question related to Variability
Question
6 answers
Dear All,
If I have one categorical dependent variable and more than two continuous independent variables with a sample size of 12, then which test would be the most appropriate? Please kindly answer if you can.
Many Thanks
Relevant answer
Answer
EXACT LOGISTIC REGRESSION is used to model binary outcome variables in which the log odds of the outcome is modeled as a linear combination of the predictor variables. It is used when the sample size is TOO SMALL for a regular logistic regression (which uses the standard maximum-likelihood-based estimator) and/or when some of the cells formed by the outcome and categorical predictor variable have no observations. The estimates given by exact logistic regression do not depend on asymptotic results.
Thanks.
  • asked a question related to Variability
Question
13 answers
I have two variables that mediate the relationship between the IV and the DV. I would like to put them into a multiple mediation model. How should I decide whether it goes as serial mediation or parallel mediation? The mediators are correlated.
thank you in advance for any answers
Relevant answer
Answer
Dear Katarzyna I was looking for answer to the same question recently and the following section from a book (Hayes,2013) helped me to be clear;
“A distinguishing feature of the parallel mediation model is the assumption that no mediator causally influences another. Typically, two or more mediators that investigators locate as causally between X and Y will be correlated, if for no other reason than that they share a common cause, X, the causal agent of interest in the model itself. If they are correlated only because they are all affected by X, then they should be uncorrelated after accounting for this shared cause. Thus, estimating the partial correlation between two mediators after controlling for X is one way of examining whether all of their association is accounted for by this common cause. If two or more mediators in a multiple mediator model remain correlated even after adjusting for X, this suggests that either they share an additional common cause other than X, the remaining association is epiphenomenal, or one mediator affects another. It is the latter explanation that is the focus of serial mediation.”
(Bolin, J. H. (2014). Hayes, Andrew F.(2013). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression‐Based Approach. New York, NY: The Guilford Press. Journal of Educational Measurement, 51(3), 335-337.)
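The partial-correlation check Hayes describes, i.e. correlating the mediators after removing what X explains in each, can be sketched as follows (the data are invented so that the mediators correlate only through X):

```python
import statistics

def residuals(y, x):
    """Residuals from the simple regression of y on x (with intercept)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return [b - (my + slope * (a - mx)) for a, b in zip(x, y)]

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    return (sum((p - ma) * (q - mb) for p, q in zip(a, b))
            / (sum((p - ma) ** 2 for p in a)
               * sum((q - mb) ** 2 for q in b)) ** 0.5)

def partial_corr(m1, m2, x):
    """Correlation between m1 and m2 after removing the part of each
    that is explained by x."""
    return pearson_r(residuals(m1, x), residuals(m2, x))

# Hypothetical data: both mediators are driven by x, plus unrelated parts
x  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
m1 = [2.2, 3.8, 6.0, 8.0, 9.8, 12.2]   # roughly 2*x plus small deviations
m2 = [1.1, 2.1, 2.8, 3.8, 5.1, 6.1]    # roughly x plus small deviations

print(round(pearson_r(m1, m2), 3))       # raw correlation: near 1
print(round(partial_corr(m1, m2, x), 3)) # near 0 after controlling for x
```

A near-zero partial correlation is consistent with the parallel model; a substantial remaining correlation points toward the serial alternatives quoted above.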
  • asked a question related to Variability
Question
4 answers
social sciences and statistical studies
Relevant answer
Answer
Yes. More generally, the first k principal components (where k can be 1, 2, 3, etc.) explain the most variance that any k linear combinations of the variables can explain, and the last k principal components explain the least.
  • asked a question related to Variability
Question
3 answers
Hi all
I am designing a voltage source converter (VSC) for variable-speed constant-frequency operation, i.e. a grid-connected DFIG. What should the rating of the converter be, and which parameters of the grid and the generator influence the rating of the VSC?
Waiting for your good response
Relevant answer
Answer
The capacity of the VSC is usually about 30% of the machine rating, while its voltage rating is between 1.1 and 1.2 times the grid voltage.
  • asked a question related to Variability
Question
8 answers
It is well established that the EM signals from AGNs vary within days. What is the best model to explain this intriguing result?
Relevant answer
Answer
The "Black Hole" is the “smile” of a Cheshire Cat left behind when the Cat itself has vanished! Any and every cosmic phenomena for official astrophysics and cosmology can be attributed to the mischief of the Cosmic Cats and other such Monsters which are derivatives of one single truth, namely GR. But such an all-encompassing “truth” (like the truth of God) that can be invoked for anything, anywhere, anytime and everywhere, is an ideology and is no truth at all, or is of no use to explain any specific phenomenon!
The absolute truth of official cosmology - a finite and an abstract “spacetime” manifold with tangible physical/mechanical attributes that forms a so-called objective reality of the world; is an axiomatic mathematical fiction and an idealist phantasm that has no basis in material Nature. Please try some other perspectives of objective reality and of the universe: https://www.researchgate.net/publication/320065643_BIG-BANG_CREATED_OR_AN_ETERNAL_AND_INFINITE_UNIVERSE
The “variability problem of active galactic nuclei” and some other cosmic phenomena like GRBs, Gamma Ray Halo, Quasars etc., could be due to random encounter and annihilation reaction of chance accumulated patches (of various sizes, including nebula, stars, globular clusters etc.) of matter and antimatter throughout the galaxies, specially at the core.
  • asked a question related to Variability
Question
5 answers
While assessing intra-assay coefficient of variation (CV), it is generally advised that CVs should be calculated from the calculated concentrations rather than the raw optical densities.
If the raw optical densities of my samples are just in the middle of calibration curve (see attached illustration - situation A), the curve is quite steep here, and low variation of absorbance reflects low variation of concentrations, and reported intra-assay CV is satisfactory.
But in my case, all my samples are on the lower edge of calibration curve (see attached illustration - situation B). Although I have pretty low variations in raw optical densities of the duplicates (CVs about 5%), variations of the calculated concentrations are much worse (CVs about 20%), due to gentle slope of my calibration curve in this area.
If I report such high CV, it will be perceived as unacceptable, however, my pipetting technique is satisfactory.
What should I do? How should I report my intra-assay CV so that it is not misinformative for reviewers and readers of the article?
Relevant answer
Answer
Hi Michal,
In my opinion, you cannot express the CV from the OD values because it's inaccurate. The fact that you work in a zone of the standard curve with a low slope is itself part of the measurement imprecision and should be reflected in the results! It's not only about pipetting imprecision. The actual results are the concentrations, so the imprecision should be expressed relative to the concentrations. You can check this in the various guidelines from the FDA or EMA about ligand-binding assays such as ELISAs, which are enclosed with this message.
Also, my advice would be: (i) to dilute your samples less (if they are diluted) or to concentrate them by some preliminary pre-analytical step if possible; (ii) to increase the number of replicates (3 instead of 2, for instance) so that you may bring your CVs down.
I know ELISA can be frustrating sometimes and you can feel like being in a cul-de-sac but an inaccurate expression of CV is not helpful in my opinion.
Good luck,
Cécile
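The effect Michal describes is easy to reproduce numerically. A sketch with a hypothetical linear calibration curve (the slope and intercept are invented): a modest CV in the raw ODs roughly doubles once the readings sit near the flat lower end of the curve.

```python
import numpy as np

def cv_percent(duplicates):
    """Intra-assay CV (%) of a set of replicate measurements."""
    d = np.asarray(duplicates, dtype=float)
    return 100 * d.std(ddof=1) / d.mean()

# hypothetical linear calibration: od = 0.02 * conc + 0.05 (gentle slope, low end)
def od_to_conc(od):
    return (od - 0.05) / 0.02

duplicate_od = [0.095, 0.105]                    # duplicate readings near the blank
duplicate_conc = [od_to_conc(v) for v in duplicate_od]

print(round(cv_percent(duplicate_od), 1))    # → 7.1  (CV of the raw ODs)
print(round(cv_percent(duplicate_conc), 1))  # → 14.1 (CV of the concentrations)
```

The same absolute OD spread translates into a much larger relative spread in concentration because the subtraction of the intercept shrinks the mean - which is why the guidelines ask for CVs on concentrations.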
  • asked a question related to Variability
Question
4 answers
Control variables for self efficacy
Relevant answer
Answer
Hello 
Teacher self-efficacy has been linked to a variety of teaching behaviors and student outcomes such as achievement (Ashton & Webb, 1986; Gibson & Dembo, 1984; Ross, 1992) and motivation (Midgley, Feldlaufer, & Eccles, 1989; Woolfolk, Rosoff, & Hoy, 1990).
Teacher efficacy is also related to teachers' classroom management approaches (Henson, 2001; Woolfolk & Hoy, 1990).
Educational courses such as teaching methods and practicum courses may affect their efficacy beliefs. Also, teachers' and candidate teachers' attitudes towards the teaching profession can be an important indicator.
  • asked a question related to Variability
Question
5 answers
Suppose we have 2 variance breaks in the first variable and 3 variance breaks in the second variable. How do you estimate the model in that case?
Relevant answer
Answer
Hi,
I want to estimate a bivariate Markov Switching BEKK-GARCH model with rats. Any body know how to do it on Rats please?
  • asked a question related to Variability
Question
7 answers
I am doing multiple ELISAs (>150) and I'm analysing 3 factors within human plasma, which requires ~52 plates each factor.
I have a quality control on each plate and run it in duplicate. This control is always the same. I don't have positive controls with a known high and low concentration.
For 2 of the factors with a 1/30 dilution, there is less than 10% variance across the plates. For one of the factors with a 1/3 dilution, the variance is ~30%. This is very high and the accepted cut off is generally <10%.
I don't have time to repeat the experiments at this stage. I am looking for a way to normalise the data so an increase in the quality control from one plate doesn't lead to an observed increase in the factors I'm testing. 
Currently, I have been dividing the plate samples with the respective plate controls. This seems to be working at the moment, but is this a correct method?
Thank you in advance.
Relevant answer
Answer
Just as an update, and thanks for the answers :)
I've spoken to a statistician, and to get a normalised value, I can divide my plate sample by the respective plate control and then multiply by the total mean across all plates I've done so far. This will bring the plate samples and mean closer to the true mean. This doesn't account for individual plate variability across the samples though, but it does decrease the variability seen when plotting the data. 
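The statistician's recipe - divide each plate's samples by that plate's control, then multiply by the grand mean of the controls - can be sketched as follows (all numbers made up):

```python
import numpy as np

# hypothetical measured concentrations: 3 plates, 4 samples each,
# plus one control value per plate (same control material on every plate)
samples = np.array([[10., 12.,  8., 11.],
                    [13., 15., 10., 14.],
                    [ 9., 11.,  7., 10.]])
controls = np.array([5.0, 6.5, 4.5])   # plate-to-plate drift of the control

grand_mean_control = controls.mean()
# divide each plate by its own control, then rescale by the overall control mean
normalised = samples / controls[:, None] * grand_mean_control
print(np.round(normalised, 2))
```

After this step every plate's control maps to the same value (the grand mean), so a drift in the control no longer appears as a drift in the samples - though, as noted, it cannot remove variability within a plate.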
  • asked a question related to Variability
Question
15 answers
A perennial question from my students is whether or not they should normalize (say, 0 to 1) a numerical target variable and/or the selected explanatory variables when using artificial neural networks. There seems to be two camps: those that say yes, and those that say it is unnecessary. But what is your opinion ....and importantly, why?
Relevant answer
Answer
It is better to normalize data in order to have the same range of values for each of the inputs to the ANN model. This guarantees stable convergence of network weights and biases.
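A minimal sketch of the min-max normalization being discussed (numpy, made-up feature matrix). One caveat worth adding: the minima and maxima should be computed from the training set only and re-used unchanged for validation and test data.

```python
import numpy as np

def minmax_scale(x, lo=0.0, hi=1.0):
    """Rescale each column of x to the [lo, hi] range."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return lo + (x - xmin) * (hi - lo) / (xmax - xmin)

# two inputs on very different scales, as often fed to an ANN
features = np.array([[100., 0.2],
                     [300., 0.8],
                     [200., 0.5]])
print(minmax_scale(features))
```

Bringing all inputs to a common range keeps no single feature from dominating the initial weight updates, which is the stability argument made above.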
  • asked a question related to Variability
Question
6 answers
I'm interested in determining the effect size (ES) for Friedman's test. Let's say that there are k levels of the independent variable and that the sample size is m. I could not find references for ES for this test, so any help will be very appreciated. :)
Relevant answer
Answer
Hi Milos,
Find the file attached; you will find answers to your queries there.
Good Luck!!
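For completeness: a commonly used effect size for Friedman's test is Kendall's W = χ²_F / (m(k−1)), which runs from 0 (no agreement across subjects) to 1 (complete agreement). A numpy sketch with invented scores, assuming no ties within a subject's row:

```python
import numpy as np

def kendalls_w(data):
    """Kendall's W as an effect size for Friedman's test.

    data: (m subjects) x (k conditions); W = chi2_F / (m * (k - 1)).
    Assumes no ties within a subject's row.
    """
    data = np.asarray(data, dtype=float)
    m, k = data.shape
    ranks = data.argsort(axis=1).argsort(axis=1) + 1   # within-row ranks, 1..k
    col_sums = ranks.sum(axis=0)
    chi2 = 12.0 / (m * k * (k + 1)) * np.sum(col_sums**2) - 3 * m * (k + 1)
    return chi2 / (m * (k - 1))

# hypothetical scores for 5 subjects under 3 conditions
scores = np.array([[1.0, 2.0, 3.0],
                   [1.1, 2.2, 3.1],
                   [0.9, 2.1, 2.9],
                   [1.2, 1.9, 3.2],
                   [1.0, 2.0, 3.3]])
print(round(kendalls_w(scores), 3))   # → 1.0 (every subject ranks the conditions identically)
```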
  • asked a question related to Variability
Question
5 answers
What is the best statistical test to evaluate the effect of an independent variable of interest among other independent variables that have a partially similar effect (effect modification), when the outcome is an ordinal dependent variable?
Relevant answer
Answer
Stéphane Breton
THANK YOU SO MUCH
  • asked a question related to Variability
Question
2 answers
A polynomial can have constants, variables and exponents,
but never division by a variable.
IRR/NPV equation involves division by a variable. That being the case, how can it be a polynomial?
Relevant answer
Answer
Thanks Graham. Appreciate your answer.
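For readers with the same doubt, the apparent contradiction dissolves with a change of variable:

```latex
\mathrm{NPV}(r) \;=\; \sum_{t=0}^{T} \frac{CF_t}{(1+r)^{t}},
\qquad x := \frac{1}{1+r}
\;\;\Longrightarrow\;\;
\mathrm{NPV} \;=\; \sum_{t=0}^{T} CF_t\, x^{t}.
```

So while NPV is not a polynomial in r, it is an ordinary polynomial in x, and finding the IRR (NPV = 0) is a polynomial root-finding problem: each admissible root x* gives r* = 1/x* − 1.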
  • asked a question related to Variability
Question
3 answers
I have three simultaneous equations (in a panel data analysis). Only one equation has a lagged endogenous variable as an explanatory variable. Can I apply 3SLS to these simultaneous equations? Please answer with a reference.
Relevant answer
Answer
I think the method suggested by Adriana would cover your case nicely, otherwise I would suggest GMM estimators which you find explained in any major econometric textbook. Just be careful in choosing the correct instrument variables.
  • asked a question related to Variability
Question
3 answers
The independent variable is parents' occupation (father's occupation and mother's occupation) with five responses for each: freelancer, business, employee, pensioner, unemployed.
While the dependent variable is Entrepreneurial Intentions of the student on a 5 range scale.
I want to find a relationship between parents' occupation (both father's and mother's) and entrepreneurial intentions. I hope it makes sense ;) Can anyone please advise how I can do this in SPSS?
Relevant answer
Answer
Thank you both for your suggestions, but I am still unable to get the desired results, so I just compared the means of both father's occupation and mother's occupation with entrepreneurial intentions and combined the mean of each category. The category showing the higher mean indicates higher entrepreneurial intentions. For instance, the category "business" shows the highest mean, so I conclude that students whose parents are involved in business activities show higher entrepreneurial intentions.
I hope this is correct.
  • asked a question related to Variability
Question
1 answer
In multivariate analysis I have two variables
Relevant answer
Answer
Hi,
you can use vector autoregression in this case. However, you can use Vector error correction model only if there is cointegration between the two variables. If not, you can simply use VAR model or unrestricted VAR. You need to use stationary variables in a VAR or VEC model. If you can provide more information about the nature of variables both DV and IVs, I can provide more info about it.
Good Luck
Thushara
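As an illustration of the VAR idea, here is a minimal two-variable VAR(1) estimated by OLS (a numpy sketch with simulated data; a real analysis would use a packaged VAR/VECM routine and a cointegration test first, as Thushara describes):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
A = np.array([[0.5, 0.1],     # true VAR(1) coefficient matrix (stationary)
              [0.2, 0.3]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS estimate: regress y_t on y_{t-1} (intercept omitted for simplicity)
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))     # close to the true A
```

Each equation of the VAR is just an OLS regression of one variable on the lags of all variables, which is why stationarity of the series matters for the estimates to be meaningful.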
  • asked a question related to Variability
Question
1 answer
In a multiple regression, we have three dependent variables (A, B and C) and we have five independent variables (V, W, X, Y and Z). 
At the time of checking the collinearity statistics of A, B and C, the values of tolerance and VIF of the five independent variables (V, W, X, Y and Z) are the same for all dependent variables (A, B and C).
 As an example
tolerance and VIF of  (V, W ,X,Y and Z) with A are (0.716, 0.964, 0.645, 0.741,0.862) and (1.397,1.037,1.551,1.349,1.160).
tolerance and VIF of (V, W ,X,Y and Z) with B are (0.716, 0.964, 0.645, 0.741,0.862) and (1.397,1.037,1.551,1.349,1.160).
tolerance and VIF of (V, W ,X,Y and Z) with C are (0.716, 0.964, 0.645, 0.741,0.862) and (1.397,1.037,1.551,1.349,1.160).
The five point likert scales are used to collect data for all the dependent and independent variables.
From the above result what types of interpretation can I make on A, B and C?  
Relevant answer
Answer
Surajit -
Thinking about continuous data, it sounds like you might be referring to multivariate, multiple regression.  Multivariate because of the multiple y-values (dependent variables) of interest (A, B, and C), each with the same 'independent' multiple predictors.  The collinearity would be with regard to the predictors in each case.  It does not sound reasonable that the VIFs would be the same in each case.  (I do not know what "tolerance" means here.)  This is not familiar to me, and since you mentioned likert scales, which is also not my area, I may just not understand what you are doing, but my guess would be that you are using your software incorrectly.  Perhaps data and/or commands are input incorrectly, or there is a software error.  Your repetitive results seem to me to indicate such a problem.  If you did not write the software, then I think you need to study it to see exactly what it is doing. 
By the way, as I said, not my area, but for likert data, wouldn't you use Poisson regression?   See
and links at the bottom of those notes for a discussion, but not with regard to multivariate regression.   I don't think that they mention such a thing. 
Cheers - Jim
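A small numerical check that tolerance and VIF are functions of the predictors only - the dependent variable never appears in the computation (numpy sketch, made-up data):

```python
import numpy as np

def vif(X):
    """VIF for each column of predictor matrix X (no dependent variable needed)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        # regress predictor j on the remaining predictors (with intercept)
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(X)), others])
        beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        resid = X[:, j] - Z @ beta
        r2 = 1 - resid.var() / X[:, j].var()   # tolerance = 1 - r2
        out.append(1.0 / (1.0 - r2))           # VIF = 1 / tolerance
    return np.array(out)

rng = np.random.default_rng(3)
v = rng.normal(size=200)
w = 0.6 * v + rng.normal(size=200)   # correlated with v
x = rng.normal(size=200)
X = np.column_stack([v, w, x])
print(np.round(vif(X), 2))           # same values whichever of A, B, C is the outcome
```

Since no y enters `vif(X)`, any model sharing these predictors reports identical collinearity statistics, which matches the behaviour described in the question.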
  • asked a question related to Variability
Question
1 answer
kj
Relevant answer
Answer
I'm not familiar with state-space models so I can't help you directly, though I looked up some articles. I wonder if these reads might be of any help to you?
  • asked a question related to Variability
Question
11 answers
Pearson correlation matrix, or would it be better to use multiple regression analysis to measure the relationship between dependent and independent variables?
Relevant answer
Answer
What is the best statistical analysis can be used to measure the relationship between dependent and independent variables?
Which statistical analysis to use depends on the hypotheses you have developed.  A Pearson correlation matrix measures the correlations among all the interval/ratio variables individually.  Typically we don't coin hypotheses to test with Pearson correlations, as most of the individual variable-to-variable relationships will be significant, and some variable-to-variable relationships/hypotheses might not make sense based on the literature reviewed.
If you have several independent variables and 1 dependent variable, you can use multiple regression analysis.  If you have more than 1 dependent variable, you can consider multivariate analyses like Structural Equation Modeling (SEM).  Wishing you all the best.
  • asked a question related to Variability
Question
2 answers
Let's say, I have modeled variable x with forcing data from the time period 1990-2000. The observed ‘x’  I have is spanning only the time period 2000-2010. How can I make an objective comparison of the modeled and observed x ? Or, should I stick to visual comparison, which is not only computationally inefficient but also lacks objectivity.
Relevant answer
Answer
Hi Umar,
Thanks. The post seems to have something to help me with my problem, or will at least lead me in the right direction. I use a 2D model to simulate variables such as dissolved oxygen in seawater at a given depth. I can add more details if required.
  • asked a question related to Variability
Question
3 answers
Which test would you recommend to look at a correlation between two continuos variables, but adjusted for a co-variate?
Relevant answer
Answer
tests for Partial correlation
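A partial correlation can be computed directly from the three pairwise correlations. A sketch with simulated data in which x and y are related only through the covariate z:

```python
import numpy as np

def partial_corr_formula(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(4)
z = rng.normal(size=1000)
x = z + rng.normal(size=1000)   # x and y share only the covariate z
y = z + rng.normal(size=1000)

print(round(np.corrcoef(x, y)[0, 1], 2))        # raw correlation, around 0.5
print(round(partial_corr_formula(x, y, z), 2))  # near 0 after adjusting for z
```

The raw correlation is entirely induced by z, so the adjusted (partial) correlation collapses toward zero - exactly the adjustment the question asks for.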
  • asked a question related to Variability
Question
4 answers
I have a dataset for several years, I want to find that how percentages of women and men select between yes or no in different years. Is it necessary to calculate year by year in SPSS or there is a simpler way? can I cross tabulate?
Relevant answer
Answer
In the first column in SPSS, enter the years; in the second and third columns, enter the answers of the women and men respectively. After that, you need to assign the years as weight cases, which is under the Data menu in SPSS. Finally, when you get the frequency output for the genders, it will give you the percentage of women and men for each year.
  • asked a question related to Variability
Question
5 answers
Is there any approach to reduce inter-assay variability of home-made sandwich ELISA except good pipetting practice?
Thank you for any suggestion!
Relevant answer
Answer
There is always some variation, but if you include the same standard or control samples in every assay, you can normalize against them. The absorbances can differ but the relative levels of your analyte in a given sample compared to standard/control must be the same in every assay.
  • asked a question related to Variability
Question
11 answers
The form of the function is
f=(convex in variable x1)+(convex in variable x2)-(convex in variable x3)
Relevant answer
Answer
 
This is a so-called DC (difference of two convex functions) program. See, e.g.,
R. Horst and NV Thoai: D.C. Programming: Overview ,  Journal of Optimization Theory and Applications 103 (1999) 1-43. 
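Concretely, with f₁, f₂, f₃ convex in their own variables, the function already has the DC form f = g − h:

```latex
f(x_1, x_2, x_3)
\;=\;
\underbrace{f_1(x_1) + f_2(x_2)}_{g(x)\ \text{convex}}
\;-\;
\underbrace{f_3(x_3)}_{h(x)\ \text{convex}} .
```

Each fᵢ, viewed as a function of the full vector (x₁, x₂, x₃), remains convex, and a sum of convex functions is convex; so g and h are both convex and standard DC algorithms (e.g., DCA) apply.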
  • asked a question related to Variability
Question
1 answer
Is there anything special to coding an instrumental variable in LISREL or is it coded like all others
Relevant answer
Answer
Hi Roxanne,
and 
JÖRESKOG, K.;SÖBOM, D.; TOIT, M. e TOIT, S. LISREL 8: New Statistical Features. Lincolnwood: SSI, 2000.
Good luck!
  • asked a question related to Variability
Question
15 answers
I am looking for a concise example of three pairwise independent (non-degenerate) random variables that are not mutually independent.
Thanks.
Relevant answer
Answer
Thanks, Mohamed and George.
If you replaced the term random variable by operator and the term independent by commuting, would you have an equivalent problem?
I'm thinking in terms of contextuality in quantum theory,
in which case commuting operators are simultaneously measurable. For matrices this would be simultaneously diagonalizable.
Would Lie algebra identities apply?
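Back to the original question: the standard concise example (often attributed to Bernstein) takes two independent fair bits and their XOR. A short enumeration verifying pairwise but not mutual independence:

```python
from itertools import product
from fractions import Fraction

# X, Y independent fair bits; Z = X XOR Y
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
p = Fraction(1, 4)   # each (x, y) pair is equally likely

def prob(event):
    return sum(p for o in outcomes if event(o))

# every pair of the three variables is independent ...
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product((0, 1), repeat=2):
        joint = prob(lambda o: o[i] == a and o[j] == b)
        assert joint == prob(lambda o: o[i] == a) * prob(lambda o: o[j] == b)

# ... but the triple is not: P(X=1, Y=1, Z=1) = 0, not 1/8
print(prob(lambda o: o == (1, 1, 1)))   # → 0
```

Z is a deterministic function of (X, Y), so the three variables cannot be mutually independent, yet each pair is.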
  • asked a question related to Variability
Question
1 answer
Please suggest a way to do this integral. All variables are positive. "i" is the imaginary unit. Thanks.
Relevant answer
Answer
A hint:
  • asked a question related to Variability
Question
4 answers
I have three time series which should NOT be classified as independent of each other. I would like to examine the relationship between the time series, but a method such as regression assumes independence. How can I calculate this with dependent variables?
Relevant answer
Answer
Add lagged dependent variables, so that y_t is a function of y_{t-1}, y_{t-2}, etc.
  • asked a question related to Variability
Question
2 answers
Good evening, dear group members. Could anyone please answer my doubt about the calculation of the values of exogenous/endogenous variables in the measurement model?
We can calculate the beta coefficient if we know the values of both the independent variable (exogenous/endogenous variable) and the dependent variable (item).
But here we do not know the value of the exogenous/endogenous variable. So how do we determine the beta coefficient between the exogenous/endogenous variable and its item?
Relevant answer
Answer
Thank you, for the reply sir. Surely, I would watch those videos for a better clarity. Actually, my question was, "Whether there is any method for the "manual calculation" of the "Latent variable" score in SEM?" Actually, it is said that, in CB-SEM, the "Beta values" are generated automatically by AMOS, but there is no mention of "Latent variable scores".
              It is also said that we should use the "Weighted mean" to calculate the "Latent variable score" in the "PLS-SEM". 
              But, my doubt is, "What is the underlying assumption in the "CB-SEM" to determine the values of the "Latent Variable score"?" 
Thanks & Regards,
Bharath Shashanka.K.
  • asked a question related to Variability
Question
3 answers
Hey,
Everything I've measured was based on a 5 point Likert scale.
Now I want to address the effect of variable X(trust) on the different dependent variables Y(privacy), which is an average of four dependent variables, Y1, Y2, Y3, Y4. 
X -> Y1,Y2,Y3,Y4 
Which tests are appropriate to do so? Hope you can clarify.
Relevant answer
Answer
It depends on your research objective, i.e. the arrow "->" within your X -> Y1, Y2, Y3, Y4.  If your "->" is to analyze the differences or variance among different groups on certain variables, including multiple dependent variables (Y1-4), you can use MANOVA.  MANOVA and the other ANOVAs measure the differences/variance among groups on some variables.
If your "->" is to measure how X influences/impacts/predicts the multiple dependent variables (Y1-4), then SEM can be used.
  • asked a question related to Variability
Question
1 answer
While performing EFA, if two variables load on the same dimension in the "Pattern Matrix", what does it mean?
Relevant answer
Answer
They are both an index of the same latent construct (the "component" represented by that dimension). If your EFA is genuinely exploratory, they help you to interpret the dimension (i.e. component, construct) by finding the similarity behind both variables.
  • asked a question related to Variability
Question
5 answers
Please let me know whether it is possible to represent graphically a 6-d function, where the dependent variable (y) depends on six variables (x1, x2, x3, x4, x5 and x6). Thanks  
Relevant answer
Answer
@Paul: "It is relatively easy to, for example, locate directions of greatest variation or to identify subsets and groups." - if this was the task then I would go for PCA.
  • asked a question related to Variability
Question
1 answer
I have run a repeated-measures ANOVA and got strange output for sphericity: all values are either zero or one, for all of the variables and interactions. What does that mean? (Does sphericity not hold, or might something else be wrong? All the other output tables seem fine.) Thanks!
Relevant answer
Answer
P.S. For instance: zero values in the df's and a dot in the significance values; chi-square is also zero, and the rest are ones.
  • asked a question related to Variability
Question
5 answers
I have three biomarkers that are used to predict postoperative death. One of the reviewers commented that I should adjust the biomarkers for comorbidity (CIRS-G score). I tried hierarchical binary logistic regression, but I am not sure that this is the right way.
Is there a way to adjust the biomarker values for every patient individually with the (continuous) CIRS score through the Compute Variable option? And which syntax would it be?
Thank you in advance.
Relevant answer
Answer
ok Danica, these AUCs look pretty high and good. But you should also look at the estimates and p-values of your biomarkers and the CIRS score in the logistic regression. Are both significant? Most importantly, is the coefficient of the biomarker still significant?  If this holds true, then you can claim a prognostic role of the biomarker "after adjustment for the CIRS score".
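The adjustment the reviewer asks for is normally done by entering the biomarker and the CIRS score together in one logistic model, not by transforming the biomarker per patient. A self-contained sketch (Newton-Raphson logistic regression on simulated data; the variable names and effect sizes are invented):

```python
import numpy as np

def logit_fit(X, y, n_iter=25):
    """Logistic regression by Newton-Raphson; X must include a constant column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])                  # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))   # Newton step
    return beta

rng = np.random.default_rng(5)
n = 2000
biomarker = rng.normal(size=n)
cirs = rng.normal(size=n)                       # hypothetical comorbidity score
logit = -1.0 + 0.8 * biomarker + 0.5 * cirs     # true model generating outcomes
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), biomarker, cirs])
beta_hat = logit_fit(X, y)
print(np.round(beta_hat, 2))   # biomarker effect, adjusted for the CIRS score
```

If the biomarker coefficient stays significant with the CIRS score in the model, that is what is reported as the effect "after adjustment for CIRS", as the answer above describes.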
  • asked a question related to Variability
Question
3 answers
Hi all, I am looking for datasets containing variables of environmental behavior on a country level. Specifically, on any kind of behavior which can affect the environment, both positively and negatively.
For example, recycling rates (per country), percentage of people who commute by bicycle, amount of plastic used, % of renewable energies, deforestation in the last decades etc. Ideally, data should be available for at least 30 countries. It is easy to find data on carbon emissions, but I am struggling to find other variables. Any suggestions?
Relevant answer
Answer
FAOSTAT has lots of data for countries around  the world and the data are easy to download. I am not sure that they have all you want, but it is a source. Check the link.
  • asked a question related to Variability
Question
8 answers
My independent variables are transactional and transformational leadership styles, and my dependent variables are "task performance" and "organizational citizenship behavior". Many scholars have used the typical mediators like organizational commitment, job satisfaction, motivation, etc., so I want to use something apart from these.
Relevant answer
Answer
It is better to add (Psychological Contract Breach) as intermediate factor for your project
good luck
  • asked a question related to Variability
Question
5 answers
Hi,
I am currently examining the influence of speculation (I use index investment position as a proxy) on the stability of the commodity market. The structure of my analysis is as follows:
- I use a GARCH model to estimate the volatility of the crude oil price
- Next, I estimate a VAR model with the following variables: crude oil volatility, speculation and the following control variables: gold price, interest rate, inventory, and gobal economic activity index
I use first differences as not all variables are stationary in levels
Do you see any limitations or problems in this set up?
Now my first question: I have conducted all the usual checks to test the robustness of my VAR. It turns out that my residuals are non-normally distributed. What could be the reason for that? And does it affect the significance of further analysis?
Additionally I would like to do some granger causality testing. In particular I want to test whether speculation causes volatility or vice versa. However as I have a multivariate VAR model I haven't found a method to just test this bi-variate case (and at the same time considering the control variables). Any suggestions?
FYI: I am using R software
Thank you very much for your help.
Relevant answer
My understanding is that no statistical test is complete without information on the degrees of freedom of the model in relation to the dataset. You can take a look at some of Soumitra K. Mallick's mathematical statistical modelling papers on www.researchgate.net/Soumitra K. Mallick. However, as you will see in Mallick, Hamburger, Mallick (2016), with the development of the proof of the Millennium P vs. NP problem and the fundamental development of Functor Algebra Calculus (see also smallick@iiswbm.edu on Google), one can deal with such problems purely mathematically, because of the D-branes string theory properties of quantum genetic computer algorithms, which ultimately solve the Gauss-Markov problems underlying the hypothesis-testing fundamentals on which VARs, Granger causality tests, and impulse response analysis tests are based. While the latter two tests assume the sample statistic to be the population value, or at least that of asymptotically large samples, the normality in the Gauss-Markov theorem, especially asymptotically, is assumed away, I believe. Hence it is always statistically more correct to take pains to increase the sample size so that the strong law of large numbers can operate in making inferences, which again assumes some kind of Gaussian curvature of the fitted model at least. I hope this answers your question to some extent.
Soumitra K Mallick
for Soumitra K Mallick, Nick Hamburger, Sandipan Mallick
  • asked a question related to Variability
Question
5 answers
Is there any dynamic preprocessing technique that can be applied to a big data stream with variable (different) types of data?
Relevant answer
Answer
You need to preprocess the dataset using the available data preprocessing tools, such as data cleaning and transformation. After that, invoke your incremental algorithm. I recommend reading the literature for a better understanding, as this issue is similar to one of the questions you've asked before.
HTH.
Samer 
  • asked a question related to Variability
Question
2 answers
In the analytical calculation of CFI obtained by LISREL or AMOS software, if the correlation coefficient between the latent variables is greater than +1, what does this correlation mean and how can it be described?
Relevant answer
Answer
thank you so much for your attention professor
  • asked a question related to Variability
Question
6 answers
I have never seen this notation, but I know E(X|Y). In this link I encountered the notation X|Y, and it is said that X|Y has mean E(X|Y). It is also given as a property that if X and Y are independent then X|Y = X. I only know that if X and Y are independent then E(X|Y) = EX, and if X is in \sigma(Y) (the sigma-algebra generated by Y) then E(X|Y) = X.
In fact, the notation X|Y has made me very confused! What exactly is the definition of X|Y?
Thanks so much!
Relevant answer
Answer
A  well-known example is the case of classical linear regression: If we have a model X=aY+b +error, then X|Y is a normal random variable with mean aY+b and standard deviation equal to that of the error, so long as Y is observed.
  • asked a question related to Variability
Question
4 answers
I have one continuous dependent variable and three categorical independent variables(having 2, 2 and 3 categories). My questions are:
1. would multiple linear regression be good to explain the relationship?
2. if yes, how can I test the linearity of categorical variables? Or should I not be bothered about linearity in the case of categorical data?
3. if multiple regression is not good, is there an alternative test?
4. please provide any guide/tutorial/paper/video about multiple regression with only categorical data.
Regards,
Darshan
Relevant answer
Answer
1-Yes.
2-The latter.
3-You might be suggested to use ANOVA instead, but the difference is very little if not nonexistent.
4-Just do regression with care. Properly code your categorical predictors and interpret. You can search for videos on two/three-way ANOVA because you might not find a video about multiple regression with categorical predictors.
Rebecca, I cannot see how ANCOVA is related here.
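A sketch of the coding step mentioned above: dummy-code the categorical predictors and fit ordinary least squares (numpy, simulated data with known group effects; names and values are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
g2 = rng.integers(0, 2, n)   # binary factor (2 levels); already 0/1 coded
g3 = rng.integers(0, 3, n)   # 3-level factor

# dummy-code the 3-level factor with its first level as the reference category
d3 = np.column_stack([(g3 == 1).astype(float), (g3 == 2).astype(float)])
X = np.column_stack([np.ones(n), g2, d3])

# simulate a continuous outcome from known group effects
y = 1.0 + 0.5 * g2 + 1.5 * (g3 == 1) - 1.0 * (g3 == 2) \
    + rng.normal(scale=0.3, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))   # intercept, g2 effect, g3 level-1 and level-2 effects
```

Each dummy coefficient is the mean difference from the reference category, which is why linearity is not an issue for categorical predictors - this is numerically the same model a two/three-way ANOVA fits.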
  • asked a question related to Variability
Question
2 answers
Hi,
I am using IBM-SPSS V24. I am experiencing a weird problem. When I copy any variable from 1 data set to another, the copying seems to work fine; but it reduces (and in some variables increases) the number of valid cases. For instance,  in the original data set, I have 1028 valid cases. But when I copy it to a new data set the number of cases reduces to 978.
The data set is in this form, i.e., 1=Yes, and 2=0. 
Any help in this regard would be highly appreciated.
Thank you.
Mustunsir
Relevant answer
Answer
No, it's all SPSS. And unfortunately, I am unable to make sense of the data that copying adds or drops. I have a sample size of 6000 in total, and I am copying from one SPSS file to another.
  • asked a question related to Variability
Question
1 answer
I need to mesh a cuboid with variable mesh size. In the centre region (2x8x1 mm) I need hexahedral elements of 0.05 mm, with a coarser mesh in the outer region.
For this, I have made two separate volumes, but how do I combine the two bodies so that there is a gradual change of element size at the boundary of the inner cuboid?
Relevant answer
Answer
You have to satisfy mesh topology issue. This can be done through using the same mesh divisions on both sides. Regarding gradual change of element size you can apply the mesh command with the aspect ratio value.
  • asked a question related to Variability
Question
2 answers
In most voltage control loop applications, the inputs to the PI controller are the reference voltage and the actual voltage, and the output is usually a current signal. How does the controller convert the difference of two voltage signals into a current signal?
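One way to see it: the PI gains carry the units, so a voltage error multiplied by Kp [A/V], plus the integrated error times Ki [A/(V·s)], is already a current reference. A minimal discrete-time sketch (all gain values invented):

```python
class PIController:
    """Discrete PI controller: converts a voltage error to a current reference.

    The gains Kp [A/V] and Ki [A/(V*s)] carry the unit conversion, so the
    output is interpreted as a current even though both inputs are voltages.
    """
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, v_ref, v_meas):
        error = v_ref - v_meas
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

ctrl = PIController(kp=2.0, ki=10.0, dt=0.001)
i_ref = ctrl.step(v_ref=230.0, v_meas=225.0)   # 5 V error
print(i_ref)   # → 10.05  (2*5 + 10*5*0.001)
```

A practical implementation would also clamp the output and add integrator anti-windup, which this sketch omits.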
  • asked a question related to Variability
Question
8 answers
I understand how passive imputation works. However, I want to condition one variable on another so that the other variable will have plausible values.
So here is my problem. I have variables for No. of HHM, No. of HHM going to school, and No. of HHM below 3 years old (just part of my other covariates). When I run mice I get implausible values (e.g. No. of HHM going to school > No. of HHM).
HHM = Household members
Thank you very much in advance.
Best,
Muns
Relevant answer
Answer
Hi, 
When you use the mice function, by default all other variables are used as predictors except the one we want to impute. To select specific variables for each prediction you have to edit the predictorMatrix by hand:
Ex: 
You have a dataset with 3 variables x, y and z, and you want to impute x using y, y using x, and z using x & y. In mice, each row of the predictorMatrix corresponds to the variable being imputed and each column marks its predictors, so the matrix would look like this:
pm <- matrix(c(0,1,1,1,0,1,0,0,0),3,3)
pm
[,1] [,2] [,3]
[1,] 0 1 0
[2,] 1 0 0
[3,] 1 1 0
Then you just have to add the matrix as a parameter of the mice function. 
mice(data = df, m = 5, predictorMatrix = pm)
see help(mice) for details. 
Regards. 
Charles-Édouard.