# Regression - Science topic

Explore the latest questions and answers in Regression, and find Regression experts.

Questions related to Regression

For a study (analysis of factors affecting sustainable agriculture...), most previous research has used techniques such as regression to analyze the data. To identify the effective factors, is it possible to use the exploratory factor analysis technique?

Hi,

How do I interpret a significant interaction effect between my moderator (Coh) and independent variable (Hos)? The literature states that Hos and my dependent variable (PDm) have a negative relationship. The literature also states that the moderator (Coh) has a positive relationship with the DV (PDm). My regression coefficient for the interaction effect is negative. Does this mean Coh is exacerbating the negative effect (i.e., making it worse) or weakening the effect (i.e., making it better)?

I have attached the SPSS output and simple slopes graph.

Thank you!
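A general way to reason about this (independent of the specific SPSS output, which is not reproduced here): in a model PDm = b0 + b1·Hos + b2·Coh + b3·Hos×Coh, the simple slope of Hos at a given level of Coh is b1 + b3·Coh. A minimal sketch with made-up, purely hypothetical coefficients:

```python
# Hypothetical coefficients (placeholders, NOT the values in the SPSS output):
b_hos = -0.40   # main effect of Hos on PDm
b_coh = 0.25    # main effect of Coh on PDm
b_int = -0.15   # Hos x Coh interaction

# Simple slope of Hos at a given level of Coh: b_hos + b_int * Coh
for coh in (-1.0, 0.0, 1.0):   # -1 SD, mean, +1 SD (with Coh standardized)
    slope = b_hos + b_int * coh
    print(f"Coh = {coh:+.1f} SD -> simple slope of Hos = {slope:+.2f}")
```

With b1 and b3 both negative, the slope of Hos becomes more negative as Coh increases, i.e. the negative effect is strengthened at higher Coh; the simple slopes graph should confirm which pattern holds for the actual estimates.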

Hello, I am trying to analyze factors that influence the adoption of technology, and while doing that, I am facing issues with rbiprobit estimation. I have seven years (2015-2021) of balanced panel data containing 2,835 observations. The dependent variable y1 (Adopt2cat), the endogenous variable "BothTechKnowledge," and the instrumental variable "SKinfoAdoptNew" all take values 0 and 1. Although the regression works, I am unsure how to include panel effects in the model.

I am using the following commands:

rbiprobit Adopt2cat ACode EduC FarmExp HHCat LaborHH LandSizeDec LandTypeC landownership SoilWaterRetain SoilFertility CreditAvail OffFarmCode BothTechAware IrriMachineOwn, endog(BothTechKnowledge = ACode EduC FarmExp HHCat LaborHH LandSizeDec LandTypeC landownership SoilWaterRetain SoilFertility CreditAvail OffFarmCode BothTechAware IrriMachineOwn SKinfoAdoptNew)

rbiprobit tmeffects, tmeff(ate)

rbiprobit margdec, dydx(*) effect(total) predict(p11)

If we do not add time variables (year dummies), can we say we have obtained a pooled panel estimation? I kindly request your guidance on both the panel and pooled panel estimation procedures. I have attached the data file for your kind consideration.
Thank you very much in advance.
Kind regards
Faruque

I have 4 groups in my study and I want to analyse the effect of treatment in the 4 groups at 20 time points. Which test should I choose?

I did principal component analysis on several variables to generate one component measuring compliance to medication but need understanding on how to use the regression scores generated for that component.
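One common way to use the component scores is simply as a new variable (predictor or outcome) in a follow-up regression. A minimal NumPy sketch, using simulated stand-in items rather than real compliance data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for several compliance items (NOT real data)
items = rng.normal(size=(200, 4))
items[:, 1] += 0.8 * items[:, 0]          # make two items correlated

# Standardize, then take first-principal-component scores via SVD
Z = (items - items.mean(axis=0)) / items.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[0]                         # one compliance score per person

# Use the score like any other variable in a follow-up OLS regression
outcome = 0.5 * scores + rng.normal(size=200)
X = np.column_stack([np.ones(200), scores])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept and slope on the component score:", beta)
```

This mirrors what SPSS does when it saves regression-method factor scores: each person gets one score, which then enters later models like any observed variable.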

How can I ensure random sampling for customer surveys when the sampling frame is unavailable but I need to run a regression?

I had a few quick questions regarding the output generated by FEAT statistics. I'm currently working with resting-state data and attempting to perform nuisance regression of CSF, WM, global signal, and motion parameters (standard + extended), and also to scrub volumes that exceed a specific threshold of motion, using FEAT Statistics.

To scrub specific volumes with excessive motion, I generated a confound.txt file that includes columns of 0s, each with a single 1 indicating the specific volume that needs to be scrubbed. I selected Standard + Extended Motion Parameters to apply the motion parameters generated during FEAT preprocessing. Additionally, I applied the CSF, WM, and global signal nuisance regressors under full model setup by selecting Custom (1 entry per volume) and including three separate .txt files, each containing one column of average values per volume (for CSF, WM, or Global).

Doing so generated the attached design.png and res4d image. Is this the correct way to perform nuisance regression? If so, does the output res4d image look correct? It is very difficult to see the actual image relative to the background. Furthermore, is res4d the right image to use if my goal is to extract the time series of ROIs from this fully processed resting-state data?
Any help is very much appreciated!
Best,
Brandon

I performed 2SLS.

In the robust version I found endogeneity, but not in the non-robust version.

Are the results of the robust version valid? I need your help.

Non-robust options

Tests of endogeneity
H0: Variables are exogenous
Durbin (score) chi2(1) = .242302 (p = 0.6225)
Wu-Hausman F(1,613) = .227544 (p = 0.6335)

. estat overid
Tests of overidentifying restrictions:
Sargan (score) chi2(1) = .035671 (p = 0.8502)
Basmann chi2(1) = .033487 (p = 0.8548)

. estat firststage, all
First-stage regression summary statistics

| Variable | R-sq. | Adjusted R-sq. | Partial R-sq. | F(2,613) | Prob > F |
|----------|-------|----------------|---------------|----------|----------|
| TURN_1 | 0.1681 | 0.1152 | 0.0632 | 20.6714 | 0.0000 |

Shea's partial R-squared

| Variable | Shea's partial R-sq. | Shea's adj. partial R-sq. |
|----------|----------------------|---------------------------|
| TURN_1 | 0.0632 | 0.0036 |

Minimum eigenvalue statistic = 20.6714

Critical values (H0: Instruments are weak; # of endogenous regressors: 1; # of excluded instruments: 2)

| | 5% | 10% | 20% | 30% |
|---|---|---|---|---|
| 2SLS relative bias | (not available) | | | |

| | 10% | 15% | 20% | 25% |
|---|---|---|---|---|
| 2SLS size of nominal 5% Wald test | 19.93 | 11.59 | 8.75 | 7.25 |
| LIML size of nominal 5% Wald test | 8.68 | 5.33 | 4.42 | 3.92 |

**Robust options**

Tests of endogeneity
H0: Variables are exogenous
Robust score chi2(1) = 2.99494 (p = 0.0835)
Robust regression F(1,613) = 2.77036 (p = 0.0965)

. estat overid, forcenonrobust
Tests of overidentifying restrictions:
Sargan chi2(1) = .035671 (p = 0.8502)
Basmann chi2(1) = .033487 (p = 0.8548)
Score chi2(1) = .514465 (p = 0.4732)

. estat overid
Test of overidentifying restrictions:
Score chi2(1) = .514465 (p = 0.4732)

. estat firststage, all
First-stage regression summary statistics

| Variable | R-sq. | Adjusted R-sq. | Partial R-sq. | Robust F(2,613) | Prob > F |
|----------|-------|----------------|---------------|-----------------|----------|
| TURN_1 | 0.1681 | 0.1152 | 0.0632 | 13.7239 | 0.0000 |

Shea's partial R-squared

| Variable | Shea's partial R-sq. | Shea's adj. partial R-sq. |
|----------|----------------------|---------------------------|
| TURN_1 | 0.0632 | 0.0036 |

When the results of correlation and regression are different, which one should I rely on more? For example, if the correlation of two variables is negative, but the direction is positive in regression or path analysis, how should I interpret the results?
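For what it's worth, a sign flip between the bivariate correlation and the multiple-regression coefficient is exactly what happens under confounding/suppression: the correlation reflects the marginal association, while the regression coefficient is the association after partialling out the other predictors. A small simulated illustration (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# z confounds both x and y (made-up data-generating process)
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = 1.0 * x - 3.0 * z + rng.normal(scale=0.5, size=n)

# Marginal (bivariate) association: negative
r = np.corrcoef(x, y)[0, 1]

# Partial association from multiple regression controlling z: positive
X = np.column_stack([np.ones(n), x, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"correlation(x, y) = {r:+.2f}")
print(f"regression coefficient of x (controlling z) = {beta[1]:+.2f}")
```

Neither number is "wrong"; they answer different questions (total association vs. association holding the other variables fixed), so the interpretation should match the question being asked.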

I am doing a land-use projection using the Dyna-CLUE model, but I am stuck on the error "Regression can not be calculated due to a large value in cell 0,478". I would appreciate any advice you can provide to solve this error.

**Global Project:** Should we start developing the SIT-USE?

**Software Immune Testing: Unified Software Engine (SIT-USE)**

**Toward Software Immune Testing Environment**

Would you like to be part of the funding proposal for **SIT-USE**?

Would you like to participate in the development of the **SIT-USE**?

Would you like to support the development of HR **SIT-USE**?

**Keywords:** Funding Proposal or Funding, Participation, Support

If you answer yes to any of the questions, don't hesitate to get in touch with me at info.aitg@aeehitg.com and put the keyword(s) in the subject line.

Despite much progress and research in software technology, testing is still today's primary quality assurance technique. Currently, significant issues in software testing are:

1) Developing and testing software is necessary to meet the new economy market. In this new market, delivering the software on time is essential to capture the market. Software must be produced on time and be good enough to meet the customer's needs.

2) The existing software requirements keep changing as the project progresses, and in some projects, the rate of requirement changes can grow exponentially as the deadline approaches. This kind of rapid software change imposes significant constraints on testing because once a software program changes, the corresponding test cases/scripts may have to be updated. Furthermore, regression testing may have to be performed to ensure that those parts that are supposed to remain unchanged are indeed unchanged.

3) The number of test cases needed is enormous; however, the cost of developing test cases is extremely high.

4) Software development technologies, such as object-oriented techniques, design patterns (such as Decorator, Factory, Strategy), components (such as CORBA, Java's EJB and J2EE, and Microsoft's .NET), agents, application frameworks, client-server computing (such as socket programming, RMI, CORBA, Internet protocols), and software architecture (such as MVC, agent architecture, and N-tier architecture), progress rapidly, while designing and programming move toward dynamic and runtime behavior. Dynamic behavior makes software flexible but also makes it difficult to test. Objects can now send a message to another entity without knowing the type of object that will receive the message. The receiver may have just been downloaded from the Internet with no interface definition and implementation. Numerous testing techniques have been proposed to test object-oriented software. However, testing technology is still far behind software development technology.

5) Conventional software testing is generally application-specific, rarely reusable, and not extensible. Even within a software development organization, software development and test artifacts are produced by different teams and described in separate documents. This makes test reuse difficult.

As part of this research, we plan to work toward an automated and immune software testing environment that includes 1. Unified Component-Based Testing (U-CBT); 2. Unified Built-In Test (U-BIT); 3. Unified End-to-End (U-E2E) Testing; 4. Unified Agent-Based Testing (U-ABT); 5. Unified Automatic Test Case Generators (U-ATCG); and 6. Unified Smart Testing Framework (U-STF). The development of this environment is based on the software stability model (SSM), the knowledge map (KM): Unified Software Testing (KM-UST), and the notion of software agents. An agent is a computational entity evolving in an environment with autonomous behavior, capable of perceiving and acting on this environment and communicating with other agents.

**You are invited to join Unified Software Engineering (USWE)**

The objective here is to determine the factor sensitivities, or slope coefficients, in a multiple OLS regression model.
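Assuming the factor series and the response are already in a data matrix, the sensitivities are just the OLS slope coefficients. A minimal NumPy sketch with simulated factor data (the factor names and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Simulated factor data (names/values purely illustrative)
factors = rng.normal(size=(n, 3))          # e.g. market, size, value factors
true_betas = np.array([1.2, -0.4, 0.7])
asset = factors @ true_betas + 0.05 + rng.normal(scale=0.1, size=n)

# Factor sensitivities = OLS slope coefficients (with an intercept column)
X = np.column_stack([np.ones(n), factors])
coef, *_ = np.linalg.lstsq(X, asset, rcond=None)
print("intercept:", round(coef[0], 3))
print("factor sensitivities:", np.round(coef[1:], 3))
```

Each slope is the expected change in the response for a one-unit change in that factor, holding the other factors fixed.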

I am doing a research project to study the determinants of capital structure. However, I've run into two issues.

After downloading data from Compustat, I noticed there are a lot of missing values in the data, and I wonder how I can deal with this. How is it usually done in the finance literature?

The other problem I came across is strange to me: one of my variables, interest expense, includes zero values and sometimes also negative values, which not only does not make sense but also poses issues in calculating the coverage ratio. What do you suggest I do in this case?

I highly appreciate your response.

Best

Saeed

Recently, I was contacted by a professor who wanted to utilize my PyCaret book in his research. Considering that I support scientific advancement in every way possible, I was happy to collaborate with that person. Furthermore, I have decided to freely provide my book to other researchers interested in utilizing it. Here are the topics covered in the book:

• Regression

• Classification

• Clustering

• Anomaly Detection

• Natural Language Processing

• Time Series Forecasting

• Developing Machine Learning Apps with Streamlit

If you want to acquire the book for research purposes, I encourage you to send some information about your project, so we can discuss this further. You can check the link below for more information, and leave a comment below if you have any questions!

**Simplifying Machine Learning with PyCaret**: https://leanpub.com/pycaretbook/

In G*Power, which should I use for a hierarchical regression with continuous variables: t test - linear multiple regression: fixed model, single regression coefficient, or F test - linear multiple regression: fixed model, R² deviation from zero? What is the difference?

I am running an instrumental variable regression.

EViews provides two different estimators for instrumental variables, i.e., two-stage least squares and the generalized method of moments.

How do I choose between the two?

Thanks in advance

I need help on how to run RIDGE REGRESSION in EViews. I have installed the add-in in EViews, but I am having problems running the regression. Could someone please help me with a step-by-step video (or even an explanation) of how to do this? I am facing a deadline.

I will sincerely appreciate a timely response.

Shalom.


I am working on a SEM model using Mplus. The model includes 2 latent factors each with about 4 dichotomous indicators. The latent factors are regressed onto 5 exogenous predictors (also dichotomous). A dichotomous outcome is, in turn, regressed onto the 2 latent factors. I used WLSMV to estimate the model, which is recommended when the latent factor indicators are dichotomous.

The model fits well but my understanding is that Mplus uses probit regression for the DV and latent factors. And I am not very familiar with how to interpret probit results. So I do not know how to interpret the parameter estimates (the indicator coefficients for each latent factor; the exogenous coefficients for those variables after regressing the latent factor on them; and the coefficients for the DV regressed onto the latent risk factors).

Can anyone point me towards reference material that might walk me through how to interpret (and write-up) the results of this modeling?

Thanks for any help.

James

Hello

I am searching for the Panel smooth transition regression Stata code.

Does anyone know of any available code for Stata?

Thank you

Hi everyone

I am using package "XTENDOTHRESDPD" to run a Dynamic panel threshold regression in Stata which is provided here: https://econpapers.repec.org/software/bocbocode/s458745.htm

However, I have the following issue which I could not solve.

To see whether the threshold effect is statistically significant, I run the "xtendothresdpdtest" function after the regression, and I get this error: "inferieurbt_result not found."

I would really appreciate it if you could guide me in case you have any experience with this function.

Using STATA or R, how can we extract intra-class correlation coefficients (ICCs) for Multilevel Poisson and Multilevel Negative Binomial Regression?

Dear all

I have a set of balanced panel data, i: 6, t: 21, which is 126 observations overall. I have 1 dependent variable (y) and 6 independent variables (x1, x2, ...).

First, I run unit root tests, which show:

y I(1)
x1 I(0)
x2 I(1)
x3 I(1)
x4 I(0)
x5 I(1)
x6 I(0)

If I would like to run panel data regressions (pooled, fixed effects and random effects), is this the correct form for entering the model in EViews:

d(y) c x1 d(x2) d(x3) x4 d(x5) x6

or

Shall I put all variables at the same difference level, adding "d" to all?

Please correct me if I am wrong; these are the steps I would like to follow for the statistical part of a panel data analysis:

1. Test Unit Root

2. Panel Regression?

3. ARDL

Hello everyone,

In order to compare two clinical methods, we usually use Passing & Bablok (PABA) regression. Most of the time, our samples are larger than n=50, but for the comparison I'm interested in today (method A vs method B), the samples are small (n = 10-15).

The PABA regression validates the equivalence between the two methods (method A vs method B). Indeed, the intercept CI crosses 0 and the slope CI crosses 1:

- Intercept = -6, confidence interval (CI) = [-56; 31]
- Slope = 2, confidence interval (CI) = [0.5; 4]

However, I have a few concerns about these results because:

- The Pearson coefficient is low (r = 0.63),
- The size of the CI is very large,
- The coefficient of variation (CV) between the two methods is high (CV > 20%).

Do you know of any criteria or rules that I could add to the analysis of PABA regression that would enable me to improve our validation method ?

Thanks in advance for your help ! :)
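For intuition about why small n hurts here: the slope in Passing & Bablok regression comes from the median of pairwise slopes. The sketch below implements only that core idea (a Theil-Sen-style median of pairwise slopes, on invented data) and deliberately omits PABA's offset and tie corrections, so it is an illustration rather than a replacement for a validated implementation. With n = 12 there are only 66 pairwise slopes, which is exactly why the CIs come out so wide:

```python
import numpy as np
from itertools import combinations

def pairwise_slope_fit(x, y):
    # Median of all pairwise slopes - the core idea behind Passing-Bablok
    # (the full method adds an offset correction and tie handling)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = float(np.median(slopes))
    intercept = float(np.median(np.asarray(y) - slope * np.asarray(x)))
    return slope, intercept

# Two hypothetical methods measuring the same 12 samples (invented numbers)
x = np.array([10, 12, 15, 18, 22, 25, 30, 33, 38, 42, 47, 50], dtype=float)
noise = np.array([0.4, -0.6, 0.2, 0.5, -0.3, 0.1, -0.5, 0.6, -0.2, 0.3, -0.4, 0.2])
y = 1.05 * x - 1.0 + noise

slope, intercept = pairwise_slope_fit(x, y)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```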

I discovered that three independent variables have standardized multiple coefficients (SMC) equal to 1.02 on a dependent variable. Are there any approaches to consider for dealing with such a high R² in a regression?

I have conducted some ordinal logistic regressions; however, some of my tests have not met the proportional odds assumption, so I need to run multinomial regressions. What would I have to do to the ordinal DV to use it in this model? I'm doing this in SPSS, by the way.

Using SPSS, I've studied linear regression between two continuous variables (with 53 values each). I got a p-value of 0.000, which means no normal distribution; should I use another type of regression?

In carrying out panel data regression analysis, it is required that Hausman Specification Test be carried out to choose from Fixed Effect or Random Effect estimation approaches. Another theory holds that Breusch-Pagan Lagrange multiplier (LM) test for panel data is also required to choose between Random Effect estimation and Pooled Effect Estimation.

Which of the preliminary tests should come first? Are these tests the final determinants of which estimation approach to deploy?

Hello community!

I am running a CFA for a within and between subjects design. The research involves the study of students on variables before and after taking an entrepreneurship course. The dependent variables are self-efficacy, with 5 subconstructs, and entrepreneurial intent. The independent variable is the course. Covariates are a continuous age variable, exposure (0, 1, 2, or 3), and experience (0, 1, or 2) (I used dummy variables for these in the ANCOVAs).

I don't have experience with repeated measures CFA and want to make sure I'm doing it correctly. I have attached a picture for the CFA model I have tested. I correlated error terms for the errors of the corresponding measured items. I set the regression weights to equal for both times. I also correlated the latent variables. This study also has multiple groups (female and male), but I believe it does not change anything to the factor structure (I just added separate data sets for those groups in Amos). Please let me know if this assumption is wrong.

- Does the model reflect an appropriate way to test whether the factor structure holds across time?
- Is it OK that I did not include the covariates or should I?
- The model where I constrain the regressions weights to be equal for both times has significantly lower model fit according to the chi square difference test. The model fit is otherwise good for both models TLI & CFI > .9 and RMSEA < .04. Can I argue that theoretically the model should hold and since model fit is good for the constrained model, that it is OK to use it across time? I know the chi square difference test is sensitive to sample size (n > 3,000) but does that matter for the chi square difference test?
- Chi square constrained model – Chi square less constrained model -> 11814.262-11644.759=169.503 and df (2ndmodel) – df (1st model) = 2345-2288=57. p < .001

I would appreciate your insight very much!

Thank you,

Heidi

In addition to the Oaxaca-Blinder decomposition, is exogenous switching regression applicable for examining the gender gap in market participation for agricultural products?

Hi,

I have a set of studies that looked at the association of sex with respect to multiple variables. The majority of the studies reported regression statistics such as beta, b values, t-stats, and standard errors. Is it possible to run a meta-analysis using any of the above-mentioned statistics? If so, which software would be most suitable for performing such a meta-analysis? I did a wee bit of research and found that metafor in R would be a good choice for these kinds of meta-analyses.

Any help would be highly appreciated!

Thanks!


Can we apply regression when the correlation is only moderate? Please recommend an easy-to-understand book for non-statistical readers.

Hello everyone. The p-value of the path estimate (regression weight, B = 0.198) from A to C is 0.014 in my model in the figure. After bootstrapping, the coefficient from A to C (B = 0.198) has a p-value of 0.043 as a direct effect. What causes this difference in p-value? Many thanks for your comments.

Hello spatial analysis experts

Hope you're all good.

I urgently need the commands and R code for performing spatial binomial regression in RStudio. If someone has already worked on this, please share the code from start to end.

Thanks and regards

Dr. Sami

Hi there!

I am currently running SPSS AMOS 24

But the SEM result doesn't show the p-value for the regression weights in the estimates when it comes to my three main paths.

The estimates only show a value of 1 for each path; S.E., C.R. and p-value are all empty.

(The rest of the variables are normal; only the three main ones are affected.)

How can I resolve this issue?

Looking forward to kind assistance in this regard, wish everyone well :)

My research topic is ROLE OF TEACHERS' ENTREPRENEURIAL ORIENTATION IN DEVELOPING ENTREPRENEURIAL MIND-SET OF STUDENTS IN HEIs. The research constructs that I am using are ENTREPRENEURIAL ORIENTATION and ENTREPRENEURIAL MIND-SET, both are psychological and behavioral. The variables that I will be measuring are INNOVATIVENESS, PRO-ACTIVENESS and RISK TAKING ABILITY of Teachers.

I will be checking the strength of the relationship between these constructs using regression, and I would like to use the THEORY OF PLANNED BEHAVIOUR by Ajzen in support of my research argument without TESTING or BUILDING the theory. I would like expert advice on how this can be done and whether it is an acceptable practice.
Thank you

Since OLS and fixed-effects estimation differ: for a fixed-effects panel data model estimated using a fixed-effects (within) regression, which assumptions (for example, no heteroskedasticity, linearity) do I need to test before I can run the regression?

I'm using the xtreg, fe and xtscc, fe commands in Stata.

Please also tell me in which situations each is used; I am having difficulty with this.

In 2007 I did an Internet search for others using cutoff sampling, and found a number of examples, noted at the first link below. However, it was not clear that many used regressor data to estimate model-based variance. Even if a cutoff sample has nearly complete 'coverage' for a given attribute, it is best to estimate the remainder and have some measure of accuracy. Coverage could change. (Some definitions are found at the second link.)

Please provide any examples of work in this area that may be of interest to researchers.

I'm working on my PhD thesis and I'm stuck around expected analysis.

I'll briefly explain the context then write the question.

I'm studying moral judgment in the cross-context between Moral Foundations Theory and Dual Process theory.

Simplified: MFT states that moral judgments are almost always intuitive, while DPT states that better reasoners (higher on cognitive capability measures) will make moral judgments through analytic processes.

I have another idea - people will make moral judgments intuitively only for their primary moral values (e.g., for conservatives those are the binding foundations - respecting authority, ingroup loyalty and purity), while for the values they aren't much concerned about, they'll have to use analytical processes to figure out what judgment to make.

To test this idea, I'm giving participants:

- a few moral vignettes to judge (one concerning progressive values and one concerning conservative values) on 1-7 scale (7 meaning completely morally wrong)

- moral foundations questionnaire (measuring 5 aspects of moral values)

- CTSQ (Comprehensive Thinking Styles Questionnaire), CRT and belief bias tasks (8 syllogisms)

My hypothesis is therefore that cognitive measures of intuition (such as intuition preference from CTSQ) will predict moral judgment only in the situations where it concerns primary moral values.

My study design is correlational. All participants are answering all of the questions and vignettes. So I'm not quite sure how to analyse the findings to test the hypothesis.

I was advised to do a regression analysis where moral values (5 from MFQ) or moral judgments from two different vignettes would be predictors, and the intuition measure would be the dependent variable.

My concern is that this analysis is the wrong choice because I'll have both progressives and conservatives in the sample, which means both groups of values should predict intuition if my assumption is correct.

I think I need to either split people into groups based on their MFQ scores and then do this analysis, or introduce some kind of multi-step analysis or control or something, but I don't know what the right approach would be.

If anyone has any ideas please help me out.

How would you test the given hypothesis with available variables?

This question is for beginner students only.

My question is very straightforward. I have an ordinal independent variable (4 categories), and I am trying to regress different ordinal and nominal variables on it. I would like help with how to interpret the coefficients and the odds ratios.

On the picture, the independent variable has 4 categories

gender has 2 categories

age has 13 categories

education has 9 categories

politics has 2 categories

statecol has 3 categories

famincome has 6 categories

I also want to ask whether it is correct to write the "**i.**" prefix before some variables, and if not, when it is appropriate to use "**i.**". I attach my Stata output.

Thank you

I'm fitting a model in R where the response variable is over-dispersed count data, and I have run into an issue with the goodness-of-fit test. As has been discussed previously on Cross Validated, a common GOF test like the chi-square test for deviance is inappropriate for negative binomial regression because the model includes a dispersion parameter as well as the mean:

I'm confused and would appreciate any suggestions about an appropriate GOF test for NB regression (preferably an intuitive one).
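One pragmatic option often suggested for such models is a parametric-bootstrap GOF check: simulate datasets from the fitted model and compare the observed Pearson statistic against the simulated reference distribution. A sketch, assuming the fitted means and dispersion are already available (every value below is made up, and the "observed" data are drawn from the model itself just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Pretend the NB model is already fitted: fitted means mu_hat and
# dispersion alpha, with Var(y) = mu + alpha * mu^2 (all values invented)
mu_hat = rng.uniform(2, 10, size=n)
alpha = 0.5
r = 1.0 / alpha                 # NB "size" parameter
p = r / (r + mu_hat)            # NB success probability

def pearson_stat(y, mu, alpha):
    return np.sum((y - mu) ** 2 / (mu + alpha * mu ** 2))

# "Observed" counts drawn from the model itself (stand-in for real data)
y_obs = rng.negative_binomial(r, p)
obs = pearson_stat(y_obs, mu_hat, alpha)

# Parametric bootstrap: reference distribution of the Pearson statistic
boot = np.array([pearson_stat(rng.negative_binomial(r, p), mu_hat, alpha)
                 for _ in range(500)])
p_value = np.mean(boot >= obs)
print(f"bootstrap GOF p-value: {p_value:.2f}")
```

In practice you would refit the model to each simulated dataset before computing the statistic (a full parametric bootstrap); the simplified version above keeps the fitted parameters fixed, which is slightly anti-conservative.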

Autocorrelation, heteroskedasticity and the normality of residuals are considered important diagnostics for examining the credibility of a model. If autocorrelation is examined by the Durbin-Watson (DW) statistic, up to what level of DW is sufficient for reliable results?

What measure should be used for heteroskedasticity in panel regression?
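For reference, the DW statistic is easy to compute and is approximately 2(1 - r1), where r1 is the lag-1 autocorrelation of the residuals, so values near 2 indicate little first-order autocorrelation. A common rule of thumb treats roughly 1.5-2.5 as acceptable, though the exact critical values depend on n and the number of regressors. A quick sketch on simulated residuals:

```python
import numpy as np

def durbin_watson(residuals):
    # DW = sum of squared successive differences / sum of squared residuals
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(9)

# Independent residuals: DW should land near 2
white_noise = rng.normal(size=500)

# Strongly autocorrelated residuals (AR(1), rho = 0.8): DW near 2*(1-0.8) = 0.4
ar = np.empty(500)
ar[0] = rng.normal()
for t in range(1, 500):
    ar[t] = 0.8 * ar[t - 1] + rng.normal()

print(round(durbin_watson(white_noise), 2))   # close to 2
print(round(durbin_watson(ar), 2))            # well below 2
```

For panel regressions, tests such as Breusch-Pagan or the modified Wald test for groupwise heteroskedasticity are the usual choices rather than DW.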

Hi, I am currently writing my master thesis on early indicators of bank failure. For this I have calculated probability of default as my dependent variable and have around 15 different financial ratios as explanatory variables, I have also included two measures of interest sensitivity as possible independent variables.

My data consists of 125 companies over 20 years. I'm using STATA and need help with how I should format my data from Excel. I'm also unsure which kind of regression is best suited for my data. I've tried reading **Econometric Analysis of Cross Section and Panel Data** by Wooldridge, but I'm feeling a bit lost. Attached is the Excel file.

Thank You for the help!

The Seemingly Unrelated Regression Models

Hello...

I have 40 samples. I want to randomly select a percentage of these 40 samples, build an equation from them by regression, and test the remaining percentage of the samples with this regression equation; that is, part of the data for calibration (creating the regression equation) and part for validation. My questions are:

1. On what basis is the selection percentage for calibration and validation chosen? For example, one could take a 50/50 split of the samples, or 70/30... which one is correct? Is there an article or book that explains the basics?

2. What should the sample selection criteria be for the regression equation and for validation?
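On the mechanics (not the choice of percentage, which is a judgment call; 70/30 and 80/20 are common conventions rather than rules): a random calibration/validation split can be sketched as follows, with simulated stand-in data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: 40 samples with a roughly linear relationship
n = 40
x = rng.uniform(0, 10, size=n)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=n)

# Randomly assign 70% to calibration and 30% to validation
idx = rng.permutation(n)
n_cal = int(0.7 * n)                 # 28 calibration samples
cal, val = idx[:n_cal], idx[n_cal:]  # 28 / 12 split

# Build the regression equation on the calibration part only...
slope, intercept = np.polyfit(x[cal], y[cal], 1)

# ...then judge it on the held-out validation part
pred = slope * x[val] + intercept
rmse = np.sqrt(np.mean((y[val] - pred) ** 2))
print(f"validation RMSE = {rmse:.2f}")
```

With only 40 samples, repeating the split many times (or using k-fold cross-validation) gives a much more stable picture than a single random split, since any one 12-sample validation set is noisy.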

I have tested the IPS unit root for my regression variables. However, one of my variables is still non-stationary after first differencing, but stationary at second differencing. May I know which models are suitable for variables integrated of order 2?

Can I use a variable that was found to be correlated with the independent variable in the exploratory analysis to control the effect of the variable in hierarchical regression by putting it in the first step without any prior hypothesis related to it?

I think I have learned to do that, but a reviewer pointed out it is not right to use the variable as a control variable because I do not have any hypothesis related to it.

If using it as a control variable is possible, please give me the list of publications that used this method or references that can justify this.

There are two independent variables (A & B) and two dependent variables (Y & Z). I want to see whether the impact of the independent variables varies across the dependent variables. I've used SEM, which shows the value of each relationship: A significantly impacts Y and Z, while B significantly impacts Y but not Z. At the same time, the regression estimate of the impact of A on Z is smaller than the regression estimate of B on Z. The reviewer asks for a test of significant change, which I don't understand.

If the reviewer is asking whether there is a significant difference in the impacts, how do I test that? Kindly help, thanks.
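If the reviewer wants a formal comparison of the two estimates, one simple option is a z-test for the difference between two regression coefficients (this assumes approximately independent estimates; within a single SEM, constraining the two paths to be equal and using a Wald or chi-square difference test is the cleaner route). A sketch with placeholder numbers, not the actual estimates:

```python
import math

def coef_difference_z(b1, se1, b2, se2):
    # z-test for the difference between two regression coefficients,
    # assuming (approximately) independent estimates
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
    return z, p

# Placeholder estimates for the A -> Z and B -> Z paths (not real output)
z, p = coef_difference_z(b1=0.15, se1=0.05, b2=0.32, se2=0.06)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A significant z here would mean the two impacts differ reliably, which is the "test for significant change" the reviewer is likely asking for.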

Is there a REFERENCE for selecting items of a construct based on the standardised regression weights, preferring to select the **ones above 0.6**, for doing the confirmatory factor analysis and later SEM? I have found a reference which said that if the weights were above .4 the item could be retained.

I'm doing a multiple linear regression where some of the independent variables are normally distributed whereas others aren't. The normal P-P plots of regression seems appropriate as the plots are in line. I have 84 participants in total, is that enough to go ahead with linear regression without assumption of normality being met?

Dear Colleagues,

QQ regression is perhaps one of the latest methods in econometric estimation approaches. In case you have expertise could you please help me by providing useful information as to how to perform QQ regression using R or Stata?

I am currently engaged in a study that applies regression and ANOVA models to several latent variables, including entrepreneurial passion, risk attitude, and entrepreneurial self-efficacy.

In the context of this study, I am seeking a rigorous and straightforward method for determining the factor loadings and latent variable scores for each participant. I am particularly interested in going beyond the traditional methods of simply calculating average or sum scores for these latent variables.

I believe estimating these factors would be more precise using an approach similar to that employed in Covariance-Based Structural Equation Modeling (CB-SEM) models.

Could you provide guidance on how to implement this approach effectively? Would you recommend specific statistical techniques or software tools for this purpose?

How can I ensure the validity and reliability of the obtained factor loadings and latent variable scores? Any advice or resources you could share would be greatly appreciated.
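CB-SEM factor scores are typically obtained in lavaan (`lavPredict`) or AMOS. As a Python approximation of what you describe, a maximum-likelihood factor analysis yields loadings and per-participant factor scores directly, rather than unit-weighted sum scores. A minimal sketch with simulated item data (the latent trait and item structure are hypothetical):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 200
# Simulate one latent trait (e.g. "entrepreneurial passion") driving 4 items
latent = rng.normal(size=n)
items = np.column_stack(
    [0.8 * latent + rng.normal(scale=0.5, size=n) for _ in range(4)]
)

fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(items)   # one factor score per participant
loadings = fa.components_.T        # item-by-factor loading matrix
```

As a rough validity check, inspect whether all items load substantially on their intended factor and correlate the factor scores with the simple mean score; large divergences flag weak items.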

Suppose I have collected data on customer churn rates and various customer attributes. How can I use regression to predict the likelihood of customer churn based on these attributes?
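Churn is a binary outcome (churned vs retained), so the standard tool is logistic regression, which models the probability of churn given the attributes. A minimal sketch with simulated data (the attribute names and effect sizes are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
tenure = rng.uniform(1, 60, size=n)        # months as a customer
complaints = rng.poisson(1.0, size=n)      # number of support complaints

# Simulated ground truth: churn risk falls with tenure, rises with complaints
logit = 0.5 - 0.05 * tenure + 0.8 * complaints
prob = 1 / (1 + np.exp(-logit))
churned = rng.binomial(1, prob)

X = np.column_stack([tenure, complaints])
model = LogisticRegression().fit(X, churned)

# Predicted churn probability for a new customer: 6 months, 2 complaints
p_churn = model.predict_proba([[6.0, 2.0]])[0, 1]
```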

For context, the study I am running is a between-participants vignette experimental research design.

My variables include:

1 moderator variable: social dominance orientation (SDO)

1 IV: target (Muslim woman = 0, woman = 1) <-- these represent the vignette 'targets' and two experimental conditions, dummy-coded on SPSS as written here

1 DV: bystander helping intentions

I ran a moderation analysis with Hayes PROCESS macro plug-in on SPSS, using model 1.

As you can see in my moderation output (first image), I have a significant interaction effect. Am I correct in saying there is no direct interpretation for the *b* value of the interaction effect (hence we do simple slope analyses)? So all it tells us is that SDO significantly moderates the relationship between the target and bystander helping intentions.

Moving onto the conditional effects output (second image) - I'm wondering which value tells us information about X (my dichotomous IV) in the interaction, and how a dichotomous variable should be interpreted?

So if there was a significant effect for high SDO per se...

How would the IV be interpreted?

" At high SDO levels, the vignette target ___ led to lesser bystander helping intentions;

*b*= -.20,*t*(88) = -1.65,*p*= .04. "(Note: even though my simple slope analyses showed no significant effect for high SDO, I want to be clear on how my IV should be interpreted as it is relevant for the discussion section of the lab report I am writing!)
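On the interpretation point: with a dummy-coded X, the conditional effect at a given SDO level is simply the predicted difference between the two targets at that level (effect of X at moderator value W equals b1 + b3*W), so a negative effect means the target coded 1 has lower predicted helping intentions than the target coded 0 at that SDO level. A minimal sketch of computing conditional effects by hand from hypothetical coefficients (these numbers are not from your output):

```python
# Conditional (simple-slope) effects of a dummy-coded X at chosen
# levels of a moderator W, from the model Y = b0 + b1*X + b2*W + b3*X*W
b1, b3 = 0.10, -0.25           # illustrative values, not from the output

def effect_of_x(w):
    """Effect of X (the target dummy) on Y at moderator level w."""
    return b1 + b3 * w

w_mean, w_sd = 3.0, 0.8        # hypothetical SDO mean and SD
effects = {
    "low (-1 SD)": effect_of_x(w_mean - w_sd),
    "mean": effect_of_x(w_mean),
    "high (+1 SD)": effect_of_x(w_mean + w_sd),
}
```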

My sample is an environmental sample. Is there anybody who can help me with this?

I am using SPSS to model count data with a Poisson distribution. My initial Poisson model and the default Negative binomial model showed over dispersion and under dispersion respectively. I am fitting a third model with custom negative binomial with log link function to estimate dispersion more accurately. However, I get this message after running the model: "

*There are no valid cases for the log link function. Only the iteration history is displayed. Execution of this command stops*."

**Is it okay for me to go with the best of the first two models, or is there a way around the problem with the third model (and if so, what should be done)?**

Hi All,

I'm working on an artificial neural network model. I got the attached results, in which the regression value (R) is 0.99072, which I think is good, but I'm not sure why there is an accumulation of data points around zero and one, as shown in the attached regression plot.

Any idea or explanation will be highly appreciated.

In finding the correlation and regression of a multivariable distribution, what is the significance of R and R^2? What is the main relation between them?
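In brief, R is the (multiple) correlation between the observed and predicted values of the dependent variable, and R^2 is its square, interpreted as the proportion of variance in y explained by the predictors. A minimal numerical illustration in the simple two-variable case, where R equals Pearson's r:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)

r = np.corrcoef(x, y)[0, 1]            # Pearson correlation (R)

# R^2 computed from the regression directly: 1 - SSE/SST
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r_squared = 1 - resid.var() / y.var()  # equals r**2 in simple regression
```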

I have data on sales and various marketing efforts, such as advertising spend, social media engagement, and promotional activities. How can I use regression to quantify the effectiveness of these marketing strategies on sales?
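A multiple linear regression of sales on the marketing variables gives one coefficient per channel, interpretable as the expected change in sales per unit of that channel holding the others fixed. A minimal sketch with simulated data (the variable names and true coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
ad_spend = rng.uniform(0, 100, size=n)
social = rng.uniform(0, 50, size=n)
promos = rng.integers(0, 5, size=n)

# Simulated ground truth: each ad unit adds 2.0 to sales, etc.
sales = 50 + 2.0 * ad_spend + 1.5 * social + 4.0 * promos \
        + rng.normal(scale=5.0, size=n)

# OLS via least squares; the recovered betas quantify each channel's effect
X = np.column_stack([np.ones(n), ad_spend, social, promos])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, b_ad, b_social, b_promos = beta
```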

What is the objective of strategic regression, and what are the methods for using it?

Hello everyone. I have only found the additive-interaction calculation method based on the logistic regression model on the internet. Can anyone provide the name of an R package, or SAS code, for calculating RERI, AP, SI and their 95% CIs based on a log-binomial regression model? Millions of thanks!
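While waiting for a package pointer: under a log-binomial model the exponentiated coefficients are relative risks, and RERI, AP and SI follow directly from the three RRs. A minimal sketch of the point estimates from hypothetical coefficients (the 95% CIs would need the delta method or bootstrap, which dedicated packages automate):

```python
import math

# Hypothetical log-binomial coefficients: exposure 1, exposure 2, interaction
b1, b2, b3 = 0.40, 0.30, 0.20

rr10 = math.exp(b1)            # RR for exposure 1 alone
rr01 = math.exp(b2)            # RR for exposure 2 alone
rr11 = math.exp(b1 + b2 + b3)  # RR for joint exposure

reri = rr11 - rr10 - rr01 + 1                 # relative excess risk due to interaction
ap = reri / rr11                              # attributable proportion
si = (rr11 - 1) / ((rr10 - 1) + (rr01 - 1))   # synergy index
```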

Hi dear researchers,

I am reading about regression analysis. Can anyone help me by giving a brief idea about when we use simple linear regression, and what the difference is with the correlation coefficient? Thanks in advance.
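In brief: simple linear regression is used when one variable is treated as the outcome to be predicted from the other, while the correlation coefficient is symmetric and only measures the strength of the linear association. The two are tightly linked: the regression slope equals r times sd(y)/sd(x), and R^2 in simple regression equals r^2. A small numerical check:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)

r = np.corrcoef(x, y)[0, 1]
slope = np.polyfit(x, y, 1)[0]          # simple-regression slope

# Identity linking the two: slope = r * sd(y) / sd(x)
slope_from_r = r * y.std() / x.std()
```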

Dear all,

We are establishing statistical equations to predict the lime requirement (LR) of tropical soils. I've found in the literature that, for prediction purposes, regression equations are sometimes used, while at other times models are employed. Can anyone explain the main difference between a statistical equation and a model? And which is more suitable for prediction? Thanks a lot.

I have data on households with a number of variables, the dependent variable being household consumption. I need to specify an OLS regression to identify the treatment effect of interest, but I do not have interest as a variable in the data provided. How would I go about creating this variable and introducing it as a shock to the data?
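One common approach, if interest is not observed, is to construct it as a policy shock: a dummy (or simulated rate series) that switches on after a known event, then included in the OLS specification. A heavily hedged sketch with simulated household data (the variable names and shock design are hypothetical, and a simulated shock only demonstrates the mechanics; it cannot identify a real treatment effect):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
income = rng.normal(50, 10, size=n)

# Hypothetical shock: households observed after a rate change (second half)
post_shock = (np.arange(n) >= n // 2).astype(float)

# Simulated ground truth: consumption drops by 3 units after the shock
consumption = 10 + 0.6 * income - 3.0 * post_shock \
              + rng.normal(scale=2.0, size=n)

# OLS with the constructed shock dummy as the treatment variable
X = np.column_stack([np.ones(n), income, post_shock])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
treatment_effect = beta[2]
```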