# Quantitative Methodology - Science topic

Explore the latest questions and answers in Quantitative Methodology, and find Quantitative Methodology experts.
Questions related to Quantitative Methodology
• asked a question related to Quantitative Methodology
Question
5 answers
I'm currently looking for recommendations for live (offline) quantitative methodology courses/workshops in Europe, scheduled before the end of February 2022. An online course/workshop is acceptable only if you would highly recommend it.
Please be so kind as to share your thoughts/experiences.
Thank you.
Relevant answer
Answer
Thank you for asking. Quantitative methodology is about numerical and statistical approaches, often delivered through modular exercises. It would help to know the subject area and the nature and qualifications of the candidates you would like to take on board for this modular education.
If you'd like further discussion, you can contact me on WhatsApp: 00447954658263
• asked a question related to Quantitative Methodology
Question
21 answers
If I use a sample size of 320 obtained with a purposive sampling technique, how can I validate the sample size for generalizing results? Could 320 responses be statistically sufficient to generalize the results?
Relevant answer
Answer
I think it is your reviewer who is confused about sample size and generalization, but it is up to you to clear up this confusion by pointing out that your sample cannot be generalized to a population of restaurant users, regardless of its size. You can say something like: "Except for the highly unlikely case of taking a random sample of all restaurant users, it is not possible to generate truly generalizable results about this population."
• asked a question related to Quantitative Methodology
Question
11 answers
How can I validate a questionnaire for a small sample of hospitals' senior executive managers?
Hello everyone
-I performed a systematic review for the strategic KPIs that are most used and important worldwide.
-Then, I developed a questionnaire in which I asked the senior managers at 15 hospitals to rate these items based on their importance and their performance at that hospital on a scale of 0-10 (Quantitative data).
-The sample size is 30 because the population is small (however, it is an important one for my research).
-How can I perform construct validation for the 46 items, especially since EFA and CFA will not be suitable for such a small sample?
-These items can be classified into 6 components based on the literature (such as financial, managerial, customer, etc.)
-Bootstrapping in validation was not recommended.
-I found a good article with a close idea but they only performed face and content validity:
Ravaghi H, Heidarpour P, Mohseni M, Rafiei S. Senior managers’ viewpoints toward challenges of implementing clinical governance: a national study in Iran. International Journal of Health Policy and Management 2013; 1: 295–299.
-Do you recommend using EFA for each component separately (each containing around 5-9 items), treating each as a separate scale and defining its sub-components? I tried this option and it gave good results and sample adequacy, but I am not sure whether this is acceptable. If you can think of other options, I would be thankful if you could enlighten me.
Relevant answer
Answer
Faten Amer , sample size is not a problem at all in Bayesian factor analysis; see, for example:
• asked a question related to Quantitative Methodology
Question
4 answers
Dear colleagues,
I am preparing questionnaires for online administration with a population of final-grade elementary school students and second-grade high school students.
In the research design, I have a student questionnaire and parent questionnaire.
I am planning to use Google Forms as a platform for data collection.
My question is this: Which would be the best way to anonymously connect data from student and parent questionnaires?
Ivan
Relevant answer
Answer
Hi Ivan Beroš - is the nature of the questions very sensitive? I ask because generally you can offer to anonymise the results when you code them, and store any results with identifiers on an institution's secure cloud server.
Very best wishes, James.
• asked a question related to Quantitative Methodology
Question
8 answers
Greeting to fellow researchers and seniors,
There is a popular phenomenon (let's call it Construct A) that has several identifying 'hallmark' components and is uniquely associated with sexual abuse survivors.
I would like to verify that the components can indeed stand as a construct, and that these components are indeed a unique phenomenon among sexual abuse survivors, more so than in other typologies.
Previously, I have carried out Structural Equation Modelling (SEM) and regression investigating risk factors, correlates, etc. I would like to use a different approach on this phenomenon. If needed, I also have the 'raw' data on potential risk factors, correlates, etc.
a) What statistical approach do you recommend I use? All variables are continuous, but categorization is also possible. Is Latent Profile Analysis (LPA) appropriate here, given that we would like to assess typologies based on several latent continuous variables? I intend to use LPA to examine the determined typologies and assess whether the phenomenon (to facilitate our discussion, we shall name it Construct A) is more pronounced in the group with sexual violence than in the other groups.
Or are other statistical methods more appropriate (including SEM, factor analysis, etc.)?
b) For Latent Profile Analysis, is it possible to use Mplus? Otherwise, what applications are recommended for this purpose?
Thank you. Looking forward to your replies.
Warmest Regards
Relevant answer
Answer
a) LPA could be appropriate if you assume that there are qualitatively distinct profiles of means across the indicators of Construct A in previously unknown subgroups (latent classes). That is, the mean profiles to be extracted should not just differ in level (in which case a dimensional [e.g., factor] model would probably be more useful) but also in shape (i.e., at least some of the mean profiles should cross, indicating group [latent class] differences in kind rather than just differences in degree). You could then examine whether class sizes vary across manifest grouping variables (e.g., sexual violence vs. no violence) using multigroup LPA, or whether class membership depends on continuous covariates (e.g., age) using covariate analysis (logistic regression of class membership on covariates).
b) Yes, Mplus does allow you to run LPA.
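As a free complement to Mplus, an LPA-style analysis can be approximated with a Gaussian mixture model (constraining covariances to be diagonal corresponds to the classic LPA assumption of conditional independence of indicators within a class). The data below are synthetic and purely illustrative:

```python
# Sketch: LPA-style profile extraction via a Gaussian mixture model.
# Everything here (two artificial profiles over 4 indicators) is invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two artificial mean profiles over 4 continuous indicators of "Construct A"
low = rng.normal(loc=[1, 1, 1, 1], scale=0.5, size=(100, 4))
high = rng.normal(loc=[4, 3, 4, 3], scale=0.5, size=(100, 4))
X = np.vstack([low, high])

# Compare 1-, 2-, and 3-profile solutions by BIC (lower is better)
bics = {}
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)
print(best_k)  # the 2-profile solution should win on these data
```

Class membership for each respondent can then be obtained with `gm.predict(X)` and cross-tabulated against a grouping variable such as sexual violence vs. no violence.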
• asked a question related to Quantitative Methodology
Question
27 answers
I think we make a mistake when we consider data to be qualitative simply because they come from applying qualitative methods and techniques or are collected in qualitative research. I think we make the same mistake when we consider all data collected through, for example, questionnaires to be quantitative.
I think we have to consider the qualitative or quantitative character of data looking exclusively at data. Are they numeric? Are they textual or visual?
I think (as Traian Rotariu argues) that quantitative data are numeric and that qualitative data are textual or visual and that we could gather quantitative and qualitative data with each and every method and technique being it qualitative or quantitative.
For example, most of the data gathered with questionnaires are qualitative in their primary form: opinions, gender, preferences, etc., and just a few are quantitative in their primary form: income, age, number of children, etc.
Relevant answer
To answer this valuable question, let me repeat what I wrote in my notes during my bachelor's study:
• While quantitative data collection methods use mathematical calculations to produce numbers, qualitative data collection methods are concerned with words and produce descriptions.
• While quantitative methods are more structured and allow for aggregation and generalization, qualitative methods are more open and provide depth and richness.
• Quantitative and qualitative methods each have their strengths and weaknesses. Sometimes numbers are more useful; other times, narrative (qualitative data) is more useful. Often, a mix of quantitative and qualitative data provides the most useful information.
• asked a question related to Quantitative Methodology
Question
5 answers
Hi there,
I have a confusion regarding the appropriate test I should choose to study media consumption pattern between two populations.
I would like to compare people from Punjab and Kerala in terms of their reliance on newspapers for COVID-19 related information.
One variable is state and it has two options.
1. Kerala
2. Punjab
The second variable is a rating on a 1 to 5 agreement scale for the following statement. The statement is "I read newspaper to get COVID-19 related information"
I think I could use Chi-square test for homogeneity or Chi-square test for association.
But there is a problem. The age distribution of respondents is very different in the samples from Kerala and Punjab; the sample from Kerala is mostly young. So I think this may create a problem in the analysis: an observed difference may actually be due to age, not state. But we are not sure.
In this case, is it appropriate to perform an ordinal logistic regression with both Age and State as predictors and Rating as the dependent variable?
Or should I just do the chi-square test by ignoring Age?
Or is there a better way to solve this issue?
Thanks in advance.
NB:
Sampling: The questionnaire was created using Google Form and the link was distributed through various social media groups.
Relevant answer
Answer
"Controlling" for age does not solve all problems. For example, see https://meehl.umn.edu/sites/meehl.umn.edu/files/files/084nuisancevariables.pdf.
Whatever aspect of your sampling procedure led to differences in age may presumably also have led to differences in other things, measured and unmeasured, including your outcome variables. There has been a lot of work on matching in recent years (https://www.amazon.com/Observational-Studies-Springer-Statistics-Rosenbaum/dp/0387989676), but before looking at these procedures it is necessary for you to consider what went wrong with your sampling. This can help you think about what other biases may exist.
• asked a question related to Quantitative Methodology
Question
7 answers
When conducting a comparative study that is quantitative in nature, how does one decide which statistical test would be best suited for the research?
For example, if comparing success rates at passing certification exams for students attending online programs vs. students attending traditional in-person programs, which statistical test would be best? How do you conclude which one is more fitting than the others?
Relevant answer
Answer
One could use a chi-square test to examine whether more (or fewer) students succeeded among those who attended the online program than among those who attended the traditional program.
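That chi-square test can be sketched with scipy; the pass/fail counts below are invented for illustration:

```python
# Sketch: chi-square test comparing pass rates between an online and an
# in-person program (2x2 contingency table with made-up counts).
from scipy.stats import chi2_contingency

#              passed  failed
table = [[80, 20],   # online program
         [70, 30]]   # in-person program

chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 3), round(p, 3), dof)
```

A small p-value would indicate that the pass rate differs between the two program formats; with counts this close, the default (Yates-corrected) test is not significant.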
• asked a question related to Quantitative Methodology
Question
12 answers
If I want to collect data from respondents using an online survey, what should I do to get better responses?
Relevant answer
Answer
The following publications may further help:
• Andrews, D., Nonnecke, B. and Preece, J. (2003) Conducting Research on the Internet: Online Survey Design, Development and Implementation Guidelines, International Journal of Human-Computer Interaction, 16, 2, pp. 185-210.
• Bowers, D. K. (1998) FAQs on online research, Marketing Research, 10, 4, pp. 45-47.
• Fricker Jr., R. D. (2017) Sampling Methods for Online Surveys, in Fielding, N.G., Lee, R.M. and Blank, G. (eds.) The SAGE Handbook of Online Research Methods. 2nd ed. London: SAGE Publications Ltd, pp. 162-183.
• Nayak, M. S. D. P. and Narayan, K. A. (2019) Strengths and Weakness of Online Surveys, Journal of Humanities and Social Science, 24, 5, pp. 31-38.
• Nulty, D. D. (2008) The adequacy of response rates to online and paper surveys: what can be done?, Assessment and Evaluation in Higher Education, 33, 3, pp. 301-314.
• Sue, V. M. and Ritter, L. A. (2007) Conducting Online Surveys. Thousand Oaks, California: Sage Publications, Inc.
• asked a question related to Quantitative Methodology
Question
8 answers
If we collect data through an online survey (e.g., SurveyMonkey, Google Forms, etc.), what should we call the sampling method? What sampling methods can be used for collecting data through an online survey?
Relevant answer
Answer
I agree with Dennis Njung'e and Aruditya Jasrotia totally. Convenience sampling would do as well.
• asked a question related to Quantitative Methodology
Question
3 answers
Dear community,
I would like to combine two subscales of a questionnaire to form one predictor. It is an instrument on intrinsic vs. extrinsic goals which assesses, for several goals, their attainment and importance via two separate questions. Since using attainment and importance as two distinct predictors won't work, I was thinking of using the difference as a variable, subtracting importance from attainment. For example, a positive or zero value for the scale "personal growth" would mean that attainment is as large as or larger than its attributed importance, while a negative value indicates a lack of attainment of an important goal.
Has anybody modelled something similar before, e.g., using lavaan or Mplus? It seems to work nicely as a manifest variable in an MLR, but I would like to try it as an SEM.
Thank you very much in advance,
Kind regards
Matteo
Relevant answer
Answer
Hello Matteo,
It wasn't clear why you would be unable to use both an attainment and an importance rating in your model. Maybe you could explain why that restriction exists for your research.
However, it is important to note that difference scores are always less reliable than either of the two constituent scores unless: (a) both measures are perfectly reliable (highly unlikely in your scenario); or (b) the correlation between the measures is zero (probably unlikely as well). If both measures have comparable variance, then the formula for reliability of a difference score is: (mean reliability - rxy) / (1 - rxy), where: mean reliability is the average reliability of the two measures, and rxy is the correlation between measure "x" and measure "y".
One final thought: Imagine two respondents, one with maximum rating on attainment and maximum rating on importance (yielding a zero difference, following your proposed approach); the other with a minimum rating on attainment and minimum rating on importance (again, zero difference). Numerically, these are "identical" cases. However, I'd be hard pressed to argue that they're comparable with respect to the goal in question.
Good luck with your work.
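The reliability formula above is easy to apply; a sketch with illustrative (assumed) reliabilities and correlation:

```python
# Sketch: reliability of a difference score, assuming the two measures
# have comparable variances (formula from the answer above).
def diff_score_reliability(rxx, ryy, rxy):
    """(mean reliability - rxy) / (1 - rxy)."""
    return ((rxx + ryy) / 2 - rxy) / (1 - rxy)

# Two reasonably reliable scales (.85 each) that correlate .60:
print(round(diff_score_reliability(0.85, 0.85, 0.60), 2))  # 0.62
```

Note how two scales with respectable reliability of .85 yield a difference score whose reliability drops to about .62 once they correlate .60, which is exactly the concern raised above.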
• asked a question related to Quantitative Methodology
Question
8 answers
I am collating evaluation methodologies. So, broader methodologies for inclusion in my report are also welcome. Please also indicate the preferred citation for the evaluation methodology.
Relevant answer
Answer
Your request for a convenient research methodology cannot be answered without more information about the context of your research. The choice of a quantitative approach, qualitative approach, or mixed-methods approach is solely a function of what you want to achieve with your research. The gender inequality construct is responsive to multiple theoretical and practical traditions in research. One option for researching gender inequality is to opt for a quantitative approach that is further developed into a survey design (Creswell, 2003). You need to be careful about the imperatives of a survey design in terms of sampling, data collection, and data analysis. You may opt for probability sampling (your best option, if possible) because the representativeness of your population sample is consequential at all levels in survey designs. The instrument for data collection could be a structured questionnaire. As for the data analysis, you may consider all statistical tools with the capacity to estimate correlation, regression, and variance.
This is one option among many. The important thing is to conduct research from the perspective of an established tradition in terms of methodology and design.
Best of luck
• asked a question related to Quantitative Methodology
Question
8 answers
If the sample size is small (less than 100), how can the study be published in a peer-reviewed journal? What could be the justifications for using a small sample size?
Relevant answer
Answer
Two considerations: (1) Many small studies lacking power can be combined to derive inferences with considerable power, so a small study can contribute to knowledge. If there is a deep literature, a Bayesian estimate with an informative prior might be useful; otherwise, a small study might contribute to a meta-analysis. (2) If the literature is nonexistent or shallow, think of the study as being exploratory and hypothesis-generating, more a qualitative than quantitative analysis, and do not oversell the findings.
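A complementary justification is an a priori power analysis showing what the sample can and cannot detect. A sketch with statsmodels (the medium effect size and the 80%/5% targets are illustrative assumptions; G*Power performs the same computation for this design):

```python
# Sketch: a priori power analysis for an independent-samples t-test.
# effect_size is Cohen's d; the chosen values are conventional defaults,
# not recommendations for any particular study.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # medium effect
                                          power=0.80, alpha=0.05)
print(n_per_group)  # roughly 64 participants per group
```

Run the other way around (fixing `nobs1` at your actual sample size and solving for `power`), the same call quantifies how underpowered a small study is for a given effect, which is often the honest justification reviewers want to see.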
• asked a question related to Quantitative Methodology
Question
12 answers
I am doing my master's thesis in risk management with a quantitative methodology. The question is, I am not able to decide which industry to choose and which parameters. Also, the data has to be generated by myself. Could someone help me, please?
Relevant answer
Answer
As other members said, the Monte Carlo (MC) application is very broad in scope. First, consider what kind of project management problem will be solved using MC; after that, you can choose the industry. Once you have decided, you can start to create the system model, describing the scenario (how the system works) and the parameters of input, process, and output in sequence. The most important thing in using MC is the model. You can make assumptions for some conditions to make the model more logical (similar to the actual condition but not exactly the same; exact replication is very difficult). Make sure you can measure the important parameters at each step of your model. Finally, simulate the model to get the result. Validate your model with historical data (please don't generate the data yourself; using a reference database is necessary), make adjustments to reduce deviation, and then you can use the model to predict future situations (at this step you can generate data yourself using what-if analysis based on the industry situation). To simulate the model you can select appropriate software, or you can use Excel for a simple model. I hope this helps.
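The simulation step described above can be sketched in a few lines. Every number below (the task structure, the duration estimates, the 30-day threshold) is an invented illustration, not data:

```python
# Sketch: minimal Monte Carlo simulation of a project schedule.
# Three sequential tasks with triangular duration estimates; we estimate
# the probability of finishing within 30 days.
import random

random.seed(42)

def project_duration():
    # (optimistic, most likely, pessimistic) day estimates per task
    tasks = [(3, 5, 9), (8, 10, 15), (7, 9, 14)]
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

runs = [project_duration() for _ in range(10_000)]
p_on_time = sum(d <= 30 for d in runs) / len(runs)
print(round(p_on_time, 2))
```

In a real study the three-point estimates would come from historical records or expert elicitation, as the answer stresses; only the what-if scenarios would be generated by the researcher.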
• asked a question related to Quantitative Methodology
Question
3 answers
Good morning all,
I am working on a research project. I have decided to use a pre-post design in which I aim to evaluate the role of yoga in secondary schools. As the evaluation method I am using a Likert scale. I will have 3 different conditions, so 3 different samples.
Can I use ANOVA to evaluate the differences between subjects within a condition (within-group) and the differences between the 3 conditions (between-groups)?
I don't think I can use a t-test since I have 3 samples, but I am not sure whether I can use ANOVA with a Likert scale. Can you please help me?
Thank you very much!!
Relevant answer
Answer
Can you clarify if you are analyzing a scale per se, that is values composed of several Likert items, or if you are analyzing individual Likert items? The recommended analyses will likely be different.
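Whichever case applies, both common options are easy to run: a one-way ANOVA is typical for summated scale scores, while Kruskal-Wallis is the usual nonparametric alternative for single ordinal items. A sketch with invented scores for three independent conditions:

```python
# Sketch: comparing three conditions on an outcome. Scores are invented;
# in the study they would be the (averaged or single-item) Likert values.
from scipy.stats import f_oneway, kruskal

cond_a = [3.2, 3.8, 4.0, 3.5, 4.2, 3.9]
cond_b = [2.8, 3.0, 3.3, 2.9, 3.1, 3.4]
cond_c = [4.1, 4.4, 3.9, 4.5, 4.3, 4.0]

f_stat, p_anova = f_oneway(cond_a, cond_b, cond_c)    # parametric
h_stat, p_kw = kruskal(cond_a, cond_b, cond_c)        # nonparametric
print(p_anova < 0.05, p_kw < 0.05)
```

For the pre-post, within-subjects part of the design, a repeated-measures ANOVA (or the Friedman test as its nonparametric counterpart) would be needed instead; neither of the two tests above handles paired data.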
• asked a question related to Quantitative Methodology
Question
16 answers
I am in the middle of questionnaire development and validation processes. I would like to get expert opinion on these processes whether the steps are adequately and correctly done.
1. Item generation
Items were generated through literature review, expert opinion, and target population input. The items were listed exhaustively until saturation.
2. Content validation
The initial item pool was then pre-tested with 10-20 members of the target population to ensure comprehensibility. The items were then reworded based on feedback.
3. Construct validity
a) Bivariate correlation matrix to ensure no item correlations >0.8.
b) Principal axis factoring with varimax rotation. KMO statistic >0.5; Bartlett's test of sphericity significant. Items with communalities below 0.2 were removed one by one in turn. Items with high cross-loadings were removed one by one in turn. Then, items with factor loadings <0.5 were removed one by one in turn. This eventually yielded 17 variables with 6 factors, but 4 factors had only 2 items each. So I ran 1-, 2-, 3-, 4-, 5-, and 6-factor models, and found that the 4-factor model was the most stable (each factor had at least 3 items with factor loadings >0.4). Subsequent analyses are on the 4-factor model only.
c) Next, I ran principal component analysis without rotation on each of the 4 factors, and each resulted in a correlation matrix determinant >0.01, KMO >0.5, Bartlett's test significant, total variance >50%, and no factor loading <0.5.
d) I ran reliability analysis on each of the 4 factors and found Cronbach's alpha >0.7, while overall reliability was 0.8.
e) I ran a bivariate correlation matrix and found no pair correlations >0.5.
f) Finally, I was satisfied and decided to choose the four-factor model with 17 variables (the 4 factors have 5, 4, 4, and 4 items respectively), each factor having at least 3 items with loadings >0.5. Reliability for each factor was >0.7, while overall it was 0.8.
My question is: am I doing this correctly and adequately?
Your response is highly appreciated.
Thanks.
Regards,
Fadhli
Relevant answer
Answer
The attached file may help.
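Step (d) in the question, the reliability analysis, can be sketched as follows; the items-by-respondents matrix is synthetic and only illustrates the computation:

```python
# Sketch: Cronbach's alpha for one factor, computed from a matrix with
# rows = respondents and columns = items. Data are simulated.
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=200)
# five items sharing one common factor plus noise
data = np.column_stack([true_score + rng.normal(scale=0.7, size=200)
                        for _ in range(5)])
print(round(cronbach_alpha(data), 2))
```

This reproduces what SPSS reports as alpha per factor; running it on each of the four factors separately mirrors step (d) above.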
• asked a question related to Quantitative Methodology
Question
11 answers
How do I apply quantitative methodology, and which data should I collect?
Relevant answer
Answer
Basically, you are asking how governmental regulation related to climate change affects economic development?
The big challenge here is knowing what would have happened if those policies had not taken effect. The results could also depend greatly on which part of the economy one looks at. Coal mining and coal use would decline under climate change policies, while wind, solar, hydroelectric, and similar sectors would benefit. While individuals will win or lose, the net effect on society is unclear.
• asked a question related to Quantitative Methodology
Question
7 answers
I'm new to conducting research and was wondering if anybody could enlighten me on what methodology would best suit my topic.
My thesis is about the influence of public outrage on media coverage of rape cases in Morocco.
On the one hand, I would like to analyze the impact of public outrage on social media, and on the other hand, analyze the impact of public outrage on the press (I've narrowed my sample to two newspapers).
Subsequently, I intend to compare both points.
I've thought of a quantitative methodology with a correlational approach or a causal-comparative approach. However, statistics regarding any focal point of my research are nonexistent.
Therefore, I decided to adopt a qualitative methodology instead, using a simple case study, while trying to reformulate my hypothesis since it does not require statistics.
Any advice will be immensely appreciated.
Relevant answer
Answer
Hi Hayley,
You will first need to decide how you will measure 'influence' - and 'outrage'.
You can do this both qualitatively and quantitatively.
If you can specify parameters for influence (e.g., a higher proportion of articles) and for outrage (was there a protest at the time? or some other measure of public outrage), then you could see whether the two measures correlate.
A qualitative analysis would definitely be useful.
Again - how you define your constructs will determine how you measure this - and you can then build a rich case study around this :)
I hope this helps.
• asked a question related to Quantitative Methodology
Question
5 answers
Hi, I would appreciate some guidance on my proposed quantitative methodology and design. As per the uploaded diagram, my model has four independent variables (IVs) and two dependent variables (DVs). All of the variables will be measured using summated scales. I intend to use the averages of the summated scale scores to represent each of the variables. I then intend to analyse the relationships between the IVs and DVs using multiple regression. My research question does not extend to examining the relationships between the two DVs (these are two desirable but different outcomes). If my research question extended to covariance/relationships between the DVs, then I believe SEM would be more appropriate. However, in this case, although I have two DVs in my model, as they are entirely separate I believe I can test this using two runs of multiple regression in SPSS (one for the IVs on DV A and one for the IVs on DV B). I would appreciate any feedback. Many thanks, Nick
Relevant answer
Answer
It's fine to use two distinct models with the same data to test two different research questions. If you want to test the interrelationships between the two dependent variables, then you need to do more, but that depends on how much you expect the two dependent variables to be interrelated. Also, even if you expect the two to be interrelated, testing two independent models might be a good first step (depending on what you wish to test).
• asked a question related to Quantitative Methodology
Question
4 answers
I am working with an SEM model where several of the mediating variables covary. When calculating indirect effects, should the covariance double arrows count as a path? Meaning, if X and Y covary and both affect Z, should I calculate an indirect effect of Y on Z going through X? In other words, should I multiply the coefficient of Y with the covariance of X and Y to calculate the indirect effect of this path?
Relevant answer
Answer
Hi Emil,
covariances, regardless of whether they are between exogenous variables or between the errors of endogenous variables, reflect non-causal connections between the respective variables. Depending on the exact literature and its history, the meaning of covariances may differ (in SEM, they represent a "don't know" situation; in graph theory/DAGs, they represent omitted confounders), but both interpretations clarify that this connection doesn't count as "due to Y" (where Y is the respective exposure). If you think that the covariance between X and Y exists because Y affects X, then substitute the covariance with an effect and incorporate this effect into your indirect effect calculation. Whether this change is legitimate and leads to a correct estimate of the true overall effect of Y depends, of course, on whether the change is correct. As always, you get estimates that are conditional on correct specification. Further, you will see that the fit won't change; hence, both models cannot be differentiated with the current data.
The same is true (as Mark noted) when we talk about covariances between two mediators (say, m1 and m2). If you don't think that m1 causes m2, then an omitted/not-estimated error covariance implies that all of the covariance between m1 and m2 is due to the joint influence of their antecedents. If there are, however, further common causes of both (which are not in the model), you *have to* estimate the error covariance. Otherwise the model won't fit, as the algorithm cannot reproduce the empirical covariance between m1 and m2 (i.e., it is larger than implied by your model). If, however, the covariance between m1 and m2 is NOT due to omitted common causes but due to m1 causing m2, then an error covariance would be the wrong choice and the effect of m1 on Z would be underestimated (because its indirect effect via m2 is not present).
Your initial problem of not recognizing that two seemingly equal predictors (X and Y) may in fact have a causal structure is related to "overcontrol", which refers to controlling for a mediator when estimating a variable's total effect; see
Elwert, F. (2013). Graphical causal models. In S. L. Morgan (Ed.), Handbook of causal analysis for social research. (pp. 245-273). Dordrecht Heidelberg New York London: Springer.
Shrier, I., & Platt, R. W. (2008). Reducing bias through directed acyclic graphs. British Medical Journal, 8(70), 1-15.
HTH
Holger
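A tiny numeric illustration of the point above, with hypothetical standardized coefficients: only directed paths are multiplied into an indirect effect, and a covariance arrow contributes nothing unless it is re-specified as a directed effect.

```python
# Sketch: indirect effects multiply *directed* path coefficients only.
# All numbers are invented standardized estimates.
b_y_m1 = 0.50     # Y -> m1 (directed path)
b_m1_z = 0.40     # m1 -> Z (directed path)
cov_m1_m2 = 0.30  # m1 <-> m2 (covariance: NOT a causal path)

indirect_via_m1 = b_y_m1 * b_m1_z  # legitimate indirect effect of Y on Z
print(indirect_via_m1)             # 0.2

# Multiplying through cov_m1_m2 (e.g., 0.50 * 0.30 * ...) would only be
# valid if the covariance were replaced by a directed effect m1 -> m2.
```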
• asked a question related to Quantitative Methodology
Question
32 answers
Slovin's formula is quite popularly used in my country for determining the sample size for survey research, especially in undergraduate theses in education and the social sciences, maybe because it is easy to use and the computation is based almost solely on the population size. Slovin's formula is given as follows: n = N/(1 + Ne²), where n is the sample size, N is the population size, and e is the margin of error decided by the researcher. However, its misuse is now also a popular subject of research here in my country, and students are usually discouraged from using the formula even though the reasons behind this are not clear enough to them. Perhaps it would be helpful if we could know who Slovin really was and what the bases of his formula were.
Relevant answer
Answer
If you use statistical models (like the t-test, ANOVA, Pearson's r, regression analysis, path analysis, SEM, among others) to test the hypotheses of your study, then I suggest you conduct a statistical power analysis when computing your minimum sample size. Sample size is a function of the following components: effect size, decision errors (Type I and Type II), complexity of the statistical model, among others. Statistical power analysis, or simply power analysis, is finding the optimal combination of the said components. You can use the G*Power software, which is downloadable for free; just search for it on Google.
Slovin's formula has been taught by "irresponsible professors" in Philippine colleges and universities. Sorry for the strong words, but that is true. Because they don't have formal training in statistics, they teach the wrong things to their students. It is the blind leading the blind.
Anyway, you may read a published article titled "On the Misuse of Slovin's Formula".
I am travelling now and am just using my phone to reply to your message. I will email you the said article later. If interested, just email me at johnny.amora@gmail.com and I will send you some materials, including the said article.
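For reference, the formula itself is trivial to compute; a sketch with an arbitrary example population, shown only to make clear how little the formula takes into account (no effect size, no power, no design complexity):

```python
# Sketch: Slovin's formula n = N / (1 + N * e^2).
# N and e below are arbitrary example values, not recommendations.
import math

def slovin(N, e):
    """Sample size for population N at margin of error e, rounded up."""
    return math.ceil(N / (1 + N * e ** 2))

print(slovin(1000, 0.05))  # 286
```

Note that the result depends only on N and e, which is precisely the criticism in the thread: two studies with wildly different designs and expected effects get the same "required" n.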
• asked a question related to Quantitative Methodology
Question
3 answers
What is the statistical and scientific difference between a mediator and a moderator in research?
Relevant answer
Answer
Hi,
to the contrary, mediation is not a statistical concept but a causal concept. A mediator is supposed to transmit the effect of an exposure variable to the outcome. A moderator is a contextual variable on which the occurrence of a causal effect (or statistical relationship, if you broaden the perspective) depends. Depending on the level of the moderator, the effect can be existent or non-existent, vary in strength (going from zero to "strong"), or vary from negative to positive.
That said, a mediator can also be a moderator, when the exposure has both a direct and an indirect effect (via the mediator) and the mediator moderates the strength of the direct effect.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182.
Best,
Holger
• asked a question related to Quantitative Methodology
Question
2 answers
Throughout the processes of directing, advising, and accompanying research projects based on qualitative methodologies, errors are being made that affect the rigour and quality of these studies:
The most common error is following guidelines belonging to quantitative methodology in order to frame problems that clearly call for a qualitative approach.
Among the failures observed are:
1. Using the term "variable" to refer to the problem under study.
2. Requesting the operationalization of variables.
3. Asking about sampling procedures based on the idea of population and sample.
4. Suggesting the application of procedures commonly used for the creation and validation of measurement instruments to the validation of qualitative techniques.
5. Ignoring the inductive and flexible nature of qualitative approaches (e.g., the problem statement, the purposes, etc. may be modified as the study progresses, even in its final stages).
Another problem, related to the above, is that those who advise qualitative projects sometimes have no idea about these methodologies; this is even happening at the postgraduate level, leading the advisor to frame the study with inadequate methods and even to change the topic of study.
To understand the logic of qualitative research, it is fundamental to know its epistemological and ontological assumptions, on which the methods are founded.
There is still great ignorance about qualitative methodology; even so, it is questioned, its findings are called into doubt, and it is dismissed as non-scientific knowledge.
Relevant answer
Answer
Thanks Kelvin.
Wishes .
• asked a question related to Quantitative Methodology
Question
7 answers
First of all, as part of processing the information collected in Big Data database systems, sentiment analysis is used very frequently in analytical processes conducted for the needs of scientific research. Sentiment analysis answers questions such as: what is the recognizability of, awareness of, and opinion on a specific topic among users of particular Internet portals, websites containing comments from Internet users, and social media portals?
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question: What kinds of research dominate in the analysis of data collected in Big Data database systems?
Please reply
I invite you to the discussion
Thank you very much
Dear Colleagues and Friends from RG
The key aspects and determinants of applications of data processing technologies in Big Data database systems are described in the following publications:
I invite you to discussion and cooperation.
Thank you very much
Best wishes
Relevant answer
Answer
I follow answers
best regards
• asked a question related to Quantitative Methodology
Question
4 answers
In which sectors of the economy, and in which types of companies and corporations, will technologies for analyzing large collections of information in Big Data database systems develop most dynamically?
The Big Data database technology is finding more and more applications in business.
Multi-criteria processing of huge data sets collected in Big Data database systems allows preparing reports in a relatively short time according to given criteria.
The report development time depends mainly on the computing power of Big Data servers.
Complex economic and financial analyses, risk management and similar processes for determining the economic and financial situation of business entities are increasingly carried out on computerized analytical platforms of the Business Intelligence type.
Perhaps in the future, artificial intelligence will also be involved in this field of analytics.
In some countries, IT companies have been operating for several years, developing the Big Data database technology for commercial and business purposes.
It is only a matter of time before these various analytical and database technologies are combined in cloud computing.
In view of the above, the current question is: in which sectors of the economy, and in which types of companies and corporations, will technologies for analyzing large collections of information in Big Data database systems be developed most dynamically?
Please, answer, comments. I invite you to the discussion.
Relevant answer
Answer
Biomedical engineering.
Hope this helps,
Matt
• asked a question related to Quantitative Methodology
Question
9 answers
Quantitative methodology entails an empiricist ontology and objectivist epistemology; while qualitative methodology preaches constructionist ontology and interpretivist epistemology; both seem extreme positions regarding reality ‘out there’ and reality ‘socially constructed‘. What do you suggest?
Relevant answer
Answer
I would argue for emphasizing your research questions and the appropriate methods for addressing them. Meanwhile, leave metaphysics to the philosophers.
• asked a question related to Quantitative Methodology
Question
2 answers
Hi,
I'm preparing a PhD proposal to study an egocentric network in a primary health care setting in a low-middle income country, fed by a name-generator survey. I have 2 questions:
1. Any suggestions on the minimum sample size needed to maintain validity?
2. What is the estimated time needed to do an (egocentric) social network analysis of a sample of size X?
Any suggestions, references?
Many thanks!
Virginia
Relevant answer
Thanks Víctor, will try to check it in the future.
• asked a question related to Quantitative Methodology
Question
5 answers
Dear members
I have ranking data on a 1-10 scale, whereby participants ranked 10 reasons for moving abroad (from most preferred reason to least).
Can I use the Mann-Whitney U test here to compare the reasons for moving abroad by gender, given that the data are already ranks?
2. If yes, would we have to test each of the 10 reasons separately against gender and calculate the test,
or is there a way to create an index of the 10 reasons for moving abroad and then apply the test?
Thanks
Relevant answer
Answer
Dear Kanika,
Ranked data are not independent. A better test would be Friedman's ANOVA.
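As a sketch of what Friedman's test computes: each participant's within-subject ranks are summed per reason, and the statistic is referred to a chi-square distribution with k − 1 degrees of freedom. The toy data below are invented, and the sketch omits the tie correction:

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for a list of rows, where each
    row holds one participant's ranks (1..k) of the k conditions.
    The tie correction is omitted in this sketch."""
    n = len(data)     # number of participants
    k = len(data[0])  # number of conditions (reasons)
    col_sums = [sum(row[j] for row in data) for j in range(k)]
    q = 12.0 * sum(s * s for s in col_sums) / (n * k * (k + 1)) - 3 * n * (k + 1)
    return q  # refer to a chi-square distribution with k - 1 df

# Hypothetical toy data: 3 participants rank 3 reasons identically,
# which gives the maximum statistic for n = 3, k = 3.
print(friedman_statistic([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # 6.0
```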
• asked a question related to Quantitative Methodology
Question
15 answers
I am a Master of Research Student trying to learn about the proper way of conducting research. I am currently using a mixed-method approach for my methodology. My supervisors asked me to explain my sample size and the response rate. I was advised to interview 4 managers and distribute 50 surveys to their employees. Do we actually go for a sample size by instinct? How can I find out if it is correct? Would examiners find my sample size problematic? Thank you for your opinions.
Relevant answer
Answer
Sample size determination is one of the gray areas in research. For a qualitative study, you should continue to include participants until you reach a saturation point, a point where any additional participant hardly generates new information. In your case, you took 4 managers; if you get sufficient information from them, that could suffice. For quantitative studies, different methods, including various formulas, are suggested. The total size of the target population, budget, and the homogeneity and accessibility of your participants should also be considered. To say that 40-50 employees suffice, you need to take these factors into consideration. Please see Krejcie and Morgan's sample size determination method (1970) for further information.
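The Krejcie and Morgan (1970) formula mentioned above can be computed directly; the population size used below is a made-up example, and the defaults assume a 95% confidence level (chi-square = 3.841), P = 0.5, and a 5% margin of error:

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Krejcie & Morgan (1970) sample size:
    s = chi2 * N * P * (1 - P) / (d^2 * (N - 1) + chi2 * P * (1 - P))"""
    s = chi2 * N * P * (1 - P) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))
    return math.ceil(s)

# Hypothetical target population of 1000 employees:
print(krejcie_morgan(1000))  # 278, matching the published table
```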
• asked a question related to Quantitative Methodology
Question
6 answers
Hello,
What is the most appropriate quantitative methodology to use for measuring two independent variables impact on one dependent variable?
My two independent are World Bank Governance Index and Ibrahim African Governance Index. My dependent is Socioeconomic Performance.
Many thanks in advance.
Mohamed
Relevant answer
Answer
If you do not have multiple measurements for any of your variables, then SEM is not appropriate, and you should use regression instead.
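A regression with two predictors, as suggested above, can be fitted in closed form from sums of squares and cross-products (the normal equations). The sketch below is a generic illustration, not the actual governance data, and the tiny data set is invented:

```python
def ols_two_predictors(y, x1, x2):
    """Closed-form OLS for y = b0 + b1*x1 + b2*x2 + e."""
    n = len(y)
    mx1, mx2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((a - mx2) ** 2 for a in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - mx2) * (b - my) for a, b in zip(x2, y))
    denom = s11 * s22 - s12 ** 2  # must be nonzero (no perfect collinearity)
    b1 = (s1y * s22 - s2y * s12) / denom
    b2 = (s2y * s11 - s1y * s12) / denom
    b0 = my - b1 * mx1 - b2 * mx2
    return b0, b1, b2

# Invented data generated exactly from y = 1 + 2*x1 + 3*x2:
x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]
y = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]
print(ols_two_predictors(y, x1, x2))  # recovers (1.0, 2.0, 3.0)
```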
• asked a question related to Quantitative Methodology
Question
23 answers
I have read a number of text on use of control variables in SEM and there seems no general agreement. I am currently thinking of the inclusion of moderator and mediator in a model. Is it appropriate to have control variables included in addition to the moderator and mediator variables, most especially when the process of making the model fit the data sample is iterative and in the process of modification of the model the independent variables may assume different role than was initially intended?
Relevant answer
Answer
Hi Bruce.
In fact, everything depends on the theory you use (statistics is always a tool and can be totally meaningless if you don't have a good theory behind the model).
If the theory says a construct A is a 2nd-order one, composed of constructs B and C (let's call them "subconstructs" from now on; this is more a poetic licence than a real term), and your results confirm this, that's OK. But if the model doesn't fit under this approach, you should try the subconstructs directly in place of the 2nd-order construct.
Attached is a paper I published where the model has some 2nd-order constructs, but I decided to check whether this was the best approach and tried some 1st-order alternatives (actually 3 alternative models). In the end I confirmed that the 2nd-order model was the best.
Conversely, if your theory treats the constructs as 1st-order (this seems to be your case), there is no reason to try a 2nd-order approach, if only because what would this 2nd-order construct be?
If your variables were presented in some papers as a 2nd-order construct, then you should try it; if not, forget about it and keep the 1st-order approach.
See my paper, remembering that, in my case, theory says there are some 2nd-order constructs and I decided to check whether this was the best approach, much more because in the Social Sciences things are sometimes not straightforward.
Hope this can help you.
Any doubt, feel free to call me back.
Regards.
• asked a question related to Quantitative Methodology
Question
4 answers
I am currently developing a quantitative methodology for measuring the environmental impacts of households. Please feel free to get in touch if you are involved in similar research. It would be great to exchange ideas and techniques.
Relevant answer
Answer
Thanks for providing the link to your paper abstract. If possible, I would like to take a look.
My current plan is to use the CO2e emissions of something as close as possible to the average UK single-family household for comparison. I see this as giving some useful context to the data I will gather from the cohousing communities which I hope to work with. However, I do wonder if I should try and find a more like-for-like comparison by matching certain variables, as you say. For example, cohousing residents tend to come from a certain socio-economic background, and as socio-economic background is strongly correlated with CO2e emissions, perhaps I should be taking this into consideration when deciding upon what secondary data on single-family households to use as comparison.
• asked a question related to Quantitative Methodology
Question
8 answers
I am currently researching the issues affecting food-manufacturing SMEs in Tanzania. The main aim of the study is to examine the factors that result in supply chain issues and provide solutions as to how these can be avoided. From secondary sources, that is, previous research, a number of issues have been reported. So I am thinking of sending out questionnaires to these SMEs to gather information on the issues, and I also want to interview supply chain professionals and local and international bodies about the efforts they have taken or are taking to ensure that these issues are solved. In this case, would a quantitative methodology be suitable, a qualitative one, or both?
Relevant answer
Answer
Hi Jesca,
Thanks for raising the question. I understand that you would like to analyse the factors affecting "food manufacturing SMEs in Tanzania." In this case, I would use both qualitative and quantitative methods at different stages of the research. Here is a proposed plan for the analysis (for example, testing a proposed model with SEM or another multivariate technique):
1) Secondary research: Perform a thorough literature review to identify factors affecting food-manufacturing SMEs in Tanzania or in other countries.
2) Once you identify the factors, look for operationalized definitions of those constructs. If existing scales are available, you can use them, with suitable modifications if any, for your target country.
3) If you do not find any suitable scale, you can develop a new scale to operationalize the construct, but this is a lengthy process.
4) Qualitative research (expert interviews): Even if you find questions from previous studies to measure the target factors (constructs), it is still essential to establish the content/face validity of those questions in your context by seeking expert opinion (you can seek views from a few SCM professionals).
5) Quantitative test: Once you fine-tune your questionnaire based on that feedback (step 4), you can conduct a pilot test to check the reliability and validity of the measurement instruments. If the pilot test gives satisfactory results, you can move to the final study.
6) Quantitative test: Now you can conduct quantitative analysis to validate your hypotheses.
Hence, based on your research objective, you can use both qualitative and quantitative methods.
Hope this helps.
Thank you,
Gobinda
• asked a question related to Quantitative Methodology
Question
20 answers
Hi, I am conducting research on a large sample (n=427). My data are not normal, so I used the Mann-Whitney U test to compare several groups in the sample (n1=213, n2=214), (n1=158, n2=180). The results are (U=18030.5, n1=213, n2=214, p<0.01, two-tailed), (U=13689.5, n1=158, n2=180, p<0.05, two-tailed). Even though I already have the results from SPSS, I still want to know the critical value of Mann-Whitney for large samples (n>100), because I couldn't find it in any resources (most tables only go up to about 20 cases per group). So, please kindly share it with me if you know about it! Thank you
Relevant answer
Answer
Hi Edita,
I concur with the preceding responses. Your sample is sufficiently large to permit some form of parametric analysis.
Have a great day!
--Adrian
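For completeness: there is no printed critical-value table for large samples because U is then referred to its normal approximation, z = (U − n1·n2/2) / sqrt(n1·n2·(n1 + n2 + 1)/12), compared against ±1.96 (α = 0.05, two-tailed) or ±2.576 (α = 0.01, two-tailed). A sketch using the first set of numbers quoted in the question (no correction for ties):

```python
import math

def mann_whitney_z(u, n1, n2):
    """Large-sample normal approximation for the Mann-Whitney U
    statistic (no correction for ties)."""
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (u - mu) / sigma

z = mann_whitney_z(18030.5, 213, 214)
print(round(z, 2))  # -3.73, so |z| > 2.576, consistent with p < 0.01
```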
• asked a question related to Quantitative Methodology
Question
6 answers
Hi, if we establish the construct validity of a questionnaire developed by someone else, which model-fit indices should be checked for the employed questionnaire at the confirmatory factor analysis stage?
GFI, CFI, IFI, AGFI, ...? Which one? All of them?
Thanks
Relevant answer
Answer
Hi Rokhsareh
Report RMSEA and CFI, that's well enough.
jk
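For context, RMSEA can be computed directly from the model chi-square, its degrees of freedom, and the sample size; one common formulation uses n − 1 in the denominator (some software uses n instead). The fit values below are hypothetical, not from any actual model:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Some software divides by n rather than n - 1."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical fit: chi-square = 150 on df = 100 with n = 282 respondents.
print(round(rmsea(150, 100, 282), 3))  # 0.042, below the common 0.06 cutoff
```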
• asked a question related to Quantitative Methodology
Question
4 answers
Dear Professionals,
My research topic for M.Phil. is "Social Security in the Manufacturing Industry in India". I am interested in quantitative methodology as I am from a mathematical and statistical background. Now my issues are:
1) How can I frame research questions?
2) How can I set hypotheses (both H0 and Ha), and how many hypotheses can I set?
3) What is the best method of analysis to test my hypotheses?
4) What kind of research questions can I set? Please give examples.
Kindly share your views; I am extremely thankful to you all in advance. Your views with clear explanation and illustration are of great assistance to me.
Warm Regards
Adhikari V V Subba Rao,
Research Scholar,
Tata Institute of Social Sciences(TISS)-Mumbai
Relevant answer
Answer
Social security is all-embracing and so is a very wide concept. What does it mean and how is it measured? You may need to narrow its scope to apply it to India, as Indian social security has its peculiarities. The scope will have to be narrowed down further to apply it to the manufacturing industry in India. This will help in the formulation of the research questions and hypotheses.
• asked a question related to Quantitative Methodology
Question
6 answers
Hello, so I am doing correlational research on adolescents in 5 areas of Jakarta. I divided the sample by gender, age and educational background (12-17 years old (junior high school to senior high school) and 18-21 years old (college students)). I also control for the socioeconomic status (SES) of the adolescents' families. My participants will be adolescents from middle-low SES families.
Relevant answer
Answer
Very few people use true "convenience" sampling because that implies taking just about anyone. At a minimum, you already have eligibility criteria, and you have set up the possibility of systematic comparisons, so you are doing purposive sampling by definition. What you don't have is random selection within your subgroups of interest, which means your statistical results will not generalize to the population.
• asked a question related to Quantitative Methodology
Question
14 answers
My collaborators and I are going to conduct a network analysis, and we would like to learn as much as possible about this method before starting to code our data. Thank you!
Relevant answer
Answer
• asked a question related to Quantitative Methodology
Question
4 answers
So I am doing correlational research on adolescents in 5 areas of Jakarta. I divided the sample by gender, age and educational background (12-17 years old (junior high school to senior high school) and 18-21 years old (college students)), so that my research can be representative for each area of Jakarta (East Jakarta, West Jakarta, South Jakarta, Central Jakarta, and North Jakarta).
Relevant answer
Answer
Sample size calculation is studied for probabilistic sampling. However, if one makes the (uncertain) assumption that chance reproduces probabilistic sampling, then the formulae given for simple random sampling with replacement (for example) could serve as an approximation for nonprobabilistic quota sampling. This is not completely scientific, but it is an approximation for when probabilistic sampling is impossible.
• asked a question related to Quantitative Methodology
Question
4 answers
I have a question regarding calculating the simple effects of a linear regression model with interaction terms. Using STATA I apply the following command (regress FHNIT D1_EU##c.C_Inflat), and I get results showing that the interaction term between the two independent variables is significant.
Being that the interaction term is significant, I want to look at the simple slopes in order to analyze the interaction of the two variables at specific values. I have tried playing with the commands, however, I cannot seem to get STATA to produce the simple effects. Perhaps this function is not supported. Please let me know how I may be able to produce the simple effects through STATA (or even SPSS).
Relevant answer
Answer
Hi Eltion,
I have written an ado that generates plots for interaction terms of multiplicative regressions. Just type in ssc install interactplot and look up the help file via help interactplot. I provided some simple and some more advanced examples in the help file, which should help you get used to the command.
Alternatively, you can create marginal effect plots in Stata by using the margins and marginsplot command.
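Outside Stata, the simple slopes and their standard errors can also be computed by hand from the estimates: the slope of X at moderator value m is b1 + b3·m, with SE = sqrt(var(b1) + m²·var(b3) + 2m·cov(b1, b3)). The coefficients and (co)variances below are hypothetical, not output from the model in the question:

```python
import math

def simple_slope(b1, b3, m, var_b1, var_b3, cov_b13):
    """Slope of the focal predictor at moderator value m, with its
    standard error, for a model y = b0 + b1*x + b2*m + b3*x*m."""
    slope = b1 + b3 * m
    se = math.sqrt(var_b1 + m ** 2 * var_b3 + 2 * m * cov_b13)
    return slope, se

# Hypothetical estimates: b1 = 0.20, b3 = 0.15, and their
# variance-covariance entries.
for m in (-1.0, 0.0, 1.0):
    slope, se = simple_slope(0.20, 0.15, m, 0.004, 0.002, -0.001)
    print(m, round(slope, 2), round(se, 4))
```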
• asked a question related to Quantitative Methodology
Question
5 answers
Dear all,
What are the conceptual and estimation differences between
a) Capital expenditure,
b) Capital formation,
c) Investment ?
Can one use any of the above as a proxy for the others when data for the others are not available?
Or any methods to derive one from the others?
Thanks in advance
Relevant answer
Answer
In a strict sense, capital expenditure, capital formation and investment differ slightly.
Capital formation is investment in a newly produced asset. This term tends to match figures at the national level.
Capital expenditure is investment in a newly produced asset as well as in an existing asset. This term tends to match figures at the sectoral level within a nation; if the analysis is done for households or businesses, capital expenditure will be more relevant.
Investment is a kind of acquisition of assets. The asset can be financial or non-financial. Capital formation and capital expenditure are more relevant to non-financial produced assets.
Therefore depending on the analytical scope, data should be carefully selected, in my opinion.
• asked a question related to Quantitative Methodology
Question
3 answers
Hello and a Happy New Year!
I would like to estimate the relations between the Big 5 and the Dark Triad (narcissism, Machiavellianism and psychopathy). My sample size is n = 282 and I have 123 items for 11 scales.
Should I use a CFA with the MLR/WLSMV estimator, or should I use Pearson correlations with a Bonferroni-Holm correction?
What should I do if my CFA has poor model fit (chi-square, RMSEA, SRMR, CFI, TLI, and so on)? Can I interpret the correlations between the latent variables, or should I switch to classical correlation analysis?
Thank you very very much and best regards. :-)
Relevant answer
Answer
- The sample size seems adequate for conducting CFA (see e.g. Todd Little's Longitudinal CFA book in which he shows that above N = 120 estimates stabilize in similar models).
- If there are at least 5 categories on each item, use MLR (see http://psych.colorado.edu/~willcutt/pdfs/Rhemtulla_2012.pdf).
- Poor model fit: Look for the reasons (examine residual covariances/correlations and modification indices). If there are obvious reasons, modify the model; if not, leave it as it is. Latent correlations from CFA should be interpreted in any case. Switching to classical correlation analysis wouldn't circumvent the problem of a badly fitting model, it would just aggravate it.
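The Bonferroni-Holm correction mentioned in the question is simple to implement: sort the p-values, compare the i-th smallest against α/(m − i + 1), and stop at the first failure. A minimal sketch with invented p-values (not results from any study):

```python
def holm_reject(pvalues, alpha=0.05):
    """Holm step-down procedure: returns a reject/retain flag for
    each p-value, in the original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values are retained
    return reject

# Invented p-values from four correlation tests:
print(holm_reject([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```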
• asked a question related to Quantitative Methodology
Question
8 answers
I want to know what teachers think about their school training for an article.
Relevant answer
Answer
The decision whether to use a quantitative or qualitative technique is based on the research question; it is not the free choice of the researcher. However, if the research questions guide you to opt for a quantitative technique, then there are different methods such as experiments, case studies, surveys, etc. (see the link given below). Perception is more accurately measured in a questionnaire-based survey study. To adopt/adapt/develop a questionnaire, you need to understand the variables of your study, operationalize them, and make a decision about the selection of the questionnaire and scale.
• asked a question related to Quantitative Methodology
Question
4 answers
I am a junior criminal justice major doing a research proposal for my class in quantitative methodology. I need raw datasets to do my research from, and have been unable to find any. Suggestions?
Relevant answer
Answer
It is not something that I am aware of in the UK. However, there is currently a transition in police officer entry standards in the UK, driven by the College of Policing; they may have some background research. However, as policing functions differ hugely, I agree with Ben that measuring performance will be very difficult, so much so that key opinion formers differ enormously on how to do so. Good luck, Sean
• asked a question related to Quantitative Methodology
Question
19 answers
Your opinion on which methods to apply in education, qualitative or quantitative, is equally useful.
Relevant answer
Answer
I agree that the key is the kind of question(s) that you want to answer, but it is also important to consider the kind of skills you want to develop. Often, it is hard to achieve a high level of expertise in both qualitative and quantitative methods, so if you have preference for one or the other, that can be an important part of your decision.
Also, don't overlook the possibility of mixed methods research, where you combine both qualitative and quantitative methods in the same project.
• asked a question related to Quantitative Methodology
Question
4 answers
Could you please let me know what nonparametric tests for multivariate regression analysis there are and what software would be useful for the analyses? I am thinking of administering two different questionnaires whose items are ordinal data using a 7 point-Likert scale and of using multivariate regression analysis. Thank you for your help in advance.
Relevant answer
If your Likert scales are ordinal rather than interval, then you need to consider nonparametric analyses. That said, some scales may be interval and some may be ordinal, and while I recommend being conservative and using non-parametric analyses, other researchers do use parametric analyses for a mixed data-set. However, I think the best practical advice is to find a peer-reviewed journal article that uses your methodology, or a similar one, in your field and then use this standard as a starting point to organize your analysis work.
If using the SPSS software package I have attached a link to an SPSS manual which is very widely used, and which will contain the decision making steps for statistical test choice and the step-by-step instructions for computing, interpreting, and writing up your analysis section for your results.
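For ordinal Likert items, rank-based statistics are a natural choice; for example, Spearman's correlation is just Pearson's correlation computed on the ranks (with ties given average ranks). A minimal sketch; the Likert responses below are invented:

```python
def ranks(values):
    """Average ranks, 1-based; ties share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented 7-point Likert responses with a perfectly monotone relationship:
print(spearman([1, 3, 3, 5, 7], [2, 4, 4, 6, 7]))  # 1.0
```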
• asked a question related to Quantitative Methodology
Question
3 answers
I am conducting research on evaluative language in political tweets. The alternative hypothesis in my study can be accepted only if three conditions are met. Specifically, I have three features: x and y have to be ≥ 1.20, while z has to be < 0.80. Moreover, the hypothesis applies to four different sub-corpora. How do I calculate the p-value? Do I have to calculate it for each of the 12 elements?
Relevant answer
Answer
Thanks for your reply. I'll try to be more specific.
In my study, I have collected tweets from 4 parties, creating one corpus composed of 4 sub-corpora. Then I used software to annotate the tweets following the Appraisal framework, which consists of three main features: graduation, attitude and engagement. The four examined parties are expected to exploit graduation of meaning and attitude expression more frequently than other parties (I am also using reference corpora), while at the same time showing less propensity to engage with different points of view. Hence, the hypotheses are the following:
H0: There is no significant difference in graduation, attitude and engagement instances between populist and democratic parties' tweets.
H1: Populist parties' tweets present a higher number of graduation and attitude occurrences, together with a lower frequency of engagement instances, when compared to reference corpora.
The hypothesis was tested at a 1% level of significance (i.e., α = 0.01). In order to reject the null hypothesis, the relative frequency values for graduation and attitude have to be ≥ 1.20, while the same value for engagement has to be < 0.80. These numbers are relative frequencies automatically calculated by the software, which compares occurrences of the three features in the corpora of interest and the reference corpora.
• asked a question related to Quantitative Methodology
Question
5 answers
In a qualitatively driven mixed methods study, can the research questions appear in this order? 1. Qualitative RQ, 2. Quantitative RQ, 3.Mixed Methods RQ. Creswell ( 2008) recommends starting with a mixed method RQ, but in my study, it works better to start with a qualitative RQ ( the one which looks at perceptions and beliefs of individuals), construct a questionnaire based on the themes and concepts emerging from the qualitative phase and finally, perform the data integration to answer the mixed methods RQ. Is that a feasible approach?
Relevant answer
Answer
The following papers should be helpful:
• Collins, K. M. and O'Cathain, A. (2009). Introduction: Ten points about mixed methods research to be considered by the novice researcher. International Journal of Multiple Research Approaches, 3, 1, pp. 2-7.
• Johnson, R. B. and Onwuegbuzie, A. J. (2004). Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher, 33, 7, pp. 14-26.
• Onwuegbuzie, A. J. and Leech, N. L. (2006). Linking Research Questions to Mixed Methods Data Analysis Procedures. The Qualitative Report, 11, 3, pp. 474-498.
• asked a question related to Quantitative Methodology
Question
7 answers
What is a good statistical approach to testing for common method variance?
Relevant answer
Answer
Please let me know if this reference (freely downloadable) is helpful to you:
Podsakoff, P. M., MacKenzie, S. B., & Lee, J.-Y. (Indiana University), and Podsakoff, N. P. (University of Florida): Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies.
Abstract: Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for ...
Dennis
Dennis Mazur
• asked a question related to Quantitative Methodology
Question
3 answers
I am trying to understand how prejudice reduction is measured.
Relevant answer
Answer
Please let me know if these references/sites are useful to you:
1. Lokshin, M.: Maximum likelihood estimation of endogenous switching regression models. In this model, a switching equation sorts individuals over two different states (with one regime observed). The econometric problem of fitting a model with endogenous switching arises in a variety of settings in labor economics, the modeling of housing demand, and the modeling of markets in disequilibrium.
2. Switching regression models and estimation. Outline: switching regression models; model setting; motivation; estimation (two-stage method); variations; censored models; models with self-selectivity ...
3. Powers, D. A. (1993): Endogenous switching regression models with limited dependent variables. Social research often involves estimating the effects of a categorical treatment on a dependent outcome variable. Endogenous switching regression models are natural extensions of classical experimental designs, which allow tests of assumptions about the exogeneity of treatment effects from survey data.
Dennis
Dennis Mazur
• asked a question related to Quantitative Methodology
Question
4 answers
Dear colleagues. I have a challenging question and cannot find the answer in the literature yet. As you know, in social sciences, the face validity of the scales is supposed to be checked by expert judges. In reality in business science, they rarely are (Hardesty and Bearden, 2004).
Question: when they are, who are the expert judges? My hypothesis is that they are very frequently peers (researchers, professors, PhD students) and rarely "end-users", like consumers or patients etc.
Could anyone help me confirm or refute that hypothesis?
Thanks.
Relevant answer
Answer
Good question. I think this shows one of the limits of face validity. Who are the experts?
If it does not seem "valid" to different groups of users (are users a better group than experts?), that does not mean it is not valid in other senses of the word validity (e.g., Borsboom's causal validity, Kane's how-it-is-used validity, Meehl and Cronbach, etc.). One problem with experts is that they are not always right. This is why the US courts no longer use just general acceptance (Frye) for accepting expert testimony (the Daubert trilogy of cases addressed this). Galileo was not helped by "experts."
• asked a question related to Quantitative Methodology
Question
5 answers
I have developed a conceptual model of critical success factors of digital libraries in Iran using a grounded theory approach. For the second phase of the study I am going to confirm the model with a statistical analysis. I prepared a questionnaire based on the model asking for the experts' opinions. What statistical analysis method can I use for analyzing the results of the survey? Do you know of a similar study?
Relevant answer
Answer
It sounds like you have a mixed methods design of the form QUAL --> quant. Depending on your purposes, this can be quite useful, as I have argued in the attached publication.
Still, I have to agree with Daniel Wright that the term "confirm" can be problematic, especially when you are dealing with the results from qualitative research. So, you should be sensitive to the preferences of your audience when you choose the language you use.
• asked a question related to Quantitative Methodology
Question
4 answers
Hi,
Does anyone know of, or have a reference for, what the standardised factor loadings (highlighted in the attached) should be when performing confirmatory factor analysis? Is it the same as the rule of thumb for factor loadings when performing an exploratory factor analysis (>.4)?
Thanks,
Emma.
Relevant answer
Answer
Hi Emma,
"Common variance, or the variance accounted for by the factor, which is estimated on the basis of variance shared with other indicators in the analysis; and (2) unique variance, which is a combination of reliable variance that is specifc to the indicator (i.e., systematic factors that inﬂuence only one indicator) and random error variance (i.e., measurement error or unreliability in the indicator)." (Brown, 2015).
it can be said that If the factor loading is 0.75, observed variable explains the latent variable variance of (0.75^2=0,56) %56. It is good measure. So if your factor loading is 0.40, it explains %16 variance. As a cut point 0.33 factor loading can be given. Beacuse of it explains %10 variance.
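The arithmetic behind these rules of thumb can be captured in a one-line helper (a trivial sketch; the percentages simply follow from squaring the standardized loading):

```python
def variance_explained(loading):
    """Share of an indicator's variance accounted for by the factor:
    the squared standardized loading (its communality in a one-factor model)."""
    return loading ** 2

print(variance_explained(0.75))  # 0.5625, i.e. ~56% of the variance
print(variance_explained(0.40))  # 0.16, i.e. 16%
print(variance_explained(0.33))  # ~0.109, i.e. roughly the 10% threshold
```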
• asked a question related to Quantitative Methodology
Question
1 answer
You may be doing a preliminary investigation to test your hypothesis that CG is unable to . . . ,
OR giving a detailed description of what the practices are in the worst and the best cases?
Are you interested in the question of why this is the case, or just in how true your hypothesis is?
Relevant answer
Answer
Not a survey. We are testing whether the CG code issued in 2002 by the Capital Markets Authority (CMA) was able to deter earnings management (EM) in Kenya for non-finance listed firms. We develop a CG index based on the provisions of the CG code and apply the Jones model to determine discretionary accruals, regarded as a proxy for EM. Our contribution is based on applying a composite CG index, contrary to existing studies. Our study also provides support for the initiatives by the CMA to enact a new code and other changes at the NSE. Our interest is to assess the effectiveness of the CG code issued in 2002 under conditions described as a "developing country situation". We could not find CG data for firms in other EA states, but our findings can be generalized to common law developing countries.
The project is closed. Look out for a special issue on Africa of the Journal of Accounting in Emerging Economies that will be out any time now.
• asked a question related to Quantitative Methodology
Question
21 answers
If a researcher wants to establish the construct validity of an existing questionnaire or scale in a different population (country), what would be the most appropriate factor analysis to perform (EFA or CFA)? The literature seems to be inconsistent, and some people suggest performing both. Please do feel free to share your views.
Relevant answer
Answer
General rule: EFA > used for instruments (or scales) that have never been tested before (for their validity and reliability). CFA > used for instruments (or scales) that have been tested before (for their validity and reliability).
I argue that when you translate an existing instrument (or scale) that has been tested before for its validity and reliability, in order to use it in another country (different language), this instrument (scale) becomes "new". So, you need to perform EFA.
Moreover, I argue that when you use an existing instrument (or scale) that has been tested before for its validity and reliability in the same country (same language), but in another sector or research setting, this instrument (scale) also becomes "new", since it is being tested (used) on a very different population / sample. So, you also need to perform EFA.
So, when should we use CFA? My point of view is that CFA should be used in empirical studies that use instruments (or scales) that have been tested in many previous studies (instruments or scales whose validity and reliability have been extensively tested). Then, our only job is to confirm that these instruments (or scales) are valid and reliable in our research setting.
On a personal note, I tend to perform both analyses: first EFA, then CFA. I consider that the validity and reliability of my instrument are enhanced by this dual approach.
• asked a question related to Quantitative Methodology
Question
4 answers
This transformation is necessary to consider studies that provide non-parametric descriptive values for a meta-analytic study.
Relevant answer
Answer
Thank you very much for the help. Unfortunately, when authors describe descriptive data by median and IQR it is usually because their scores are not normally distributed.
• asked a question related to Quantitative Methodology
Question
5 answers
Hi,
I want to conduct a randomized controlled trial to examine the effect of hypnosis on health-related quality of life, fatigue, anxiety, depression, and insomnia in women with breast cancer during chemotherapy. I did not find any previous similar studies. Therefore, is it ok to conduct this study even though I don't have evidence? Also, I do not have an effect size with which to calculate the sample size. How can I estimate the sample size in this situation? Also, I want to know whether it is ok to divide participants into a hypnosis group and a control group (or do I need to adopt another method)? I want your valuable suggestions.
Thank you so much.
Relevant answer
Answer
Hi Saraswati
Peter's advice is excellent. If you decided to power up for a medium effect, then you would need 64 patients in each study arm for a 2-arm trial with 80% power and alpha=0.05. If as Peter suggested, you go for a 3-arm trial, then you would need 69 patients per study arm to allow for (say) two pairwise comparisons and with alpha set to 0.04 to adjust for multiple comparisons. Although choosing a medium effect size might seem a bit arbitrary, a difference of 10 on a 100-point scale (eg the SF-36) has often been found to be of clinical importance. Further, a 100-point scale is likely to have an SD of around 20. This gives an effect size of 0.5 (or medium), which is what we have powered for.
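These figures can be checked with standard power-analysis software; below is a minimal sketch, assuming statsmodels is available (the effect size, power, and alpha values are those discussed in this answer):

```python
import math

from statsmodels.stats.power import TTestIndPower

# Two-arm trial powered for a medium effect (Cohen's d = 0.5),
# 80% power, two-sided alpha = 0.05
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(math.ceil(n_per_arm))  # 64 patients per study arm

# With alpha tightened to 0.04 to adjust for multiple comparisons,
# the required n per arm rises slightly
n_adjusted = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.04)
print(math.ceil(n_adjusted))
```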
Adrian
• asked a question related to Quantitative Methodology
Question
8 answers
I have conducted exploratory factor analysis (EFA) before CFA. I am looking for academic justification for doing so. The scales which I have adapted to measure constructs had no reliability issues. I am looking for any justification, along with a reference to a paper or book, showing that exploratory factor analysis can be run before confirmatory factor analysis.
Relevant answer
Answer
Hi Mazhar Ali,
Statistical procedures are always debated, and in science we have many opinions!
EFA is justified if your scale is "virgin" and not yet supported by theory (e.g., generated from interviews or only from researchers' opinions). The name "exploratory" means exploration of the data, used when we do not have a theoretical factor model. But we never apply EFA and CFA to the same sample!
We usually split the sample into two random groups. With the first, we do the EFA, and with the second we do the CFA.
EFA "...eventually leads to factors which the investigator then interprets. It tends to be stepwise (datadriven) rather than direct (theory-driven)" (Nunnally & Bernstein, 1994, p. 450) — the direct, theory-driven approach being CFA.
See:
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill. [chapter 11: Exploratory and Confirmatory Analysis p, 450 - ] for a very important explanation.
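The split-sample approach can be sketched as follows. This is a minimal illustration with simulated data in which scikit-learn's FactorAnalysis stands in for the EFA step; the CFA step on the held-out half would require SEM software such as lavaan in R or semopy in Python:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 10))  # placeholder for 400 respondents x 10 items

# Randomly split the sample into two halves
idx = rng.permutation(len(X))
half_a, half_b = X[idx[:200]], X[idx[200:]]

# Exploratory step on the first half only
efa = FactorAnalysis(n_components=2).fit(half_a)
loadings = efa.components_.T  # items x factors loading matrix
print(loadings.shape)         # (10, 2)

# half_b is then reserved for the confirmatory step (CFA) in SEM software,
# so that the model is not tested on the data that generated it.
```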
hope this helps.
Dirceu
• asked a question related to Quantitative Methodology
Question
6 answers
Because there are papers favoring both the relationships. Please provide your take regarding the directionality of this relationship.
Relevant answer
Answer
Mr. Marco V. Rossi,
I agree with your answer. Yes, it is a circular process; it depends very much upon the context.
• asked a question related to Quantitative Methodology
Question
2 answers
I'm conducting a Monte Carlo simulation and will calculate relative efficiency. Are there any agreed-upon cutoffs for acceptable relative efficiency?
Relevant answer
Answer
Which code will you use?
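As a general note, relative efficiency has no agreed-upon cutoff; it is usually reported as the ratio of the variances (or MSEs) of two estimators. A minimal Monte Carlo sketch, under the illustrative assumption of normal data, comparing the sample median against the mean (whose asymptotic relative efficiency is 2/pi, about 0.64):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 101, 20_000

# Simulate `reps` samples of size n from a standard normal
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Relative efficiency of the median w.r.t. the mean: ratio of variances
re = means.var() / medians.var()
print(round(re, 2))  # close to the asymptotic value 2/pi ~ 0.64
```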
• asked a question related to Quantitative Methodology
Question
2 answers
Hello.
I want to conduct an impact assessment study from a randomised sample with no baseline data. Having read the literature, I became confused, as I realised that some authors tend to combine the two approaches (ESR and PSM). Which is the best model to use?
Your comments will be useful please.
Thanks.
Relevant answer
Answer
Thank you.
• asked a question related to Quantitative Methodology
Question
5 answers
Is there a good review of methodological issues around blended longitudinal and cross-sectional studies? Specifically, I am interested in the issues around the change in sample composition between these groups over time.
Relevant answer
Answer
Thank you Kelvyn, this looks very relevant.
• asked a question related to Quantitative Methodology
Question
1 answer
I have 40 interview transcripts from 4 cases and am planning to use fsQCA for the data analysis. I am not sure: before we get into the fsQCA process, can I code the data using the pattern-matching method in NVivo?
Thank you in advance :)
Relevant answer
Answer
Dear Sawitree, all that cs/mv/fsQCA needs is a table of suitable data. How these data were produced in the first place is, strictly speaking, not part of QCA itself (this process is called "calibration"; for instance, the UN's Human Development Index uses calibration, but not QCA).
If I understand your question correctly, you'd like to search for particular collocations of words in your interview transcripts and assign corresponding values (e.g., find "hate*", assign "0"; find "love*", assign "1").
What I doubt is that fsQCA is the most appropriate solution for your purposes. By using fsQCA, you essentially lose all categories in your analysis between the two most extreme ones (e.g., everything between "hate*" and "love*"). Given the qualitative nuances of interview data, mvQCA seems to be the much better choice in your case. In mvQCA, all intermediate categories (e.g., "dislike", "indifference", "affection") are preserved in the analysis.
You can easily run mvQCA in my QCApro package (http://www.alrik-thiem.net/software/). Why you should not use the Tosmana software (the software most often used for mvQCA in empirical research) is explained here: https://youtu.be/n8k4OQY5mHg.
Best wishes,
Alrik
• asked a question related to Quantitative Methodology
Question
9 answers
I am doing an ISM on the barriers towards SSCM. There has been a debate regarding how many experts should be consulted during this study. Like the minimum number of experts. Please suggest.
Relevant answer
Answer
There is nothing in the literature to specify exactly how many experts as a minimum or maximum. In my opinion, the most important issue is to identify who is an expert in the field. From observing similar issues in the publications, 3-5 experts could be suitable.
good luck
• asked a question related to Quantitative Methodology
Question
5 answers
Please guide me: in my research I am comparing two groups on the basis of provision and non-provision of career guidance and counselling services. I have used two scales: the CDDQ, a 9-point rating scale, and a checklist with dichotomous (yes/no) options that I developed myself. Can I compare a dichotomous scale with a Likert scale? If not, which method is appropriate for analysis?
Relevant answer
Answer
You can compare the means of the continuous scale scores of those responding "yes" with those responding "no" for each dichotomous question. Depending on whether you have a normal distribution, you can do this with a t-test or a Mann-Whitney U test, the former being the parametric one. IMHO, this seems to be the most appealing way to do it.
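The comparison described above can be sketched as follows (hypothetical made-up data; assumes SciPy is available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical continuous scale scores split by a yes/no checklist item
yes_scores = rng.normal(6.0, 1.5, size=40)
no_scores = rng.normal(5.2, 1.5, size=35)

# Parametric comparison (assumes approximate normality in each group)
t_stat, t_p = stats.ttest_ind(yes_scores, no_scores)

# Non-parametric alternative when normality is doubtful
u_stat, u_p = stats.mannwhitneyu(yes_scores, no_scores, alternative="two-sided")

print(f"t-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")
```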
• asked a question related to Quantitative Methodology
Question
2 answers
Anyone who can find me published material on invariance tests of the student engagement scale in the higher education context? It would be much appreciated if you can send me published reports on measurement invariance of the NSSE and AUSSE.
Relevant answer
Answer
Dear Hassan,
I really thank you so much for your generous support,
Cheers
• asked a question related to Quantitative Methodology
Question
14 answers
Hello, I am investigating differences in attitudes towards cyberbullying in post-Brexit UK between 2 groups: those who voted leave and those who voted remain. My questionnaire (Likert scale) consists of 14 questions only.
What method of statistical data analysis shall I use, please?
TIA
Relevant answer
Answer
There is a difference between single, Likert-scored questions, which should be treated as ordinal, versus multi-item scales composed of Likert-scored variables, which can be treated as continuous variables.
Because there have been so many questions here about this topic, I have put together a set of resources:
• asked a question related to Quantitative Methodology
Question
7 answers
Hi,
I am a Master of Nursing Science student. Currently, I am conducting research in patients with breast cancer. I have chosen social support as an independent variable to examine its predictive power for health-related quality of life. To measure social support, I used the Modified Medical Outcomes Study Social Support Survey (eight items). The scoring manual didn't exactly explain the cut-off scores for poor, fair, and good social support measured on a 0-100 point scale. But I referred to a previous similar study (I only found one) which categorized social support as poor (less than 60), fair (60-79), and good (more than 80), and explained that it was categorized according to Bloom's theory. But there is no citation to search further. I tried many times to confirm this by searching, but I could not find the article that used the same score-based categorization. So, I kindly request you to let me know how it can be done, or to post an article here if you have an idea.
Thank you.
Relevant answer
Answer
I don't understand why you want to create cut-offs for a continuous variable, which would involve a loss of information. If this variable is typically analyzed using techniques such as correlations and regression, then it doesn't make sense to chop it up.
• asked a question related to Quantitative Methodology
Question
5 answers
Use of factor analysis in questionnaire development or validation
Relevant answer
Answer
Dear Jeroen Nawijn, Can I get the fulltext?
Thank you Saurav.
Dear Peter, can I get the link or fulltext?
• asked a question related to Quantitative Methodology
Question
1 answer
I wish to moderate alphas of energy 5.4MeV from an Americium 241 source to 2MeV.
What type of moderator would be suited for this?
Cheers
Shri
Relevant answer
Answer
Hi, I think that because of the Bragg peak behavior and the overall very short range of alpha particles, it will be quite difficult to get the exact energy value. However, you can try to use a certain distance in air to slow the alpha particles down. Here you can find the 'Bragg curve of 5.49 MeV alpha particles in air': https://en.wikipedia.org/wiki/File:Bragg_Curve_for_Alphas_in_Air.png
and the corresponding formulas in this article:
I hope that's helpful.
Best regards
Tobias
• asked a question related to Quantitative Methodology
Question
3 answers
I studied sleep quality with the PSQI. When reporting the results, which is better: to report them as continuous data (global PSQI score) or as categorical data (normal and abnormal sleep quality)?
If using the categorized form, what global PSQI score cut-off point should be considered?
Relevant answer
Answer
I would suggest using a categorical variable, such as "optimal" (PSQI < 6), "borderline" (PSQI 6-7), and "poor" (PSQI >7). See Lund et al. (2010), which uses these categories. If you use the global score as a continuous variable, you will get too large of a range to analyze the data. For example, poor sleepers can have a PSQI of 19, which would be considered an outlier using a continuous variable. You could always just use "poor" or "optimal/normal" as 2 categories, if you are sure to explicitly operationalize your variables.
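If categorical reporting is chosen despite the loss of information, cutoffs like those above can be applied with a simple binning step. A minimal sketch assuming pandas is available (the score values are made up; the bins follow the "optimal" < 6, "borderline" 6-7, "poor" > 7 scheme):

```python
import pandas as pd

# Global PSQI scores range from 0 to 21
scores = pd.Series([3, 5, 6, 7, 8, 12, 19])

# Bin edges are exclusive on the left: (-1, 5], (5, 7], (7, 21]
quality = pd.cut(scores, bins=[-1, 5, 7, 21],
                 labels=["optimal", "borderline", "poor"])
print(quality.tolist())
# ['optimal', 'optimal', 'borderline', 'borderline', 'poor', 'poor', 'poor']
```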
• asked a question related to Quantitative Methodology
Question
6 answers
I will be happy if any one explain clearly.
Relevant answer
Answer
Dear Palash,
I appreciate your interest in our work and thank you for your comments.
An ROC curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.
The diagnostic performance of a test is the accuracy of a test to discriminate diseased cases from normal controls.
ROC curves can also be used to compare the diagnostic performance of two or more laboratory tests.
ROC Curves plot the true positive rate (sensitivity) against the false positive rate (1-specificity) for the different possible cutpoints of a diagnostic test. Each point on the ROC curve represents a sensitivity/specificity pair.
The closer the curve follows the left side border and the top border, the more accurate the test.
The closer the curve is to the 45-degree diagonal, the less accurate the test.
Hope this helps.
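As a practical note, the quantities described above can be computed with scikit-learn; a minimal sketch with made-up test results:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical test scores: 1 = diseased, 0 = healthy control
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9])

# Sensitivity (tpr) vs 1-specificity (fpr) at each possible cutpoint
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Area under the curve: 1.0 is perfect, 0.5 is the 45-degree diagonal
auc = roc_auc_score(y_true, y_score)
print(round(auc, 3))  # 0.938
```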
• asked a question related to Quantitative Methodology
Question
4 answers
Is the CONSORT checklist appropriate for reporting a same sample pre- and post-test study? or are there other checklists out there for this specific research design? Thanks
Relevant answer
Answer
No, it is applicable only to randomized clinical trials comparing 2 different groups.
• asked a question related to Quantitative Methodology
Question
3 answers
(TJSQ) by Lester (1982).
Relevant answer
Answer
Hi Shahril,
I have the questionnaire. Feel free to mail me:
Regards,
Josh.
• asked a question related to Quantitative Methodology