Quantitative Analysis - Science method
In chemistry, quantitative analysis is the determination of the absolute or relative abundance (often expressed as a concentration) of one, several or all particular substance(s) present in a sample.
Questions related to Quantitative Analysis
I have six compounds that I tested for antioxidant activity using the DPPH assay and for anticancer activity on five cell lines, so I have two groups of data:
1. Antioxidant activity data
2. Anticancer activity data (5 cancer cell lines)
Each data set consists of 3 replications. Which correlation test is the most appropriate to determine whether there is a relationship between the two activities?
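As a rough illustration, such a correlation could be checked in Python with SciPy once the replicate means per compound are available; with only six compounds, a rank-based test is often the safer choice. All numbers below are placeholders, not real data.

```python
# Sketch: correlating antioxidant activity (DPPH) with anticancer activity
# (one cell line) across six compounds, using the mean of 3 replications each.
# The values are placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr

dpph = np.array([12.1, 45.3, 30.2, 8.7, 22.5, 60.1])     # e.g. IC50 per compound
cancer = np.array([15.4, 50.2, 28.9, 10.1, 25.3, 55.7])  # e.g. IC50 on one cell line

r, p_pearson = pearsonr(dpph, cancer)       # linear association, assumes normality
rho, p_spearman = spearmanr(dpph, cancer)   # rank-based, safer for n = 6

print(f"Pearson r = {r:.3f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_spearman:.3f})")
```
The same check would be repeated for each of the five cell lines, with a correction for multiple comparisons if needed.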
Hello seniors, I hope you are doing well.
Recently I have read some very good research articles in which the datasets were taken from V-Dem, Polity, and Freedom House. Although the authors shared links to the supplementary datasets and briefly described how they analyzed them in SPSS or R, I could not understand or replicate the findings, probably because I am not very good at quantitative data analysis.
So I want to know how I could better understand the analysis of datasets such as V-Dem. Is there any good online course, lecture series, conference video, or book?
Article links
Any help would be appreciated.
Thanks in anticipation.
Dear all,
I am using the ImodPoly algorithm in Python to fit baselines to very noisy fluorescence data. However, in a few instances I notice that changing the polynomial degree, or switching to the arPLS algorithm, fits my data better. If I am running many data sets and my goal is quantitative analysis and comparison, do I have to use the same fitting algorithm for every data set, or can I mix and match algorithms to get better fits?
Thanks
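For what it is worth, one way to keep the comparison defensible is to fit every trace with both algorithms, judge them with a common criterion (e.g. the residual in regions known to be signal-free), and then report a single algorithm consistently. A minimal sketch, assuming the pybaselines package is being used (a custom ImodPoly implementation would look different), with placeholder parameter values:

```python
# Sketch: fitting one noisy fluorescence trace with both ImodPoly and arPLS,
# assuming the pybaselines package; poly_order and lam are placeholders.
import numpy as np
from pybaselines import Baseline

def fit_both_baselines(x, y, poly_order=5, lam=1e5):
    fitter = Baseline(x_data=x)
    bkg_imodpoly, _ = fitter.imodpoly(y, poly_order=poly_order)
    bkg_arpls, _ = fitter.arpls(y, lam=lam)
    return bkg_imodpoly, bkg_arpls

# Judge the fits on regions known to contain no signal, e.g.:
# rms = np.sqrt(np.mean((y[mask] - bkg_imodpoly[mask]) ** 2))
```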
I plan to develop a semi-structured assessment tool and then validate it on a relatively small clinical sample (below 50). The research committee has asked me to consider factor analysis.
In this context, I wanted to know whether anyone has used regularized factor analysis, which is recommended for small sample sizes, for tool validation?
As we know, the atomic and molecular emission lines of laser-induced breakdown spectra can be used for quantitative analysis, classification, etc. Does the continuous radiation, which is usually subtracted in quantitative analysis, contain any useful physical information? Are there any applications for continuous radiation in LIBS?
Hello everyone! Currently I'm busy with finalizing my master's thesis and due to a high drop-out rate in my intervention I was not able to conduct the initial analysis to test one of my hypotheses. Instead of doing a quantitative analysis, I have analyzed the answers to the evaluation questions after each part of the intervention. The purpose of the evaluation questions was only to evaluate how the participant perceived the intervention and not specifically related to the central construct I am examining in my paper (Psychological Flexibility), whereas the initial quantitative analysis would test whether the scores on the Psy-flex (measure for Psychological Flexibility) would improve after the intervention (compared to the first measurement).
Since I modified the analysis for this part, I had the following questions:
1. Can I still formulate the initial hypothesis in the introduction and write down in the data-analysis that a qualitative analysis is conducted due to small sample size?
- My supervisor says this is not possible and that I should formulate a hypothesis for the qualitative analysis in the introduction (while in this case it is exploratory right?). According to her I should exclude this initial hypothesis from the paper, although this was part of the initial plan.
2. Is the qualitative part not based on an exploratory research design, and am I therefore not obliged to formulate a hypothesis?
- The purpose of the evaluation questions was to evaluate each part of the intervention. I did not construct specific questions for the specific skills of Psychological Flexibility (as in an interview with themes, coding, etc.). According to my supervisor, specific hypotheses should be formulated for it in the introduction, since otherwise I cannot say that the study is based on a mixed-method design (is this true? As long as I report which analyses I conduct in the data-analysis section, even when modified, I can still say that it is based on a mixed-method design, right?). IMPORTANT NOTE: I already did a quantitative analysis before the intervention procedure, so I thought that the combination of quantitative and qualitative elements can be seen as a mixed-method design.
I hope this explanation is clear for you to give me some advice on how to approach this. If not, ask me some questions and I will try to elaborate on it.
Thank you in advance!
I have read some articles about the statistical robustness of SmartPLS. However, I am not sure about the appropriateness of SmartPLS in the case of a survey study involving a representative sample with an adequate sample size. Any suggestions?
Thank you!
If I use SmartPLS to test the structural model, how can I measure the Goodness of Fit Index (GFI)? What indices do I need to observe to validate the research model?
I was given a role play as a financial analyst and the task is to perform a presentation on how to estimate the growth rate of a company by doing quantitative analysis using the company's financial statements.
Hence, which variables from the financial statements should I use to estimate and calculate the projected growth rate?
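One common starting point is the sustainable growth rate (ROE multiplied by the retention ratio), cross-checked against the historical revenue CAGR. A rough sketch of the arithmetic in Python, with hypothetical figures standing in for the income statement, balance sheet, and dividend record:

```python
# Sketch: two simple growth estimates from financial-statement items.
# All input figures are hypothetical placeholders.

net_income = 120.0            # income statement
dividends_paid = 40.0         # dividend record / cash flow statement
shareholders_equity = 800.0   # balance sheet

retention_ratio = 1 - dividends_paid / net_income
roe = net_income / shareholders_equity
sustainable_growth = roe * retention_ratio          # g = ROE * retention ratio
print(f"Sustainable growth rate: {sustainable_growth:.1%}")

# Historical check: compound annual growth rate (CAGR) of revenue
revenue_start, revenue_end, years = 950.0, 1200.0, 4
cagr = (revenue_end / revenue_start) ** (1 / years) - 1
print(f"Revenue CAGR over {years} years: {cagr:.1%}")
```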
I need an all-in-one software package, other than R, that can handle quantitative analysis and is easy to operate.
There is a problem in my research with quantitative analysis of XRD patterns of glass-crystalline materials (including glass-ceramics and geopolymers).
Thanks to the discussion (https://www.researchgate.net/post/Does-anyone-know-how-to-quantify-C-S-H-in-cementitious-materials-using-XRD) I found the RieCalc program, which calculates rescaled phase fractions (including amorphous phases).
Unfortunately, I've faced two difficulties:
1. This program "could not be found" at http://www.geoscienze.unipd.it.
2. I'm not sure about its suitability for the analysis of geopolymers and glass-based materials.
Are there any other options to find a program for automatic quantitative analysis of crystalline and amorphous phases, and where can I find them?
I want to do a quantitative analysis of vitamin A acetate raw material using an HPLC method, but my sample does not dissolve in many organic solvents such as methanol, ethanol, chloroform, and hexane.
Is there any recommended solvent that I can use to dissolve it?
I have added 4 control variables, namely firm size, board size, industry, and firm age. Do I have to collect data for the control variables? My research topic is the impact of gender diversity on firm performance.
Let us suppose that we have an intervention, for example technology integration in the science classroom. Can we study which mediators could affect the results of the intervention, for example learning motivation? Can we study which moderators could affect the results of the intervention? And why?
For example, can we study how gender mediates the influence of the intervention on learning motivation, or would it be better to consider the interaction of gender and the intervention?
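A note and a hedged sketch: because an intervention cannot change a participant's gender, gender is usually treated as a moderator (tested via an interaction term) rather than a mediator, whereas a variable the intervention can plausibly change, such as learning motivation, is the typical mediator candidate. The example below tests moderation with statsmodels; the file and column names are hypothetical.

```python
# Sketch: testing gender as a moderator of an intervention effect on learning
# motivation via an interaction term. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("classroom_study.csv")  # hypothetical: motivation, intervention (0/1), gender

moderation = smf.ols("motivation ~ intervention * C(gender)", data=df).fit()
print(moderation.summary())  # the interaction term tests whether the effect differs by gender

# A mediation claim (intervention -> motivation -> outcome) needs a different
# model, e.g. a product-of-coefficients / bootstrap approach, not an interaction.
```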
Respected Researchers,
I am working on urban sustainability, and my final objective is to propose a framework for urban sustainability. I have already used EFA and now want to use CFA, but I have learned that SEM has two approaches, i.e., CB-SEM and PLS-SEM. If I do not use CB-SEM or PLS-SEM, can I use just CFA in my study? If so, please recommend procedures for conducting the CFA.
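If it helps, a CFA can be run on its own, since it is essentially the measurement part of CB-SEM. A minimal sketch in Python, assuming the semopy package and hypothetical factor/item names; lavaan in R or AMOS would be more common tools for the same model:

```python
# Sketch: a stand-alone CFA, assuming the semopy package; the factor and
# item names below are hypothetical placeholders.
import pandas as pd
import semopy

model_desc = """
Environmental =~ env1 + env2 + env3
Social        =~ soc1 + soc2 + soc3
Economic      =~ eco1 + eco2 + eco3
"""

data = pd.read_csv("urban_sustainability_items.csv")  # hypothetical item-level data
cfa = semopy.Model(model_desc)
cfa.fit(data)
print(cfa.inspect())            # loadings and (co)variances
print(semopy.calc_stats(cfa))   # fit indices such as CFI, TLI, RMSEA
```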
I am conducting a study on the retail e-commerce segment in Nigeria and planning to do quantitative analysis using PLS-SEM [to help test mediating effects] but the population is less than 20 and so is the sample size.
The data type is Primary, collected through survey.
1. Can I do an acceptable/adequate analysis with a sample size of 13 respondents?
2. If no, what are my options to proceed considering population size is limited?
3. Would switching to a qualitative/descriptive approach be an option, and why?
4. What data analysis tool would be most suitable in my situation, considering I need to test for the mediating effects of variables?
Could you please answer by stating your student-academic status?
For example:
Dr. I'm a student (Social Psychology-Competence stage).
Hi,
I am looking for the latest population data from authentic sources/government authorities for any time between 2012 and 2021. It is for my study area, the South 24 Parganas district, West Bengal. Any leads/contacts for acquiring it would be a great help for my research work.
Thank you.
Hello everyone,
Can the independent variable be a constant?
I want to run a MANOVA.
My independent variable is constant: a budget for the year.
The dependent variables are opinions on the impact of the budget on the quality of education (DV 1), on research (DV 2), on the diploma rate (DV 3), and on the employment rate (DV 4).
I ran the MANOVA, but I get a lot of errors even after trying to normalize the data.
SPSS gives me this message: Box's Test of Equality of Covariance Matrices is not computed because there are fewer than two nonempty cells.
I tried discriminant analysis, but I get the same errors.
My data come from a survey, which means I cannot change them. What other methods can I use to analyse my data?
I am trying to analyse the impact of quantified budgeting (historical budgeting, because it is the only method applied in my country; the independent variable is the historical budget, a continuous variable) on some universities' performance indicators.
Thank you for the help
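For what it is worth, MANOVA needs an independent variable that actually varies across cases (at least two non-empty groups), which is why SPSS cannot compute Box's M here. If the budget differs across universities or years it can be binned into groups (or treated as a continuous predictor in a multivariate regression); if it truly takes a single value for every case, no between-group test is possible. A hedged sketch with statsmodels and hypothetical column names:

```python
# Sketch: MANOVA with a grouping variable derived from a budget that varies
# across cases. Column names are hypothetical; this only works when the
# budget is not constant.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("survey.csv")  # hypothetical: budget, edu_quality, research, diploma_rate, employment_rate
df["budget_level"] = pd.qcut(df["budget"], q=3, labels=["low", "mid", "high"])

manova = MANOVA.from_formula(
    "edu_quality + research + diploma_rate + employment_rate ~ budget_level",
    data=df,
)
print(manova.mv_test())
```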
Hello Stata users. Please help.
When running Cronbach's Alpha test for internal consistency...
I have some missing values in the data set coded as 999.
Are they included in the calculations, or does Stata dismiss them by default?
In other words, do I have to set some option in Stata before running the Cronbach's alpha calculation so that the software dismisses the missing values?
Could anybody clarify? Many thanks in advance.
Hi, all,
Recently, I carried out some catalytic experiments on 1,2-dichlorobenzene, and I want to analyze the organic products from the catalytic process and make a mass balance based on the amount of carbon. I have obtained some results with the help of a Tenax adsorber/air pocket and a GC/MS with TD injection.
I can identify the abundant organic products with GC/MS; however, I do not know their accurate amounts, because I have no standard substances.
So the question is: how can I carry out quantitative analysis of the organic products from the catalytic process of 1,2-dichlorobenzene?
Spatial and temporal variations in tea cultivation and production in the major tea-growing regions of Sri Lanka: this is my research topic. Under this topic, I hope to use a mixed methodology, both qualitative and quantitative. Can you give me an idea of how to write the philosophy of this research? I mean, can I say it follows a positivist approach, because the major part is quantitative analysis?
My method for a quantitative analysis is ion-pair chromatography with a gradient method. I have tried to control every parameter that can affect reproducibility, but my analysis is still not reproducible. How can I obtain a reproducible analysis with this method?
I am writing an article in which I determine the effect of import substitution. Should I include the diagnostic test results, or are the short-run ECM and long-run bounds test results enough to interpret?
Imagine you have measured a series of curves, e.g. spectra of a dissolved compound of various concentrations, films with different thicknesses etc. Before you can retrieve the data, someone meddles with it, i.e. multiplies it with an unknown factor (or, alternatively, assume that your empty channel spectrum changes within the series), large enough so that it matters, but small enough that the data still seem to make sense.
Do you know a method that not only indicates that the curves have been altered, but also allows you to retrieve the original/unflawed data?
Would you please share your kind opinion regarding this issue?
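One possibly useful observation: a global multiplicative factor cancels out if each curve is normalised to a quantity that scales in the same way, for example the total integrated intensity or an internal reference band, so relative shapes and ratios can be recovered even though the absolute scale cannot be restored without an internal standard. A minimal sketch (numpy only, assuming all curves share the same x grid; the band limits are hypothetical):

```python
# Sketch: normalisation that cancels an unknown global multiplicative factor.
# The absolute scale itself cannot be recovered without an internal standard.
import numpy as np

def normalise_total(y):
    # dividing by the summed intensity cancels any factor c in c*y
    return y / y.sum()

def normalise_reference_band(x, y, lo, hi):
    # divide by the intensity summed over a reference region (lo, hi are
    # hypothetical band limits unaffected by the quantity of interest)
    mask = (x >= lo) & (x <= hi)
    return y / y[mask].sum()
```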
I am analyzing a nationally representative survey, and I wonder whether recoding categorical variables like gender or education would mess up the weights.
Each row of the data has a weight, a stratum, and a PSU. Does recoding the categorical variables affect the results of my regression analysis?
I'm planning to conduct a Quantitative analysis.
Any recommendations for software that can be used for the quantitative analysis?
Hello everyone,
I have one independent quantitative variable and two dependent quantitative variables. Is MANCOVA suitable for quantitative analysis in this case?
Hello everyone, I just need help choosing the right test for my data. I have 2 quantitative dependent variables and one qualitative variable with a single modality (normally the variable has 4 modalities, but only one of them is applicable in Morocco; I am speaking about the funding method of research). Can I use a MANCOVA test in this case or not?
If not, what test should I use, and why?
- 2 dependent quantitative variables
- 1 independent qualitative variable with one level (or one modality)
Thank you
I want to count these fragments for an image analysis of autolysis. Please suggest good software; this is critical for my work.
Generally, we can use HPLC for both qualitative and quantitative analysis.
What is the main difference between using it with a PDA detector and with an MS detector?
What are the advantages of MS over PDA and vice versa?
If I use a sample size of 320 with a purposive sampling technique, how can I validate the sample size for generalizing the results? Would 320 responses be statistically sufficient to generalize the results?
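For context, the usual sample-size arithmetic (e.g. Cochran's formula for estimating a proportion) assumes probability sampling, so with purposive sampling the number of responses alone does not establish generalisability; it mainly shows the estimate would be precise enough if the sample were random. A small sketch of that arithmetic:

```python
# Sketch: Cochran's formula for estimating a proportion, with an optional
# finite-population correction. Assumes probability (random) sampling.
import math

def cochran_n(z=1.96, p=0.5, e=0.05, population=None):
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)        # infinite-population size
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)     # finite-population correction
    return math.ceil(n0)

print(cochran_n())                 # about 385 for 95% confidence, +/-5% margin
print(cochran_n(population=2000))  # smaller for a finite population
```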
Dear RG community,
I have analysed several flat sections of romanechite (Ba,H2O)2Mn5O10 by EPMA to obtain quantitative analyses. Romanechite would normally have about 15-17 wt.% BaO, but instead we get 2-3 wt.% BaO (see the analysis below), which is a huge difference. We obtain roughly the same results using EDS analysis of the same samples (4-5 wt.% Ba). Finally, when we did ICP-MS (whole-rock sample of romanechite), the Ba content is "normal" with 14 wt.% (14,000 ppm) Ba.
I am wondering why we have such a big discrepancy when using EPMA. Maybe it is due to the standard (benitoite) we used or to the ZAF correction method. If you have any ideas...
Thank you !
The standard:
Element Standard Mass(%) ZAF Fac. Z A F
BaO Benitoite 37.08 0.9158 1.0078 0.9087 1
Analysed romanechite:
No. SiO2 Al2O3 BaO As2O5 Fe2O3 MnO2 Na2O MgO CaO K2O PbO SrO ZnO WO3 CuO CoO Total
Sample1 0.051 0.358 2.854 0.123 0.232 62.265 0.01 0.004 0.2 0.02 0 0.166 0 1.937 0.078 0.089 68.387
I have heard some academics argue that the t-test can only be used for hypothesis testing, and that it is too weak a tool to be used to analyse a specific objective in academic research. For example, is the t-test an appropriate analytical tool to determine the effect of credit on farm output?
Dear all!
I hope you had a wonderful weekend. At the moment I am in the later stages of planning a (hopefully good) quantitative article in entrepreneurship. I will use connections in the industry where I am active (to do the dirty work of actually convincing people to participate), and my question is: what do you deem to be an acceptable sample size for a questionnaire about decision making, connecting into other areas?
It is a relatively small business community in our country, so the sample size cannot be 1,000; if it had to be, there would have to be a discussion about expanding the geographical area. I know what the literature says, but what is your experience regarding minimum sample sizes in journals of different levels? Needless to say, I am a qualitative researcher seeking to make an excursion into enemy territory :-)
Thank you so much for your input in advance.
Best wishes Henrik
I've been asked to give feedback on a study that used a survey with the option for comments in each question. Some participants decided to share additional observations and thoughts for some questions. I've found that these additional comments carry rich qualitative data so I'm suggesting they analyze them and integrate them into the results (since they're currently not).
However, I'm not sure how to justify this methodologically (or even whether it is appropriate). Even though these comments add insightful information about the participants' perceptions, they come from only a portion of the participants.
Options I'm currently considering:
(1) Use a common theme analysis for the qualitative data and relabel the study from quantitative to mixed-methods.
(2) Still define it as quantitative, but mention that some qualitative data was gathered as optional comments and analysed as well (would this be methodologically correct?).
(3) Do not use the qualitative data for the results, since it doesn't come from all participants.
Any thoughts?
Thank you very much in advance!
Power and gas retailers are exposed to a variety of risks when selling to domestic customers. Many of these risks arise from the fact that customers are offered a fixed price, while the retailer must purchase the gas and power to supply those customers from the wholesale markets. The risk associated with offering fixed-price contracts is exacerbated by correlations between demand and market prices. For example, during a cold spell gas demand increases and wholesale prices tend to rise, whilst during milder weather demand falls and wholesale prices drop.
Hello, all,
I'm trying to design a questionnaire that gets at two constructs for educational practitioners: 1) beliefs about teaching and learning, and 2) interpretations of an educational reform.
For the latter, I'm not trying to get at attitudes (what do you think of the reform?) but rather the beliefs, logics, assumptions that practitioners think inhere in the reform (why is the reform happening/what is it about/what does it propose, educationally?)
Now, I realize this is a bit confusing. As an example, I'd hope to juxtapose something like: "students should learn/learn best through memorization". For (1), the respondent would answer "I believe this is true." For (2), though, I'm struggling -- something like "I think this is the case for/intention of X new reform."
I've been trying to find examples of such comparative surveys focusing on perceptions/interpretations of innovation, reform, change, etc. But I'm coming up short.
Would it be fair to frame interpretations, cognitively, as "expectations" ("with X new reform, I expect this to be true") or as social demands ("I think my school wants me to do this")?
Any thoughts are much appreciated!
(PS - To complicate this further, I intend to make a similar survey for students!)
What programs can be used for structural equation modeling?
I have used AMOS previously, but the trial period has run out, so are there any other programs available for this purpose that are free of charge?
Thanks
How can I validate a questionnaire for a small sample of hospitals' senior executive managers?
Hello everyone
-I performed a systematic review for the strategic KPIs that are most used and important worldwide.
-Then, I developed a questionnaire in which I asked the senior managers at 15 hospitals to rate these items based on their importance and their performance at that hospital on a scale of 0-10 (Quantitative data).
-The sample size is 30 because the population is small (however, it is an important one to my research).
-How can I perform construct validation for the 46 items, especially since EFA and CFA will not be suitable for such a small sample?
-These 45 items can be classified into 6 components based on literature (such as the financial, the managerial, the customer, etc..)
-Bootstrapping in validation was not recommended.
-I found a good article with a close idea but they only performed face and content validity:
Ravaghi H, Heidarpour P, Mohseni M, Rafiei S. Senior managers’ viewpoints toward challenges of implementing clinical governance: a national study in Iran. International Journal of Health Policy and Management 2013; 1: 295–299.
-Do you recommend using EFA for each component separately (each would contain around 5-9 items), treating each as a separate scale and defining its sub-components? I tried this option and it gave good results and sample adequacy, but I am not sure whether this is acceptable. If you can think of other options, I would be thankful if you could enlighten me.
Just getting a gauge from various sides of the community regarding which statistical analysis method is underrated.
Thank you.
My question concerns the rather unclear point of error correlation that many scholars encounter while conducting their SEM (structural equation modeling) analyses. It is quite common for scholars to report correlating error terms to enhance the overall goodness of fit of their models. Hermida (2015), for instance, provided an in-depth analysis of this issue and pointed out that there are many cases within social science studies where researchers do not provide appropriate justification for the error correlation. I have read in Harrington (2008) that measurement errors can be the result of similar or closely related meanings of the words and phrases in the statements that participants are asked to assess. Another option to justify such a correlation concerns longitudinal studies and an a priori justification for the error terms, which might be based on the nature of the study variables.
In my personal case, I have two items with Modification indices above 20.
lhs op rhs mi epc sepc.lv sepc.all sepc.nox
12 item1 ~~ item2 25.788 0.471 0.471 0.476 0.476
After correlating the errors, the model fit looks great (the model consists of 5 first-order latent factors and 2 second-order latent factors; n = 168; around 23 items). However, I am concerned with how to justify the error-term correlation. In my case the wording of the two items is very similar: "With other students in English language class I feel supported" (item 1) and "With other students in English language class I feel supported" (item 2) (Likert scale from 1 to 7). According to Harrington (2008), that is enough to justify the correlation between errors.
However, I would appreciate any comments on whether the similar wording of the questions seems a sufficient justification for correlating the errors.
Any further real-life examples of item/question wording, or articles on the same topic, are also much appreciated.
I would like to determine the proportion of every constituent of my essential oil.
I ran GC-MS and did the qualitative analysis by identifying the different constituents present using an n-alkane series. Now I am looking for a way to determine the proportions of my constituents.
Any help in this regard will be appreciated.
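A common first answer is peak-area normalisation (area %): the relative proportion of each constituent is its peak area divided by the sum of all peak areas. This gives relative, not absolute, amounts; response factors or an internal standard would be needed for true quantitation. A tiny sketch with placeholder values:

```python
# Sketch: relative proportions by peak-area normalisation (area %).
# Compound names and areas are placeholders.
peak_areas = {
    "limonene": 1.52e6,
    "linalool": 3.90e5,
    "geraniol": 2.10e5,
}

total = sum(peak_areas.values())
for compound, area in peak_areas.items():
    print(f"{compound}: {100 * area / total:.1f} %")
```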
I would like to ask how to interpret both naturally log-transformed and square-root-transformed data after getting the results from the SPSS analysis.
Should I report both values as below, or are the back-transformed means with the CIs enough?
For square-root-transformed data:
The mean obtained from the statistical analysis of the square-root-transformed data was 2.209, with a 95% confidence interval of (1.8, 2.62) for group 1. When the data are back-transformed for proper presentation, the mean of group 1 should be 4.87 days with a 95% confidence interval of (3.24, 6.86)? Do I simply back-transform the values by squaring them? Is that correct?
For natural-log-transformed data (LN):
The mean obtained from SPSS was 2.59, with a confidence interval of (1.703, 3.48).
I could not work out how I should back-transform and report the natural log.
Could you please give me a clue on how to report such a result?
Thank you
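For what it is worth, squaring undoes a square-root transform and exp() undoes a natural-log transform; the back-transformed LN mean is a geometric mean, not the arithmetic mean on the original scale, and is usually reported as such. A small check using the figures quoted above:

```python
# Sketch: back-transforming the means and confidence limits quoted above.
import numpy as np

# Square-root-transformed data (group 1)
sqrt_mean, sqrt_ci = 2.209, (1.8, 2.62)
print(sqrt_mean ** 2, [round(v ** 2, 2) for v in sqrt_ci])   # ~4.88 days, (3.24, 6.86)

# Natural-log-transformed data
ln_mean, ln_ci = 2.59, (1.703, 3.48)
print(float(np.exp(ln_mean)), [round(float(np.exp(v)), 2) for v in ln_ci])  # ~13.3, (5.49, 32.46)
```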
Hi, I am an undergraduate currently working on a project that is using a quantitative survey.
I have developed 3 scenarios that have the same 5 Likert scale questions across these scenarios. Also, the questions are split into confidence and experience, as they are asking respondents to self-rate themselves on confidence and experience on the skills specified in the questions.
My question is, how should I analyse the Likert-scale responses across all 3 scenarios? Can I sum them up and then divide to get the mean value of each response to each question? I can't seem to find papers with a situation similar to mine.
I have found Cronbach's alpha to be >0.7 across all the questions, and there is a significant positive correlation between confidence and experience across all 3 scenarios. Are these valid reasons to add up the responses arithmetically across the 3 scenarios? I can't find any research saying when I am allowed to add the responses up.
Please help as I am quite lost. Please cite sources in your statement so I can read up further too.
Hi,
We use PVDF membranes and probe for a variety of proteins including Stat3, RPS6, PTEN, PlexinA1, GAPDH etc.
I've been reading that for PVDF membranes the addition of 0.01% SDS to the secondary antibody solution is recommended, but having scoured dozens of protocols and sites I can't seem to find the reason for this. We do not add SDS and our results are fine for quantitative analysis, but they could definitely be cleaner (hence my interest in ways to increase signal and decrease background further).
Can anyone explain what the purpose is of adding SDS to secondary solution and what the effect is on results?
Thanks in advance!
Hi everyone,
Do you have any experience in preparing sample dilutions to measure the concentrations of inflammatory cytokines (TNFα, TGFβ, MCP-1, IL-1α, IL-1β, IL-6, and so forth) by ELISA in rat brain tissue, especially hippocampus? I would prefer not to perform a pretest and waste well strips beforehand. Based on your experience in different labs, what would be the expected concentration ranges of these cytokines in a supernatant saved from a tissue homogenate prepared from healthy rat (Sprague Dawley) hippocampus?
Kind regards,
Hi,
My current qualitative research (for my master's thesis) looks at how a company can respond better to scarcity in its supply chain. My theoretical framework covers the whole supply chain risk management process and defines characteristics that I am comparing to determine whether missing one of the characteristics could explain the success or lack of success. But before I can answer why it was a success or not, I need to analyze whether it was a success.
The company has a lot of data available. I have defined a few variables that determine 'success' (e.g., the percentage of customer demand the company can fill). Is there a methodology that describes how to look at the available data? The closest methodology I could find was descriptive statistics, but I am not sure it covers everything I want to do. Can somebody help me with this?
Do you know a website to search for freelance collaborators in statistical analysis (SPSS, EQS, fuzzy, etc...)?
Hi there.
I'm doing my dissertation on site schedule risks in volumetric modular construction, using a deductive approach to generate my question set based on previous studies and a literature review. The response rate will be low, as the market itself only has 3 main companies; I am predicting around 20-30 participants, internal to these companies or experts in the field. This is to be followed up by semi-structured case-study interviews (4) based on the findings.
My concern is that the low number of respondents will only be sufficient to generate a hypothesis and not to test a theory. Will calculating the median prove sufficient, or should I use non-parametric tests? This is a quant-qual analysis.
thank you
For my dissertation, I asked my participants to estimate the weekly income of different occupations. As I did not provide a predetermined set of options or an income range, I would consider these open-ended questions, since the participants could give any answer that came to their mind.
I am going to use the mean estimate for each occupation and see whether participants' social class or age has an effect on their income estimates (two-way ANOVA). For my method section I am trying to explain what types of questions I used, but after doing some research I am very unsure whether I can use open-ended questions for quantitative analysis like that.
Hope anyone has advice for me :)
Thanks a lot,
Kristina
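In case it is useful, a two-way ANOVA of that design is straightforward once each row holds one estimate together with the participant's social class and age group; a hedged sketch with statsmodels (file and column names are hypothetical):

```python
# Sketch: two-way ANOVA of income estimates with social class and age group
# as factors. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("income_estimates.csv")  # columns: estimate, social_class, age_group

model = smf.ols("estimate ~ C(social_class) * C(age_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```
The open-ended format mainly means the numeric responses should be screened for implausible outliers before averaging; it does not by itself rule out quantitative analysis.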
Hello, I'm working on a project to validate some ultrapure water as nuclease-free down to very low detection limits. I've seen methods that utilize agarose gels to measure the nicking of circular DNA as a substrate for the DNase enzyme. In some methods, linear DNA is used to represent the nicked product of the enzymatic reaction. Is it feasible or practical to run gels with wells containing a dilution series of DNase enzyme incubated with the circular DNA substrate? Is there a method other than visual detection of the EtBr stained bands to compare such a dilution series and make a quantitative calibration curve?
Any help, advice or references would be greatly appreciated. Thanks!
-Mike
No previous study has used large-scale data from many countries on a topic: can this be a strong rationale for a study? For instance, could I argue that because there is no previous cross-national study on the topic, it cannot yet be said whether it is a worldwide phenomenon or only a region-specific one? Also, a cross-national study would give the results stronger generalization power, which might be of interest to international practitioners such as the WHO.
Can these statements be a strong rationale and justification for a study? Please let me know. Thank you.
Hello,
In relation to another recent, as yet unanswered, post (https://www.researchgate.net/post/How_to_calculate_comparable_effect_sizes_from_both_t-test_and_ANOVA_statistics), I am wondering how I can calculate the sample variance of an effect size, in order for me to then infer the confidence intervals.
So far, I have been calculating Cohen's d effect sizes from studies' experiments, using the t-value of two-sample t-tests and the sample size (n) of each group. I then convert the Cohen's d into an unbiased Hedges' g effect size.
I understand that, normally, in order to calculate the sampling variance, I would also need to know the means of the two groups and their standard deviations. However, these are not reported in most studies when a t-test is calculated. Is there any way I can calculate the sampling variance of my Hedges' g effect sizes without this information?
Many thanks,
Alex
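If it helps, the sampling variance of d (and hence of Hedges' g) can be approximated from the group sizes alone, without the group means and SDs, using the standard meta-analysis formulas (e.g. Borenstein et al., Introduction to Meta-Analysis). A small sketch with hypothetical inputs:

```python
# Sketch: Hedges' g, its approximate sampling variance, and a 95% CI computed
# from a two-sample t-value and the two group sizes. Inputs are hypothetical.
import math

def hedges_g_ci(t, n1, n2, z=1.96):
    d = t * math.sqrt(1 / n1 + 1 / n2)                        # Cohen's d from t
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))  # variance of d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)                       # small-sample correction
    g = j * d
    var_g = j ** 2 * var_d
    se_g = math.sqrt(var_g)
    return g, var_g, (g - z * se_g, g + z * se_g)

print(hedges_g_ci(t=2.4, n1=20, n2=22))
```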
What if the Cronbach's alpha of a 4-item scale measuring a control variable is between .40 and .50 in your research, while the same scale received a Cronbach's alpha of .73 in previous research?
Do you have to make some adjustments to the scale, or can you use it anyway because previous research showed it is reliable?
What do you think?
How do I do quantitative analysis using Raman spectroscopy?
or
How do I determine an assay value using Raman spectroscopy?
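As a minimal illustration, the simplest quantitative approach is a univariate calibration: measure a characteristic (baseline-corrected) band intensity or area for standards of known concentration, fit a line, and invert it for unknowns; multivariate methods such as PLS are common when bands overlap. All values below are placeholders:

```python
# Sketch: univariate Raman calibration curve. Values are placeholders.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                   # standard concentrations
intensity = np.array([102.0, 198.0, 405.0, 790.0, 1610.0])   # baseline-corrected band intensity

slope, intercept = np.polyfit(conc, intensity, 1)
unknown_intensity = 620.0
estimated_conc = (unknown_intensity - intercept) / slope
print(f"Estimated concentration: {estimated_conc:.2f}")
```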
I have a sample of 138 observations (cross-sectional data) and am running an OLS regression with 6 independent variables.
My adjusted R2 is always negative, even if I include only 1 independent variable in the model. All the beta coefficients, as well as the regression models, are insignificant. The value of R2 is close to zero.
My queries are:
(a) Is a negative adjusted R2 possible? If yes, how should I justify it in my study, and are there any references I can quote to support my results?
(b) Please suggest what I should do to improve my results. It is not possible to increase the sample size, and I have already checked my data for inconsistencies.
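On question (a): a negative adjusted R-squared is indeed possible whenever R-squared is smaller than the degrees-of-freedom penalty for the number of predictors, which is exactly what happens when R-squared is close to zero. Using the figures from the question (n = 138, k = 6) and an illustrative R-squared of 0.01:

```python
# Sketch: adjusted R-squared can be negative when R-squared is near zero.
def adjusted_r2(r2, n, k):
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(0.01, 138, 6))  # about -0.035
```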
I have a homogeneous distribution of FITC in a 2% agarose gel. When I analyze the image in ImageJ, the intensity varies as I change the region of interest within the same image. The results are not reproducible even for the same microscope and the same sample. How can I determine the accurate intensity in such samples? I want to use this intensity for quantitative analysis.
In my master's thesis, I am doing a mixed method research where I have a quantitative analysis of a survey with 21 items measuring a total of 8 elements of a theoretical model. The sample size is approximately 77.
I have been told that, due to my mixed-methods approach, I can get enough analysis by sticking to some descriptives and potentially a regression.
My question is therefore whether I can combine items that are intended to measure the same construct into one variable and use it as a dependent variable without having run some kind of factor analysis first. I do not want the quantitative part to dominate my research and would therefore prefer sticking to only a few quantitative analyses, but of course I do not want to run any tests that ignore critical prerequisites.
Hi, I am planning to collect matched data from employees and their supervisors about some organizational variables via an online survey. I have seen that researchers assign codes to group the data together. Can anyone please explain how to assign the codes, or how to collect this kind of grouped data online? Sorry, I am doing this for the first time. Thanks.
A few articles using a qualitative method have used descriptive statistical analysis, and a statistics professor said that this is correct. My question is: if you use statistics in those studies, won't that make them mixed studies?
Hello everyone,
We are conducting quantitative research, and our questionnaire consists of around 20 different questions.
Our dependent variable is ordinal (three answer choices concerning the hours spent on a mobile phone, e.g., 1-2 hours, 3-4 hours, 4+ hours), and the independent variables are all based on Likert scales. Each independent variable is built from multiple questions (all Likert scales), summarized by their means. Can anyone tell me which kind of analysis we can use to examine regression or correlation relationships?
We would really appreciate your help!
Thank you!
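One common option for an ordinal dependent variable with continuous (mean-scored) predictors is ordinal logistic regression; Spearman correlations can complement it for simple bivariate checks. A hedged sketch using statsmodels' OrderedModel, with hypothetical column names:

```python
# Sketch: ordinal logistic regression for an ordinal phone-use category with
# mean-scored Likert predictors. Column names are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey.csv")  # hypothetical: usage_cat, attitude, habit, fomo
df["usage_cat"] = pd.Categorical(
    df["usage_cat"],
    categories=["1-2 hours", "3-4 hours", "4+ hours"],
    ordered=True,
)

model = OrderedModel(df["usage_cat"], df[["attitude", "habit", "fomo"]], distr="logit")
result = model.fit(method="bfgs")
print(result.summary())
```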
I have applied the Yoon-Nelson kinetic model to experimental data obtained for CO2 adsorption on solid adsorbents. For the two materials studied, the experimental values of adsorption capacity were found to be 5.9% and 2.3% higher than the values predicted by the model, while the R2 values were 0.998 and 0.985, respectively. Now, a journal reviewer disagrees with the values, pointing out that the experimental values are higher than the predicted ones. I feel that the high R2 values and the closeness of the experimental and predicted values are sufficient to show that the model fits well, as it is only an empirical model, not a theoretical one. Please comment on this.
I am working on a meta-analysis of diagnostic value, and one included study used several sets: a training set, a validation set, a replication set, and an asymptomatic set. Each of them yielded different diagnostic values. How should I extract the data for quantitative analysis? Should I extract all of them and treat them as individual data sets?
Can you help me find a free download of quantitative image-processing software? Ideally software that has multivariate data analysis included.
The molarity of the HCl is 0.1 M. Is it possible to have a negative value for Mml?
I want to do a quantitative analysis of the excited molecules in two-photon excitation microscopy and compare it to confocal microscopy. If I can obtain the illuminated volume of the sample in these two techniques, then I can approximate the number of excited molecules in that volume.
I am working on a study of the perception of a menstrual leave policy amongst male and female employees. This policy has been implemented in a few organisations in India, and it is currently tabled in Parliament for discussion.
I would like to study beforehand what employees perceive regarding this policy and whether it would be fruitful to turn it into a law based on its acceptance amongst employees.
To begin with, I am having difficulty figuring out which connotation of perception would be most appropriate here and which methodology would be suitable to extract accurate and informative data.
Hello, experts. What quantitative analyses of the relationship between gender differences and the developmental progress of preschool children can be found?
I'm a bit confused about how to measure students' learning. I have used the word 'learning' in my research topic, and now I am stuck on what kind of questionnaire or other tool could be used to measure students' learning rather than their performance or achievement. I need clarification about the term 'learning'.
I am trying to perform a cell-weighting procedure in SPSS, but I am not familiar with how this is done. I understand cell weighting in theory, but I need to apply it through SPSS. Assume that I have the actual population distributions.
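In case the arithmetic is the sticking point: in cell weighting each case receives weight = (population share of its cell) / (sample share of its cell); in SPSS these values go into a computed weight variable that is then applied via Data > Weight Cases. A small sketch of the computation (cell labels and shares are placeholders):

```python
# Sketch: cell-weighting arithmetic. Cell labels and population shares are
# placeholders; the resulting weights would be applied in SPSS via Weight Cases.
import pandas as pd

sample = pd.DataFrame({"cell": ["M18-34", "M35+", "F18-34", "F35+",
                                "M18-34", "F35+", "F18-34", "F35+"]})
population_share = {"M18-34": 0.20, "M35+": 0.30, "F18-34": 0.22, "F35+": 0.28}

sample_share = sample["cell"].value_counts(normalize=True)
sample["weight"] = sample["cell"].map(lambda c: population_share[c] / sample_share[c])
print(sample)
```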
I am trying to obtain permission to use the Muller/McCloskey scale on job satisfaction among nurses, and in the meantime I would love to see how it is scored.
I have found references for the factor loadings, correlations of the subscales, etc., but I need to look at the scoring, which I think builds up into a continuous variable.
Thanks in advance to anybody who can help.