Science topic
Scale Construction - Science topic
Explore the latest questions and answers in Scale Construction, and find Scale Construction experts.
Questions related to Scale Construction
Dear Sir/Ma'am,
Greetings!
I developed a psychometric tool consisting of 95 items and shared it with 5 experts for content validation. 51 items were removed after the content validation process. According to Polit & Beck (2006) and Polit et al. (2007), the acceptable CVI value should be 1. The 51 items did not reach the required CVI value for acceptance; thus, 44 items were retained.
My question is: should I circulate the questionnaire (with 44 items) to students for criterion-related validity, or should I go with the 95 original items?
If I go with the 95 items, then what is the use of doing content validation or computing the CVI? Am I following the right path if I decide to go with the 44 retained items? Please advise.
Big Thank you,
Narottam Kumar
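As a quick illustration of the CVI computation discussed above, here is a minimal Python sketch. The expert ratings are hypothetical, and it assumes the usual 4-point relevance scale on which ratings of 3 or 4 count as "relevant":

```python
# Item-level CVI (I-CVI): proportion of experts rating an item 3 or 4
# on a 4-point relevance scale; S-CVI/Ave: mean of the I-CVIs.
def i_cvi(ratings):
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(all_ratings):
    return sum(i_cvi(r) for r in all_ratings) / len(all_ratings)

# Hypothetical ratings from 5 experts for 3 items
items = [
    [4, 4, 3, 4, 4],   # I-CVI = 1.0 -> retain
    [4, 3, 2, 4, 3],   # I-CVI = 0.8
    [2, 1, 3, 2, 2],   # I-CVI = 0.2 -> drop
]
print([i_cvi(r) for r in items])   # item-level indices
print(round(s_cvi_ave(items), 2))  # scale-level average
```

With only 5 experts, Polit & Beck recommend an I-CVI of 1.00, which is why items falling short of that threshold are dropped.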
Hello,
Using factor analysis, I recently created a social questionnaire with four factors, each containing four items. My next step is validating the questionnaire.
I want to show that this questionnaire can differentiate between 3 product categories that have different social characteristics.
- What statistical method should I use to prove the differences?
- Should I ask the same respondents to evaluate all 3 products, or is it OK to have separate respondents for each product category?
Thank you
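One common choice for the first question is a one-way ANOVA across the three (independent) respondent groups. A hand-rolled sketch with hypothetical scale scores follows; scipy.stats.f_oneway would also give the p-value directly:

```python
# One-way ANOVA by hand: F = between-group mean square / within-group
# mean square, computed on hypothetical mean scale scores per respondent.
def one_way_anova_f(*groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

product_a = [3.2, 3.8, 4.1, 3.5, 3.9, 4.0]
product_b = [2.1, 2.6, 2.4, 2.9, 2.2, 2.5]
product_c = [4.5, 4.8, 4.2, 4.6, 4.9, 4.4]

print(round(one_way_anova_f(product_a, product_b, product_c), 1))
```

If the same respondents rate all three products, a repeated-measures design (repeated-measures ANOVA or a mixed model) would be the matching analysis instead.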
Hi,
I have come across a questionnaire prepared in English. I want to translate it into Turkish and then administer it. But I am not sure whether I should analyze its validity and reliability, because it is not a scale. I hope I can have a chance to pick your brains. Thank you in advance.
Hi everyone,
I am looking for book recommendations on how to construct a scale, preferably with a focus on (and examples from) social sciences. Any suggestions?
Thanks in advance!
Hi,
Has anybody come across scales that measure the uniqueness and self-image of individuals as a result of acquiring a certain product or brand? I found a number of scales that measure the perception of the brand itself, or a person's perception of himself, but could not find anything that measures a person's perception of him/herself as a result of possessing a product or brand.
If anyone can help, I would be grateful.
Thanks
What should be done if items did not load in EFA?
Should they be discarded? Or does that mean they are unique and do not have partners?
What if these items are important or represent our dependent variables? Can we include them as single items?
I want to read more than one load cell using a single Raspberry Pi.
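A common hardware route is one HX711 amplifier per load cell, each on its own data GPIO (they can share the clock line), read in turn from one script. The sketch below keeps the per-chip driver out of the picture: `raw_read` is an injected callable standing in for the real driver call, and the counts, calibration values, and the `LoadCell` name are all made up for illustration:

```python
# Hypothetical multi-load-cell reader: each cell wraps a raw-count source
# (on real hardware, a per-HX711 driver call) plus tare/calibration values.
class LoadCell:
    def __init__(self, raw_read, offset=0.0, scale=1.0):
        self.raw_read = raw_read   # callable returning one raw ADC count
        self.offset = offset       # raw counts at zero load (tare)
        self.scale = scale         # raw counts per gram (from calibration)

    def tare(self, samples=10):
        """Average a few readings with nothing on the cell to set the offset."""
        self.offset = sum(self.raw_read() for _ in range(samples)) / samples

    def weight(self, samples=10):
        """Average a few readings and convert raw counts to grams."""
        raw = sum(self.raw_read() for _ in range(samples)) / samples
        return (raw - self.offset) / self.scale

# Two simulated cells; swap the lambdas for real driver reads on the Pi.
cell_a = LoadCell(lambda: 84000, offset=42000, scale=420.0)
cell_b = LoadCell(lambda: 42000, offset=42000, scale=420.0)
for name, cell in [("A", cell_a), ("B", cell_b)]:
    print(name, round(cell.weight(), 1), "g")
```

Keeping the hardware access behind a callable makes the conversion logic testable without a Pi, and lets one script loop over as many cells as there are free data pins.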
Which programs can be used for structural equation modeling?
I have used AMOS previously, but the trial period has run out. Are there any other programs available that are free of charge?
Thanks
How can I validate a questionnaire for a small sample of hospitals' senior executive managers?
Hello everyone
-I performed a systematic review for the strategic KPIs that are most used and important worldwide.
-Then, I developed a questionnaire in which I asked the senior managers at 15 hospitals to rate these items based on their importance and their performance at that hospital on a scale of 0-10 (Quantitative data).
-The sample size is 30 because the population is small (however, it is an important one to my research).
-How can I perform construct validation for the 46 items, especially as EFA and CFA will not be suitable for such a small sample?
-These 45 items can be classified into 6 components based on the literature (such as financial, managerial, customer, etc.).
-Bootstrapping in validation was not recommended.
-I found a good article with a close idea but they only performed face and content validity:
Ravaghi H, Heidarpour P, Mohseni M, Rafiei S. Senior managers’ viewpoints toward challenges of implementing clinical governance: a national study in Iran. International Journal of Health Policy and Management 2013; 1: 295–299.
-Do you recommend running EFA for each component separately (each would contain around 5-9 items), treating each as a separate scale and defining its sub-components? I tried this option and it gave good results and sample adequacy, but I am not sure if this is acceptable. If you can think of other options, I would be thankful if you could enlighten me.
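On the sampling-adequacy checks mentioned: the KMO statistic is easy to compute directly, which makes it straightforward to run per component on small item sets. A sketch with simulated data (n = 30 respondents, 6 items sharing one common factor; all numbers are made up):

```python
# Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy computed by hand,
# so per-component checks can be run on small item sets (simulated data).
import numpy as np

def kmo(data):
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    # partial correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(r, 0)
    np.fill_diagonal(partial, 0)
    return (r ** 2).sum() / ((r ** 2).sum() + (partial ** 2).sum())

rng = np.random.default_rng(0)
factor = rng.normal(size=(30, 1))                 # n = 30 respondents
items = factor + 0.6 * rng.normal(size=(30, 6))   # 6 correlated items
print(round(kmo(items), 2))
```

A value above roughly .60 is usually read as adequate; with only 30 cases, any factor-analytic result should still be treated as tentative.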
Executive functioning = working memory, attention shifting, and inhibition.
Please suggest a scale that assesses all three of them; I am also ready to go with separate scales. Kindly mention the authors as well.
I want to use these tests for my M.Phil thesis, which is not funded, and I am unable to bear publishers' expenses.
Dear researchers,
I am trying to develop two scales for a project I'm working on.
There are no specific scales in the literature, so we decided to try to create the scales we needed.
I used Cronbach's alpha to determine reliability, and I used principal component analysis (PCA). But I'm having some difficulties interpreting the results and knowing what exactly I need from PCA.
For the first scale:
It consists of 11 items.
Cronbach's alpha is about 0.85 and the mean inter-item correlation was 0.35.
I used PCA, and after parallel analysis I ended up with one component/factor.
Based on the component matrix, all variables loaded strongly on the component (0.5+).
Since all items loaded on one component, is that considered good for a scale?
As for the second scale:
Cronbach's alpha was about 0.83 and the mean inter-item correlation was about 0.26.
I also used PCA, and after parallel analysis I ended up with two factors.
The first factor had a large eigenvalue, while the second factor was just barely above "significance".
Based on the component matrix, all variables loaded fairly well on the first component (0.3+), but some variables loaded on both components, either positively or negatively.
I removed some variables that loaded on two components;
Cronbach's alpha became a bit lower, but the mean inter-item correlation increased to 0.29.
After PCA and parallel analysis, only one factor was retained, and all variables loaded nicely on that component.
To recap, my questions are:
For the first scale, is having one component a sign of a good scale?
For the second scale, how do I interpret negative loadings on one component and positive loadings on the other, and how do I interpret loadings on two components?
Last but not least, was what I did for the second scale (removing some variables) a good or bad decision?
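Since parallel analysis came up several times above, here is a compact sketch of Horn's procedure: retain components whose observed eigenvalues exceed the average eigenvalues of random data of the same shape. The simulated one-factor data set is made up:

```python
# Horn's parallel analysis: keep components whose observed eigenvalues
# exceed the mean eigenvalues from random data of the same shape.
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(k)
    for _ in range(n_sims):
        sim = rng.normal(size=(n, k))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))

rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
items = 0.7 * factor + 0.7 * rng.normal(size=(300, 11))  # one-factor data
print(parallel_analysis(items))   # components to retain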
I'm researching burnout in counseling psychologists, and we're trying to look into which source of social support could be most helpful in buffering the effects of burnout. Would anyone happen to know of a scale that measures social support from the organization, from supervisors, and from colleagues altogether?
Any help would be appreciated.
Thank you!
I am adapting a scale from English into my own language, and I also want to make some small changes not related to cultural issues. The original scale is designed to measure a general concept; now I want to change the items a little so it can measure that concept in a context-specific way.
So I want to ask: what is the right procedure? Should I run a translation-adaptation process first (translate, make cultural changes if needed, and run a pilot study to check the reliability and validity of the adapted scale) and then make the context-specific changes I want for the research topic,
or translate and make the changes at the same time, then run the pilot study to check the reliability and validity of the adapted scale?
Thank you in advance.
I am a student who needs to do some research for my master thesis about the expected product care of a phone in the future. To be precise: I’m doing research on the impact that a Country of Origin label has on the usage of a mobile phone. In this research, I’ll give different information to different respondent groups (the “made-in-label” will differ). Now I want to develop or use an existing scale that measures how well the respondent thinks he’ll take care of the product. For instance: “I will use this product longer than a year”, “I will take care of this product” etc.
Does anyone know a reliable existing scale or how to develop a new one?
In order to measure the research engagement of academics, would it be wise to use the Job Engagement Scale (JES) by Rich et al. (2010) with modifications in wording (e.g., "research activities" instead of "job")?
Adding a file for reference.
I am trying to obtain permission to use the Muller/McCloskey scale on job satisfaction among nurses, and in the meantime would love to see how it is scored.
I found references for factor loadings, correlations of the subscales, etc., but I need to look at the scoring, which I think builds up into a continuous variable.
Thanks in advance to anybody who can help.
Which lucid, standard, comprehensive book can guide psychological scale construction and its standardization?
I am developing a scale to investigate self-understanding in relation to cultural diversity among university students. I will be using this scale to investigate criterion validity for the scale I am developing.
Dear All,
I am working with environmental scales to analyze the influence of individuals' attitudes on their pro-environmental behavior. I am guided by the theory of planned behavior proposed by Icek Ajzen, along with some environmental concern scales. However, I am trying to figure out how to correctly create the composite variables for attitudes, subjective norms, and perceived behavioral control. From a paper (Meijer et al. 2016), I found that the authors multiply each belief by its respective outcome evaluation; this score is then summed with the rest of the statements of the same category (e.g., attitudes), and the result is added into the model. Apparently this is not too common, so I would like to hear your advice on how to build them.
Thank you,
Miriam
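The belief-by-evaluation scoring described from Meijer et al. (2016) is the standard expectancy-value computation in Ajzen's theory of planned behavior. A minimal sketch with hypothetical item scores:

```python
# Expectancy-value composite for TPB: each behavioral belief is multiplied
# by its outcome evaluation, and the products are summed (hypothetical data).
beliefs     = [6, 5, 2, 7]   # belief strength, e.g. scored 1-7
evaluations = [3, -1, 2, 3]  # outcome evaluation, e.g. scored -3..+3

attitude_composite = sum(b * e for b, e in zip(beliefs, evaluations))
print(attitude_composite)  # → 38
```

The same belief × evaluation pattern applies to subjective norms (normative belief × motivation to comply) and perceived behavioral control (control belief × perceived power).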
For example: would using a 7-point scale of 0-3 in steps of 0.5 give you different results than using a 7-point scale of 0-6 in steps of 1? I'm aware that verbal labels are likely to be better, but I'm interested in the possible differences between purely numerical scales that use 0.5 or 1 increments.
Thanks.
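One point worth noting: the two numeric formats are positive linear transformations of each other (divide the 0-6 responses by 2 to get the 0-3 version), and Pearson correlations are unchanged by such transformations, so any observed difference would have to come from how respondents read the labels, not from the arithmetic. A quick check with made-up ratings:

```python
# Pearson correlations are invariant under positive linear rescaling,
# so a 0-6 step-1 scale and a 0-3 step-0.5 scale are statistically
# interchangeable for correlational analyses (made-up ratings).
import numpy as np

ratings_0_6 = np.array([0, 2, 3, 5, 6, 4, 1, 6], dtype=float)
other_var   = np.array([1, 3, 3, 6, 7, 5, 2, 6], dtype=float)

ratings_0_3 = ratings_0_6 / 2   # same 7 response points, relabeled 0-3

r1 = np.corrcoef(ratings_0_6, other_var)[0, 1]
r2 = np.corrcoef(ratings_0_3, other_var)[0, 1]
print(np.isclose(r1, r2))   # → True
```

Whether respondents actually use the two label sets the same way remains an empirical question about response behavior, not statistics.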
Hello
I want to build a questionnaire about emotional and cognitive strategies. My question is:
How should I formulate the sentences or items?
Should they mostly begin with "I think" or "I feel", to represent the emotional and cognitive side of the subject?
Or
should they begin with a verb presenting a behavior or act ("I do"), to represent the strategies?
Thank you
Hi everyone! I am trying to find the "how to" on item difficulty and discrimination in SPSS and its interpretation. I have read about and performed the command Analyze > Scale > Reliability Analysis to get the corrected item-total correlations (which I believe can be interpreted as a discrimination analysis...?). I also watched videos teaching that you can calculate the difficulty index by computing means and sums under Analyze > Descriptive Statistics > Frequencies and interpreting the means. But this is a different method from the one I read about in books and articles (rbis).
I am using this article as reference:
It describes item correlation, the discrimination index, and the difficulty index as different methods for item reduction. Is it adequate to use just one or two of these analyses? Must we use all of them in the same scale construction project? According to the item correlation method, I only keep 4 of the 30 initial items from the scale so far.
Project description: scale construction with 30 initial dichotomous items (True/False).
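For dichotomous items, both indices can be computed directly rather than through SPSS menus. A sketch with hypothetical True/False responses, taking difficulty as the proportion answering correctly and discrimination as the corrected item-total correlation:

```python
# Classical item analysis for dichotomous (True/False) items:
# difficulty p = proportion answering correctly; discrimination =
# correlation of each item with the total score excluding that item.
import numpy as np

def item_analysis(responses):          # rows = persons, cols = items (0/1)
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)
    total = responses.sum(axis=1)
    discrimination = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]  # total score without item j
        discrimination.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return difficulty, np.array(discrimination)

# Hypothetical responses from 6 people to 4 items
data = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 0, 0]]
p, d = item_analysis(data)
print(p)   # proportion correct per item
print(d)   # corrected item-total correlations
```

Items with difficulty near 0 or 1, or with discrimination near zero (or negative), are the usual candidates for removal.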
I am currently trying to create a scale to measure a multi-dimensional parenting construct. There is currently no strongly established theory about the construct and I am investigating it in an age group that has not typically been the focus of parenting researchers. I created a list of 26 items based on a qualitative study and have done an EFA on the data. Almost half of the items are skewed and some are quite kurtotic due to low base rates of those parenting behaviours. However, I believe that these items are theoretically relevant to the construct of interest. Due to high skew/kurtosis/presence of non-normality, I used polychoric correlations for the EFA. A 3 factor solution was recommended.
My questions are:
1) The determinant of the matrix is less than .00001 but Bartlett's and KMO are good (fit indices are generally good as well). I have read in previous discussions online that <.00001 determinants may arise due to high kurtosis in items. Does anyone know of a reference/resource that explains this in more detail and/or has recommendations that it's not the end of the world?
2) A number of the skewed/kurtotic items have low communalities (<.40) even though they have factor loadings over .40. What are the best practices or existing rules of thumb for proceeding with item elimination to refine the scale? Should I delete the items with low communalities (despite the sufficient factor loadings) and then re-run the EFA? Or should I delete items based on low factor loadings (<.40) and then re-run the EFA? If the latter, would it be necessary to do anything with (i.e., eliminate) the items that have low communality, or just leave them?
Thanks very much in advance.
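On question 2, it may help to see why a loading above .40 does not guarantee a communality above .40: with (approximately orthogonal) factors, the communality is the sum of squared loadings across all retained factors, so an item loading on only one of three factors can still fall short. A toy illustration with invented loadings:

```python
# In EFA with orthogonal factors, an item's communality is the sum of its
# squared loadings; an item with one loading > .40 can still have h2 < .40.
import numpy as np

loadings = np.array([          # hypothetical 3-factor pattern
    [0.62, 0.10, 0.05],        # h2 = 0.40 -> borderline
    [0.45, 0.05, 0.08],        # h2 = 0.21 -> low despite a loading > .40
    [0.10, 0.70, 0.55],        # h2 = 0.80
])
communalities = (loadings ** 2).sum(axis=1)
print(communalities.round(2))  # → [0.4  0.21 0.8 ]
```

With an oblique rotation, factor correlations enter the computation and the simple sum-of-squares shortcut no longer holds exactly.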
Please share any representative constructs and measurement scales pertaining to Firm Generated Content (FGC) and Social Listening.
Hi everyone! I am currently working on constructing a true/false binary response questionnaire of parenting knowledge. I found out that factor analysis is not appropriate for binary variables. Also I have read articles about principal component analysis, and polychoric and tetrachoric correlations, but found no examples on how to use these concepts for scale construction. Anyone knows about specific SPSS steps, other programs or plug-ins and interpretation for scale constructing for these analyses? Is there any other appropriate analysis? I would be thankful of any help!
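As a small illustration of what a tetrachoric correlation does for a 2×2 table of two binary items, here is the classic cosine-pi approximation in plain Python. Note this is only a rough approximation, not the maximum-likelihood estimate that dedicated routines (e.g., in R's psych package) compute:

```python
# Rough tetrachoric correlation for a 2x2 table of two True/False items,
# via the cosine-pi approximation (not the maximum-likelihood estimate).
import math

def tetrachoric_approx(a, b, c, d):
    # a, d = concordant cell counts; b, c = discordant cell counts
    if b == 0 or c == 0:
        return 1.0
    return math.cos(math.pi / (1 + math.sqrt((a * d) / (b * c))))

print(round(tetrachoric_approx(40, 10, 10, 40), 2))  # strong agreement → 0.81
```

A matrix of such correlations (one per item pair) is what the tetrachoric-based factoring articles feed into the factor analysis in place of the ordinary Pearson matrix.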
I have administered a 15-statement attitude questionnaire with agree/disagree response codes. The scale was developed following the Thurstone method of scale construction. I am finding mixed approaches to interpreting the data:
1. One approach says to just count the number of "agree" statements (or take their percentage), and that is the person's attitude score.
2. A second approach says that the score for each statement is the median judge rating for that statement, obtained at the time of scale construction.
Can anyone please advise on the right approach?
Many thanks!
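The second approach can be made concrete: each statement carries a scale value (the median of the judges' ratings from the construction phase), and the respondent's score is a summary, commonly the median, of the scale values of the statements they endorsed. A sketch with invented scale values:

```python
# Thurstone scoring (second approach): a respondent's attitude score is
# the median of the scale values of the statements they agreed with;
# scale values come from the judging phase (hypothetical numbers here).
import statistics

scale_values = [1.8, 3.2, 4.5, 6.1, 7.4, 8.9]   # hypothetical judge medians
agrees       = [0,   1,   1,   1,   0,   0]      # 1 = respondent agreed

endorsed = [v for v, a in zip(scale_values, agrees) if a]
print(statistics.median(endorsed))   # → 4.5
```

A plain count of "agree" responses ignores the scale values entirely, which is why the two approaches can rank respondents differently.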
Many studies use exploratory factor analysis followed by confirmatory factor analysis when theory is available. My research has no established model; can I apply only exploratory factor analysis, and can the result be taken as a developed scale?
Hi everyone,
I've conducted an EFA and ended up with 5 factors. A few of the items are cross-loading over 2-3 factors. I have already removed 10 items that either do not correlate or cross-load significantly.
I am fairly happy with the factors, however, the cross-loading items are confusing me and I have a few questions.
1. When calculating the total scores, means and Cronbach's alphas for each factor, do I include the items which cross load with other items?
2. When I present the final scale/solution, how do I present the cross-loading items?
3. There is one factor which is negatively predicted ('Lack of Support' [all items have a negative value]), however, I have changed the scoring so it positively predicts the factor (Support). There is one item in this subscale which cross-loads with another. How does this impact the scoring? Should I try to remove this item?
4. I started with a 37-item scale and I now have 27 items. How many items are too many to delete? At what point should I just accept it as an overall scale with a good Cronbach's alpha (.921) and say further research into factors and subscales is needed?
I am reluctant to delete the few cross-loading items I have remaining, as when they are removed from the analysis, the reliability score decreases for the individual factors and the overall scale.
This is my first time doing an EFA and so I would be very grateful for any advice or recommendations you may have.
Thank you.
What are the general suggestions regarding dealing with cross loadings in exploratory factor analysis? Do I have to eliminate those items that load above 0.3 with more than 1 factor?
I would like to model Knowledge as a formative composite measure using PLS. I understand that formative measures alone are unidentifiable and unfortunately, I don't have any outcome variables to include in the model. I'm wondering if I could use the repeated indicator approach to model only the measurement of a construct by turning indicators into single-item first-order constructs and the composite a second order construct?
Also, I'm wondering if anyone is familiar with Adanco? Adanco is able to estimate item loadings of a composite construct, although the weights are all equal when using Mode B. Would modeling the variable this way be any more useful than simply calculating item-total correlations?
Hi, there.
A few days ago, I posed a question about factor analysis of survey results, but I left many important details unspecified. HERE is the more detailed version.
I developed a theoretical model of language assessment literacy (LAL) based on the literature. In this model, LAL is captured in seven dimensions. A survey of 56 items with a 5-point Likert scale (unknowledgeable, slightly knowledgeable, moderately knowledgeable, rather knowledgeable, very knowledgeable) was designed to measure primary EFL teachers' LAL. The survey was first piloted with 71 target teachers, and 65 cases were valid. Many of the respondents gave strings of similar responses to successive different items. An exploratory factor analysis (SPSS) with generalised least squares and direct oblimin methods shows that nine factors have eigenvalues greater than 1, but the first item has excessively large loadings (about 60%) on the first factor. The correlation matrix shows significant correlations between every pair of items (ranging from .22 to .92), suggesting multicollinearity. The corrected item-total correlations range from .58 to .85, indicating good discrimination among respondents.
How should I revise the survey items to make the survey work so that I can confirm my model?
THANK YOU VERY MUCH.
I am writing a paper in which I am calculating/estimating how many students are socially marginalized in Denmark. I have applied Exploratory Factor Analysis (EFA) and determined that seven items correlate strongly and are likely connected to a single underlying factor. However, three of these items are measured on a Likert scale (1-4) without a neutral category, while the others are measured on a Likert scale (1-5) with a neutral category. Is it possible to weight each item so that they contribute equally to the composite measure and, if so, how should I proceed? Is there any literature that you can recommend on creating composite measures using items with different scales or ranges? Any help would be appreciated, thank you.
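One simple option for the equal-contribution problem is to rescale every item to "percent of maximum possible" (POMP) before averaging, so the 1-4 and 1-5 items both run from 0 to 1; z-scoring each item is the other common route. A sketch with invented responses:

```python
# Putting items with different response ranges on a common footing before
# averaging: rescale each to (x - min) / (max - min), so a 1-4 item and a
# 1-5 item both run 0-1 ("percent of maximum possible", POMP, scoring).
def pomp(x, lo, hi):
    return (x - lo) / (hi - lo)

item_1_4 = 3          # item scored 1-4 (no neutral category)
item_1_5 = 4          # item scored 1-5 (with neutral category)

scores = [pomp(item_1_4, 1, 4), pomp(item_1_5, 1, 5)]
composite = sum(scores) / len(scores)
print(scores, round(composite, 3))
```

POMP keeps the theoretical endpoints aligned; z-scoring instead equates the items' empirical spreads, so the choice depends on whether the response format or the observed distribution should drive the weighting.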
Hi researchers,
Is there a scale for the economic impact of COVID-19?
We can say that these factor-analytic approaches are generally used for two main purposes:
1) a more purely psychometric approach in which the objectives tend to verify the plausibility of a specific measurement model; and,
2) with a more ambitious aim in speculative terms, applied to represent the functioning of a psychological construct or domain, which is supposed to be reflected in the measurement model.
What do you think of these general uses?
The opinion given by Wes Bonifay and colleagues can be useful for the present discussion:
I am working on academic anxiety of undergraduate students. Is there any scale constructed/available to measure academic anxiety of undergraduate students?
I'm an undergraduate student who has a course in test construction as well as research. The construct we are studying is mental toughness, and we plan on using the MTQ-48 to assess it. I am aware that there is a technical manual for the MTQ-48 available online, unfortunately, it does not contain a scoring procedure, and does not have information about which items belong to what subscale. Using other references, what we got is that the MTQ-48 is scored on a 5-point likert scale; with 1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree. We are also aware of what items fall under each subscale, such as items 4,6,14,23,30,40,44,and 48 which fall under the challenge subscale. However, we were not able to find a reference stating which items are negatively scored. While we could make an educated guess, it is required that we have a source for the scoring procedure. If anyone here has such a reference, or knows the items, it would be highly appreciated. Thanks!
Is there anyone out there who has done research on Transformational Leadership (TL) without using standardized questionnaires like the Multifactor Leadership Questionnaire (MLQ)?
I'm doing a study on the effect of leadership on employee behaviors (OCB specifically), and my data set is an employee survey of motivation and the psychological working environment (based on QPS Nordic). The data was thus not gathered for my specific study, which leads to some extra challenges. One is determining measures of perceived transformational leadership. My intention is to point out questions that seem to map onto the four I's of such leadership and then do some validation testing on them.
Has anyone done this before, or done some sort of validation testing of such surveys against the established ones when it comes to TL?
I'm grateful for all suggestions, experiences and tips!
Kind regards,
Christian Otto Ruge
I would be thankful if someone could give me a short breakdown of how to test the unidimensionality of polytomous (Likert-scale) data using IRT models in R.
As I understand that the ltm package is obsolete, I'm looking for an equivalent to ltm::unidimTest() (perhaps within the mirt package?).
Thank you
I am looking for a concise scale that would place a person on the continuum between "When someone needs help, I am usually the one who steps up" and "Oh, surely someone else will do something".
I have found some related scales/constructs, but none is what I am looking for:
-Locus of Control is focused on oneself: "how much I am in control of what is happening to ME." I am interested in one taking control of helping someone else.
-Bystander attitude - I even found a "Bystander Attitude Scale", but it relates to sexual violence. I am interested in helping behaviors (from charitable giving to stepping up to defend someone in dangerous situation)
-Altruism related more to "wanting to help" rather than "taking the (uncomfortable) step and helping"
-Proactiveness - the scales I found all relate to one's own career development
Thank you, I appreciate all suggestions very much!
Hello,
I am a master's student in nursing at the University of Antwerp. I'm doing my thesis on courageous leadership. For this I made a conceptual model and a new questionnaire.
The aim of my research now is to develop a valid measurement. For this I developed a questionnaire with 41 questions about taking action, honesty, etc. Now I need to do a factor analysis. But my questionnaire has 2 types of Likert scales; is this a problem?
I am in the middle of questionnaire development and validation processes. I would like to get expert opinion on these processes whether the steps are adequately and correctly done.
1. Items generation
Items were generated through literature review, expert opinion, and target population input. The items were listed exhaustively until saturation.
2. Content validation
The initial item pool was then pre-tested with 10-20 members of the target population to ensure comprehensibility. The items were then reworded based on feedback.
3. Construct validity
a) Bivariate correlation matrix to ensure no items correlation >0.8
b) Principal Axis Factoring with Varimax rotation. KMO statistic >0.5. Bartlett's Test of Sphericity significant. Items with communalities less than 0.2 were removed one by one. Items with high cross-loadings were removed one by one. Then, items with factor loadings <0.5 were removed one by one. This eventually yielded 17 variables with 6 factors, but 4 factors had only 2 items. So I ran 1-, 2-, 3-, 4-, 5-, and 6-factor models and found that the 4-factor model was the most stable (each factor had at least 3 items with factor loadings >0.4). The next analyses are only on the 4-factor model.
c) Next, I ran Principal Component Analysis without rotation on each factor (there are 4 factors altogether), and each resulted in a correlation matrix determinant >0.01, KMO >0.5, Bartlett's test significant, total variance >50%, and no factor loading <0.5.
d) I ran a reliability analysis on each factor (there are 4 factors altogether) and found Cronbach's alpha >0.7, while the overall reliability is 0.8.
e) I ran a bivariate correlation matrix and found no pair correlation >0.5.
f) Finally, I am satisfied and decided to choose the four-factor model with 17 variables and 4 factors (with 5, 4, 4, and 4 items respectively); each factor had at least 3 items with loadings >0.5. Reliability for each factor is >0.7, while overall it is 0.8.
My question is: am I doing this correctly and adequately?
Your response is highly appreciated.
Thanks.
Regards,
Fadhli
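For step (d) above, the Cronbach's alpha computation is simple enough to verify by hand; a sketch with hypothetical 5-point responses:

```python
# Cronbach's alpha from an items matrix (rows = respondents, cols = items):
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

data = [[4, 5, 4, 4],    # hypothetical 5-point responses
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 2, 3],
        [4, 4, 4, 5]]
print(round(cronbach_alpha(data), 2))  # → 0.94
```

Running this on each of the four factors separately reproduces the per-factor reliabilities reported in step (d).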
Hi!
I'm looking for software that will give me the possibility of asking questions with a standard 1-5 Likert scale AND, in the same instruction/question, having space to ask, for example, "To what level/extent does this question refer to your job responsibilities?" I have in mind side-by-side matrix questions.
Example:
Question 1
Instruction 1 Agree 1:2:3:4:5 Disagree || Low 1:2:3:4:5 High
Instruction 2 Agree 1:2:3:4:5 Disagree || Low 1:2:3:4:5 High
Instruction 3 Agree 1:2:3:4:5 Disagree || Low 1:2:3:4:5 High
Thanks in advance!
I am trying to analyze the effect of various independent variables, such as service quality, ISP commitments, corporate image, product attributes, trustworthiness, service value, and switching cost, on the dependent variable brand loyalty for ISP customers. I have created a questionnaire that uses an ordinal scale (from very poor to very good) for some independent variables and an interval scale (strongly disagree to strongly agree) for other independent variables. How can I merge the interval and ordinal scales to analyze and develop a relationship with interval-scaled loyalty?
I'm writing my paper on the problems and challenges faced by school counsellors: development of a scale. Please help me with some good references.
The Cronbach's alpha values for two factors are 0.5 and 0.4, respectively. What does this signify? Is there a justification/support for retaining or eliminating the factors?
I am currently doing a research proposal surrounding perceptions and influences of revealing attire and sexual assault. The research questions are as follows:
1. Is there a difference between how men perceive the intent of women's revealing attire and the motivations identified by the women themselves?
Hypothesis: it is hypothesised that there will be a difference between how men perceive the intentions of a revealingly dressed woman and how women do.
2. In cases of sexual assault, does the victim hold more responsibility if they are wearing revealing attire?
Hypothesis: there will be a gender difference in the amount of responsibility placed on a victim in a sexual violence case if she is wearing revealing attire. Precisely, women will place less responsibility than men.
3. Are men more likely to approach women in revealing clothing?
- it is predicted that participants believe that men will be more likely to approach a woman if she is in revealing clothing.
For this study a psychometric scale construction method will be used. The questionnaire will be split into 3 sections:
1. Assessment of motivation: participants are shown an image of a model in revealing clothing and asked to rate her motivations for wearing such clothes by indicating how much they agree with given statements such as "she wishes to feel attractive" and "she intends to convey sexual interest", on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree).
2. Direct assessment of motivation: women complete this section only and are instructed to respond to 5 questions on a 3-point scale (yes, sometimes, no), with questions such as "do you dress revealingly to feel attractive?".
3. Assessment of victim blame and reactions: how much responsibility subjects place on the model if she were the victim in a rape scenario. Participants will answer on a 5-point Likert scale from 1 (not much) to 5 (a lot). Men will also be asked, if they saw her on a night out, how likely they would be to approach her, from 1 (extremely unlikely) to 5 (extremely likely).
I am struggling with the data analysis section, as I am supposed to include how I would analyse the data my study would produce, and I also need to explain why.
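For the gender-difference hypotheses on the 5-point ratings, a rank-based test such as Mann-Whitney U is a defensible choice for ordinal Likert responses. A hand-rolled sketch with hypothetical ratings (scipy.stats.mannwhitneyu would supply the p-value):

```python
# Mann-Whitney U by hand: U counts, over all man/woman pairs, how often
# the man's responsibility rating exceeds the woman's (ties count 0.5).
def mann_whitney_u(x, y):
    return sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)

men_ratings   = [4, 5, 3, 4, 4, 5, 3, 4]   # hypothetical 1-5 ratings
women_ratings = [2, 3, 2, 1, 3, 2, 2, 3]

u = mann_whitney_u(men_ratings, women_ratings)
print(u, "of", len(men_ratings) * len(women_ratings), "pairs")  # vs. 32 under H0
```

Hypothesis 1 (men's vs. women's motive ratings) could use the same test item by item, with a correction for multiple comparisons; the rank-based choice avoids assuming the Likert points are equally spaced.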
Hi researchers, I have examined several studies to see what the most common methodology in scale development is (including Churchill's paradigm), but there is still some confusion in my mind. Please help me distinguish between the steps of developing and validating a scale.
1/ When we say "developing" (not validating), does it mean that using one set of data would be enough?
2/ Exploratory = developing vs. confirmatory = validating?
3/ What are the statistical methods usually used in the developing step (again, not validation): EFA with factorability tests alone (Bartlett, KMO...)?
4/ Is Cronbach's alpha used in the exploratory step, the confirmatory step, or both?
5/ Is AMOS a compulsory step in validating, or could we just use another factor analysis on a second set of data with additional statistical tests (e.g., Jöreskog?)
6/ Is developing a scale what we call a "measurement model"? Which means no independent, moderating, or mediating variables are involved, right?
I am looking forward to reading your expert answers. Thank you very much indeed.
How can I normalize numeric ordinal data from [1, ∞) to [1, 5]? I want to normalize relative scores that are ordinal by nature, but the scores can range over [1, ∞), so I need to bring them onto a [1, 5] scale. Can anybody help me figure it out?
The data values are of double (float) type.
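With an unbounded upper end, a linear min-max rescaling is impossible; you need either a saturating monotone transform or a rank-based mapping. A sketch of both options (the particular curve is an assumption; any strictly increasing map preserves the ordinal information):

```python
import numpy as np

def squash_to_1_5(x):
    """Monotone map from [1, inf) to [1, 5): f(1) = 1, f(x) -> 5 as x grows.
    Preserves order; the specific curve 5 - 4/x is just one choice."""
    x = np.asarray(x, dtype=float)
    return 5.0 - 4.0 / x

def rank_to_1_5(x):
    """Rank-based alternative: map empirical ranks linearly onto [1, 5].
    Depends on the observed sample, not on the theoretical range."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort().argsort()            # 0 .. n-1, ties broken arbitrarily
    return 1.0 + 4.0 * ranks / (len(x) - 1)

scores = [1.0, 2.5, 4.0, 10.0, 250.0]
print(squash_to_1_5(scores))   # 1.0 at the bottom, approaching 5 at the top
print(rank_to_1_5(scores))     # evenly spread over [1, 5]
```

The first option is fixed in advance but compresses large scores heavily; the second spreads the observed sample evenly but changes whenever new data arrive. Which is appropriate depends on how the [1, 5] values will be used downstream.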
Dear researchers,
A student of mine is currently preparing her thesis on a specific construct that has never been measured. She intends to apply the Churchill paradigm to develop a scale. Would that be sufficient as a research problem for her thesis and for building a model (knowing that she will use both exploratory and confirmatory factor analysis)? My question: Can she settle for the development of a scale as her main research problem? Should she look for other relevant constructs to be mainstreamed into the final confirmatory model, or is that not necessarily a requirement? I am looking forward to reading your answers. Thank you in advance for helping me.
In the model in my thesis, I draw a direct link from the two constructs organizational constraints and workload to the construct abusive supervision. Studies have shown that constraints and workload are stressors that negatively impact performance, motivation, and well-being. I justify an increase in abusive supervision on the grounds that studies have shown that supervisors, for example, treat employees hostilely because the employees perform poorly and thus reflect badly on the supervisor. Can I base my questionnaire on these assumptions, and simply assume from previous work, without measuring performance separately, that the two stressors have a negative impact on performance and thus favour hostile supervisor behaviour?
Like this: Organisational Constraints ++----> Abusive Supervision
Or do I need to put it this way: Organisational Constraints - - ----> Performance ++-----> Abusive Supervision
I hope my question is understandable. Thanks in advance!
Best Regards
Kim
Hello, I want to use the Job-Related Affective Well-Being Scale (Van Katwyk, Fox, Spector, & Kelloway, 1999), http://shell.cas.usf.edu/~pspector/scales/jawspage.html, for my master's thesis. I need a German version of it. On that page there is a German version from a master's thesis. Is it okay to use these items, or am I only allowed to translate the English items into German myself for my work?
Kind Regards
Kim
I'm developing a questionnaire to assess observational data in an area of research that is notoriously prone to low inter-rater reliability. Hence, I'm looking for factors in questionnaire design that are generally known to have a positive impact on inter-rater agreement. Unfortunately, physically and/or verbally interacting with the raters prior to data collection is not possible, so methods such as Frame of Reference (FOR) training are ruled out in my case.
My initial choice of response format would be a behaviorally anchored scale, but previous research has shown insufficient improvements in rater agreement. I don't wish to open a discussion on the topic of idiosyncratic variation, which will remain a problem, but instead to focus on possible improvements regarding response format / scale construction. Perhaps someone can point me to interesting research on this topic?
Many thanks!
Specialists in testing and scale construction note that the reliability coefficient found by the test-retest method is usually higher than the one found by the Cronbach's alpha method. Is there a scientific justification for this difference?
I'm doing research on the impact of sensory factors on the tourist experience in hotels (sensory branding of hotels and its impact on tourist experience).
Are there any scales you can suggest on this topic?
Do I need to combine sensory-dimension scales with tourist-experience scales, or can I find an already combined one?
Thank you in advance for taking the time to answer my question.
Can anyone suggest a link between these two factors? How would you justify an effort to find a relation between the two?
Aloha,
I am a student (business psychology) preparing my bachelor thesis. The main aim of the thesis is the validation of two translated scales.
For the translation I adhered closely to WHO Guideline "Process of translation and adaptation of instruments":
- Forward translation
- Expert panel
- Back-translation
- Pre-testing and cognitive interviewing
- Final version
How would you statistically validate the scales?
This particular scale is needed for my thesis. Thank you very much.
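A typical statistical package for a translated scale is internal consistency (Cronbach's alpha), corrected item-total correlations, and a confirmatory factor analysis against the original factor structure. As one small illustration, here is a numpy sketch of corrected item-total correlations on invented data; a conspicuously low or negative value can flag an item whose translation changed its meaning:

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation for each column of an
    (n_respondents, k_items) matrix: each item is correlated with
    the sum of the *other* items, so it is not correlated with itself."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical translated-scale responses: 5-point, 6 respondents x 4 items
data = np.array([
    [4, 5, 4, 2],
    [2, 2, 3, 4],
    [5, 5, 5, 1],
    [3, 3, 2, 5],
    [1, 2, 1, 4],
    [4, 4, 5, 3],
])
r = corrected_item_total(data)
print(np.round(r, 2))  # a low or negative value flags a problematic item
```

In the made-up data above, the fourth item runs against the other three, so its corrected item-total correlation comes out negative, exactly the pattern you would investigate in a translated instrument.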
A semi-serious discussion on manners in respect of adults, children, infants and animals led to these questions:
1. On what type of scale would you measure "manners"? From 'no manners' to 'very good manners' (e.g., 0 - 4); or 'very bad manners .. no manners ... very good manners' (e.g., -4 ... 0 ... +4)?
2. Having in mind your choice of scale, how would you operationalize the construct 'manners', in order to measure it?
Enjoy, and thank you in advance for your ideas.
I want to measure success in creative/cultural entrepreneurship. Besides economic growth, subjectively perceived success, and survival, "NON-TECHNICAL INNOVATION" and "ARTISTIC MERIT" are my success/performance indicators. Does anyone know scales and constructs to measure them via questionnaire? Thank you in advance.
I'm exploring the idea that scales can be ordered and that certain items should carry more weight in a scale. I came across Guttman scalograms and Mokken scaling techniques. Creating the initial Mokken scale makes sense. What I don't get is: after I get my H coefficients and run the AISP with Mokken, how do I evaluate the data in a meaningful way?
If I use a Mokken analysis on a 10-item Likert survey, it wouldn't make sense to take an overall composite mean score to represent the latent ability, since I established that the items are ordered by difficulty. Do the H coefficients determine item weights? How can I sum a participant's score on my newly created Mokken scale?
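For a sense of what the H coefficients measure, here is an illustrative numpy sketch of Loevinger's scale H for dichotomous items: H is 1 minus the ratio of observed Guttman errors to the errors expected under independence. (The R `mokken` package is the standard tool; this toy version only handles 0/1 items and is meant to make the definition concrete, not to replace it.)

```python
import numpy as np

def loevinger_H(data: np.ndarray) -> float:
    """Scale H for an (n_persons, k_items) matrix of 0/1 item scores."""
    n, k = data.shape
    p = data.mean(axis=0)           # item popularities
    order = np.argsort(-p)          # columns from easiest to hardest
    d = data[:, order]
    obs_err = exp_err = 0.0
    for i in range(k - 1):          # i is the easier item of each pair
        for j in range(i + 1, k):
            # Guttman error: fail the easier item but pass the harder one
            obs_err += np.sum((d[:, i] == 0) & (d[:, j] == 1))
            exp_err += n * (1 - p[order[i]]) * p[order[j]]
    return 1.0 - obs_err / exp_err

# A perfect Guttman pattern produces no errors, so H = 1
perfect = np.array([[1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 0],
                    [0, 0, 0]])
print(loevinger_H(perfect))  # 1.0
```

On the substantive question: in Mokken scaling the H coefficients quantify scalability, not item weights; the unweighted sum score is still the person measure, because under the monotone homogeneity model the total score stochastically orders persons on the latent trait.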
I am currently preparing a thesis proposal for a study of the effect(s) of gender stereotypes and discrimination on aspects of employee well-being. I intend to survey employees on their experiences of gender stereotyping/discrimination and the effect of these experiences on (a) job satisfaction, (b) organizational commitment, (c) turnover intentions, and (d) withdrawal behaviors. Thus, I am interested in investigating the relationship between gender discrimination and the aforementioned occupational outcomes. I have been researching appropriate measures for these constructs but have yet to reach a consensus on what to use.
Through my research I have found many scales that measure sexism, although the majority are mostly, if not entirely, attitudinal measures. Below is the current list of my top choices to measure gender stereotypes, sexism, and discrimination (in descending order):
(1) Schedule of Sexist Events
(2) Stigma Consciousness Questionnaire
(3) Ambivalent Sexism Inventory
The Schedule of Sexist Events is currently my top choice, as it most closely measures actual behavioral instances of experienced discrimination (SSE-Lifetime and SSE-Recent). A limitation of this scale, however, is that it samples women exclusively. If at all possible, I would like to use a measure that allows for a heterogeneous gender sample. Based on others' experience, does anyone have any recommendations for measures to use? Does anyone have experience with the SSE, SCQ, or ASI? Any and all recommendations would be greatly appreciated. I am not trying to determine whether an individual holds sexist beliefs, but rather whether, and how much, an individual has experienced gender discrimination.
Thus far I have chosen the following measures for the remaining variables (but I am open to change):
Job Satisfaction: Job Satisfaction Survey (Spector, 1985).
Organizational Commitment: Organizational Commitment Questionnaire (Mowday, 1979)
Turnover Intentions: Turnover Intention Scale (TIS-6)
Withdrawal Behaviors: N/A
I am comfortable with my choices of the JSS and OCQ. I am, however, uncertain about the TIS-6 and have not been able to find a validated measure for withdrawal behavior. Any suggestions on scales/surveys/questionnaires that would fit these variables would be greatly appreciated.
Thank you for your time and consideration!
Hi,
Can anyone please share some recent, good research papers that lay the foundation for, and explicitly describe, the groundwork of psychometric scale construction?
I shall be highly grateful for the help.
Regards to all
Dear all,
I would like to use a scoring system to rank some statements regarding the application of risk management in a project. I have seen the scaling system used by the INK model (the Dutch version of the EFQM model), which uses a four-point rating of 10, 7, 3, and 0, meaning respectively 'totally applied', 'applied to a large extent', 'applied to a limited extent', and 'not applied'. I would like to know whether there are specific scientific reasons behind these numbers or behind similar scales. Why, for example, does this model use 10, 7, 3, 0 and not 10, 7, 5, 0? I am looking for a scientific article that explains the reasoning behind this scale or others like it. Does anyone have experience using a similar scale? Does anybody know how such a scale should be defined and what considerations it should be based on?
I appreciate any answer/ comment on this question.
regards,
Dear all,
my dependent and independent variables are measured on a 7-point Likert scale and capture perceptions of effective leadership, so I assume my data are ordinal.
The independent variables are six constructs with 5 questionnaire items each, which measure cultural values.
The dependent variables are 2 constructs with 19 items each, which measure the perception of effective leader attributes.
So I hypothesize that each of the culture dimensions is associated with perceived effective leader attributes. I have collected the data and intend to do the statistical analysis as follows:
1. Reliability Analysis - Cronbach's Alpha
2. Factor Analysis
But then I'm torn between ANCOVA and Spearman's rho to test the association. I understand that ANCOVA is used for interval data and Spearman for ordinal data.
Could you please advise on an appropriate method?
Many thanks,
Ahmad
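For the ordinal route described in the question above, Spearman's rho on the construct scores is straightforward with scipy. A minimal sketch; the construct names and values are placeholders, not real data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical mean construct scores per respondent (7-point Likert means)
culture_dimension = np.array([5.2, 3.1, 6.0, 4.4, 2.8, 5.5, 3.9, 6.3])
leader_attribute  = np.array([5.8, 3.5, 6.4, 4.1, 3.0, 5.9, 4.2, 6.6])

# Rank correlation: monotone association between the two ordinal scores
rho, p = spearmanr(culture_dimension, leader_attribute)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```

Note that ANCOVA tests group differences while adjusting for a covariate; for the association between two ordinal variables, a rank correlation (or ordinal regression) is the more direct choice.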
Dear all,
I'm investigating perceptions of effective leadership of Germans and Indians (I want to know which leader attributes are mutually endorsed and which are not). Part of the questionnaire assessed the attitude towards cultural values of the respondent, the other part attitudes toward effective leader attributes.
I have run a questionnaire and gained N=209 responses. The questions were asked on a 7-point Likert scale and the precise questionnaire can be found here: http://globeproject.com/data/GLOBE-Phase-2-Beta-Questionnaire-2006.pdf (I only used part of the questionnaire and not all questions, as I only investigate perceptions). I hypothesize that certain elements of culture are predictors of certain perceived effective leader attributes.
Now I find it a bit difficult to decide which statistical tests make sense. I've done some research online, and the information is quite ambiguous and sometimes conflicting.
Therefore, I'd be really grateful if anyone here could advise.
Thank you,
Ahmad
I am doing a study on determinants of pro-environmental behavior, where I propose value orientations (egoistic, altruistic, and biospheric) and awareness of environmental threats as possible determinants. All of the IVs I use are continuous (scale) variables, and the DV is a set of scores as well. I initially ran a multiple regression to test my hypothesis, but I have been told to use an N-way (factorial) ANOVA instead, so that I can see the interactions among my variables. I am attaching my concept for reference.
Any suggestions/advice would be much appreciated.
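With continuous predictors, interactions are usually tested by adding product terms to the regression (moderated regression) rather than by cutting the IVs into ANOVA factors, which discards information. A minimal numpy sketch on simulated, noise-free data (the variable names are hypothetical and the coefficients are chosen for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical continuous predictors and a DV with a built-in interaction
biospheric = rng.normal(size=n)
awareness = rng.normal(size=n)
behavior = 0.5 * biospheric + 0.3 * awareness + 0.4 * biospheric * awareness

# Design matrix: intercept, main effects, and the product (interaction) term
X = np.column_stack([np.ones(n), biospheric, awareness, biospheric * awareness])
coef, *_ = np.linalg.lstsq(X, behavior, rcond=None)
print(np.round(coef, 3))  # recovers [0, 0.5, 0.3, 0.4] since no noise is added
```

In practice you would mean-center the predictors before forming the product and use a package such as statsmodels to get standard errors and p-values for the interaction coefficient.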
The Likert-scale items are intended to answer the question of whether communication between hospital staff is effective or not.
Dear everyone,
I am in quite some distress regarding my thesis.
I have set up a survey consisting of 21 questions, all 'based' on the Likert-scale format, yet not completely following it.
Some questions can be answered on 5 points, ranging from completely agree to completely disagree. However, I also have some questions with 5 categorical answer options (e.g., 'What source sounds most trustworthy?').
Basically, my thesis is about what can stimulate consumer acceptance of halal food. This is my DV (consumer acceptance), and out of my 21 questions I have constructed 17 IVs that form (in groups) 6 main constructs (health, CSR, price, etc.) that possibly influence consumer acceptance.
I hope everything is clear up to this point.
So what I want to do is check whether any answers are correlated (say, someone answered question 1 with the first option, and this correlates with answering question 14 in the same way). I would also like to see which questions are answered the same way significantly often (to see what the majority of people deem important).
I have no clue why my supervisor told me this survey was okay, as now I don't know how to proceed. I don't even know where to start (reliability checks), since I don't know how to combine the Likert-scale questions with the non-traditional ones.
Can anyone please suggest appropriate scales for measuring university graduates' voluntary and involuntary unemployment, subjective and objective employability and visible and invisible underemployment?