Research Social Validity - Science topic
Evaluation of the degree of acceptance for the immediate variables associated with a procedure or program designed to change behavior. This includes the social significance of the goals of treatment, the social appropriateness of the treatment procedures, and the social importance of the effects of treatments.
Questions related to Research Social Validity
How valid is the use of reliability and validity criteria in content analysis? If these criteria are systematically problematic for this method, what approach is appropriate instead?
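In content analysis, reliability is usually operationalized as intercoder agreement. Purely as an illustration (the coders, units, and category labels below are made up), Cohen's kappa for two coders who assigned nominal codes to the same text units can be computed like this:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of units on which the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal frequencies
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten text units
a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "neu", "pos", "neg"]
b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "neu", "pos", "neg"]
print(round(cohens_kappa(a, b), 3))
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance. Krippendorff's alpha is a common alternative that handles more than two coders and missing codes.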
Dear colleagues, dear participatory-action research practitioners,
I would like to open the discussion on the criteria for evaluating participatory research (whether it is action-research, participatory action research, CBPR, etc.).
How do you evaluate participatory research projects that are submitted for research grants and/or publications (papers)? Do you apply the same criteria as when you evaluate non-participatory research projects? Or have you developed ways to evaluate non-scientific dimensions, such as the impact of the research on communities or the quality of the connections between co-researchers? And if so, how do you proceed?
Thank you in advance for sharing your experiences and thoughts.
For French-speaking colleagues, feel free to reply in French! What criteria do you use to evaluate participatory research projects? Do you apply the scientific evaluation criteria used for other types of research, or do you have specific criteria, and if so, which ones?
Baptiste GODRIE, Quebec-based social science researcher & participatory action research practitioner
The pandemic has had a huge impact on everything, including people, organizations, and government policies. If a study was designed before the pandemic, it was assumed that business as usual would continue when the data were collected; however, circumstances have changed enormously. Would a model constructed before the pandemic still be relevant in explaining the variation, or is a different model that incorporates a pandemic factor needed to explain it better?
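One common way to probe this is to refit the model with a pandemic indicator plus its interaction with the predictors and see how much additional variation it explains (a Chow-type structural-break check). A minimal sketch on made-up data, in which the x-to-y relationship shifts during the pandemic by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the slope and intercept change during the pandemic
n = 200
x = rng.normal(size=n)
pandemic = np.repeat([0.0, 1.0], n // 2)          # 0 = pre, 1 = during/after
y = 1.0 + 2.0 * x + pandemic * (0.5 - 1.0 * x) + rng.normal(scale=0.3, size=n)

def r2(X, y):
    """R-squared of an OLS fit of y on the design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones(n)
restricted = np.column_stack([ones, x])                        # pooled model
full = np.column_stack([ones, x, pandemic, pandemic * x])      # with break terms
print(round(r2(restricted, y), 3), round(r2(full, y), 3))
```

If the full model's fit improves substantially over the pooled model, the pre-pandemic coefficients no longer describe the data well; a formal F-test on the added terms makes the comparison rigorous.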
The h-index is an author-level metric that attempts to measure both the productivity and the citation impact of a scientist's or scholar's publications. The index is based on the set of the scientist's most cited papers and the number of citations those papers have received in other publications. I want to know in what way it helps a researcher.
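The definition above translates directly into code: sort the citation counts in descending order and take the largest rank h at which the h-th paper still has at least h citations. A small sketch (the citation counts are made up):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank      # the rank-th paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # four papers have at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))   # three papers have at least 3 citations each
```

Note that a single highly cited paper raises the h-index by at most one, which is exactly why the metric is read as combining productivity with impact rather than rewarding either alone.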
I was running a model in SmartPLS 3.
1. The outer loadings of 40 indicators (out of 101) are between 0.40 and 0.70, so not all of them are critically low.
2. AVE results for 5 variables (out of 11) are below 0.5.
3. The Fornell-Larcker criterion is not met for two construct pairs.
4. All HTMT results are below 0.85.
According to Hair (2011): Generally, indicators with outer loadings between 0.40 and 0.70 should be considered for removal from the scale only when deleting the indicator leads to an increase in the composite reliability (or the average variance extracted) above the suggested threshold value.
Here are the results after eliminating 13 indicators with the lowest outer loadings:
1. All AVE values become greater than 0.5.
2. The Fornell-Larcker criterion is still not met for two construct pairs.
3. On the other hand, the HTMT results change, and the HTMT for one construct pair now exceeds 0.90.
I am aware that the elimination of items purely on statistical grounds can have adverse consequences for the construct measures’ content validity (e.g., Hair et al. 2014). Therefore, researchers should carefully scrutinize the scales (either based on prior research results or on those from a pretest in case of the newly developed measures) and determine whether all the construct domain facets have been captured. At least two expert coders should conduct this judgment independently to ensure a high degree of objectivity (Diamantopoulos et al. 2012).
I do not know of theoretical support for eliminating some indicators. I would be really thankful if you could advise me on what to do in this situation.
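For reference, AVE and composite reliability can be recomputed by hand from the standardized outer loadings, which makes it easy to see how dropping a low-loading indicator moves both statistics. A sketch with hypothetical loadings (not taken from the model above):

```python
def ave(loadings):
    """Average variance extracted from standardized outer loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (rho_c), assuming uncorrelated error terms."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

construct = [0.82, 0.75, 0.68, 0.45]     # hypothetical loadings for one construct
trimmed = construct[:-1]                  # drop the weakest (0.45) indicator

print(round(ave(construct), 3), round(ave(trimmed), 3))
print(round(composite_reliability(construct), 3),
      round(composite_reliability(trimmed), 3))
```

In this toy case, removing the 0.45 indicator lifts the AVE above the 0.50 threshold while composite reliability also improves, which is precisely the condition Hair (2011) attaches to deleting indicators in the 0.40-0.70 range; the content-validity check by independent expert coders then decides whether the construct domain is still fully covered.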
A school in Jordan is doing an impact study on its alumni. The variables are a list of traits and values (innovation, leadership, empathy, etc.). I'm responsible for preparing the questionnaire.
My methodology is:
1- For each value/trait, find an inventory or scale that measures it.
2- Choose three items from the inventory/scale.
3- Combine the three items from all the inventories/scales to create the new questionnaire (about 60 items).
I need an expert who can review the final questionnaire and give an approval and recommendations to improve the questionnaire.
Thinking in terms of a social setting such as a dance, a concert, a meal, if an experiment were to be designed in such a way, how can the method be validated? Similarly, what role would reliability play in an experiment set in a social setting? How can you recreate social settings for further empirical study?
I would love to read some examples of studies if you are familiar with any!
Can I do an incremental validity analysis with variables derived from the same raw data? For example, imagine a researcher is interested in how a person's sociability relates to their mental well-being. As a data source, the researcher records group conversations among 10 people interacting in a room. Imagine that a standard way to measure sociability is simply to total the number of "chats" made by a participant in the discussion. But let's say I wanted to test whether a better way to measure sociability (i.e., one with stronger predictive validity) is to take the proportion of a person's chats relative to all other chats made in the group. Could I do an incremental validity analysis where I treat the total number of chats as one variable and the proportion of chats as another? And if not, what would be the proper way to establish the superior predictive validity of the proportion-based measure of sociability?
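The usual incremental validity check is a hierarchical regression: enter the total-chats measure first, then add the proportion measure and look at the change in R². A sketch with simulated data in which, by construction, the proportion carries the signal (all variable names and numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: well-being is driven by the *proportion* of chats
n = 120
total_chats = rng.poisson(20, size=n).astype(float)
group_chats = total_chats + rng.integers(50, 400, size=n)   # all chats in group
proportion = total_chats / group_chats
wellbeing = 5 * proportion + rng.normal(scale=0.05, size=n)

def r2(X, y):
    """R-squared of an OLS fit, with an intercept added automatically."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

base = r2(total_chats[:, None], wellbeing)                       # step 1: total only
both = r2(np.column_stack([total_chats, proportion]), wellbeing)  # step 2: add proportion
print(round(base, 3), round(both, 3), round(both - base, 3))      # delta R-squared
```

A large ΔR² when the proportion is added over the total is the incremental-validity evidence; because the two measures are derived from the same raw counts, it is worth checking their collinearity and, for a formal test, comparing the nested models with an F-test.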
I'm looking for some advice on my study design. In short, I want to study the impact of mobile VR on a specific skill in an engineering course (do students who study with VR do better, as measured by a pre-post test plus psychological assessments such as self-efficacy?). I have 3 classrooms (undergraduate college level), two of which will have about 30 students and one probably around 15. The semester-long VR intervention is integrated with the existing curriculum (which means we have certain constraints on the order of the topics that we do in VR). We have 10 modules overall, all of which could potentially be taught in VR, but we can also teach fewer than 10. So the treatment is the use of VR (as opposed to using 2D materials for instruction).
Now, the problem is - I would typically use a between-subject design, comparing Classroom 1 to Classroom 2. However, I'm not sure what to do with Classroom 3 AND the problem is that one classroom is likely to be formed by students from a non-engineering program (who tend to be in the same class due to their scheduling), while the other classroom will have engineering students. In other words, there is a problem of baseline equivalence of the two conditions (if one class does VR and the second class does 2D).
If I use a within-subject design, then there is a big problem with counterbalancing: some modules have to go first, and alternating VR vs. 2D modules between classes makes it hard to interpret the unique contribution of VR to the outcome variable (a test measuring the skill of interest), because the test can only be administered twice (due to the practice effect).
Which design would be more appropriate? A between-subject design is what I would do, but then there is the selection threat to internal validity; in the within-subject design, teasing apart the effect of VR from the order and content of the topics is almost impossible. I would appreciate any thoughts on this issue.
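For the between-subject option, one standard mitigation for baseline non-equivalence is an ANCOVA-style adjustment: regress the post-test on the pre-test plus a group indicator, so the group coefficient estimates the treatment effect net of baseline differences. A sketch on simulated data (all numbers are hypothetical, not from any real classroom):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre/post scores for two non-equivalent classrooms
n = 30
pre_vr = rng.normal(65, 10, n)                 # VR classroom starts higher
pre_2d = rng.normal(50, 10, n)
post_vr = pre_vr + 8 + rng.normal(0, 3, n)     # simulated VR gain = 8
post_2d = pre_2d + 3 + rng.normal(0, 3, n)     # simulated 2D gain = 3

pre = np.concatenate([pre_vr, pre_2d])
post = np.concatenate([post_vr, post_2d])
group = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = VR classroom

# Naive post-test comparison confounds the baseline gap with the treatment
naive = post_vr.mean() - post_2d.mean()

# ANCOVA-style adjustment: the group coefficient is the treatment
# effect after controlling for the pre-test
X = np.column_stack([np.ones(2 * n), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(round(naive, 2), round(beta[2], 2))
```

The adjusted estimate recovers something close to the simulated 5-point advantage, while the naive difference also absorbs the 15-point baseline gap; with intact (non-randomized) classrooms this reduces, but does not eliminate, the selection threat.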
What is nomological validity? Can anybody explain it in detail?
Thanks in advance.
When it comes to conducting quality research and carrying out meaningful projects, it is crucial to understand the parameters that should be considered when finalizing the topic. Unlike a short-term assignment such as writing a research paper, the dimensions of a project may change midway because of its sheer scale and the time involved. The question is: how can the selection of an effective topic for a research project be ensured? The question is asked with reference to executing a project in management and the social sciences.
I'm trying to develop a scale to measure the effects of social networking site usage on children's attitudes toward family relationships. Do you have any ideas?