Science topic
Reliability - Science topic
Explore the latest questions and answers in Reliability, and find Reliability experts.
Questions related to Reliability
Hello! Is it acceptable if an adapted questionnaire is tested for reliability but not validity?
Hi. I'm generating the data required for training an artificial neural network (ANN) using a reliable, validated, self-developed numerical code. Is this the right approach?
Or should the necessary data be produced only with experimental tests?
Best Regards
Saeed
While going through different ways to calculate insulin resistance, I came across the eGDR equation. It is a very good, easy, and reliable way to calculate insulin resistance. However, I am a little confused about how to decide the threshold for a population. I also came across various studies where a modified version of eGDR was used; how do authors decide that they might need a modified version of eGDR?
Sorry to ask, but I don't understand.
Hi,
I've run reliability analyses for my data and received a negative Cronbach's alpha of -.562 for my Rosenberg Self-esteem scale.
The scale includes reverse-scored items, and I have gone back, un-reversed, and re-scored them to correct any mistakes I made.
However, after running my reliability test again, I still received a negative result of -.608.
I'm not sure where the issue may lie, as I have checked my reverse scoring and believe there are no errors there.
How can I resolve this?
Thank you
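A persistently negative alpha usually means at least one item still correlates negatively with the rest of the scale. Below is a minimal sketch (using fabricated 5-point responses, not the Rosenberg data) showing how a single still-miskeyed item drives alpha negative and how reversing it on a 1-5 scale recovers it:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated 1-5 responses; item 3 is negatively keyed and not yet reversed.
data = np.array([[4, 5, 1, 4],
                 [3, 4, 2, 3],
                 [5, 5, 1, 5],
                 [2, 3, 4, 2],
                 [1, 2, 5, 1]])

alpha_raw = cronbach_alpha(data)       # negative: item 3 cancels the others
fixed = data.astype(float)
fixed[:, 2] = 6 - fixed[:, 2]          # reverse-score item 3 on a 1-5 scale
alpha_fixed = cronbach_alpha(fixed)    # high once keying is consistent
print(alpha_raw, alpha_fixed)
```

If all items are already keyed consistently and alpha is still negative, inspecting the inter-item correlation matrix usually reveals which items are at odds with the rest.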
I tried searching for relevant topics on this website, but I couldn't get to any of the references on which the site's information is based. Can you tell me whether you think this site is reliable, and how to get to the references?
I need to know about all possible statistical analyses that can be applied to qualitative traits or data, particularly for wheat.
In our study, it was recommended by our professors to set a deadline and if at least 20% of the total population has responded within the time allotted, we may proceed with data analysis and interpretation. However, we cannot find a reliable source to support this method.
Modern research recommends that researchers apply the Mixed Method Research Approach to developmental research studies.
This is both Quantitative and Qualitative data that have to be collected from the field. The former is from the sampled respondents, and the latter is from the non-sampled respondents, like Focus Group Discussions and Key Informants Interviews at the local level.
Most development practitioners and Social Science researchers are interested in using this Mixed Methods Research Approach.
The main advantage of using this mixed-method approach is that it could help the researchers ascertain the results' validity and reliability.
The Qualitative data can be triangulated with the Quantitative data to measure the validity of the research.
Hello,
I built a questionnaire for my cross-sectional study based upon multiple previous studies, but I'm facing a problem in determining the appropriate way to test validity and reliability, considering that my questionnaire contains different types of items:
A) Dichotomous items (yes/no). For example: Did you suffer from acne before? Yes/No
B) Polytomous items (only one answer allowed). For example: How old were you when you first had acne? <15 years, 15-20 years, >20 years... etc.
C) Polytomous items (multiple answers allowed). For example: what is the site of acne? Face, trunk, chest... etc
I'm attempting to work with induced polarization and geoelectric subsurface structural mapping but am having difficulty locating a reliable dataset that could be enhanced and used for regional sensing and analysis. If anyone has any leads, I would greatly appreciate them, thank you!
How could automation help blood banks enhance the standard and reliability of their laboratories?
I have received a comment from a reviewer and do not know how to address it:
The authors have revised the manuscript well, but I'm still unsure about the data collection period. This study collected data in 2013-2014 which means about 9-10 years ago. And we know conditions have changed a lot after the pandemic, and the authors need to justify that this research is still applicable. The justification given is still quite dubious to me. Maybe you can include references that support that the data is still valid and reliable for analysis in your study.
Would you please suggest how I might respond to this comment?
Hello,
I want to investigate election violence and its impact on voters' well-being in seven counties in Liberia. I am still perplexed about which sample size determination method to use, but I would love to employ the proportionate sample size method if possible.
I have two datasets that I am still indecisive about. One is the 2008 census data for the seven counties; the other is the number of registered voters during the 2017 presidential elections.
Which data would be appropriate to use? What is the formula for using the proportionate sample size determination method? I would also appreciate suggestions for any reliable method other than what I have mentioned here.
Thank you!
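For proportionate (stratified) allocation, each county h receives n_h = n × N_h / N, where N_h is the county's population on the chosen frame (census count or registered voters) and n is the total sample size. A sketch with placeholder county figures and an assumed n = 400 (the numbers are illustrative, not Liberian statistics):

```python
# Proportionate allocation: n_h = n * N_h / N. County figures are placeholders,
# not actual census or voter-roll numbers.
county_pops = {"Montserrado": 700_000, "Nimba": 250_000, "Bong": 150_000,
               "Lofa": 100_000, "Margibi": 120_000, "Grand Bassa": 110_000,
               "Grand Gedeh": 70_000}
n = 400                                    # total sample size (assumed)
N = sum(county_pops.values())
allocation = {c: round(n * pop / N) for c, pop in county_pops.items()}
print(allocation)    # rounding may make the total differ slightly from n
```

Because of rounding, the allocations are often adjusted by one or two units so they total exactly n; largest-remainder apportionment is one common fix. Whichever frame you choose (census or voter roll), the same formula applies; the voter roll is arguably the better frame if the target population is voters.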
When I write a subroutine in Intel Fortran (via Visual Studio 2019), the file is saved as .f90. However, I need the file type to be .for to run in Abaqus. Is there a more efficient and reliable way to change my .f90 files to .for?
Thank you in advance.
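If the only obstacle is the extension, a short script can batch-rename the files; this sketch assumes the subroutines live in a single folder. One caveat worth checking: Intel Fortran treats .for as fixed-form source by default, so free-form .f90 code may need the corresponding free-form compiler option in your Abaqus environment file after renaming.

```python
from pathlib import Path

def f90_to_for(folder):
    """Rename every .f90 file in `folder` to .for; returns the new paths."""
    renamed = []
    for src in sorted(Path(folder).glob("*.f90")):
        dst = src.with_suffix(".for")
        src.rename(dst)
        renamed.append(dst)
    return renamed
```

Call it as, e.g., `f90_to_for("my_subroutines")` (a hypothetical folder name).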
I have dichotomous data and a Likert scale, and I have to analyze variables of both. In that case we can check reliability, but how do I check the validity of these two different constructs of questions, where one is a dichotomous set of questions and the other is a Likert set? Can we check the validity of both of them as we do in factor analysis?
Hi All,
I need help finding sources for reliable data on smoking behaviour in terms of health risks. I tried OSF, but it doesn't have data on smoking.
I shall be highly grateful for any help.
Thank you.
Best,
Mariyam Abbas
I'm developing a multi-level sustainability certification model for the insurance industry, and they would like the reporting element to include the type of information that we could get from that type of calculator.
Thanks in advance
Must a questionnaire that has already been tested for validity and reliability be tested again each time it is used? If there are already test results for a certain country's version, should it be tested again when it is used in that country?
Hello,
Can I easily find an identification database of chemical components on the internet without complicated conditions?
Which databases are important, and which are reliable?
Knowing that the components targeted by the identification are components of organic plant extracts.
Thank you very much in advance.
Cordially.
Hello everyone,
I would like to investigate three factors using a central composite design.
Each factor has 5 levels (+1 and -1, 2 axial points and 1 center point).
I chose my high and low levels (+ and -1) based on a screening DoE I did previously using the same factors.
I chose an alpha of 1.681 for the axial points because I would like my model to be rotatable. However, for one of the three factors, one of the axial points lies outside the feasible range (a negative CaCl concentration). I thought of increasing my low level for this factor to avoid this, say from 0.05 to 0.1, so that the axial point no longer reaches the negative range. But I was wondering whether this would affect the reliability of my model.
Another option would be to change the design to one that has no axial points outside the design points. However, this region is actually my area of interest.
Can anyone help?
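The rotatable alpha depends only on the number of factors, alpha = (2^k)^(1/4) ≈ 1.682 for k = 3, so raising the low level does not break rotatability; it only narrows the region your model describes. A quick feasibility check (the low levels 0.05 and 0.1 come from the question; the high level of 0.2 is an assumed placeholder):

```python
# Rotatable alpha depends only on the number of factors: alpha = (2**k) ** 0.25.
k = 3
alpha = (2 ** k) ** 0.25                  # ~1.682 for three factors

def axial_points(low, high, alpha):
    """Real-unit star points for one factor of a central composite design."""
    center = (low + high) / 2
    half = (high - low) / 2
    return center - alpha * half, center + alpha * half

# Low levels 0.05 and 0.1 are from the question; the high level 0.2 is assumed.
print(axial_points(0.05, 0.2, alpha))     # lower star point < 0 -> infeasible
print(axial_points(0.10, 0.2, alpha))     # both star points feasible
```

Raising the low level keeps alpha, and hence rotatability, intact; what changes is simply the span of the region the fitted quadratic describes, so predictions outside the new range become extrapolation. An alternative is a face-centred design (alpha = 1), which sacrifices rotatability but keeps all points inside the cube.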
Dear Author,
Warm Greetings from Academic 2023!
Congratulations! Based on the quality of your recent publication, you have been provisionally selected for the Research Award and recommended by our scientific committee. In this regard, we welcome you to nominate your short research profile through the online submission system of the International event.
Selected Category: Best Researcher Award
Online Nomination:
Note:
Submit your updated profile under the selected category.
Submission is peer-reviewed by editorial members.
Hi,
I have recently seen many approaches to writing a peer review. Many suggest using a table for this purpose. I need to know which criteria should be taken into account. For instance, must all references be reliable and high-impact? What should the table include? How can I analyse the articles effectively? How many articles should be reviewed?
Any helpful steps?
Thank you all
I am trying to do some patch clamp experiments recently.
However, I noticed that we don’t have a perfusion system.
Is it still doable?
Will the results be reliable?
I would like your opinion on which open-source tool is more reliable for battery aging simulation and performance evaluation. There are various open-source tools, such as PyBaMM or OpenFOAM. I would appreciate advice based on your experience.
Thank you in advance!
We are performing an experiment to estimate the change in the gut microbial profile of dairy cattle in response to different diets. The experiment's main aim is to see whether there is a reduction in methane emissions and a change in the gut microbial profile. We are planning to perform a microbiome analysis to get a complete estimate of the change in the microbial profile.
The questions here are:
1. Is microbiome testing the best method for estimating the reduction in methane emissions (since it can be used to estimate the methanogenic archaea in the gut)? Is the result really reliable?
2. Is in vitro testing a good way to measure methane emissions, and is it reliable?
Thank you.
The original scale has 4 subscales, and I will use it as one of my essential outcome measures. I need to use 2 subscales only to assess specific changes.
SmartPLS 4's construct reliability and validity output includes Composite reliability (rho_a) and Composite reliability (rho_c).
What is the difference between Composite reliability (rho_a) and (rho_c)?
Which one should I use for my study?
The study to break the culture-Babel enigma code is still open; without a reliable sample, this code cannot be broken 💔😭💔
If you would like to be part of studies on the link between culture, emotions, and cognition, there is still time to fill it in!
Please contact me for more details!
Precision-based ranked retrieval evaluation metrics from information retrieval (IR), such as Precision@k (P@k), AveragePrecision@k (AP@k), and MeanAveragePrecision@k (MAP@k), employ only oracle- or user-assessed relevancy scores while completely discarding the system-generated relevancy scores. However, the system-generated relevancy scores are the ones used for ranking the retrieval outcome.
Is it right to discard the system-generated scores (known as similarity scores in case-based reasoning (CBR)) in evaluation metrics?
Let's consider two variants (A and B) of a CBR system with identical case representation, case base, and retrieval output. However, the retrieval results differ by their system-generated scores based on which the retrieval ranking is performed. Say, the oracle-assessed relevancy scores and system-generated relevancy scores for the top 3 ranks for A and B are:
- A: oracle-assessed relevancy (0.9, 0.8, 0.7) and system-generated relevancy (0.9, 0.8, 0.7)
- B: oracle-assessed relevancy (0.9, 0.8, 0.7) and system-generated relevancy (0.5, 0.3, 0.2)
Note: The metrics (P@k, AP@k, and MAP@k) by design operate only with binary relevancies, which means oracle-assessed relevancies for A and B can be (1, 1, 1) for the current example, where 1 is relevant and 0 irrelevant.
Question:
- Now, which CBR system is fairer or more reliable?
- Should we choose system A or B?
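Since P@k and AP@k consume only the (binarised) oracle relevancies, systems A and B in the example receive identical scores; the sketch below makes that explicit. Note that AP@k conventions vary; this version normalises by the number of hits within the cutoff:

```python
def precision_at_k(rels, k):
    """P@k over binary relevancies given in ranked order."""
    return sum(rels[:k]) / k

def average_precision_at_k(rels, k):
    """AP@k normalised by the number of hits within the cutoff
    (conventions vary; some normalise by the total relevant count)."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(rels[:k], start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / max(hits, 1)

# Systems A and B: same binarised oracle relevancies, different system scores.
oracle = [1, 1, 1]                 # identical for A and B
scores_a = [0.9, 0.8, 0.7]         # ignored by the metrics
scores_b = [0.5, 0.3, 0.2]         # ignored by the metrics
print(precision_at_k(oracle, 3), average_precision_at_k(oracle, 3))
```

Whether discarding system scores is "right" depends on the question: rank-based metrics deliberately evaluate only the ordering. If score calibration matters for the CBR system, a complementary measure (e.g. correlation between system-generated and oracle scores) can be reported alongside.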
What are some reliable suppliers in India for gray cast iron billets?
The current state of the art of classical fragility contains epistemic uncertainty! That is because:
- Displacement depends on the direction and magnitude of the force, so displacement-based fragilities are not reliable!
- Defining fragility in terms of PGA or other characteristics of the hazard is another misunderstanding and a wrong conception!
- Each fragility curve is for all systems! Defining a fragility curve for a single system is a wrong concept!
- Fragility is a concept of the structure alone!
- There is only one fragility curve for all structures; the abscissa is a property of structures, such as the relative slenderness ratio!
- Fragility and reliability (capacity) are two sides of a coin! The fragility curve at each point is equal to 1 minus the design (reliability) curve!
- This great misunderstanding should be corrected!
- For more info, look for Persian Curves in the literature!
Dear Sir/Madam,
I am using time-series secondary data in my research, but I am not sure how to test the validity and reliability of the data.
Hi there!
I wish to do a comparative study of two available treatments for peptic ulcer disease (PUD). I need a reliable tool/method to help me measure the efficacy of a given treatment. Aside from endoscopy, how can I determine which treatment has been more effective?
Thanks.
Hello everyone
I want to source high-purity, reliable laboratory materials.
Which of the following companies do you approve of?
Have you had experience using their materials?
Acros
Alfa Aesar
TCI
Florchem
Fisherchem
Santachem
Santa Cruz
Carbosynth
Glentham
Tocris
Hellochem
Biozol
I am looking for Nigeria's data on tertiary education attainment, research and development, international technology transfer, and domestic technology investment from 1990 to 2021. Regards.
What other similar graphical approaches/tools do you know when we attempt to depict the degradation state or reliability performance of a system, aside from Markov chain and Petri net?
(Any relevant references are welcome.)
Thank you in advance.
The abstract is as follows:
Title: Quality of life of Iranian adults with neurofibromatosis 1: validity and reliability study
Abstract
Aim: To evaluate the quality of life of Iranian patients with neurofibromatosis and the validity and reliability of the Persian version of the NF1 adult HRQOL questionnaire (NF1-AdQOL).
Methods: This methodological study was conducted on 414 adult patients with neurofibromatosis 1 in Iran, using convenience sampling. With permission from the developer of the scale, it was back-translated. Content validity and exploratory and confirmatory factor analyses were assessed. The reliability of the questionnaire was evaluated with test-retest and internal consistency.
Results: The 31-item Quality of Life questionnaire was translated into Persian, and based on content validity, two of the items were removed. The adequacy of the sample was acceptable (KMO = 0.940). Exploratory factor analysis revealed four factors. The scale showed good reliability (Cronbach's alpha: 0.953), and the intraclass coefficient was 0.91. The total mean quality-of-life score was 93 ± 25.18.
Phylomatic.com supports comparative analysis of multiple species at the same time. Results from statistical analyses with and without phylogeny taken into account can be quite different, for instance, correlations between two plant traits. How can the contribution of phylogeny be quantified when environmental and biological factors together explain the response variable? Many papers have touched on this aspect so far; what is the most reliable way to deal with it?
Please suggest some reliable tools for computing multiple sequence alignments and generating heatmaps.
I am currently developing a proposal on microplastics (MPs) in the terrestrial environment, kindly assist with a reliable methodology to determine MPs in soil, water and plants. Is there any permissible limit in terms of concentration?
Isn't it really time to use the omega coefficient instead of Cronbach's alpha for reliability?
I have values for two treatment groups with the same gene target, and all three groups have the same beta-actin value: Treated 1 = 21.00 Cq and Treated 2 = 20.50 Cq, while both treatments share the same Control = 19.00 Cq. From this rough data, is it reliable to say that Treated 2 has higher expression than Treated 1 relative to Control, due to its Cq value being 0.5 lower? Thank you in advance.
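With the reference gene identical across groups, the Livak 2^-ΔΔCq comparison reduces to the target Cq differences. A sketch with the Cq values from the question (ignoring amplification-efficiency differences and replicate variability, which a real claim would need):

```python
def fold_change(cq_target, cq_control):
    """Livak 2^-ddCq; the reference-gene terms cancel here because the
    beta-actin Cq is identical across groups in this example."""
    return 2 ** -(cq_target - cq_control)

fc1 = fold_change(21.0, 19.0)   # Treated 1 vs Control: 0.25 (4-fold lower)
fc2 = fold_change(20.5, 19.0)   # Treated 2 vs Control: ~0.35
print(fc1, fc2, fc2 / fc1)      # Treated 2 / Treated 1 ratio = 2**0.5, ~1.41
```

On these numbers Treated 2 shows roughly 1.4-fold higher expression than Treated 1, but a 0.5-cycle difference without replicates is within typical technical noise, so the claim would be hard to defend statistically without biological replicates.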
Given that one protein kinase has many intracellular target proteins, are there any methods that make a protein kinase phosphorylate only one desired target protein in an experiment? Or are there any strategies to make sure that the protein kinase phosphorylates only the desired target, to ensure the reliability of results from a phosphorylation study?
Thank you in advance for your answer.
It would be great if anyone could suggest a few datasets from which GHI, DHI, and DNI (solar data) can be downloaded in a viewable format.
What is the appropriate time window for estimating the incidence of a medical problem, and how can incidence be differentiated from prevalence to obtain more reliable and realistic figures?
The section on physical activity will be part of a large survey that intends to collect information on adolescents' and young adults' health behavior.
We are aware of the limitations of the self-reported method, yet this will be the most suitable method for our study.
Thank you in advance
Hi,
Has anyone doing microfluidics work for cell culture started having problems recently with getting their Sylgard 184 PDMS to reliably and permanently bond to the glass substrate? We have tried replacing our PDMS elastomer base and curing agent recently but this has not fixed our problem. We have also tried using 10% less curing agent which didn’t help either.
Briefly our protocol involves the initial bake at 60 degrees C for 1.5 hours, cutting out chips the next day then cleaning the PDMS surface with adhesive tape, cleaning the glass surface with 70% ethanol and a blast of nitrogen, plasma treatment for 30 seconds at 50% power/~15W (Zepto ONE, Diener Electronic plasma cleaner), and then sticking the surfaces together. Final annealing overnight at 60 degrees C before checking my chips for any bonding issues.
Normally when I have improperly bonded chips it’s immediately obvious and I’d see a portion of the bonded PDMS stuck onto the glass. Recently I’ve noticed that too many of the chips come off the glass cleanly, leaving no residual PDMS. I also sometimes find that a chip may be properly bonded but then it comes undone at a much later stage. All this leads me to think that the bonding is no longer permanent like it should be. I’d appreciate any feedback or advice, or if anyone else has noticed this and what may have worked for you if you did. Thanks very much in advance!
One of my questionnaires has one item: one open-ended question. Is there any method to check the reliability of one question?
I have calculated the correlation coefficient for these groups: Group A: r = 0.8, p = 0.02; Group B: r = 0.8, p = 0.0005.
Can I say that the correlation was stronger in group B compared to group A?
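With identical r, the smaller p in group B almost certainly reflects a larger sample, not a stronger correlation: p answers "is r distinguishable from zero?", not "how strong is r?". A sketch recovering p from r and n via the usual t transform (the sample sizes below are hypothetical, chosen only to roughly reproduce the reported p-values):

```python
from math import sqrt
from scipy import stats

def p_from_r(r, n):
    """Two-sided p-value for a Pearson r via t = r * sqrt((n-2)/(1-r^2))."""
    t = r * sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# Hypothetical sample sizes, chosen only to roughly match the reported p's.
p_small = p_from_r(0.8, 8)    # r = 0.8 with n = 8  -> larger p
p_large = p_from_r(0.8, 14)   # r = 0.8 with n = 14 -> much smaller p
print(p_small, p_large)
```

So the honest statement is that both groups show the same estimated strength (r = 0.8), with stronger evidence against the null in group B. To formally compare two correlations, a Fisher z test on the difference is the standard approach.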
Hi there:
I am trying to establish reliable, low-cost protocols for human Th1, Th2, and Th17 polarization. Does anyone have good experience to share?
Many thanks for any information
Shiqiu
We are in the market for a vacuum concentrator unit for a range of applications, but one of the challenging ones is drying up to 24 samples of 1:1 water:methanol, with 10 mL per sample!
One of the many systems we are looking at is the Buchi Syncore Polyvap R-48, or even the R-96 model.
Does anyone have any experience with this system, such as its ease of use, reliability, adaptability that you would like to share with me?
Thanks
Greg
I'm conducting research on TB knowledge in Indonesia, and some of the questions in my questionnaire are multiple-answer questions.
Mainstream theoretical physics applies the stationary action principle to derive Lagrangian equations. This cannot explain the origin of electrical charges and cannot explain the shortlist of elementary particle types described in the Standard Model of the experimental particle physicists. See
Preprint The setbacks of theoretical physics
I am looking to buy a low-cost rotary evaporator that can be as efficient as the best models on the market. Can anybody make a suggestion based on their experience? Please avoid well-known models that are highly expensive.
Thanks in advance.
Venkat
Dear colleagues,
I just received an email from info@vebleo.org, as follows. Is it reliable? Has anyone received the same email?
......
I am delighted to inform you that your name has been nominated for Fellow of Vebleo by the committee members for your notable contribution in the field of materials science research including graphene & 2D materials, biomaterials & devices, functional, composite, polymer, energy- and nano science, and technology. ........
Actually, I am new to this field, and this research topic is relatively new. So, I need a suitable software package to work on this topic; I can't progress in my work without reliable software.
I am planning to conduct a study of the psychological well-being of students in the 18-25 year age group. I need a standardised, reliable tool to conduct survey research on the subject. Kindly suggest the best available tool (instrument) to go ahead with my plans.
Dear Researchers,
I am looking for a research paper that is published in a good journal and confirms the reliability of using NASA-POWER data in hydro-climatic studies.
Best wishes,
Mohammed
Can anyone recommend reliable (and cheap) FBS suppliers for muscle cell culture?
Hair (2017) says, "Values above 0.90 (and definitely above 0.95) are not desirable because they indicate that all the indicator variables are measuring the same phenomenon and are therefore not likely to be a valid measure of the construct."
Whereas the following discussion says 0.9 is acceptable provided items are not repeating, and calls such data highly reliable.
May I know of any literature justifying composite reliability values between 0.9 and 0.95?
What do you think? I am having a little trouble organizing this. What is/are the better way(s) to put it?
To store my research data, I am seeking options to digitally store scientific data reliably (immutably). What are your thoughts on this?
An existing Brief Resilience Scale (6 items, 5-point Likert scale) was translated and administered to 20 participants for a pilot study. After scoring (3 items were reverse-scored), reliability analysis produced a Cronbach's alpha of -0.184, with the following error in SPSS: "The value is negative due to a negative average covariance among items. This violates reliability model assumptions. You may want to check item codings." What can be done now to increase the alpha? Kindly provide your valuable input.
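One quick diagnostic before anything else: corrected item-total (item-rest) correlations, where a clearly negative value flags an item whose keying is still wrong (e.g. reversed twice, or not reversed at all). A sketch on fabricated, noise-free data (not BRS responses) in which one item was effectively left unreversed:

```python
import numpy as np

def item_rest_correlations(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(float(np.corrcoef(items[:, j], rest)[0, 1]))
    return out

# Fabricated, noise-free data: six items tracking one latent trait,
# with item 4 still keyed in the wrong direction.
trait = np.arange(20, dtype=float)
items = np.column_stack([trait + 0.5, trait - 0.3, trait + 1.0,
                         20 - trait,            # item 4: keying still flipped
                         trait, trait + 0.2])
print(item_rest_correlations(items))   # item 4 comes out strongly negative
```

On real pilot data the values will be noisier, but an item sitting clearly below zero while the rest are positive is the usual signature of a keying error; with only 20 respondents, alpha estimates are themselves quite unstable in any case.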
What steps are involved in establishing the validity and reliability of a translated test, and is it necessary to measure these at all? For instance, suppose an achievement test is constructed to measure students' science achievement in the English language. It was well validated and checked for reliability over a large sample, so it is now a well-constructed, valid, and reliable test. Now, if someone:
1) Simply translates it into another language, what will be the validity and reliability of the new translated test? Does it need to be validated again, and if so, what steps are involved in establishing its validity and reliability?
2) Translates the test with only a few changes, so that most of the items are the same and only a few are changed?
The statistic most commonly used for interrater reliability is Cohen’s kappa, but some argue that it’s overly conservative in some situations (Feinstein & Cicchetti, 1990. High agreement but low kappa). For binary outcomes with a pair of coders, for example, if the probability of chance agreement is high, as few as two disagreements out of 30 could be enough to pull kappa below levels considered acceptable. I’m wondering whether any consensus has emerged for a solution to this problem, or how others address it.
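The paradox is easy to reproduce: with an extreme base rate, 28 agreements out of 30 can still yield a negative kappa. A self-contained sketch:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary coders (lists of 0/1)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n                  # coders' base rates
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)          # chance agreement
    return (p_o - p_e) / (1 - p_e)

# 30 items, 28 agreements (93%), but nearly every rating is "1":
coder1 = [1] * 29 + [0]
coder2 = [1] * 28 + [0, 1]
kappa = cohens_kappa(coder1, coder2)
print(kappa)   # about -0.03 despite 93% raw agreement
```

Common responses in the literature include reporting kappa alongside raw agreement and prevalence, or switching to prevalence-adjusted bias-adjusted kappa (PABAK) or Gwet's AC1, which are less sensitive to extreme base rates; there is no single consensus replacement.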
I do not think that there is a consensus, and I would like to collect opinions on the most reliable soluble platelet activation marker in plasma.
Thank you
Expect more rainy seasons next winter: a reliable change in the weather forecast for the Middle East region.
What is the validity of using reliability and validity criteria in content analysis? If they do not systematically apply, what method is appropriate?
Dear colleagues working with psychometrics or test theory,
I have a challenge – if not even a problem. Hope you have time to read my note.
We have two ways to think about and compute the average standard error of measurement (S.E.m.). On the one hand, we have the traditional way based on the reliability of the score: SError = SQRT(S^2_X × (1 - REL)), where S^2_X is the estimated population variance and REL is an estimate of reliability. This is the basic formula following from the definition of reliability. On the other hand, we can calculate the standard error from the measurement model related to factor models: SError = SQRT(SUM(1 - Lambda_i^2)) = SQRT(k - SUM(Lambda_i^2)), where k is the number of items in the test and Lambda_i is the factor loading, that is, (essentially) the correlation between item i and the factor score.
A challenge with these forms becomes obvious if both the items and the score are standardized, as is assumed in the latter form. Then the estimated population variance is S^2_X = 1, and the former estimator takes the form SError = SQRT(1 - REL), whose magnitude is always SError < 1. By contrast, the latter estimator may give estimates with magnitudes greater than 1, up to SError > 8.0 as the number of items increases. That is, the outcomes differ radically from each other. Which one is more credible, or are they both off the truth?
If we accept the idea that the error variance is cumulative in the number of items, it seems that the estimator based on factor loadings gives a more credible estimate of the standard error of measurement than the formula based on reliability (e.g., omega). We may ask why the true standard error should not exceed SError = 0.63, the value obtained with a test reliability of REL = 0.60. However, let us assume a hypothetical test of 100 items with an item–score correlation of wi = 0.4 for each item. We would come up with a standard error of SError > 7.7. A relevant question is how it would even be possible for a score compiled from 100 items with standardized scores to have an S.E.m. of more than 7 standard units when the lower bounds of the reliability of the score would be very high.
This phenomenon puzzles me. Have you thought about this issue? Any ideas about what explains the discrepancy between the outcomes of the formulae?
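One way to see the discrepancy, sketched below under a one-factor model with equal loadings (treating the 0.4 item-score correlation as the loading): SQRT(k - SUM(Lambda^2)) is the error SD of the unstandardised sum of k standardised items, whereas SQRT(1 - REL) lives on the standardised-score metric. Dividing the former by the SD of the sum puts the two on the same scale, and they then agree:

```python
from math import sqrt

k = 100      # hypothetical test length (from the question)
lam = 0.4    # item-score correlation, treated as the loading of every item

# Factor-model form: error SD of the *sum* of k standardised items.
se_sum = sqrt(k - k * lam ** 2)            # sqrt(84), about 9.17

# Under a one-factor model, cov(item_i, item_j) = lam * lam, so the sum's
# variance and its implied reliability are:
var_sum = k + k * (k - 1) * lam ** 2       # 1684
rel = 1 - (k - k * lam ** 2) / var_sum     # about 0.950

# Rescaling the factor-based SE to the standardised-score metric reproduces
# the reliability-based formula:
print(se_sum / sqrt(var_sum), sqrt(1 - rel))   # both about 0.223
```

On this reading, neither formula is wrong; they answer the question on different metrics (raw sum score vs standardised score), which would explain the apparent discrepancy.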
Hello
I have been contacted by both LAMBERT and NOVA Publishers about publishing a book with them. Which one would be better? Are they real publishers? I have seen their titles on Amazon (with the latter being very expensive), but are they worthwhile, or would they actually harm me as an author?
Thank you
What would be the future trends of solar and energy storage for large-scale asset management for improved reliability, increased revenue, higher energy cost reduction and better asset life extension?
I'm looking to purchase premium plagiarism detection software. Which one is the most reliable and accurate on the market?
Hi,
my study sample size is 355, and after performing the Mahalanobis test, 7 of the 355 cases were identified as outliers. The initial reliability of each subscale's items ranged from 0.7 to 0.9, but after removing these 7 outliers, the reliability of all subscales dropped drastically below 0.6, or even 0.5. Does this mean I cannot remove these outliers because they are naturally occurring? (My data set is not normally distributed.)
many thanks!!
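A common convention flags cases whose squared Mahalanobis distance exceeds a chi-square cutoff (often p = .001 at df = number of variables). The sketch below, on simulated data rather than your survey, shows the mechanics; whether flagged cases should be deleted at all is a separate substantive question, and the drop in alpha suggests those cases carried shared variance:

```python
import numpy as np
from scipy import stats

def mahalanobis_flags(X, alpha=0.001):
    """Flag rows whose squared Mahalanobis distance exceeds the
    chi-square(df = n_vars) cutoff at the given alpha."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return d2 > stats.chi2.ppf(1 - alpha, df=X.shape[1])

# Simulated analogue of the setting: 355 cases, 7 planted gross outliers.
rng = np.random.default_rng(1)
X = rng.normal(size=(355, 3))
X[:7] += 8.0
flags = mahalanobis_flags(X)
print(flags.sum())   # the planted cases are flagged
```

With a markedly non-normal data set the chi-square cutoff is itself questionable, and robust distance estimates (e.g. based on the minimum covariance determinant) are often preferred. In any case, a reliability drop after deletion argues for reporting results with and without the flagged cases rather than silently removing them.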
Dear colleagues,
I'm looking for assays to measure the effects of treatments on IgE release by B cells. Isolation of primary B cells from mouse spleen or human blood, then cultivation, stimulation protocols. Does anyone know whether Interleukin-4 is sufficient (it is known to stimulate IgE release)?
Thank you very much,
Best wishes,
Tineke Vogelaar
Would anyone recommend a reliable monoclonal antibody against the GIP receptor that is reactive with murine antigen?
Best to all RG community,
Arek
Hi, I'm looking for a real fog/edge dataset/trace that contains resource and task events, such as failures and completions, to model trust, reliability, and availability.
Which is more reliable, the arithmetic mean or the geometric mean? The study will compare means from two sample populations.
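Neither is universally "more reliable": the arithmetic mean suits additive quantities, while the geometric mean suits ratios, rates, and right-skewed positive data, where it damps extreme values. A small illustration on a made-up skewed sample:

```python
from statistics import mean, geometric_mean

# Skewed, positive-valued sample (made up): the arithmetic mean is pulled
# upward by the single large value; the geometric mean damps its influence.
sample = [2, 4, 4, 8, 64]
print(mean(sample))            # 16.4
print(geometric_mean(sample))  # about 6.96
```

If either sample contains zeros or negative values, the geometric mean is undefined, which settles the choice.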
Augmented Virtuality (AV) already overlays most parts of the users' environment, so I wonder if Diminished Reality (DR) could be seen as a sub-term of AV with a special focus on intentionally removing particular objects. Or is it better to see DR as a feature of Augmented Reality (AR), as in “do only remove few particular objects”? Maybe it is better to say it applies to both and therefore could be seen as a feature of Mixed Reality (MR)? (I use AV, AR, and MR according to Milgram and Kishino's Reality-Virtuality Continuum here)
Does anyone have a reliable definition of the differences and similarities between those concepts?