Measurement and Evaluation in
Counseling and Development
2014, Vol. 47(1), 79–86
© The Author(s) 2013
Reprints and permissions:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0748175613513808
mecd.sagepub.com
Methods Plainly Speaking
Introduction
Content validation refers to a process that
aims to provide assurance that an instrument
(checklist, questionnaire, or scale) measures
the content area it is expected to measure
(Frank-Stromberg & Olsen, 2004). One way
of achieving content validity involves a panel
of subject matter experts considering the
importance of individual items within an
instrument. Lawshe’s method, initially pro-
posed in a seminal paper in 1975 (Lawshe,
1975), has been widely used to establish and
quantify content validity in diverse fields
including health care, education, organiza-
tional development, personnel psychology,
and market research (Wilson, Pan, & Schum-
sky, 2012). It involves a panel of subject mat-
ter “experts” rating items into one of three
categories: “essential,” “useful, but not
essential,” or “not necessary.” Items deemed
“essential” by a critical number of panel
members are then included within the final
instrument, with items failing to achieve this
critical level discarded. Lawshe (1975) sug-
gested that based on “established psychophys-
ical principles,” a level of 50% agreement
gives some assurance of content validity.
The CVR (content validity ratio) proposed
by Lawshe (1975) is a linear transformation of
a proportional level of agreement on how
many “experts” within a panel rate an item
“essential” calculated in the following way:
CVR = (ne − N/2) / (N/2),
where CVR is the content validity ratio, ne is
the number of panel members indicating an
item “essential,” and N is the number of panel
members.
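As a plain illustration of the formula above, here is a minimal sketch in Python; the function name is ours, not from the paper.

```python
# Minimal sketch of Lawshe's CVR for a single item; the function name
# is illustrative, not from the original paper.
def content_validity_ratio(n_essential: int, n_panel: int) -> float:
    """CVR = (ne - N/2) / (N/2), ranging from -1 to +1."""
    half = n_panel / 2
    return (n_essential - half) / half

# Example: 8 of 10 panel members rate an item "essential".
print(content_validity_ratio(8, 10))   # 0.6
print(content_validity_ratio(10, 10))  # 1.0 (perfect agreement)
print(content_validity_ratio(5, 10))   # 0.0 (exactly half)
```

A CVR above zero thus corresponds directly to more than half of the panel agreeing an item essential.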
Lawshe (1975) suggested the transforma-
tion (from proportion to CVR) was of worth
as it could readily be seen whether the level of
agreement among panel members was greater
than 50%. CVR values range between −1
(perfect disagreement) and +1 (perfect agree-
ment) with CVR values above zero indicating
that over half of panel members agree an item
essential. However, when interpreting a CVR
for any given item, it may be important to
1University of Bradford, Bradford, UK
Corresponding Author:
Colin Ayre, University of Bradford, School of Health
Studies, Bradford BD7 1DP, UK.
Email: c.a.ayre1@bradford.ac.uk
Critical Values for Lawshe’s
Content Validity Ratio: Revisiting
the Original Methods of Calculation
Colin Ayre1 and Andrew John Scally1
Abstract
The content validity ratio originally proposed by Lawshe is widely used to quantify content
validity and yet methods used to calculate the original critical values were never reported.
Methods for original calculation of critical values are suggested along with tables of exact
binomial probabilities.
Keywords
measurement, content validity, validation design, content validity ratio
Downloaded from mec.sagepub.com by guest on July 6, 2015.
consider whether the level of agreement is
also above that which may have occurred by
chance. As a result, Lawshe reported a table
of critical CVR (CVRcritical) values computed
by his colleague Lowell Schipper, where
CVRcritical is the lowest level of CVR such that
the level of agreement exceeds that of chance
for a given item, for a given alpha (Type I
error probability, suggested to be .05 using a
one-tailed test). CVRcritical values can be used
to determine how many panel members need
to agree an item essential and thus which
items should be included or discarded from
the final instrument. To include or discard
items from a given instrument appropriately,
it is imperative that the CVRcritical values are
accurate. Recently, concern has been raised
that the original methods used for calculating
CVRcritical were not reported in Lawshe’s arti-
cle on content validity, and as both Lawshe
and Schipper have since passed away, it is
now not possible to gain clarification (Wilson
et al., 2012). Furthermore, an apparent anom-
aly exists in the table of critical values between
panel sizes of 8 and 9, where CVRcritical unex-
pectedly rises to 0.78 from 0.75 before mono-
tonically decreasing with increasing panel
size up to the calculated maximum panel size
of 40. This led Wilson et al. (2012) to try to identify the method Schipper used to calculate the original CVRcritical values in Lawshe (1975), in the hope of providing corrected values.
Despite their attempts, Wilson et al.
(2012) fell short of their aims. They sug-
gested that Schipper had used the normal
approximation to the binomial distribution
for panel sizes of 10 or more, yet these claims
were theoretical as they were unable to
reproduce the values of CVRcritical reported in
Lawshe (1975). As values for CVRcritical cal-
culated by Wilson and colleagues differed
significantly from those reported in Lawshe
(1975), it was suggested that, instead of the one-tailed test reported, Schipper had in fact used a two-tailed test, as this more closely resembled their results. In addition, for panel
sizes below 10 no satisfactory explanation
was provided of how CVRcritical may have
originally been calculated. Furthermore, they
were unable to provide a satisfactory expla-
nation for the apparent anomaly between
panel sizes of 8 and 9.
In their article, Wilson et al. (2012) pro-
duced a new table of CVRcritical values using
the normal approximation to the binomial dis-
tribution. We believe this method is inferior to the calculation of exact binomial probabilities because, by definition, it is ultimately just an approximation and may not be valid for small sample sizes or for proportions approaching 0 or 1 (Armitage, Berry, & Matthews, 2002). It is
understandable that a normal approximation
was used for larger panel sizes when Schipper
calculated the original CVRcritical values in
1975, but as statistical programs can now read-
ily calculate exact binomial probabilities, it
would seem more appropriate to do so in the
present day. We had further concerns regard-
ing the methods used by Wilson et al. (2012)
to calculate the normal approximation, as it
appeared a continuity correction had not been
employed. In cases where the continuous nor-
mal distribution has been used to approximate
the discrete binomial distribution more accu-
rate results are obtained through use of a con-
tinuity correction (Gallin & Ognibene, 2007;
Rumsey, 2006).
Based on the wide discrepancy between
CVRcritical reported by Wilson et al. (2012) and
Lawshe (1975), we intended to answer the
following questions:
1. Did Wilson et al. (2012) correctly
employ a method for calculating bino-
mial probabilities?
2. What method was employed by Schip-
per to calculate CVRcritical in Lawshe
(1975) for all panel sizes?
3. Are there anomalies in Schipper’s
table of critical values in Lawshe
(1975)?
4. Did Lawshe report CVRcritical for a one-
tailed test or a two-tailed test?
As a result of our belief that exact binomial
probabilities were more appropriate than nor-
mal approximations, we also intended to cal-
culate exact binomial probabilities for all
panel sizes between 5 and 40.
Method
We calculated the minimum number of
experts required to agree an item “essential”
for a given panel size, such that the level of
agreement exceeds that of chance. In keeping
with previous work, we assumed the outcome
as dichotomous (i.e., “essential” or “not
essential”) although we acknowledge it could
be considered trichotomous as there are three
possible outcomes when rating any given item
(“essential,” “useful, but not essential,”
and “not necessary”). As the CVR is designed
to show a level of agreement above that of
chance, we are only concerned with testing in
one direction. Thus, in this case a one-tailed
hypothesis test is appropriate.
Hypothesis:
H0: ne = N/2
Significance (α) was set at .05.
Using a one-tailed test, we would reject H0 if P(ne ≥ ncritical) ≤ .05, where ncritical is the lowest number of experts required to agree an item “essential” for agreement to be above that of chance and ne is the number of experts rating an item as “essential.”
We calculated exact CVRcritical values for
panel sizes between 5 and 40, based on the
discrete binomial distribution, computed
using Stata Statistical Software: Release 12
(StataCorp, College Station, TX). The follow-
ing command was used:
bitesti N ne p
where N is the total number of panel mem-
bers, ne is the number of experts agreeing
“essential,” and p is the hypothesized proba-
bility of success (agreeing the item as essen-
tial) = ½.
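The same one-sided exact binomial computation can be sketched outside Stata. This is our standard-library illustration of what a call like `bitesti N ne 0.5` tests, not the Stata source; helper names are ours.

```python
# Stdlib-only sketch of the one-sided exact binomial test behind `bitesti`,
# assuming p = 1/2 under the chance hypothesis. Helper names are ours.
from math import comb

def upper_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def n_critical(n: int, alpha: float = 0.05) -> int:
    """Smallest k with P(X >= k | p = 1/2) <= alpha."""
    for k in range(n + 1):
        if upper_tail(n, k) <= alpha:
            return k
    return n + 1  # alpha is unattainably small for this panel size

# Example: a panel of 15 needs 12 "essential" ratings (one-sided p = .018).
print(n_critical(15))                # 12
print(round(upper_tail(15, 12), 3))  # 0.018
```

These values agree with the corresponding rows of Table 1.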
Using this method we produced a table of
the minimum number of experts (ne) required
to agree an item essential such that we could
reject H0 (i.e., the minimum number of experts
such that p ≤ .05). Values for CVRcritical were
then calculated on the basis of the minimum
number of experts required using the formula
for calculating CVR given previously in the
article. Exact one-sided p values are reported.
To allow direct comparison, we calculated
the exact binomial probabilities according to
the method used by Wilson et al. (2012),
described in their article, using the Microsoft
Excel function:
ncritical = CRITBINOM(n, p, 1 − α)
where ncritical is the minimum number of
experts required to agree an item essential, n
is the panel size, p is the probability of suc-
cess = ½, and α = .05.
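To make the Excel call concrete, here is a standard-library sketch of CRITBINOM's documented semantics (the smallest k whose cumulative probability reaches the target); helper names are ours.

```python
# Sketch of Excel CRITBINOM(n, p, target): the smallest k with
# P(X <= k) >= target for X ~ Binomial(n, p). Helper names are ours.
from math import comb

def binom_cdf(n: int, k: int, p: float = 0.5) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def critbinom(n: int, p: float, target: float) -> int:
    for k in range(n + 1):
        if binom_cdf(n, k, p) >= target:
            return k
    return n

# For a panel of 15 at alpha = .05, this returns 11, which is one fewer
# than the number of experts actually required for agreement beyond chance.
print(critbinom(15, 0.5, 0.95))  # 11
```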
Normal approximation to the binomial was
calculated using the following formula incor-
porating a continuity correction (Armitage et al.,
2002). This subtracts 0.5 from the number of
panel experts required to agree an item essen-
tial to account for using the continuous nor-
mal distribution for approximation of the
discrete binomial distribution.
z = [ne − 0.5 − Np] / √[Np(1 − p)] ~ N(0, 1).
Therefore, as p = ½:
ne = (z√N)/2 + N/2 + 0.5,
where z is normal approximation of the bino-
mial, N is the total number of panel members,
ne is the number of experts agreeing “essen-
tial,” p is the probability of agreeing each
item essential = ½, and 0.5 is the continuity
correction.
CVR based on the normal approximation was calculated in the following way:

CVR = [(z√N)/2 + 0.5] / (N/2).

Therefore,

CVR = (z√N + 1) / N.

Normal approximations for CVRcritical were calculated using this method for all panel sizes to allow comparison with previous work.
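Under the continuity-corrected formula above, and assuming the one-tailed 5% critical value z = 1.645 (our assumption; the paper does not print the z value), the calculation can be sketched as:

```python
# Sketch of the continuity-corrected normal approximation,
# assuming the one-tailed 5% critical value z = 1.645.
from math import sqrt

Z = 1.645  # assumed one-tailed critical value of the standard normal

def cvr_critical_normal(n_panel: int) -> float:
    """CVR_critical = (z * sqrt(N) + 1) / N."""
    return (Z * sqrt(n_panel) + 1) / n_panel

# These reproduce Lawshe's (1975) tabled values for larger panels:
print(round(cvr_critical_normal(10), 2))  # 0.62
print(round(cvr_critical_normal(40), 2))  # 0.29
```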
Results
The calculations for CVRcritical based on exact
binomial probabilities for panel sizes of 5 to
40 are shown in Table 1. Calculations using the CRITBINOM function returned values for the critical number of experts that were 1 fewer than our calculations for all panel sizes (Table 1).
Figure 1 shows a comparison of CVRcritical
values from our exact binomial and normal
approximation to the binomial calculations
and those reported by Lawshe (1975) and Wil-
son et al. (2012). Normal approximation using
the continuity correction returned values equal
to those reported in Lawshe (1975) for all
given panel sizes of 10 and above other than a
minor difference of 0.01 for a panel size of 13.
Table 1. CVRcritical One-Tailed Test (α = .05) Based on Exact Binomial Probabilities.

N (Panel Size) | Proportion Agreeing Essential | CVRcritical Exact Values | One-Sided p Value | Ncritical, Ayre and Scally, This Article (Minimum Number of Experts Required to Agree Item Essential) | Ncritical Calculated From CRITBINOM Function, Wilson et al. (2012)
5 1 1.00 .031 5 4
6 1 1.00 .016 6 5
7 1 1.00 .008 7 6
8 .875 .750 .035 7 6
9 .889 .778 .020 8 7
10 .900 .800 .011 9 8
11 .818 .636 .033 9 8
12 .833 .667 .019 10 9
13 .769 .538 .046 10 9
14 .786 .571 .029 11 10
15 .800 .600 .018 12 11
16 .750 .500 .038 12 11
17 .765 .529 .025 13 12
18 .722 .444 .048 13 12
19 .737 .474 .032 14 13
20 .750 .500 .021 15 14
21 .714 .429 .039 15 14
22 .727 .455 .026 16 15
23 .696 .391 .047 16 15
24 .708 .417 .032 17 16
25 .720 .440 .022 18 17
26 .692 .385 .038 18 17
27 .704 .407 .026 19 18
28 .679 .357 .044 19 18
29 .690 .379 .031 20 19
30 .667 .333 .049 20 19
31 .677 .355 .035 21 20
32 .688 .375 .025 22 21
33 .667 .333 .040 22 21
34 .676 .353 .029 23 22
35 .657 .314 .045 23 22
36 .667 .333 .033 24 23
37 .649 .297 .049 24 23
38 .658 .316 .036 25 24
39 .667 .333 .027 26 25
40 .650 .300 .040 26 25
Discussion
We have produced a table of exact values for
CVRcritical including the minimum number of
panel members required such that agreement
is above that of chance. We believe we are the
first to produce a table of values for CVRcritical
from exact binomial probabilities. In contrast
to previous work, all of the values for CVRcritical are calculated based on an achievable
CVR, given the discrete nature of the vari-
ables under investigation.
Comparison with previous work is given
below.
Comparison to Lawshe (1975)
The exact critical values for CVR we have
produced are equal to those given in Lawshe
(1975) for panel sizes below 10, allowing for
adjustments and rounding (see Figure 1). We
therefore believe that Lawshe (1975) calcu-
lated exact binomial probabilities for panel
sizes below 10. This approach is reasonable, as the use of a normal approximation to a binomial distribution is only justifiable when Np > 5 and N(1 − p) > 5 (Rumsey, 2006), where N is the number of panel members and p is the probability of success in any trial. Assuming p = ½, this would be satisfied for panel sizes above 10.
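As a quick check of this rule of thumb: with p = ½, both conditions reduce to N > 10.

```python
# With p = 1/2, Np > 5 and N(1 - p) > 5 both reduce to N > 10.
p = 0.5
for n in (8, 10, 11, 15):
    ok = (n * p > 5) and (n * (1 - p) > 5)
    print(n, ok)
# 8 False
# 10 False
# 11 True
# 15 True
```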
We do not believe that there is an anomaly
for panel sizes between 8 and 9 in CVRcritical
reported in Lawshe (1975). It can be seen in
Figure 1 that CVRcritical does increase between
panel size of 8 and 9, related to the discrete
nature of both the panel size and number of
experts who can agree any item is essential. It
can be seen from our calculations that,
although the overall pattern is for CVRcritical to
fall with increasing panel size, there are a
number of instances where CVRcritical increases.
This is an important consideration when deter-
mining panel size for those using the CVR
method to gain content validity.
For panel sizes of 10 and above the nor-
mal approximation to the binomial has been
calculated and we have been able to repro-
duce the same values reported in Lawshe
(1975) notwithstanding a minor discrepancy
for a panel size of 13 (see Figure 1). As the
normal distribution is based on a continuous
distribution and it is being used to approxi-
mate a discrete distribution, Schipper correctly used a continuity correction, which yields more accurate approximations. It would appear that Schipper and Wilson et al. used identical methods
[Figure 1: line chart of content validity ratio (CVR) against panel size (5–40), with series for CVRcritical exact, CVRcritical normal approximation (Ayre and Scally, this article), CVRcritical normal approximation (Wilson et al., 2012), and CVRcritical (Lawshe, 1975).]

Figure 1. Chart showing comparison between critical values for content validity ratio.
for calculating CVRcritical with the exception
of the continuity correction.
As the values we have calculated are the same as those of Lawshe (1975), it is apparent that a one-tailed test at α = .05 was used, as originally reported, and not a two-tailed test as suggested by Wilson et al. (2012).
Can the critical CVR values given by
Lawshe (1975) be used to accurately deter-
mine panel size?
In general, use of the originally calculated CVR values from Lawshe (1975) yields the same critical number of experts as our exact calculations.
The only discrepancy occurs for a panel size
of 13 where the exact CVRcritical is marginally
under that reported by Lawshe. Importantly,
our findings would suggest that question-
naires and checklists developed using the
CVRcritical values originally reported by Law-
she (1975) remain valid.
Comparison to Wilson et al. (2012)
CVRcritical Based on Exact Binomial Probabilities. The exact CVRcritical based on binomial
probabilities we have calculated using Stata
differ from those given by the CRITBINOM
function in Microsoft Excel employed by
Wilson et al. (2012) as a result of the discrep-
ancy in the critical number of experts required
to agree an item “essential” produced by each
method (see Table 1). We believe that Wilson
et al. (2012) have incorrectly interpreted the
result returned from the CRITBINOM func-
tion and therefore the CVRcritical based on the
exact binomial probabilities shown in Figure 1
of their article are incorrect. The method used
by Wilson et al. (2012) returns one fewer than
the true critical number of experts required to
ensure agreement above that of chance for a
given value of α (Table 1) yet no mention of
this can be seen in their article. This can be
illustrated through an example using a panel
size of 15.
Example: Considering a panel size (n) of 15,
probability of success (p) of .5 and α = .05.
CRITBINOM(15, 0.5, 0.95) = 11
As this utilizes the cumulative binomial probability, the interpretation of this result is “there
is at least a probability of 0.95 of getting 11 or
fewer successes.” Thus, there is at most a probability of .05 of getting 12 or more successes. This is the critical number we are interested in to assure a level of agreement above that of chance at α = .05.
The error in calculating exact binomial
probabilities may explain why Wilson et al.
(2012) failed to realize that Schipper had calculated exact binomial probabilities up to a
panel size of 10.
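The worked example above can be checked directly against the exact tail probabilities; this is a standard-library sketch with a helper name of our choosing.

```python
# Exact tail probabilities for a panel of 15 at p = 1/2, confirming that
# 12 "essential" ratings are needed for agreement beyond chance.
from math import comb

def upper_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(upper_tail(15, 11), 3))  # 0.059 -- 11 successes: not beyond chance
print(round(upper_tail(15, 12), 3))  # 0.018 -- 12 successes: beyond chance
```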
CVRcritical Based on the Normal Approximation to
the Binomial. CVRcritical values reported by
Wilson et al. (2012) based on a normal approx-
imation to the binomial distribution are mark-
edly lower than those we have calculated using
a continuity correction for all given panel sizes
(see Figure 1). It is clear from their formula for
calculating the critical value for CVR that a
continuity correction was not used by Wilson
et al. Conversely, the values given in Lawshe
are consistent with the normal approximation
using the continuity correction and are there-
fore closer to the exact binomial probabilities
we have reported in this article. On this basis,
we believe that the recalculated values for
CVRcritical reported in Wilson et al. (2012) are
inaccurate and therefore should not be used.
Wilson et al. (2012) and Lawshe (1975)
have both calculated CVRcritical values for
panel sizes of 10 or more (Wilson et al., 2012,
used the normal approximation for all panel
sizes) based on a normal approximation of the
binomial distribution. We believe this is an
inferior method to the exact calculations we
have reported for the following reasons:
1. If the normal approximation value for
CVRcritical is higher than that produced
from exact calculations of binomial
probability the panel size deemed nec-
essary will be higher than required.
2. If the normal approximation value for
CVRcritical is lower than that produced
from exact calculations of binomial
probability the panel size deemed nec-
essary may be lower than required.
Table 2. Simplified Table of CVRcritical Including the Number of Experts Required to Agree an Item Essential.

Panel Size | Ncritical (Minimum Number of Experts Required to Agree an Item Essential for Inclusion) | Proportion Agreeing Essential | CVRcritical
5 5 1 1.00
6 6 1 1.00
7 7 1 1.00
8 7 .875 .750
9 8 .889 .778
10 9 .900 .800
11 9 .818 .636
12 10 .833 .667
13 10 .769 .538
14 11 .786 .571
15 12 .800 .600
16 12 .750 .500
17 13 .765 .529
18 13 .722 .444
19 14 .737 .474
20 15 .750 .500
21 15 .714 .429
22 16 .727 .455
23 16 .696 .391
24 17 .708 .417
25 18 .720 .440
26 18 .692 .385
27 19 .704 .407
28 19 .679 .357
29 20 .690 .379
30 20 .667 .333
31 21 .677 .355
32 22 .688 .375
33 22 .667 .333
34 23 .676 .353
35 23 .657 .314
36 24 .667 .333
37 24 .649 .297
38 25 .658 .316
39 26 .667 .333
40 26 .650 .300
Presented above is a simplified table of CVRcritical values, calculated using exact binomial probabilities, which includes the number of experts required to agree any given item is essential (Table 2).
It can be seen from Table 2 that preferred
panel sizes exist, when the addition of a
further panel member leads to a significant
reduction in the required proportion level of
agreement that an item is “essential” for it to
be included (e.g., between panel sizes of 12
and 13). In addition, it is also immediately
apparent that increasing the panel size by 1
will actually increase the required proportion of agreement on occasion (e.g., between panel sizes of 13 and 14). We believe
this table is of most use to researchers wishing to quantify content validity using the
CVR method, both to decide the most appro-
priate panel size and when determining
whether a critical level of agreement has
been reached.
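The "preferred panel size" effect described above can be reproduced from the exact calculation; this is our standard-library sketch, not code from the paper.

```python
# Required proportion of "essential" ratings around panel sizes 12-14,
# reproducing the jumps discussed above (exact binomial, alpha = .05).
from math import comb

def n_critical(n: int, alpha: float = 0.05, p: float = 0.5) -> int:
    """Smallest k with P(X >= k) <= alpha for X ~ Binomial(n, p)."""
    for k in range(n + 1):
        tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        if tail <= alpha:
            return k
    return n + 1

for n in (12, 13, 14):
    k = n_critical(n)
    print(n, k, round(k / n, 3))
# 12 10 0.833  (adding a 13th member lowers the required proportion...)
# 13 10 0.769
# 14 11 0.786  (...while a 14th raises it again)
```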
Conclusions
We have suggested the method used by Schipper to calculate the original critical values reported in Lawshe's article, and we have been able to reproduce those values using discrete calculation for panel sizes below 10 and the normal approximation to the binomial for panel sizes of 10 and above.
We have identified problems with both the
discrete calculations and normal approxima-
tion to the binomial suggested by Wilson et al.
Consequently, we do not believe that values
for CVRcritical reported in Wilson et al. should
be used to determine whether a critical level
of agreement has been reached and therefore
whether items should be included or
excluded from a given instrument. Although
it is safe to use the values for CVRcritical pro-
posed by Lawshe to determine whether
items should be included on an instrument,
we believe that exact CVRcritical based
on discrete binomial calculations is most
appropriate.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of
interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author(s) received no financial support for the
research, authorship, and/or publication of this article.
References
Armitage, P., Berry, G., & Matthews, J. N. S.
(2002). Statistical methods in medical research
(4th ed.). Oxford, England: Blackwell.
Frank-Stromberg, M., & Olsen, S. J. (2004).
Instruments for clinical health-care research.
London, England: Jones & Bartlett.
Gallin, J. I., & Ognibene, F. P. (2007). Principles
and practice of clinical research (2nd ed.).
Boston, MA: Elsevier.
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563–575.
Rumsey, D. (2006). Probability for dummies.
Indianapolis, IN: Wiley.
Wilson, F. R., Pan, W., & Schumsky, D. A. (2012).
Recalculation of the critical values for Lawshe’s
content validity ratio. Measurement and
Evaluation in Counseling and Development,
45, 197–210. doi:10.1177/0748175612440286
Author Biographies
Colin Ayre is a PhD student and honorary lecturer
in the School of Health Studies, University of
Bradford. He is also a qualified physiotherapist
working in Bradford Teaching Hospitals NHS
Foundation Trust.
Andrew John Scally is a senior lecturer in the
School of Health Studies, University of Bradford. He
is a qualified radiographer, a graduate physicist and
has Master’s degrees in computational physics and
medical statistics. His current professional role com-
bines work in both radiation physics/radiological
protection and the application of medical statistics in
a wide range of medical and health disciplines.
... The questionnaire for the first round included the following items: (a) explanatory text about research data; (b) closed questions to characterize the sample; and (c) recommendations of the guideline, each evaluated with a four-point Likert scale ranging from The analysis in the second step was performed by descriptive statistics, analysis of the experts' suggestions, and the content validity ratio (CVR), proposed by Lawshe (1975) [26], which is a linear transformation of a proportional level evaluating how many specialists consider an item "essential"-in this case, "strongly agree". ...
... The analysis in the second step was performed by descriptive statistics, analysis of the experts' suggestions, and the content validity ratio (CVR), proposed by Lawshe (1975) [26], which is a linear transformation of a proportional level evaluating how many specialists consider an item "essential"-in this case, "strongly agree". ...
... Lawshe's (1975) [26] table of critical CVR values was adopted as a reference, considering Critical CVR = 0.77 in the first round, with nine specialists. Thus, the second questionnaire was composed of 13 recommendations for evaluation, which were reformulated based on the results obtained in the previous round (CVR ≤ 0.77 and/or descriptive suggestions from the participants). ...
Article
Full-text available
External ventricular drains (EVDs) are common in intensive care for neurocritical patients affected by different illnesses. Nurses play an essential role to ensure safe care, and guidelines are tools to implement evidence-based care. Thus, the aim of this study was to develop and evaluate the quality of a clinical guideline for critically ill patients with EVDs. Methodological research was conducted. The guideline development was based on a scoping review about nursing care to patients with EVDs. The guideline evaluation occurred in two phases: evaluation of its methodological rigor, with application of the Appraisal of Guidelines Research and Evaluation II to four experts on guidelines evaluation; and the Delphi technique, with a panel of nine specialists in neurocritical care, performed in two rounds. Data were analyzed by descriptive statistics and content validity ratio. In the first phase of the evaluation, three domains did not reach consensus, being reformulated. The second phase was conducted in two rounds, with nine and eight participants respectively, with 13 recommendations being reformulated and reassessed between rounds, inclusion of an EVD weaning category, and two flowcharts on patient’s transport and mobility. Therefore, the guideline can be incorporated into nursing care practices. Further studies are necessary to assess its impact on clinical practice.
... We followed the development process of a COS reported by the COMET initiative and COS standard guidelines [18]. It consists of four phases based on the COS for stroke sequelae previously published by our team [19]. First, we established a project management group (PMG) and created and refined a list of outcomes based on a literature review. ...
... Similar to the COS previously published by our team for stroke outcomes [19], we used the content validity ratio (CVR) and the degree of agreement and convergence. The critical CVR value, degree of agreement, and convergence were ≥0.636, ≥0.75, and ≤0.5, respectively, for 11 Delphi panelists [20,21]. ...
... The COS was considered for use in KM clinics, which lack the employees and equipment to conduct research, as in the case of the previously developed KM-COS for stroke sequelae [19]. Thus, we prioritized the feasibility of using the COS in KM clinics. ...
Article
Full-text available
The aim of this study was to develop a Korean medicine (KM) core outcome set (COS) for primary dysmenorrhea to evaluate the effectiveness of herbal medicine (HM) in treating primary dysmenorrhea in patients visiting KM primary clinics. Previously reported outcomes were identified through a literature review to define outcomes and effect modifiers (EMs) for the questionnaire. Experts were invited to conduct modified Delphi consensus exercises, and primary care clinicians were invited to conduct Delphi consensus exercises to evaluate suitability and feasibility. Finally, an additional round of a modified Delphi exercise was conducted with experts to obtain a final agreement on the COS. Seventeen outcomes and 15 EMs were included from a literature review, and one effect modifier was suggested by the experts (Phase 1). In Phase 2, after the modified Delphi consensus exercises by experts, 10 outcomes and 11 EMs were included in the COS. The clinicians all agreed on the feasibility of COS (Phase 3). Finally, 10 outcomes and 6 EMs were included in the COS-PD-KM after the final modified Delphi consensus exercise (Phase 4). The effectiveness of HM used in primary clinics could be evaluated with this COS in patients with primary dysmenorrhea. Further studies that involve more relevant stakeholder groups, such as patient representatives and gynecological experts, are needed.
... N the total number of experts. CVR values range between À 1 (perfect disagreement) and + 1 (perfect agreement), with CVR values above zero, indicating that over half of experts agree with an item essential [26]. ...
Article
Full-text available
One of the significant challenges in heritage risk assessment is concentrating on investigating physical characteristics in assessing man-made hazards and vulnerabilities without addressing the social aspects that may affect the potential rates of man-made risks on heritage buildings and surrounding historic fabric. This article aims to investigate the predictive relationship between human-made hazards in historic districts and the socio-economic weaknesses that represent Social Vulnerability of Historic Districts (SVHD). The methodology comprises a literature review for extracting the most relevant items of SVHD. Subsequently, a content validation was performed. To enhance the quality and effectiveness of the study, a pilot study was implemented for seventy-three historic districts in historic Cairo, Egypt. Then, using IBM SPSS statistics 20, exploratory factor analysis was executed to develop fewer factors of SVHD from the extracted items and establish their validity and reliability. Finally, multiple linear regression was carried using the surveyed data of human-made hazards rates occurring in the study cases. As a result, the regression analysis developed three predictive models: (the humans model, the heritage buildings model, and the context model). These models have been succussed in predicting the potential rates of human-made hazards significantly. The resulting models highlighted the importance of investigating the social component to predict human-made hazards in historic districts using a quantitative assessment tool. This would help authorities in formulating suitable strategies for the effective performance of historic districts.
... CVI was calculated by dividing the number of experts who classified an item as "suitable" (S) by the total number of experts, dividing by 2 (N/2) and subtracting 1 from the resulting number. 21 This calculation was made for each statement and it was evaluated whether it was suitable according to the table value calculated according to the number of experts. 16 Then the CVR was determined by summing the CVI scores and dividing by the number of items in the scale. ...
Article
Objective: This study, which was carried out in order to determine the Turkish validity and reliability of the "Male Genital Self-Image Scale" in a population sample of Turkish men, is of methodological type. Methods: In the study, language, content, construct validity, and reliability methods were used for the intercultural adaptation of the scale. The data collection process of the scale was carried out with 336 men who applied to the Family Medicine Polyclinic of a hospital. In the language and content validity phase, the opinions of experts with technical and cultural knowledge were consulted. The data of the study were collected with the Sociodemographic Characteristics Form and the Male Genital Self-Image Scale. Results: As a result of experts evaluation, the Content Validity Ratio value was determined as 0.83. At the stage of construct validity, the suitability of the single-factor model of the items of the Male Genital Self-Image Scale was tested. It was determined that all items contributed significantly to the factor (0.62-0.92). As a result of the Confirmatory Factor Analysis, the measurement model was statistically validated (χ2 = 32.083, p = 0.001, χ2/df = 2.917, RMSEA = 0.076). The Cronbach's alpha reliability coefficient was calculated as α = 0.92 in the analysis performed to evaluate the internal consistency of Male Genital Self-Image Scale. Conclusion: The results of the study revealed that Male Genital Self-Image Scale is a valid and reliable tool to evaluate genital self-image in Turkish men.
Article
Introduction Milk donation is allowed in Islam and considered a virtue, though according to Islamic Sharia, feeding an infant another mother's donated milk establishes kinship between infants, complicating milk donation programs in Islamic countries. This study aimed to determine the knowledge and attitudes of Iranian Muslim mothers regarding milk donation and milk banks. Methods In this cross-sectional descriptive-analytic study, 634 mothers of infants below 1 year of age were recruited using cluster random sampling from health care centers in Tabriz, Iran. Data were collected by questionnaire. Results The findings revealed a low level of knowledge but relatively positive attitudes. Knowledge predictors were education level, income, type of birth, breastfeeding experience, encouragement to donate milk, and hearing about milk donation (p ≤ .02). Predictors of attitude were knowledge score and encouragement to donate milk (p ≤ .001). Discussion Comprehensive, culturally congruent education of mothers about milk banks, during and after pregnancy, is recommended.
Article
Introduction/aims: Comprehensive and valid bulbar assessment scales for use within amyotrophic lateral sclerosis (ALS) clinics are critically needed. The aims of this study were to develop the Clinical Bulbar Assessment Scale (CBAS) and complete preliminary validation. Methods: The authors selected CBAS items from the literature and expert opinion, and the content validity ratio (CVR) was calculated. Following consent, the CBAS was administered to a pilot sample of English-speaking adults with El Escorial-defined ALS (N=54) from a multidisciplinary clinic, characterizing speech, swallowing, and extrabulbar features. Criterion validity was assessed by correlating CBAS scores with commonly used ALS scales, and internal consistency reliability was obtained. Results: Expert raters reported strong agreement for the CBAS items (CVR=1.00; 100% agreement). CBAS scores yielded a moderate, significant, negative correlation with ALS Functional Rating Scale-Revised (ALSFRS-R) total scores (r=-0.652, p<0.001), and a strong, significant, negative correlation with ALSFRS-R bulbar subscale scores (r=-0.795, p<0.001). There was a strong, significant, positive correlation with Center for Neurologic Studies Bulbar Function Scale (CNS-BFS) scores (r=0.819, p<0.001). CBAS scores were significantly higher for bulbar onset (mean=38.9% of total possible points, SD=22.6) than spinal onset (mean=18.7%, SD=15.8; p=0.004). Internal consistency reliability (Cronbach's alpha) values were: (a) total CBAS, α=0.889, (b) Speech subscale, α=0.903, and (c) Swallowing subscale, α=0.801. Discussion: The CBAS represents a novel means of standardized bulbar data collection using measures of speech, swallowing, respiratory, and cognitive-linguistic skills. Preliminary evidence suggests the CBAS is a valid, reliable scale for clinical assessment of bulbar dysfunction.
Article
Aim: This study aimed to determine clinical competency and psychological empowerment among ICU Nurses Caring for COVID-19 patients. Background: Nurses need clinical competency (skills pertaining to knowledge, reasoning, emotions, and communication) and psychological empowerment (regard for one's organizational role and efforts) to deliver quality care. Methods: This cross-sectional study was conducted with 207 nurses working in ICUs in Iran. A clinical competency survey instrument consisting of basic demographic questions and the Spreitzer psychological empowerment questionnaire were completed online. Descriptive and inferential statistics were used to analyze the data in SPSS software version 13 to address the primary research question. Results: There was a significant positive relationship between clinical competency and psychological empowerment (r = 0.55, p < 0.001). Clinical competency had a significant positive relationship with work experiences (r = 0.17, p = 0.01). Conclusion: Clinical competency has been tied to nurse health and quality of care. Given the significant positive relationship between clinical competency and psychological empowerment, attention must be given to ways to psychologically empower nurses. Implications for nursing management: Nursing managers can consider the promotion of psychological empowerment related to its significant positive relationship to clinical competency. Psychological empowerment can be bolstered through the promotion of servant leadership, organizational justice, and empowering leadership practices.
Article
Computationally intensive science (CIS)‐related disciplines and careers are predicted to be among the cutting‐edge fields and high‐demand jobs. Such disciplines and careers demand expertise in both traditional scientific knowledge and computer science (CS)‐related understanding and skills. Preparing educational systems to develop interest and abilities for such career paths requires a new set of research tools. In this study, we developed and validated a multidimensional instrument called computationally intensive science career interests (CISCI) to measure middle school students' interests in CIS careers. We also explored several predictors of students' interests in such careers and examined the impact of a 4‐day online synchronous scientific computational modeling activity on students' career interests. A total of 934 Indonesian middle school students (aged 11–14) participated in this study. A combination of the classical test theory and item response theory approaches was used to validate the CISCI instrument. Multiple linear regression tests were run to identify the predictors of students' career interests, and paired‐sample t‐tests were used to examine a scientific computational modeling activity's impact on students' career interests. The results revealed that CISCI was a psychometrically valid and reliable instrument to measure students' career interests. We also found that science and CS attitudes, computational thinking, and prior experience in CS‐related activities were significant predictors of students' career interests. In addition, computational modeling activity significantly influenced the frequency of students discussing such career paths with their parents. This study underscores the importance of engaging students in CS‐integrated science learning activities to help develop their interests in CIS jobs.
Article
Education, teaching, and learning, which have gained an international dimension through technological developments, remain among the most discussed and most rapidly changing topics. In the 21st century, the growing density of information, the means of transferring it, and the technological adaptation skills of teachers and learners are at the forefront, and greater effort is required to develop them. Integrating technology into education and training follows from the need to prepare learners, as well as possible, for future work and lifelong learning in the information society. For this reason, during the COVID-19 pandemic and thereafter, established education models must be blended with technological developments so that education, teaching, and learning can proceed under better conditions. Everyone came to understand the ever-changing and developing universal digital world much better during this pandemic. The 7E model of Constructivist Learning Theory (CLT), a student-centered model suited to distance education, became mandatory for the entire education community during the first global pandemic of the digital age. Augmented Reality (AR) is another web-based technological development that can work in harmony with the 7E model. In the 7E model, the teaching of lessons is at the forefront, since learners learn by doing, experiencing, and applying, participating directly in the lesson and sharing opinions. For the present study, a scale was developed to determine learners' perceptions of a 7E model-based, AR-enriched computer lesson. Validity and reliability studies were conducted on the data obtained with the developed scale. The scale, prepared in a five-point Likert format, was administered to 400 students who fit the profile of the sample group.
Statistical analysis of the results indicated that 26 items with low factor loadings should be removed from the questionnaire; the final 28-item version of the scale had a six-factor structure. The analysis showed that the scale met all validity and reliability criteria, and the overall scale was found to be highly reliable (α = 0.932).
Book
The second edition of this innovative work again provides a unique perspective on the clinical discovery process by providing input from experts within the NIH on the principles and practice of clinical research. Molecular medicine, genomics, and proteomics have opened vast opportunities for translation of basic science observations to the bedside through clinical research. As an introductory reference it gives clinical investigators in all fields an awareness of the tools required to ensure research protocols are well designed and comply with the rigorous regulatory requirements necessary to maximize the safety of research subjects. Complete with sections on the history of clinical research and ethics, copious figures and charts, and sample documents it serves as an excellent companion text for any course on clinical research and as a must-have reference for seasoned researchers. *Incorporates new chapters on Managing Conflicts of Interest in Human Subjects Research, Clinical Research from the Patient's Perspective, The Clinical Researcher and the Media, Data Management in Clinical Research, Evaluation of a Protocol Budget, Clinical Research from the Industry Perspective, and Genetics in Clinical Research *Addresses the vast opportunities for translation of basic science observations to the bedside through clinical research *Delves into data management and addresses how to collect data and use it for discovery *Contains valuable, up-to-date information on how to obtain funding from the federal government.
Article
The content validity ratio (Lawshe) is one of the earliest and most widely used methods for quantifying content validity. To correct and expand the table, critical values in unit steps and at multiple alpha levels were computed. Implications for content validation are discussed.
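One way to reproduce such critical values is an exact one-tailed binomial test of agreement under chance (p = 0.5): the critical CVR corresponds to the smallest number of "essential" ratings whose upper-tail probability falls at or below alpha. The sketch below illustrates that approach under these assumptions; it is not the authors' published code or table.

```python
# Sketch: exact-binomial critical values for Lawshe's CVR.
# Assumption: a one-tailed test against chance agreement (p = 0.5).
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_cvr(n_experts: int, alpha: float = 0.05) -> float:
    """Smallest CVR whose 'essential' count is significant at level alpha."""
    for ne in range(n_experts + 1):
        if binom_tail(n_experts, ne) <= alpha:
            return (ne - n_experts / 2) / (n_experts / 2)
    return 1.0  # even unanimity does not reach alpha for very small panels

print(critical_cvr(8))  # 0.75
```

For a panel of 8, at least 7 of 8 "essential" ratings are needed (P(X >= 7) = 9/256 ≈ 0.035), giving a critical CVR of 0.75; tightening alpha pushes the requirement toward unanimity.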
Article
Civil rights legislation, the attendant actions of compliance agencies, and a few landmark court cases have provided the impetus for the extension of the application of content validity from academic achievement testing to personnel testing in business and industry. Pressed by the legal requirement to demonstrate validity, and constrained by the limited applicability of traditional criterion-related methodologies, practitioners are more and more turning to content validity in search of solutions. Over time, criterion-related validity principles and strategies have evolved so that the term "commonly accepted professional practice" has meaning. Such is not the case with content validity. The relative newness of the field, the proprietary nature of work done by professionals practicing in industry, to say nothing of the ever-present legal overtones, have predictably militated against publication in the journals and formal discussion at professional meetings. There is a paucity of literature on content validity in employment testing, and much of what exists has emanated from civil service commissions. The selection of civil servants, with its eligibility lists and "pass-fail" concepts, has always been something of a special case with limited transferability to industry. Given the current lack of consensus in professional practice, practitioners will more and more face each other in adversary roles as expert witnesses for plaintiff and defendant. Until professionals reach some degree of concurrence regarding what constitutes acceptable evidence of content validity, there is a serious risk that the courts and the enforcement agencies will play the major determining role. Hopefully, this paper will modestly contribute to the improvement of this state of affairs (1) by helping sharpen the content ... (A paper presented at Content Validity II, a conference held at Bowling Green.)
Frank-Stromberg, M., & Olsen, S. J. (2004). Instruments for clinical health-care research. London, England: Jones & Bartlett.
Rumsey, D. (2006). Probability for dummies. Indianapolis, IN: Wiley.
Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe's content validity ratio. Measurement and Evaluation in Counseling and Development.