Original Research

Assessing Early Literacy Growth in Preschoolers Using Individual Growth and Development Indicators

Aleksis P. Kincaid, PhD, Scott R. McConnell, PhD, and Alisha K. Wackerle-Hollman, PhD
University of Minnesota, Minneapolis, USA

Assessment for Effective Intervention, 1–11
© Hammill Institute on Disabilities 2018
DOI: 10.1177/1534508418799173

Corresponding Author: Scott R. McConnell, Department of Educational Psychology, University of Minnesota, Room 346 Education Sciences Building (Mail Stop 4101A), 56 E River Pkwy, Minneapolis, MN 55455, USA. Email: smcconne@umn.edu

Abstract

Evidence of longitudinal relations between language and early literacy skills in early childhood and later reading (and other) achievement is growing, along with an expanding array of early education programs designed to improve later academic outcomes and prevent, reduce, or close later academic achievement gaps across groups. Assessment systems to support this intervention have been developed, but to date we have little evidence of these systems' outcomes when used at broad scale in community-based preschool programs. To address this gap, two research questions were examined: (a) How much progress do children make on language and early literacy skills over the course of one school year? and (b) What is the relationship between child characteristics, baseline performance, and growth on language and early literacy skills? Results indicated growth over time for all measures and relations between child age, gender, and free-or-reduced-price lunch status and students' performance at the beginning of the school year, but (with one exception) no relation between these covariates and growth over time. Discussion centers on the current status of language and early literacy assessment in early childhood education as well as needs and issues to be addressed in future research and program development.

Keywords: preschool, literacy, growth, seasonal assessment
Long-term reading and literacy outcomes are influenced by
skills and competencies acquired during the preschool years
(National Early Literacy Panel, 2008; Walker, Greenwood,
Hart, & Carta, 1994). Acquisition of these language and
early literacy skills and competencies can and, in many
instances, should be monitored and taught (Carta et al.,
2016). Much of this teaching of language and early literacy
skills occurs in the context of multitiered systems of support
(MTSS) such as response to intervention, where universal
screening and other assessment approaches help identify
individual children for whom supplemental or more inten-
sive intervention is required (Carta et al., 2016). As a result,
systems for evaluating current performance and monitoring
progress of language and early literacy development among
preschool children are increasingly common (Greenwood,
Carta, & McConnell, 2011).
Improved assessment is particularly important as the
field of early education expands the array, content, and
empirical support for interventions that teach critical lan-
guage and early literacy skills (Mashburn, Justice, McGinty,
& Slocum, 2016; Noe, Spencer, Kruse, & Goldstein, 2014),
and as evidence of longer term effects of these interventions
accumulates (Johanson, Justice, & Logan, 2016). As these
efforts continue, assessment practices that support more
effective and differentiated intervention in early education
will become increasingly important.
Assessment to support improved intervention in early
childhood education rests on four core ideas. First, develop-
mental and early academic achievement during the pre-
school years is linked to later reading skill, a critical
component of achievement across the grades (National
Early Literacy Panel, 2008; Walker et al., 1994). Second,
we have growing evidence that interventions promoting
language and early literacy competencies in early childhood
produce better short- and long-term literacy outcomes over
time (Diamond, Justice, Siegler, & Snyder, 2013; Lonigan,
Farver, Phillips, & Clancy-Menchetti, 2011; Noe et al.,
2014). Third, we have evidence (although still largely cor-
relational) of relations between assessed performance of
language and early literacy in preschool and reading in
early elementary school (Missall et al., 2007; Walker et al.,
1994), suggesting functional intervention targets that pro-
mote later academic achievement. Finally, assessment sys-
tems, including those designed for MTSS, are being
deployed to assist teachers and early childhood programs in
data-based decision making (Buysse & Peisner-Feinberg,
2013; Carta et al., 2015; Carta et al., 2016).
To date, however, we are in the early stages of both development and validation of the assessment needed to support lan-
guage and early literacy intervention in early childhood
programs. A recent review identified two measurement sys-
tems “that are sufficiently developed to support instruc-
tional decision making within . . . language and literacy
instruction” (McConnell, Bradfield, & Wackerle-Hollman,
2014, p. 147): Get Ready to Read! Revised (GRTR-R;
Lonigan & Wilson, 2008), and the “second generation”
Individual Growth and Development Indicators (IGDIs 2.0;
McConnell, Wackerle-Hollman, Bradfield, & Rodriguez,
2011). The review concluded:
. . . it appears that both GRTR-R and IGDIs 2.0 demonstrate
adequate technical adequacy to be used for making universal
screening decisions regarding language and early literacy
development. GRTR-R has higher levels of sensitivity when
used alone. IGDIs, however, more comprehensively cover the
four essential domains of early literacy development, as the
GRTR-R does not include items that sample. . . oral language
development or comprehension. (McConnell et al., 2014, p. 156)
As a result, further analysis of IGDIs, particularly when
used at scale in community-based classrooms, may be especially helpful.
IGDIs
IGDIs (McConnell et al., 2011) are general outcome mea-
sures (Fuchs & Deno, 1991) of language and early literacy
development for preschool children. These measures are
designed to be easy to use, to relate to important long-term outcomes (here, early elementary reading), and to be sensitive to both the need for and effects of intervention. Initial evidence of item and measure characteristics, including reliability and validity relations, suggests that these measures are
appropriate for use in broad scale (e.g., district- or state-
wide) assessment and intervention models (McConnell,
Wackerle-Hollman, Roloff, & Rodriguez, 2015).
Although evidence of item and scale reliability, concur-
rent validity, and sensitivity to growth over time is available
for these measures (McConnell et al., 2011), to date there is
little evidence of measure characteristics when imple-
mented at broad scale by practitioners in real-world class-
room settings; such evaluation of both logistics and
in-the-field scale characteristics is appropriate for any mea-
sure intended to support practice. Specifically, relations
among different measures of language and early literacy
and changes in these measures across time are important to
planning and decision making by local and state education
agencies, and to ongoing research.
The primary purposes for implementing any educational
assessment system are to document changes in student per-
formance and to provide information for ongoing program
evaluation and improvement. However, in meeting these
purposes, assessment systems may face conceptual chal-
lenges. For instance, test developers and users reasonably
expect that new measures are sensitive to expected differ-
ences in performance of known groups of children—that
these measures can detect differences where they are theo-
retically or historically expected to occur (Cizek & Bunch,
2007). Some of these differences are logical and acceptable;
for instance, we might reasonably expect differences in
assessed skill as a function of age of child (or, in a school
setting, at entry to the current grade). However, other differ-
ences may be predictable but less acceptable; for instance,
differences associated with race, social or economic status,
or disability status are causes for concern in contemporary
educational policy and practice (Haycock, 2001; Stipek,
2002). To reduce this achievement gap, early childhood
programs must detect differences between certain groups at
initial enrollment (when many children are beginning for-
mal intervention), provide differentiated intervention based
on assessed performance, and demonstrate smaller or no
differences in achievement over the course of each aca-
demic year.
The purpose of this study was to explore growth of lan-
guage and early literacy skills in preschool-aged children as
measured by IGDIs in a statewide assessment initiative, and
to assess how student characteristics and growth are related.
Specifically, we sought to answer two questions:
1. How much progress do children make on language
and early literacy skills over the course of one
school year in various component measures of lan-
guage and early literacy?
2. To what degree is there a relation between child
characteristics, baseline performance, and growth
on language and early literacy skills?
Method
Analytic Sample
Data for this study came from 943 children in Iowa who
attended a publicly funded preschool during the 2013-2014
school year, and whose schools and teachers opted to par-
ticipate in Phase 1 of Iowa TIER, a statewide assessment
and intervention initiative designed to promote reading pro-
ficiency for all students by the end of third grade (Iowa
Department of Education, 2014). Benchmark screening
data were collected at three time points during the school
year (i.e., fall, winter, and spring) by certified teachers and
support staff. Data available for analysis included children’s
birth date, sex, and information on receipt of special educa-
tion services, free or reduced-price lunch (FRL) status, and
primary home language.
Measures
IGDIs, Seasonal Screening Measures (McConnell et al., 2011). Five measures of language and early literacy development were collected. All are designed for screening
within an early childhood MTSS system; each measure is
untimed and 15 items in length (with four preceding sample
items to teach the task). Each seasonal measure has items
selected to increase information gathering and child assess-
ment near established “cut scores” used for screening-level
decisions (for more information, see McConnell et al.,
2015). Together, the five measures represent four domains
of early literacy development: oral language (Picture
Naming IGDI), phonological awareness (Rhyming IGDI
and First Sounds IGDI), alphabet knowledge (Sound
Identification IGDI), and early comprehension (Which One
Doesn’t Belong IGDI).
All measures were developed using Rasch modeling to
evaluate and locate items, contrasting groups designs with
teacher judgments of intervention need as criterion to set
preliminary cut scores for decision making in each season,
and relation between cut scores and individual item loca-
tions to select items for seasonal measures (while a more
detailed description of these procedures is beyond the scope
of this paper, McConnell et al., 2015, provides more detail
on item development and testing, scale development, and
standard setting). Classification accuracy estimates (sensi-
tivity, specificity, and area under the curve) are available,
and are similar, for all seasonal scales; for brevity, only fall
estimates are provided here (see McConnell et al., 2015, for
more complete reporting).
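For readers less familiar with these classification accuracy statistics, the sketch below shows how sensitivity, specificity, and AUC for a screening cut score can be estimated against a criterion classification. The data, cut score, and variable names are hypothetical, not drawn from the published IGDI validation work.

```python
# Hypothetical sketch: classification accuracy of a screening cut score.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
n = 200

# 1 = child identified as needing intervention on the criterion measure.
criterion = rng.integers(0, 2, size=n)
# IGDI-style raw scores (0-15 items correct); at-risk children score lower.
score = np.clip(np.round(rng.normal(10 - 3 * criterion, 3)), 0, 15)

cut = 8                                  # hypothetical screening cut score
flagged = (score < cut).astype(int)      # screener decision

tn, fp, fn, tp = confusion_matrix(criterion, flagged).ravel()
sensitivity = tp / (tp + fn)             # proportion of true positives flagged
specificity = tn / (tn + fp)             # proportion of true negatives passed
auc = roc_auc_score(criterion, -score)   # lower score = higher risk

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, AUC = {auc:.2f}")
```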
Picture Naming is a measure of oral language develop-
ment. Each item presents a color photo of a single object.
The child is asked to name the image, and items are scored
correct or incorrect against pre-established standards.
Picture Naming psychometric characteristics of fall sea-
sonal measures yielded estimates of sensitivity (.72), speci-
ficity (.64), and area under the curve (.77) when predicting
classification based on standardized test performance, and
internal consistency, or congeneric reliability (r = .74;
McConnell et al., 2015).
Rhyming assesses phonological awareness. The exam-
iner labels one target and two or three potential response
images, and asks the child to identify which of the potential
response images rhymes with the target; responses can be
verbal or gestural. Psychometric characteristics of Rhyming
in fall assessment are moderate to fair, both for congeneric
estimate of internal consistency (r = .90) and accuracy
when predicting classification based on standardized test
performance (sensitivity = .71, specificity = .57, area under the curve [AUC] = .70; McConnell et al., 2015).
Sound Identification (Sound ID) is a measure of alphabet
knowledge. Children are presented three or four letters and
directed to identify the letter that corresponds to an exam-
iner-presented sound. Psychometric characteristics of
Sound ID were similar to those of other IGDIs in fall, with
evidence of internal consistency (congeneric r = .81) and
accuracy when predicting classification based on standard-
ized test performance (sensitivity = .71, specificity = .51,
and AUC = .67; McConnell et al., 2015).
Which One Doesn’t Belong (WODB) is a measure of
early comprehension. Each item presents three images on
one card, and the child is asked to identify one image that
does not fit into the same category as the others. The exam-
iner does not identify the category. Fall psychometric esti-
mates for this measure were acceptable, but lower than
those for other measures (congeneric internal consistency r
= .81, sensitivity = .71, specificity = .57, AUC = .70).
First Sounds is a measure of phonological awareness,
administered in winter and spring only. Children are pre-
sented with two to three images per item and asked to iden-
tify which one begins with an onset sound provided by the
examiner. First Sounds psychometric characteristics were
in similar range (congeneric internal consistency r = .76,
sensitivity = .73, specificity = .59, AUC = .70; McConnell
et al., 2015).
Data Collection
Children were assessed three times across the school year in
fall, winter, and spring using myIGDIs: Literacy+ card sets.
Measures were identical across children each season, but
items varied for each measure across seasons. Teachers or
other local staff in Iowa preschools administered each IGDI
task to every child in their classroom. All teachers adminis-
tering the assessments completed in-person training and
viewed online modules that explained the purposes of IGDI
assessment and provided information and modeling of
proper administration technique, opportunities to practice
scoring administrations with immediate computer feed-
back, and a content and scoring assessment.
Child Performance Variables
IGDI scores. Students' IGDI responses were recorded online
at the item level by classroom teachers or other assessors
during task administration, with both item-level and total
raw score (i.e., number correct, 0–15) data retained. From
these data, Rasch scores were calculated to describe child
performance by measure by season.
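Although the published IGDI item calibrations are not reproduced here, the following minimal sketch illustrates how a Rasch ability score (in logits) can be derived from item-level right/wrong responses once item difficulties are known; the difficulties and response vector below are hypothetical.

```python
# Minimal sketch of Rasch ability estimation for one child's 15-item
# response vector, given (hypothetical) calibrated item difficulties.
import numpy as np

def rasch_ability(responses, difficulties, n_iter=25):
    """Newton-Raphson maximum-likelihood ability estimate (in logits).
    Note: the MLE is undefined for all-correct or all-incorrect vectors."""
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(difficulties - theta))  # P(item correct)
        gradient = np.sum(responses - p)                # score function
        information = np.sum(p * (1.0 - p))             # Fisher information
        theta += gradient / information                 # Newton step
    return theta

item_difficulties = np.linspace(-2.0, 2.0, 15)  # hypothetical calibration
responses = np.array([1] * 10 + [0] * 5)        # 10 of 15 items correct
print(f"estimated ability = {rasch_ability(responses, item_difficulties):.2f} logits")
```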
Time. Teachers were required to administer assessments
during standard assessment periods each season. With
minor variation, data were gathered in October (“Fall”),
January (“Winter”), and April (“Spring”) of the school year.
The fall assessment was anchored at 0 to allow for the inter-
cept to be interpreted as the mean score that children
received during the fall benchmark period.
Covariates
Children’s age, sex, and whether they had received FRL were included separately in analyses. Children’s sex
was coded dichotomously with males as the referent.
Children’s age was calculated for the beginning of the
school year, September 15, 2013, and then centered at 48
months to represent the youngest children admitted to pre-
school. FRL was collected through school records and
dichotomously coded with those who had not received it as
the referent. Some schools provided meals to all students
free of charge; thus, some students may have been captured
as receiving FRL when they would normally not qualify for
those services. Miscategorization of children into or out of
FRL is common in secondary data records from schools and
can be expected to make detection of effects more conser-
vative (for discussion, see Harwell & LeBeau, 2010).
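As a concrete illustration of this coding scheme, the sketch below prepares the three covariates from hypothetical school-record fields; the column names and values are invented for illustration.

```python
# Hypothetical sketch of the covariate coding described above.
import pandas as pd

records = pd.DataFrame({
    "birth_date": ["2009-03-02", "2008-11-15", "2009-07-20"],
    "sex":        ["M", "F", "F"],
    "frl_status": ["no", "yes", "no"],
})

start = pd.Timestamp("2013-09-15")                  # start of school year
birth = pd.to_datetime(records["birth_date"])
months = (start.year - birth.dt.year) * 12 + (start.month - birth.dt.month)
records["age_c48"] = months - 48                    # centered at 48 months
records["female"] = (records["sex"] == "F").astype(int)           # male = referent
records["frl"] = (records["frl_status"] == "yes").astype(int)     # non-FRL = referent
print(records[["age_c48", "female", "frl"]])
```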
Analyses
Linear mixed-effects regression (LiMER) models were
used to investigate children’s status at the beginning of the
school year and growth across seasons. Because models
were run separately for each IGDI measure, sample sizes
vary according to the assessment being studied and are
reported separately.
Random coefficients model for child outcomes. For each
LiMER analysis, a random coefficients model that allowed
both initial status and slope to vary was fit to the data before
proceeding with additional analyses:
$$Y_{ti} = \left(\beta_{00} + r_{i0}\right) + \left(\beta_{10} + r_{i1}\right)\mathrm{Time}_{ti} + \epsilon_{ti}$$
In the above model, Yti represents the Rasch score on a specific IGDI for child i at time t; β00 and β10 are the group mean intercept and slope coefficients; ri0 and ri1 represent random variation around the intercept and slope; and εti represents error variance not otherwise explained.
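A minimal sketch of this random coefficients model, fit to simulated long-format data with statsmodels, is shown below. All values and column names are hypothetical stand-ins for the study's Rasch scores (one row per child per season).

```python
# Sketch: random-intercept, random-slope growth model on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_children = 200
child = np.repeat(np.arange(n_children), 3)   # 3 seasons per child
time = np.tile([0.0, 1.0, 2.0], n_children)   # fall anchored at 0

# Simulate child-specific intercepts and slopes plus residual error.
b0 = rng.normal(1.0, 1.1, n_children)[child]
b1 = rng.normal(0.6, 0.3, n_children)[child]
rasch = b0 + b1 * time + rng.normal(0.0, 0.7, child.size)

df = pd.DataFrame({"child_id": child, "time": time, "rasch": rasch})

# Random coefficients: both intercept and slope vary across children.
model = smf.mixedlm("rasch ~ time", data=df, groups=df["child_id"],
                    re_formula="~time")
fit = model.fit(reml=False)   # ML estimation so AICs are comparable
print(fit.summary())
```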
Models including covariates. Following the random coeffi-
cients model, three covariates were added and evaluated
singularly (i.e., each model included only one covariate).
These models were fitted such that the covariate was
allowed to influence both the intercept and slope, allowing
for analysis of effects of each child characteristic on both
child performance in fall (i.e., intercept) and change over
time (i.e., slope):
$$Y_{ti} = \left(\beta_{00} + \beta_{01}\mathrm{Var1}_{i} + r_{i0}\right) + \left(\beta_{10} + \beta_{11}\mathrm{Var1}_{i} + r_{i1}\right)\mathrm{Time}_{ti} + \epsilon_{ti}$$
In the above model, Yti represents the Rasch score on a specific IGDI for child i at time t; β00 and β10 are the mean intercept and slope, respectively; β01 and β11 are the effects of the additional covariate on the mean intercept and slope; ri0 and ri1 represent random variation around the intercepts and slopes; and εti represents error not accounted for by the model. Var1 represents the age, female, or FRL variable included in a given model. Because models with different covariates are not nested, fit statistics for each covariate analysis should be compared with those of the unconstrained random coefficients model.
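Continuing the simulated sketch above, each covariate model adds the covariate's main effect (its effect on the fall intercept) and its interaction with time (its effect on the slope). The `age_c48` column is again hypothetical, and this block reuses `df`, `n_children`, and the fitted unconstrained model from the previous sketch.

```python
# Sketch: one covariate model (age), continuing the simulated frame `df`.
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df["age_c48"] = np.repeat(rng.integers(0, 19, n_children), 3)  # hypothetical ages

# "time * age_c48" expands to time + age_c48 + time:age_c48, i.e.,
# beta_10, beta_01, and beta_11 in the model above.
fit_age = smf.mixedlm("rasch ~ time * age_c48", data=df,
                      groups=df["child_id"], re_formula="~time").fit(reml=False)
print(fit_age.params)   # time:age_c48 estimates the covariate's effect on slope
print(fit_age.aic)      # compare with the unconstrained model's AIC
```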
Missing Data
LiMER is robust to missing data under the “missing at ran-
dom” (MAR) assumption (Raudenbush & Bryk, 2002). We
examined correlations between missing status for all stu-
dents at all times and estimated intercept and slope for
Rasch scores and all covariates (detailed results of these
analyses are available from the corresponding author).
Missing scores were not related to a student’s age, sex, or FRL status; however, missingness was related to an overall lower average Rasch score. To account for this, missing
time points were assessed using a pattern-mixture model
(Hedeker & Gibbons, 1997; Little & Schenker, 1995).
Models were compared using Akaike information criterion
(AIC) and impact on covariates of interest (intercept, slope,
and demographic information). Missingness was significantly related to the intercept of each model. Including missingness-by-time interactions significantly affected model fit only for WODB; changes to the slope of this model were within one standard error of measurement. Therefore, the Picture Naming, Rhyming, and Sound Identification models adjust for patterns of missingness on the intercept, and the WODB model adjusts for missingness on both the intercept and slope.
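A rough sketch of this pattern-mixture adjustment, continuing the simulated frame from the earlier sketches: a child-level missingness indicator enters the model for the intercept (and, for WODB, also interacts with time). This is illustrative only; the study's actual pattern coding may differ.

```python
# Sketch: pattern-mixture adjustment for missingness, continuing `df`.
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
# Hypothetically drop ~10% of spring scores to create missingness.
drop = (df["time"] == 2.0) & (rng.random(len(df)) < 0.10)
df.loc[drop, "rasch"] = np.nan

# Child-level indicator: 1 if the child is missing any seasonal score.
df["missing_any"] = (df.groupby("child_id")["rasch"]
                       .transform(lambda s: s.isna().any()).astype(int))
obs = df.dropna(subset=["rasch"])

# Intercept-only adjustment (as for Picture Naming, Rhyming, Sound ID):
m_int = smf.mixedlm("rasch ~ time + missing_any", data=obs,
                    groups=obs["child_id"], re_formula="~time").fit(reml=False)
# Intercept-and-slope adjustment (as for WODB):
m_slope = smf.mixedlm("rasch ~ time * missing_any", data=obs,
                      groups=obs["child_id"], re_formula="~time").fit(reml=False)
print(m_int.aic, m_slope.aic)   # AIC comparison, as in the study
```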
Results
As shown in Table 1, half of the children in the study were
female (49.73%). The majority of students spoke English in
their home (95.27%) and did not qualify for FRL (87.38%)
or special education services (90.13%). The breakdown of
student ages by month is provided in Table 1.
IGDIs by Season
Table 2 provides means, standard deviations, skew, kurto-
sis, and standard error of student scores by measures and
seasons in Rasch score units, or logits. With the exception
of First Sounds, measure means change by .5 to more than
1.5 logits across seasons (M = .974), standard deviations
are relatively stable at or near 1.5 logits, and skew and kur-
tosis measures are within limits for MTSS decision making
(McConnell & Wackerle-Hollman, 2016).
Assessment Completion Across Seasons
The majority of children who participated completed
assessments across all three seasons. Table 3 presents
unduplicated counts of students completing assessments by
every combination of season. Generally, more than two
thirds of the sample (71% for three-season measures and
89% for First Sounds, the two-season measure) completed
each IGDI in each season; only a small portion (5.5% for
Picture Naming, Rhyming, Sound Identification, and
WODB and 11.3% for First Sounds) completed individual
measures in only one season.
LiMER of Growth Across Seasons
Picture Naming. Table 4 presents results for LiMER analy-
ses of Picture Naming. The unconstrained, or random coef-
ficients, model indicated that student scores changed significantly across seasons (β = .61, p < .05), and that there was variation
across students (intraclass correlation [ICC] = 0.72), indi-
cating that a hierarchical model was appropriate. Across all
three analytic models, child characteristics significantly
predicted fall scores in Picture Naming, but did not predict
slopes. Older children scored significantly higher at the
beginning of the year (intercept β = .07, p < .05) than
younger children. Females scored higher in fall than males
(β = .21, p < .05), but did not have a significantly different
rate of growth (β = –.01, ns). Students who received FRL
scored lower than their peers (β = –.43, p < .05). The AIC for each
model indicated that adding each covariate improved model
fit, but that child age improved model fit the most. The vari-
ance reduction factor (Raudenbush & Bryk, 2002) indicated
that age explained 5% of the variance in the intercept
(detailed results of these analyses are available from the
corresponding author).
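Both reported quantities can be checked against the variance components in Table 4. Using the rounded components as tabled, and assuming the ICC was computed from the unconditional variance components, the arithmetic is approximately:

$$\mathrm{ICC}=\frac{\sigma_{0}^{2}}{\sigma_{0}^{2}+\sigma^{2}}=\frac{1.27}{1.27+0.52}\approx .71,\qquad \frac{\sigma_{0,\mathrm{unconstrained}}^{2}-\sigma_{0,\mathrm{age}}^{2}}{\sigma_{0,\mathrm{unconstrained}}^{2}}=\frac{1.27-1.20}{1.27}\approx .055\approx 5\%$$

The small difference from the reported ICC of 0.72 is consistent with rounding of the tabled components.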
Rhyming. Rhyming results are provided in Table 5. Uncon-
strained model results indicated that student scores changed
significantly across seasons (β = 1.16, p < .05) and that
hierarchical modeling was appropriate (ICC = 0.74).
Across all three covariate models, child characteristics sig-
nificantly predicted fall scores in Rhyming, but did not pre-
dict slopes. Older children scored higher than younger
children in fall (β = .11, p < .05). Females scored higher in
fall than males (β = .52, p < .05), and children who
received FRL scored lower than classmates (β = –.89, p <
.05). According to AIC, the model with age as the predictor
had the best fit and reduced intercept variance by 4%.
Sound Identification. Table 6 presents LiMER results for
Sound Identification. The unconstrained model supported
the use of hierarchical modeling (ICC = 0.68). Across all
three analytic models of covariates, child characteristics
significantly predicted fall scores in Sound Identification,
but did not predict slopes. Older children scored higher than
younger children in fall (β = .09, p < .05). Females scored
higher than males in fall (β = .32, p < .05). Children who
received FRL scored lower than their classmates (β = –.45,
p < .05). The AIC for each model indicated that adding
each covariate improved model fit, but that child age
Table 1. Sociodemographic Characteristics (N = 943).
Subject Characteristic n %
Sex
Male 474 50.27
Female 469 49.73
Age in fall (months)a
48 33 3.50
49 51 5.41
50 80 8.49
51 82 8.70
52 74 7.86
53 68 7.22
54 87 9.24
55 83 8.81
56 74 7.86
57 96 10.19
58 66 7.01
59 62 6.58
60 46 4.88
61 14 1.49
62 12 1.27
63 7 0.74
64 1 0.11
65 2 0.21
66 4 0.42
Home language
English 806 95.27
Hmong 1 0.12
Somali 1 0.12
Spanish 38 4.49
FRL
Non-FRL 824 87.38
FRL 119 12.62
SPED services
No services 785 90.13
Receive services 86 9.87
Note. FRL = free or reduced-price lunch; SPED = Special Education.
aOnly N = 942 students provided birth date information. Therefore,
birth date percentages are based on 942 students.
Table 2. IGDI Rasch Score Means, Standard Deviations, Skew, Kurtosis, and Standard Error of Student Scores by Measure and
Season.
Assessment Season n M SD Skew Kurtosis SE
Picture Naming Fall 773 1.07 1.32 −1.02 2.63 0.05
Winter 885 1.79 1.4 −0.89 2.5 0.05
Spring 849 2.34 1.28 −0.3 1.8 0.04
Rhyming Fall 771 0.53 2.53 −0.38 −0.77 0.09
Winter 885 1.98 2.57 −0.53 −0.85 0.09
Spring 848 2.97 2.32 −1.07 0.2 0.08
Sound ID Fall 772 0.04 1.85 −0.27 0.37 0.07
Winter 883 1.56 1.96 −0.11 0.02 0.07
Spring 849 2.51 1.85 −0.57 0.22 0.06
WODB Fall 771 −0.02 1.93 −0.76 0.11 0.07
Winter 884 0.46 1.73 −1.04 1.22 0.06
Spring 849 1.59 1.44 −1.34 3.5 0.05
First Sounds Winter 882 2.49 1.92 −0.93 1.21 0.06
Spring 848 2.77 1.76 −0.6 0.62 0.06
Note. IGDI = individual growth and development indicator; Sound ID = Sound Identification; WODB = Which One Doesn’t Belong.
Table 3. Unduplicated Counts of Students Completing Assessments by Every Combination of Season.
IGDI F × W × S F × W F × S W × S One Season Only Total
Picture Naming 673 61 16 141 52 943
Rhyming 671 61 16 143 51 942a
Sound ID 672 60 16 142 52 942a
WODB 670 61 17 143 52 943
First Sounds - - - 813 104 917b
Note. F = fall; W = winter; S = spring; Sound ID = Sound Identification; WODB = Which One Doesn’t Belong.
aOne student never took this assessment.
bTwenty-six students were never tested with First Sounds.
Table 4. LiMER Model Results for Picture Naming (N = 942): Unconstrained Model and Models Covarying Age at Start, Gender, and
FRL Status.
LiMER Model Parameter Unconstrained Age Female FRL
B (SE) B (SE) B (SE) B (SE)
Fixed effects
Initial status
Intercept β00 1.26 (0.05)* 0.81 (0.09)* 1.16 (0.07)* 1.32 (0.05)*
Covariate β01 0.07 (0.01)* 0.21 (0.09)* −0.43 (0.13)*
Rate of change
Intercept β10 0.61 (0.02)* 0.67 (0.04)* 0.61 (0.03)* 0.59 (0.02)*
Covariate β11 −0.01 (0.01) 0.00 (0.04) 0.12 (0.06)
Variance components
Level 1
Within persons σ² 0.52 0.52 0.52 0.52
Level 2
In initial status σ₀² 1.27 1.20 1.26 1.25
In time σ₁² 0.08 0.08 0.08 0.08
Covariance −0.32 −0.11 −0.32 −0.31
Goodness of fit
AIC 7,498 7,460 7,494 7,492
Note. Each model is controlled for missingness using pattern-mixture modeling (Hedeker & Gibbons, 1997; Little & Schenker, 1995). Covariate represents the individual child variable (age at start, gender, or FRL) included in each model. LiMER = linear mixed-effects regression; FRL = free or reduced-price lunch; AIC = Akaike information criterion.
*p < .05 as indicated by having a t score greater than 1.97.
Table 5. LiMER Model Results for Rhyming (N = 941): Unconstrained Model and Models Covarying Age at Start, Gender, and FRL
Status.
LiMER Model Parameter Unconstrained Age Female FRL
B (SE) B (SE) B (SE) B (SE)
Fixed effects
Initial status
Intercept β00 0.94 (0.09)* −0.20 (0.17) 0.67 (0.12)* 1.05 (0.10)*
Covariate β01 0.11 (0.02)* 0.52 (0.17)* −0.89 (0.25)*
Rate of change
Intercept β10 1.16 (0.04)* 1.22 (0.08)* 1.16 (0.05)* 1.15 (0.04)*
Covariate β11 −0.01 (0.01) 0.00 (0.08) 0.10 (0.12)
Variance components
Level 1
Within persons σ² 1.66 1.66 1.66 1.66
Level 2
In initial status σ₀² 4.45 4.28 4.38 4.36
In time σ₁² 0.28 0.28 0.28 0.28
Covariance −0.35 −0.35 −0.35 −0.35
Goodness of fit
AIC 10,465 10,430 10,456 10,455
Note. Each model is controlled for missingness using pattern-mixture modeling (Hedeker & Gibbons, 1997; Little & Schenker, 1995). Covariate represents the individual child variable (age at start, gender, or FRL) included in each model. LiMER = linear mixed-effects regression; FRL = free or reduced-price lunch; AIC = Akaike information criterion.
*p < .05 as indicated by having a t score greater than 1.97.
Table 6. LiMER Model Results for Sound Identification (N = 941): Unconstrained Model and Models Covarying Age at Start, Gender,
and FRL Status.
LiMER Model Parameter Unconstrained Age Female FRL
B (SE) B (SE) B (SE) B (SE)
Fixed effects
Initial status
Intercept β00 0.25 (0.07)* −0.31 (0.13)* 0.08 (0.09) 0.30 (0.07)*
Covariate β01 0.09 (0.02)* 0.32 (0.12)* −0.45 (0.19)*
Rate of change
Intercept β10 1.19 (0.03)* 1.20 (0.06)* 1.16 (0.05)* 1.18 (0.03)*
Covariate β11 0.00 (0.01) 0.05 (0.06) 0.08 (0.10)
Variance components
Level 1
Within persons σ² 1.19 1.19 1.20 1.19
Level 2
In initial status σ₀² 2.35 2.25 2.32 2.32
In time σ₁² 0.17 0.17 0.17 0.17
Covariance −0.20 −0.20 −0.21 −0.19
Goodness of fit
AIC 9,440 9,401 9,431 9,438
Note. Each model is controlled for missingness using pattern-mixture modeling (Hedeker & Gibbons, 1997; Little & Schenker, 1995). Covariate represents the individual child variable (age at start, gender, or FRL) included in each model. LiMER = linear mixed-effects regression; FRL = free or reduced-price lunch; AIC = Akaike information criterion.
*p < .05 as indicated by having a t score greater than 1.97.
improved model fit the most, reducing variance in the inter-
cept by 4%.
WODB. WODB results are presented in Table 7. The uncon-
strained model demonstrated that hierarchical modeling
was appropriate for the data (ICC = 0.67). All three child
characteristics were associated with beginning-of-the-year
intercept, and age was associated with a significantly differ-
ent rate of growth. Older children scored higher in the fall
than their peers (β = .10, p < .05), whereas younger chil-
dren grew at a faster rate than older children (β = –.02, p <
.05). Females scored higher than males in fall (β =.55, p <
.05). Children on FRL scored lower than their classmates in
fall (β = –.50, p < .05).
First Sounds. Table 8 presents results for First Sounds.
Because the measure was administered only in winter and
spring, the intercept for First Sounds is the average Rasch
score children received on the winter assessment, and the slope is equivalent to the average per-child change score from winter to spring. All three child characteristics were associated
with significant, and in some cases relatively large, differ-
ences in intercept, but were not associated with differences
in seasonal change. Older children scored higher in winter
than younger children in winter (β = .08, p < .05). Females
scored higher than males in winter (β = .45, p < .05). Chil-
dren on FRL scored lower than classmates in winter (β =
–.57, p < .05).
Discussion
This study investigated assessment of early language and
literacy performance over the course of a single preschool
year for children in a statewide assessment and intervention
initiative, and the extent to which observed rates of growth
were associated with individual children’s age at the start of
the preschool year, their gender, or recorded eligibility for
FRL. All measures showed growth over the time of assess-
ment, fall to spring for four measures and winter to spring
for First Sounds. Consistent with previous literature
(Roseth, Missall, & McConnell, 2012), children who were
younger, male, or had received FRL started the year behind
peers across all early literacy and language measures. In
addition, with the exception of WODB, a measure of com-
prehension, rates of change across the year in assessed per-
formance did not vary as a function of these child
characteristics.
These findings have at least three implications for assess-
ment and intervention in early childhood education. First,
seasonal growth for all IGDIs is consistent with at least one
criterion for general outcome measurement, sensitivity to
change (Fuchs & Deno, 1991; McConnell & Wackerle-
Hollman, 2016). Earlier versions of the measures evaluated
here demonstrated empirical relations to elementary-grade
reading outcomes (Missall et al., 2007), and similar evi-
dence for revised tools used here will be important for
future research.
Table 7. LiMER Model Results for Which One Doesn’t Belong (N = 942): Unconstrained Model and Models Covarying Age at Start,
Gender, and FRL Status.
LiMER Model Parameter Unconstrained Age Female FRL
B (SE) B (SE) B (SE) B (SE)
Fixed effects
Initial status
Intercept β00 0.06 (0.07)* −0.58 (0.13)* −0.48 (0.09)* −0.14 (0.07)*
Covariate β01 0.10 (0.02)* 0.57 (0.13)* −0.45 (0.19)*
Rate of change
Intercept β10 0.77 (0.03)* 0.88 (0.06)* 0.86 (0.04)* 0.80 (0.03)*
Covariate β11 −0.02 (0.01)* −0.10 (0.06) 0.09 (0.10)
Variance components
Level 1
Within persons σ² 1.21 1.21 1.22 1.22
Level 2
In initial status σ₀² 2.31 2.18 2.23 2.29
In time σ₁² 0.16 0.16 0.15 0.16
Covariance −0.79 −0.77 −0.78 −0.78
Goodness of fit
AIC 9,099 9,053 9,079 9,094
Note. Each model is controlled for missingness using pattern-mixture modeling (Hedeker & Gibbons, 1997; Little & Schenker, 1995). Covariate represents the individual child variable (age at start, gender, or FRL) included in each model. LiMER = linear mixed-effects regression; FRL = free or reduced-price lunch; AIC = Akaike information criterion.
*p < .05 as indicated by having a t score greater than 1.97.
Second, these results demonstrate differences associated
with demographic characteristics (age at entry, eligibility for
FRL, sex) in language and early literacy in fall of the year
before kindergarten but no differences (except for a measure
of comprehension) in rates of change across the year. Given
wide-scale concern for persistent differences in academic
achievement across demographic groups (the “achievement
gap”; for example, Lee, 2002) and suggestions that these dif-
ferences might be minimized by reducing differences prior
to school entry (Magnusson & Duncan, 2016), early educa-
tion programs must be designed to produce different results: differences present in fall must shrink, if not close,
over time. An assessment system that evaluates progress on
this objective at the individual, classroom, and system level
will assist in that effort; the measures used here appear to be
candidates for such an application.
Third, differentiated intervention may be an important
tool for reducing initial differences in language and early
literacy skills. MTSS, or response to intervention in early
childhood, have been developed to implement such differ-
entiated intervention (Carta et al., 2016). MTSS in early
childhood requires an assessment system for seasonal uni-
versal screening and aligned progress monitoring, and the
measures evaluated here were designed for that purpose
(Greenwood et al., 2011). The study reported here demon-
strates that such screening can be implemented at broad
scale and can produce educationally relevant data for teachers and systems.
These results also suggest other potentially useful direc-
tions for future research. First, evidence of relations between
preschool IGDIs and reading (and other subjects) in later
years is needed. Strong relations would support validity
claims of the measures used here, and would provide impor-
tant data for refining data-based decision-making criteria
for MTSS in early childhood. Second, evidence is needed
that explicitly links IGDIs to effective intervention at the
individual student, classroom, and program level. The long
history of research on curriculum-based measurement (cf.,
Deno, 1997) provides a strong model for this future research.
However, findings that gaps present in fall were relatively unchanged suggest opportunities for investigating ways to close such gaps and, in turn, to promote educational equity.
Third, ongoing refinements in IGDI assessment, including
adaptations to computer-adaptive testing and intelligent
intervention recommendations currently under develop-
ment, are likely to create both new opportunities and new
requirements for testing feasibility and efficacy.
Limitations
Although results presented here describe the use of a new
measure in applied educational settings, several potential
limitations should be noted. First, measures were collected
by classroom teachers for the practical purpose of preschool
education assessment. This was intentional, with all teachers
trained explicitly in administration of these measures and
Table 8. LiMER Model Results for First Sounds (N = 916): Unconstrained Model and Models Covarying Age at Start, Gender, and
FRL Status.
LiMER Model Parameter Unconstrained Age Female FRL
B (SE) B (SE) B (SE) B (SE)
Fixed effects
Initial status
Intercept β00 2.47 (0.06)* 1.94 (0.13)* 2.24 (0.09)* 2.53 (0.07)*
Covariate β01 0.08 (0.02)* 0.45 (0.13)* −0.57 (0.19)*
Rate of change
Intercept β10 0.23 (0.05)* 0.12 (0.10) 0.29 (0.07)* 0.24 (0.05)*
Covariate β11 0.02 (0.01) −0.11 (0.10) −0.03 (0.15)
Variance components
Level 1
Within persons σ² 0.69 0.68 0.68 0.69
Level 2
In initial status σ₀² 3.01 2.93 2.97 2.97
In time σ₁² 0.70 0.71 0.70 0.69
Covariance −0.59 −0.62 −0.59 −0.60
Goodness of fit
AIC 6,504 6,470 6,495 6,496
Note. Each model is controlled for missingness using pattern-mixture modeling (Hedeker & Gibbons, 1997; Little & Schenker, 1995). Covariate represents the individual child variable (age at start, gender, or FRL) included in each model. LiMER = linear mixed-effects regression; FRL = free or reduced-price lunch; AIC = Akaike information criterion.
*p < .05 as indicated by having a t score greater than 1.97.
site supervisors instructed to monitor fidelity of administra-
tion. However, inter-rater reliability was not assessed and may vary. Nonetheless, one would expect variations in examiner adherence to reduce the reliability of resulting data and, in turn, reduce the likelihood of the significant findings reported here.
Second, IGDIs were designed as tools for universal sea-
sonal screening. Seasonal scales within each measure were
designed specifically to assess student performance at or
above an identified temporal standard of performance (i.e.,
expected performance for that particular season); using
Rasch values, items were selected that maximized informa-
tion of child performance relative to that standard. As such, measures like these may be more useful for evaluating proficiency at seasonal time points than for detecting growth over time among children whose performance varies significantly from seasonal screening standards. Consequently, estimates of change in child language and literacy ability or skill may be less reliable for children whose performance differs markedly from each seasonal standard.
Third, while we investigated effects of variation across
demographic characteristics, base rates were quite low for
students who received special education services, those eli-
gible for FRL, and English-language learners. As a result,
caution should be exercised in interpreting these effects.
Finally, no information is available about the quality or
type of instruction, the length of each preschool program
day, or specific intervention practices that may account for
both overall results and variations reported here. As in other emerging research in early childhood education, assessment of these “process” variables (cf. Mashburn et al., 2008) will likely be important.
Conclusion
As publicly funded early education continues to expand,
and as efforts continue to better differentiate intervention
prior to kindergarten, assessment systems that support these
efforts become increasingly important. In design, these
assessment systems must be psychometrically rigorous:
demonstrating both strong internal consistency and rela-
tions to intermediate and long-term general outcomes.
These systems also need to be easy to deploy at broad scale,
and to produce information that is both accessible and
actionable to classroom teachers and administrators. Finally,
these measures must be aligned with progress monitoring
assessment that tracks the efficacy of intervention over
short periods of time. Ongoing work like that reported here
will be an essential part of ongoing development of effec-
tive early education.
Acknowledgments
The authors gratefully acknowledge assistance in conducting this
research from Drs. Amy Williamson, Sarah Brown, Greg Feldman,
and Janell Brandhorst from the Iowa Department of Education, and
the teachers and students who participated in Phase 1 of Iowa TIER.
Declaration of Conflicting Interests
The authors declared the following potential conflicts of interest
with respect to the research, authorship, and/or publication of this
article: Scott McConnell and Alisha Wackerle-Hollman have
developed assessment tools and related resources known as
Individual Growth & Development Indicators and Get it, Got it,
Go! This intellectual property is subject of technology commer-
cialization by the University of Minnesota, and portions have been
licensed to Early Learning Labs, Inc., a company which may com-
mercially benefit from the results of this research. McConnell and
Wackerle-Hollman have equity interest in Early Learning Labs,
Inc. The University of Minnesota also has equity and royalty inter-
ests in Early Learning Labs which, in turn, may benefit the authors.
These relationships have been reviewed and are being managed by
the University of Minnesota in accordance with its conflict of
interest policies.
Funding
The authors disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: This
work was supported in part by a contract from the Iowa Department
of Education to the University of Minnesota; however, no official
endorsement should be inferred.
ORCID iD
Scott R. McConnell https://orcid.org/0000-0003-0897-9236
References
Buysse, V., & Peisner-Feinberg, E. (Eds.). (2013). Handbook of
response to intervention in early childhood. Baltimore, MD:
Paul H. Brookes.
Carta, J. J., Greenwood, C. R., Atwater, J., McConnell, S. R.,
Goldstein, H., & Kaminski, R. A. (2015). Identifying preschool
children for higher tiers of language and early literacy instruction
within a response to intervention framework. Journal of Early
Intervention, 36, 281–291. doi:10.1177/1053815115579937
Carta, J. J., Greenwood, C. R., Goldstein, H., McConnell, S.
R., Kaminski, R., Bradfield, T. A., . . . Atwater, J. (2016).
Advances in multi-tiered systems of support for prekinder-
garten children: Lessons learned from 5 years of research and
development from the Center for Response to Intervention in
Early Childhood. In S. R. Jimerson, M. K. Burns, & A. M.
VanDerHeyden (Eds.), The handbook of response to interven-
tion: The science and practice of multi-tiered systems of sup-
port (2nd ed., pp. 587–606). New York, NY: Springer.
Cizek, G. J., & Bunch, M. B. (Eds.). (2007). Standard setting: A
guide to establishing and evaluating performance standards
on tests. Thousand Oaks, CA: SAGE.
Deno, S. L. (1997). Whether thou goest. . . Perspectives on prog-
ress monitoring. In J. W. Lloyd, E. J. Kameenui & D. Chard
(Eds.), Issues in educating students with disabilities (pp. 77–
99). Mahwah, NJ: Lawrence Erlbaum.
Diamond, K. E., Justice, L. M., Siegler, R. S., & Snyder, P. A.
(2013). Synthesis of IES research on early intervention and
early childhood education (NCSER 2013-3001). Washington,
DC: National Center for Special Education Research, Institute
of Education Sciences, U.S. Department of Education.
Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions
between instructionally relevant measurement models.
Exceptional Children, 57, 488–500.
Greenwood, C. R., Carta, J. J., & McConnell, S. (2011). Advances
in measurement for universal screening and individual
progress monitoring of young children. Journal of Early
Intervention, 33, 254–267. doi:10.1177/1053815111428467
Harwell, M., & LeBeau, B. (2010). Student eligibility for a free
lunch as an SES measure in education research. Educational
Researcher, 39, 120–131.
Haycock, K. (2001). Closing the achievement gap. Educational
Leadership, 58(6), 6–11.
Hedeker, D., & Gibbons, R. D. (1997). Application of random-
effects pattern-mixture models for missing data in longitudi-
nal studies. Psychological Methods, 2, 64–78.
Iowa Department of Education. (2014). Early Literacy Implementation (ELI). Retrieved August 30, 2018, from https://www.educateiowa.gov/sites/files/ed/documents/IowaTIERBrochure.pdf
Johanson, M., Justice, L. M., & Logan, J. (2016). Kindergarten
impacts of a preschool language-focused intervention.
Applied Developmental Science, 20, 94–107. doi:10.1080/10
888691.2015.1074050
Lee, J. (2002). Racial and ethnic achievement gap trends: Reversing
the progress toward equity? Educational Researcher, 31, 3–12.
Little, R. J., & Schenker, N. (1995). Missing data. In G. Arminger,
C. C. Clogg & M. E. Sobel (Eds.), Handbook of statistical
modeling for the social and behavioral sciences (pp. 39–75).
New York, NY: Springer.
Lonigan, C. J., Farver, J. M., Phillips, B. M., & Clancy-Menchetti,
J. (2011). Promoting the development of preschool children’s
emergent literacy skills: A randomized evaluation of a liter-
acy-focused curriculum and two professional development
models. Reading and Writing, 24, 305–337.
Lonigan, C. J., & Wilson, S. B. (2008). Report on the revised Get Ready to Read! screening tool: Psychometrics and normative information (Final technical report prepared for the National Center on Learning Disabilities). Retrieved from http://getreadytoread.org/images/content/downloads/GRTR_screening_tool/grtrnormingreportfinal-july-2008.pdf
Magnusson, K., & Duncan, G. J. (2016). Can early childhood
interventions decrease inequality of economic opportunity?
RSF: The Russell Sage Foundation Journal of the Social
Sciences, 2, 123–141. doi:10.7758/rsf.2016.2.2.05
Mashburn, A., Justice, L. M., McGinty, A., & Slocum, L. (2016). The
impacts of a scalable intervention on the language and literacy
development of rural pre-kindergartners. Applied Developmental
Science, 20, 61–78. doi:10.1080/10888691.2015.1051622
Mashburn, A. J., Pianta, R. C., Hamre, B. K., Downer, J. T.,
Barbarin, O. A., Bryant, D., … Howes, C. (2008). Measures
of classroom quality in prekindergarten and children’s
development of academic, language, and social skills. Child
Development, 79, 732–749.
McConnell, S. R., Bradfield, T. A., & Wackerle-Hollman, A. K.
(2014). Early childhood literacy screening. In R. Kettler, T.
Glover, C. Albers & K. A. Feeney-Kettler (Eds.), Universal
screening in educational settings: Identification, implica-
tions, and interpretation (pp. 141–170). Washington, DC:
American Psychological Association.
McConnell, S. R., & Wackerle-Hollman, A. K. (2016). Can we measure the transition to reading? General outcome measures and early literacy development from preschool to early elementary grades. AERA Open, 2(3). doi:10.1177/2332858416653756
McConnell, S. R., Wackerle-Hollman, A. K., Bradfield, T. A., &
Rodriguez, M. C. (2011). Individual growth and development
indicators: Early literacy plus. St. Paul, MN: Early Learning
Labs.
McConnell, S. R., Wackerle-Hollman, A. K., Roloff, T. A.
B., & Rodriguez, M. (2015). Designing a measurement
framework for Response to Intervention in early child-
hood programs. Journal of Early Intervention, 36, 263–280.
doi:10.1177/1053815115578559
Missall, K. N., Reschly, A., Betts, J., McConnell, S. R., Heistad,
D., Pickart, M., & Marston, D. (2007). Examination of the
predictive validity of preschool early literacy skills. School
Psychology Review, 36, 433–452.
National Early Literacy Panel. (2008). Developing early literacy:
Report of the National Early Literacy Panel—A scientific
synthesis of early literacy development and implications for
intervention. Jessup, MD: National Institute for Literacy.
Noe, S., Spencer, T. D., Kruse, L., & Goldstein, H. (2014). Effects of a Tier 3 phonological awareness intervention on preschoolers’ emergent literacy. Topics in Early Childhood Special Education, 34, 27–39.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear
models: Applications and data analysis methods (2nd ed.).
Thousand Oaks, CA: SAGE.
Roseth, C. J., Missall, K. N., & McConnell, S. R. (2012). Early
Literacy Individual Growth and Development Indicators
(EL-IGDIs): Growth trajectories using a large, internet-based
sample. Journal of School Psychology, 50, 483–501.
Stipek, D. (2002). At what age should children enter kindergar-
ten? A question for policy makers and parents. Social Policy
Report, 16, 3–16.
Walker, D., Greenwood, C. R., Hart, B., & Carta, J. J. (1994).
Prediction of school outcome based on early language pro-
duction and socioeconomic factors. Child Development, 65,
606–621.
... In the most disadvantaged social strata, the environment does not provide as many and varied linguistic experiences, which does not allow for the same knowledge of words. This means that children who do not have "decoding tools" are unable to consolidate their knowledge at the right time, leading to future learning and language difficulties (Kincaid, McConnell, & Wackerle-Hollman, 2020;Silva, 2014). ...
Article
Full-text available
Adequate language development is central to a child's academic and social development. This study aimed to assess the language of 35 preschool and school-aged Portuguese children in residential care in four social institutions, using the Grelha de Observação da Linguagem (GOL-E) developed by Kay and Santos (2014), a validated tool in Portuguese. The results of the study showed that in terms of language competence, compared to the normative results expected for their age: a) Of the thirty-five children assessed, only three were at or above the 50th percentile; b) Twelve children were between the 5th and 25th percentiles; c) Eight children were in the 10th percentile; d) Of the children between the 11th and 12th percentiles, only one was in the 90th and 75th percentiles; e) Eleven children were in the 10th and 25th percentiles; f) One child at the age of 12 was in the 5th percentile. Most of the children were in percentiles below those expected for their age group. According to the definition of speech and language disorders, we can observe that a group of these children fall under the condition of speech and language disorders, but have not been formally identified in the educational system, putting them at risk of failure in school and life. This study highlights the importance of language assessment and special education services for children living in institutions in Portugal. More studies with this population in these age groups are needed to better understand the language competencies of children living in residential care. Article visualizations: </p
... Most research that has examined language gains (1) considered these gains as the outcome of interest, such as intervention studies or correlational studies of associations between classroom quality or language practices and language learning (e.g., Burchinal et al., 2021;Hadley et al., 2022;Herrera et al., 2021;Rogde et al., 2019); (2) examined longitudinal associations between gains in one aspect of language and other language outcomes (e.g., Donnelly & Kidd, 2021;Rowe et al., 2012); or (3) described children's language gains over time (e.g., Kincaid et al., 2020;Schmitt et al., 2017). To our knowledge, studies examining language change as a predictor of other longitudinal outcomes are very limited; we are aware of a few studies examining this within the area of socioemotional development (e.g., Petersen & LeBeau, 2021) and two studies that used latent change scores (LCSs) to examine bidirectional associations between change in vocabulary and change in reading comprehension in samples of elementary school students (Quinn et al., 2015(Quinn et al., , 2020. ...
Article
In this preregistered study, we used latent change score models to address two research aims: (1) whether preschool‐aged children's language gains, over a year of early childhood education, were associated with later performance on state‐mandated, literacy‐focused kindergarten readiness and Grade 3 reading achievement assessments, and (2) whether gains in language, a more complex skill, predicted these outcomes after controlling for more basic emergent literacy skills. There were 724 participating children (mean = 57 months; 51% male; 76% White, 12% Black, 6% multiple races, and 5% Hispanic or Latino). We found that language gains significantly predicted kindergarten readiness when estimated in isolation (effect = 0.24 SDs, p < .001), but not when gains in letter knowledge and phonological awareness were also included.
... Most research that has examined language gains (1) considered these gains as the outcome of interest, such as intervention studies or correlational studies of associations between classroom quality or language practices and language learning (e.g., Burchinal et al., 2021;Hadley et al., 2022;Herrera et al., 2021;Rogde et al., 2019); (2) examined longitudinal associations between gains in one aspect of language and other language outcomes (e.g., Donnelly & Kidd, 2021;Rowe et al., 2012); or (3) described children's language gains over time (e.g., Kincaid et al., 2020;Schmitt et al., 2017). To our knowledge, studies examining language change as a predictor of other longitudinal outcomes are very limited; we are aware of a few studies examining this within the area of socioemotional development (e.g., Petersen & LeBeau, 2021) and two studies that used latent change scores (LCSs) to examine bidirectional associations between change in vocabulary and change in reading comprehension in samples of elementary school students (Quinn et al., 2015(Quinn et al., , 2020. ...
Preprint
This preregistered study used latent change score models to address two primary aims: (1) whether preschool-aged children’s language gains over the early childhood year were associated with later performance on state-mandated, literacy-focused kindergarten readiness and Grade 3 reading achievement assessments and (2) whether gains in language, a more complex skill, predicted these outcomes after controlling for more basic emergent literacy skills. There were 724 participating children (mean = 57 months, 51% male, 76% White, 12% Black, 6% multiple races, and 5% Hispanic or Latino). The study found that language gains were significantly predictive of kindergarten readiness when estimated in isolation (effect = 0.24 SDs, p < .001), but not when gain in letter knowledge and phonological awareness were also included.
... The early detection and intervention may decrease the number of young children with difficulties in the future. Moreover, assessment systems that support these efforts are increasingly important because can be indicators of the developmental difficulties that may affect emotional and social outcomes for an individual across their life span Haley et al., 2017;Kincaid, McConnell & Wackerle-Hollman, 2020;Rogde, Melby-Lervåg, & Lervåg, 2016). ...
Article
Full-text available
Early literacy development is an indicator of a child’s overall cognitive-linguistic development and affects their academic, social, emotional and behavioural skills. Research suggests that early detection in preschool years can have an important role in the prevention of academic failure. There is a lack of early literacy screening tools for Portuguese preschool children. This study aims to present preliminary data results of the development and validation of the Preschool Early Literacy Screening Tool (Rastreio de Literacia Emergente Pré-escolar- RaLEPE). A pilot study was carried out with a sample of 128 screenings, answered by the parents/caregivers of the Portuguese children in the target age groups. The analysis of results shown the reliability of the tool, with a very good internal consistency for RaLEPE total scale and the different sections. Therefore, preliminary results of this study indicate internal validity of the RaLEPE and confirm this as screening tool usefulness for early intervention childhood, to provide early diagnosis and contribute to early intervention for children with language and learning disorders.
... Researchers have advocated for early childhood teachers to use language and literacy assessment data to inform instruction (e.g., Kincaid, McConnell, & Wackerle-Hollman, 2020; Lonigan, Allan, & Lerner, 2011; Piasta, 2014; Stecker, Lembke, & Foegen, 2008) based on evidence that planned, targeted instruction positively affects student language and literacy outcomes (Connor et al., 2009; Denton et al., 2010; Denton, Fletcher, Anthony, & Francis, 2006; Fuchs, Fuchs, & Stecker, 2010; Lonigan & Phillips, 2016; Simmons et al., 2011; Slavin, Cheung, Holmes, Madden, & Chamberlain, 2013) and that teacher training interventions incorporating the use of language and literacy data have led to improved outcomes for young children (e.g., Al Otaiba et al., 2011; Landry, Anthony, Swank, & Monseque-Bailey, 2009; Lembke et al., 2018; Marsh, Bertrand, & Huguet, 2015; Weiland & Yoshikawa, 2013). As a result, the gathering of information from myriad data sources has become an increasing focus in early childhood education via policy initiatives (Center on Standards & Assessment Implementation, 2016; Quality Compendium, 2020; K. Snow, 2011; U.S. ...
Article
Early childhood research and policy have promoted the use of language and literacy assessment data to inform instruction. Yet, there is a limited understanding of preschool teachers' data practices and sensemaking, particularly when considered from the perspectives of practicing teachers. In this multicase study, we used a phenomenological approach to generate a theory about preschool teachers' data practices in relation to supporting children's language and literacy outcomes. Twenty preschool teachers participated in a series of three observations, planning interviews, and stimulated recall interviews designed to tap their pedagogical reasoning and data use practices. The framework that emerged through iterative within- and cross-case analyses comprised three major elements (what teachers knew, how they knew it, and the way they used the data) and suggested that teachers could be characterized into three data use profiles (data gatherers, in-the-moment data users, and integrated data users). Findings indicate that (a) teachers may understand data differently than researchers or policymakers do, (b) teachers' understanding of data sources goes beyond traditional conceptualizations, (c) teachers' data use practices fall along a continuum, and (d) teachers need better support in moving from simply doing assessment to using data in ways that are meaningful for practice and children's language and literacy outcomes.
... With increased demands on educators' time, implementing strategies that are easy to use, quick to implement, and effective in producing desired student outcomes is imperative. Recently, researchers have focused on incorporating technology into DBDM (e.g., cloud-based systems: Buzhardt et al., 2010; Johnson, 2017; app-based tools: Kincaid et al., 2018). ...
Article
Full-text available
Assessment is prevalent throughout all levels of education in the United States. Even though educators are familiar with the idea of assessing students, familiarity does not necessarily ensure assessment literacy or data-based decision making (DBDM) knowledge. Assessment literacy and DBDM are critical in education, and in early childhood education in particular, for identifying and intervening on potential barriers to students' development. Assessment produces data and evidence educators can use to inform intervention and instructional decisions to improve student outcomes. Given this importance and educators' varying levels of assessment literacy and DBDM knowledge, researchers and educators should be willing to meet in the middle to support these practices and improve student outcomes. We provide perspective on half of the equation (how researchers can contribute) by addressing important considerations for the design and development of assessment measures, implications of these considerations for future measures, and a case example describing the design and development of an assessment tool created to support DBDM for educators in early childhood education.
Article
Engaging, focusing, and persisting in the completion of tasks are among the skills needed for school success. Tracking whether a child is learning cognitive problem-solving skills is essential for knowing whether they are acquiring skills important for development and school readiness and, if not, how they are responding to early intervention. Use of the Early Cognitive Problem-Solving Indicator (EPSI) was documented by data for 2,614 children (6–42 months of age) collected by early childhood staff from 45 programs. Results indicated that the EPSI was (a) scalable across programs, assessors, and assessment occasions; (b) reliable; (c) sensitive to growth over months of age; (d) reflective of a dynamic continuum of skills within and across skills over time; and (e) moderated by children's disability status but not gender or home language. Implications for research and practice are discussed.
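"Sensitive to growth over months of age" is typically established by modeling repeated scores as a function of age, with child-level random effects to handle the nesting of occasions within children. The sketch below shows one common way to fit such a growth model in Python with statsmodels; the column names and data file are hypothetical, and the EPSI analyses themselves may have used a different model.

```python
# Sketch of a mixed-effects growth model: does the score rise reliably
# with age in months when each child gets a random intercept and slope?
# Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("epsi_scores.csv")  # columns: child_id, age_months, score

model = smf.mixedlm(
    "score ~ age_months",      # fixed effect: average growth per month
    data=df,
    groups=df["child_id"],     # repeated assessments nested within child
    re_formula="~age_months",  # random intercept and slope per child
)
result = model.fit()
print(result.summary())  # a positive age_months coefficient = growth
```

Moderation claims like the disability-status finding would then correspond to adding a grouping variable and its interaction with age_months to the fixed part of the formula.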
Article
Full-text available
This study evaluated the extent to which existing measures met standards for a continuous suite of general outcome measures (GOMs) assessing children’s early literacy from preschool through early elementary school. The study assessed 316 children from age 3 years (2 years prekindergarten) through Grade 2, with 8 to 10 measures of language, alphabetic principle, phonological awareness, and beginning reading. We evaluated measures at each grade group against six standards for GOMs extracted from earlier work. We found that one measure of oral language met five or six standards at all grade levels, and several measures of phonological awareness and alphabetic principle showed promise across all five grade levels. Results are discussed in relation to ongoing research and development of a flexible and seamless system to assess children’s academic progress across time for effective prevention and remediation, as well as theoretical and empirical analyses in early literacy, early reading, and GOMs.
Article
This paper considers whether expanding access to center-based early childhood education (ECE) will reduce economic inequality later in life. A strong evidence base indicates that ECE is effective at improving young children's academic skills and human capital development. We review evidence that children from low-income families have lower rates of preschool enrollment than their more affluent peers. Our analysis indicates that increasing enrollments for preschoolers in the year before school entry is likely to be a worthy investment that will yield economic payoffs in the form of increased adult earnings. The benefits of even a moderately effective ECE program are likely to be sufficient to offset the costs of program expansion, and increased enrollment among low-income children may reduce later economic inequality.
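The core of the benefit-cost claim above is arithmetic: discounted lifetime earnings gains must exceed the per-child cost of the program year. A toy calculation makes that logic concrete. Every number below is a hypothetical placeholder chosen only to illustrate the structure of the argument; none is a figure from the article.

```python
# Toy benefit-cost calculation for ECE expansion. All parameter values
# are hypothetical placeholders, not estimates from the cited paper.

def present_value(annual_gain, years, discount_rate, start_year):
    """Discounted value of a gain received each year of a working career."""
    return sum(
        annual_gain / (1 + discount_rate) ** (start_year + t)
        for t in range(years)
    )

program_cost = 9000.0         # hypothetical per-child cost of one pre-K year
annual_earnings_gain = 1000.0 # hypothetical adult earnings bump per year
career_years = 40             # working years over which the gain accrues
discount_rate = 0.03          # a commonly used social discount rate
years_until_work = 16         # from roughly age 5 to labor-market entry

benefit = present_value(annual_earnings_gain, career_years,
                        discount_rate, years_until_work)
print(f"PV of benefits: ${benefit:,.0f} vs. cost: ${program_cost:,.0f}")
print(f"Benefit-cost ratio: {benefit / program_cost:.2f}")
```

With these placeholder values the ratio comes out above 1, which is the shape of the argument the abstract makes: even modest, long-deferred earnings gains can offset a one-time program cost once they accumulate over a full career.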
Article
In this article, we delineate essential commonalities and distinctions between two approaches to measurement for instructional decision making. Specific subskill mastery measurement is explained using a case study, and salient features of this predominant model are described. Then, a major contrasting approach, the general outcome measurement model, is explained; a curriculum-based measurement case study is provided to illustrate general outcome measurement; and the essential features of this alternative model are reviewed. Finally, we describe how general outcome measurement represents an innovative approach to assessment by bridging traditional and contemporary paradigms.
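The defining move of general outcome measurement, as contrasted above with subskill mastery measurement, is charting repeated brief probes of a single long-range outcome and reading the trend line against an aim line, rather than checking off mastered subskills. As a rough illustration of that computation (with entirely hypothetical weekly scores):

```python
# Sketch of the core general outcome measurement computation: fit a
# trend line to repeated brief probes of one general outcome and compare
# the observed slope to the growth needed to reach a goal. The scores
# are hypothetical (e.g., sounds or words correct per minute).
import numpy as np

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8])
scores = np.array([12, 14, 13, 17, 18, 20, 19, 23])

slope, intercept = np.polyfit(weeks, scores, 1)  # OLS trend line

goal_week, goal_score = 20, 45
aim_slope = (goal_score - scores[0]) / (goal_week - weeks[0])

print(f"Observed growth: {slope:.2f}/week; needed: {aim_slope:.2f}/week")
if slope < aim_slope:
    print("Trend below aim line -> consider changing instruction.")
else:
    print("Trend at or above aim line -> continue current instruction.")
```

The instructional decision rule in the last lines is what ties the measurement model to intervention: a flat or shallow trend signals a needed change long before a year-end test would.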
Chapter
While response to intervention (RTI) is in widespread use in K–12 programs, it is still an emerging practice in programs serving preschool-aged children. In 2008, the Institute of Education Sciences funded the Center on Response to Intervention in Early Childhood (CRTIEC): (1) to conduct a focused program of research to develop and rigorously evaluate and replicate intensive interventions for preschool language and early literacy skills and (2) to develop and validate an assessment system linked to these interventions. This chapter briefly describes some of the differences between preschool and K–12 educational settings and examines some of the challenges to implementing RTI in light of these contextual differences. Lessons learned and implications derived from a multisite study of the quality of early literacy in tier 1 across preschool programs are outlined along with programmatic research carried out to develop tier 2 and tier 3 language and literacy interventions, and measures for identifying and monitoring the progress of children needing additional tiers of support in these interventions. Also described are a specific investigation of children who are dual language learners and annual surveys of states showing a growing trend in the implementation of RTI programs and policies for preschool-aged children.
Article
Many preschool language-focused interventions attempt to boost language and literacy skills in young children at risk in these areas of development, though the long-term effects of such interventions are not well-established. This study investigated kindergarten language and reading skills, specifically the subcomponents of vocabulary, decoding, and reading comprehension, for children exposed to the language-focused intervention Learning Language and Loving It (LLLI; Weitzman & Greenberg, 2002) during preschool. End of kindergarten skills were examined, comparing children whose teachers implemented LLLI (n = 25) or business-as-usual (BAU) instruction (n = 24). Hierarchical linear modeling results showed the LLLI intervention to have significant effects on children's decoding and reading comprehension in kindergarten for children who had high levels of language skill at preschool, as compared to their counterparts in the BAU condition. Study findings therefore indicate that preschool language-focused interventions may primarily benefit children with higher skill levels. This suggests the need to explore avenues for addressing the needs of children with relatively low language skills during preschool and the eventual transition to reading.
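The moderation finding above (intervention benefits concentrated among children with higher preschool language) corresponds to a treatment-by-baseline interaction in a two-level model with children nested in classrooms. The sketch below shows one common way such a model is specified in Python; the column names and data file are hypothetical, and this is not the authors' code.

```python
# Sketch of a two-level model behind a treatment-by-baseline moderation
# finding: children nested in classrooms, with a condition x baseline
# language interaction. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("kindergarten_outcomes.csv")
# columns: classroom_id, llli (1 = intervention, 0 = BAU),
#          prek_language (baseline), decoding (end-of-K outcome)

model = smf.mixedlm(
    "decoding ~ llli * prek_language",  # main effects + interaction
    data=df,
    groups=df["classroom_id"],          # random intercept per classroom
)
result = model.fit()
# A positive llli:prek_language coefficient indicates larger
# intervention effects for children with higher baseline language.
print(result.summary())
```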
Article
Read It Again (RIA) is a curriculum for pre-kindergarten (pre-K) classrooms that targets children's development of language and literacy skills. A cluster randomized trial was conducted in which 104 pre-K classrooms in the Appalachian region of the United States were randomly assigned to one of three study conditions: Control (n = 30), RIA only (n = 35), or RIA with expanded professional development components (n = 39). This study tested the impacts of RIA on six measures of children's (n = 506) language and literacy development. There was a significant positive impact of RIA on print concepts, and the impacts of RIA on print knowledge and alphabet knowledge were significantly stronger in classrooms with lower-quality literacy instruction. There were no impacts of RIA on children's language development and no impacts of the professional development components. Implications of the findings for implementing scalable, effective strategies to improve key school readiness outcomes for children from economically-disadvantaged backgrounds are discussed.