Previous research points to teachers' difficulties in interpreting reading assessment data with regard to instructional decisions. This study explored Swedish primary teachers' use of assessments and recommendations to target individual needs. Eight teachers participated in a reading program and were interviewed in focus-group meetings. The analysis of teacher narratives stemming from assessment use resulted in three themes: Awareness of student learning; Changes in the organization of teaching, but not regarding individualized content; and Strengthened teacher role, but modest professional growth. The themes indicated that the teachers had become aware of their students' learning, had based their teaching on informed decisions, and showed initial professional growth. However, the assessment details and the recommendations allowed for more adjustments than were evident in the teachers' narratives. The results point to the relative difficulty of targeting individual needs in general classroom education, and to the challenges of changing teaching practices.
There is a global as well as a national Swedish desire among educators, researchers and policy makers to increase students' reading performance, both by preventing reading difficulties and by stimulating reading development for those who advance quickly. The aim of this study is to explore primary school teachers' use of assessment data, including teaching recommendations, in order to target students' individual needs.
There is now evidence that non-fluent readers need to practice the components underlying word recognition, for example phonological awareness, letter knowledge and letter-sound correspondence (Foorman & Torgesen, 2001; Hatcher, Hulme, & Ellis, 1994; Wolff, 2016), whereas fluent readers need to focus on reading strategies that promote reading comprehension, including how to understand pragmatic facets of written language (Antoniou & Souvignier, 2007; Guthrie et al., 2004). Teaching based on assessments of these components is not a new phenomenon; it has characterized the 'literacy movements' throughout the modern school era. A convincing body of evidence shows that providing teachers with knowledge of their students' level of reading achievement and growth in reading affects student reading outcomes (Förster & Souvignier, 2015; Stecker, Fuchs, & Fuchs, 2005). In this tradition, the challenge for the teacher is to become knowledgeable about student progress and to act in accordance with student needs relative to learning goals (Hoogland et al., 2016).
Response to intervention (RTI) is perhaps the most notable framework regarding the use of assessments and teaching adjustments. In this tradition, the goal is to prevent early reading difficulties by providing increasingly adjusted and more intense reading instruction interventions (Vaughn & Fuchs, 2003). Starting with Tier 1, which usually covers ordinary classroom teaching, students who are behind in the reading developmental process are detected through assessments, which can vary in scope, ranging from standardized assessments of basic skills to more comprehensive efforts to map reading components. Students who respond poorly to this "generally effective" Tier 1 classroom instruction are transferred to additional interventions, within a framework that usually comprises three tiers in total. While research on Tier 2 and Tier 3 interventions is comprehensive (Grosche & Volpe, 2013; Tran, Sanchez, Arellano, & Lee Swanson, 2011), and includes close monitoring of student progress in relation to more intense instruction conducted in small groups and individual teaching, Tier 1 procedures conducted as part of ordinary classroom education are often not explicitly evaluated in RTI research (Lam & …).
However, there are strong reasons to also improve ordinary classroom education, both to better detect those who need additional support and to facilitate reading skills among typically developing students. If teachers do not provide generally effective classroom instruction, this will especially affect students in need of more support (Grigorenko, 2008).
Using assessment data to inform teaching is also a critical aspect of research on data-driven decision making (DDDM). This tradition covers both how teachers manage to use assessments for instructional decisions (Hoogland et al., 2016; Means, Chen, Debarger, & Padilla, 2011; Schildkamp & Kuiper, 2010) and the effect of providing targeted instruction to improve students' reading skills (Förster & Souvignier, 2015). The effect of targeting instruction based on student reading data has, for example, been studied by Connor et al. (2009). They found that the effect of instruction upon differentiated reading profiles depended on the match between the student's skill level and the type of instruction used: "precisely providing recommended amounts of instruction, which change systematically over the course of the school year, as well as in response to students' changing skills, is associated with stronger student reading outcomes" (p. 16). This means that there is great potential in providing teachers with matched teaching recommendations in order to improve learning outcomes. The concept of differentiation used here is in line with the definition proposed by Algozzine and Anderson (2007), which involves taking into account students' differences, how they learn and, possibly, the individual's interests. One of the most important aspects for teachers who work with differentiation is to create learning environments that do not exclude any child. Research in this field also points to teachers' difficulties in using assessments for instructional decisions. This could especially be the case for Swedish teachers, as systematic assessment use, directly and explicitly linked to instruction, is a relatively uncommon practice in Swedish schools. If teachers are not provided clear recommendations of what to consider for an individual student or for the class, assessment data could become redundant, student information may not be properly or responsibly used, or information may be used only for decision making at the classroom level (Lai & McNaughton, 2016; Schildkamp & Kuiper, 2010). Teachers need to be able to adequately interpret assessment data and transform it into instructional decisions (Gersten et al., 2008) if the potential demonstrated by Connor et al. (2009) is to be realized. A study by van den Bosch, Espin, Chung, and Saab (2017) showed that, although teachers may be good at understanding what assessment data reveal (i.e., at reading the data), they often failed to link the data to instruction, that is, to what the student should practice and what the teaching should consist of. Another study, by Zeuch, Förster, and Souvignier (2017), showed that teachers tend to skip much of the assessment information when provided with thorough assessment data, focusing more on how student achievement related to grades than on how to structure the information into instructional needs.
A review by Wayman, Wallace, Wiley, Tichá, and Espin (2007) of using CBM (Curriculum-Based Measurement) reading data to promote reading development shows that little is known about teachers' understanding of how to use reading test scores and how they use that information to make instructional decisions in the classroom. In fact, most international studies show that teachers lack skills in using data adequately (Hoogland et al., 2016; Means et al., 2011; Schildkamp & Kuiper, 2010) and that teachers seldom have access to support regarding how to use data for instruction (Davidson & Frohbieter, 2011). There is little reason to believe that the situation would be any different for Swedish teachers. Dunn, Airola, Lo and Garrison (2013) argue that we need to better understand how teachers can be supported in the decision-making processes of teaching considerations.
LegiLexi, a Swedish assessment and instruction program initiative
Research has provided us with evidence of successful educational reading instruction (Cunningham & O'Donnell, 2015; Gustafson, Nordström, Andersson, Fälth, & Ingvar, in press; Torgesen et al., 2001). However, much remains to be learned about how teachers in Sweden make use of detailed information about student reading development and teaching recommendations, and how they use that information to promote future learning. In this study, we explored through teachers' narratives how teachers can use such information and be supported in these processes, using as an example the changes made in their teaching through the educational reading program LegiLexi (LL henceforth), a Swedish program run pro bono and free of charge for schools, with the aim of developing all students' reading ability (www.LegiLexi.org). We also investigated how the teachers were professionally affected by being supported by the systematic assessment tool.
The LL-program was developed as a result of public and academic debates on declining reading performance among Swedish youth, evident from several PISA reports (OECD, 2010, 2013). The program was built on the supposition that recurrent monitoring of the multiple components underlying reading development, along with supporting teachers to act upon students' reading performance patterns, positively affects reading development for all students in a teacher's class. In order to compensate for the identified difficulties associated with teachers' use of assessment data (Dunn, Airola, Lo, & Garrison, 2013; Hoogland et al., 2016; Schildkamp & Kuiper, 2010; Zeuch et al., 2017), the LL-tool was explicitly designed to provide teachers with reading assessments, and with individual and class teaching recommendations with regard to learning goals for Grade 3 and end-of-term performance in Grades 1 and 2 (see the method section for details of the recommendations). Reading proficiency data were available at both student and class level to inform teachers how to plan instruction at each grade level. Assessments are carried out three times a school year, at the beginning of the semester, at the middle of the year and at end-of-term, and as such correspond to the features of Tier 1 procedures within the RTI framework.
Furthermore, the LL-tool, which was developed by reading researchers, teachers and special education teachers, is based on the Simple View of Reading model (SVR), Response to Intervention (RTI) and Formative Assessment (FA) (Black, 2015; Black & Wiliam, 1998). Both FA and RTI are forward-looking assessment approaches, aimed at developing students' learning and at supporting teachers to teach forward (Andersson, Löfgren, & Gustafson, 2019). The LL-tool draws on the systematized testing of RTI theory and on the looser assessment practice of FA, in which teachers are freer to make decisions about teaching efforts. Teacher knowledge of these approaches, including their merits and limitations, can be used to develop an efficient assessment practice. In this study, we refer to a forward-looking teaching approach to using assessments, aimed at letting teachers make more refined assessments by combining elements of FA (i.e., forward-looking approaches in relation to learning goals) and RTI (i.e., using test results and comprehensive assessments, including recommendations and individual adjustments).
In a previous study (Gustafson, Nordström, Andersson, Fälth, & Ingvar, 2019), we investigated the effect of the program on students' reading skills in Grade 1. Teachers were randomly assigned to three conditions: 1) teachers having full access to the program, including the teacher course, profiles and recommendations; 2) teachers having access to profiles and recommendations only (and not the course material); and 3) no access, which served as a control group. Outcomes were positive for students participating in the program. The largest gains in word reading and reading comprehension were obtained in the full-access group, as assessed between the initial assessments at school start and the third assessment at the end of the semester.
The present study
In the present study, we included teachers from the whole of primary school (Grades 1–3) and explored how the teachers made use of the program guidelines, as we do not know enough about how teachers in Sweden make use of such procedures for instructional change in the classroom, or how teachers think about and act regarding assessment use for targeting individual needs. To study how teachers used the tool, we chose focus group interviews as the data collection method (Breen, 2006), asking the teachers to tell us about their work with the tool. This includes studying what information teachers receive and what support the tool provides. This method also enabled us to steer the conversation with interview questions, listen to the teachers' stories and take part in the in-depth conversations that arose between the teachers based on their experiences as LL users. This study took place after the second assessment period during the pilot year of implementing LL. Since this way of teaching was new to the participating teachers, we were also interested in any self-reported change in professional growth, a known effect of teacher participation in interventions that aim to increase instructional effectiveness (Van der Scheer & Visscher, 2016; Vescio, Ross, & Adams, 2008). Adapted teaching that corresponds to individual needs has been found to be one of the most advanced skills of teachers' professional competence, and is usually mastered only after several years of teaching (Van de Grift, 2007). Studies show, however, that it is possible to support teachers to achieve effective instructional change (Van den Hurk, Houtveen, & Van de Grift, 2016). This study aims to explore what can be learned when primary teachers use a comprehensive assessment tool, including recommendations and teaching material, in their own teaching practice.
The following research questions were asked. For what purposes were assessments of students' reading ability and the accompanying recommendations used (RQ1)? How did the teachers use this information to change their teaching practice (RQ2)? How were the teachers professionally affected by being supported by a systematic assessment tool (RQ3)?
To recruit active LL-teachers, the project leaders of LL were contacted, and they provided us with teachers who were also responsible for the teaching of reading in their respective classes. Initially, twenty-five teachers were asked to participate in the study, of whom ten responded positively. The main reason given for not volunteering was high workload. At the time of the data collection, one teacher reported being ill, and a second teacher was unable to attend for other reasons. Because of the challenge of recruiting active LL-teachers who would fit the requirements of the study, we decided to carry through focus group number two with fewer teachers, and to let the quality of the collected data, rather than the number of participants, determine its usability for the analyses.
Eight female teachers working in Grades 1–3 at different schools participated in the study. All teachers had entered the LegiLexi program the same year the program was introduced to selected schools. Years of teaching varied: one teacher had only two years of teaching experience, whereas the remaining teachers had 10–22 years of experience, as can be seen in Table 1.
Typical class size was between 20 and 28 students, although one teacher had only 13 students and one teacher worked in a class of 39 students (who were, however, divided into smaller units during different school activities). As such, the teachers were both homogeneous as a group (as all shared the experience of being part of the program) and heterogeneous (in terms of grade-level teaching and teaching experience). Some of the teachers had joined the program together with colleagues, working either in the same or in other classes, although none was accompanied by a colleague in the interviews. The schools were located in several municipalities adjacent to a large city in Sweden, with variation between the schools in terms of size and socioeconomic status. The teachers had joined the program well before the start of the autumn semester the previous year. Six to eight months had passed between the first assessment sessions at school start (in August) and the interviews (which were conducted in late January and in April). During this time the teachers had participated in a mandatory teacher course (described below), had carried out two sessions of assessments (with provided reading profiles and recommendations), and had been encouraged to use a variety of teaching methods and reading instructions found in the course material for teachers. The teachers were all familiar with the aim of the program: to improve all students' reading development by using the program features to enhance the quality of teaching.
Background: LegiLexi (https://www.LegiLexi.org/) evolved as a program to counteract declining reading performance in Swedish schools, and is intended to be used alongside ordinary teaching to support teachers, with the explicit aim of making all students better readers. The program includes an assessment tool used to promote informed, data-driven decisions about teaching with regard to students' learning needs, and teaching material intended to promote teachers' professional growth by improving competence in teaching early literacy. The LL-tool provides assessments and recommendations for teachers, along with material for each area of recommendation. It does not, however, provide specific instructional sessions for each recommendation. The material consists of already established evidence-based instructions and theories of reading and forward-looking assessment for teachers to use and develop in their practice (see also Teaching material).
The assessment tool: The assessment tool was, at the time of this study, a pen-and-paper version, although the teachers received computer-generated compilations that contained test scores, reading profiles and teaching recommendations. Assessments are carried out three times per school year (August, December and May) during each year of primary school (Grades 1–3). The teachers followed an extensive test manual (Fälth, Gustafson, Kugelberg, & Nordström, 2017). The assessment tool provided differentiated reading profiles on single components of reading proficiency and aggregated scores in line with the Simple View of Reading model. The differentiation makes the different foci of the reading profiles visible to the teacher, while also supporting the whole of the student's reading development (according to the Simple View of Reading). Scores from single components were aggregated to form factors higher up in the model, namely decoding and reading comprehension, with successively higher-order skill assessments in decoding and comprehension as the student advances through primary school.
Test scores for individual students were available for each of the nine tests, divided into pre-reading skills and automatic word decoding, language comprehension and reading comprehension. This information was also embedded in the class profiles, where each reading ability and aggregated score was displayed for the entire class in the form of matrices. Visual representations of growth in reading ability over time were also available, including graphics that displayed the gap between current achievement and the expected Grade 3 learning goals. Each reading ability, as well as the aggregated scores, was calculated on a six-point scale (where scale level 6 represented achievement beyond the learning goals for Grade 3). In comparison with most other tests found in Swedish schools, the assessment tool was aimed at targeting the achievement level for Grade 3 and end-of-year performance in Grades 1 and 2. Thus, the assessment of reading proficiency was not relative to other students, but was used against benchmarks developed to be in line with the curriculum and learning goals for primary school.
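The aggregation described above can be sketched in code as follows. This is a minimal illustration only, not the LL-tool's actual implementation: the component names, the use of a rounded mean, and the example scores are assumptions made for the sketch. The text specifies only that single-component scores on a six-point scale are combined into the two SVR factors, decoding and reading comprehension.

```python
# Hypothetical sketch: aggregating single-component scores (1-6 scale)
# into the two Simple View of Reading factors. Component names and the
# rounded-mean rule are invented for illustration.

DECODING_COMPONENTS = ["letter_knowledge", "phonological_awareness", "word_decoding"]
COMPREHENSION_COMPONENTS = ["vocabulary", "language_comprehension", "reading_comprehension"]

def aggregate(scores: dict, components: list) -> int:
    """Average the available component scores and round to the 1-6 scale."""
    values = [scores[c] for c in components if c in scores]
    return round(sum(values) / len(values))

# One (invented) student profile: strong language skills, weak decoding.
student = {
    "letter_knowledge": 5, "phonological_awareness": 4, "word_decoding": 2,
    "vocabulary": 5, "language_comprehension": 5, "reading_comprehension": 3,
}

decoding = aggregate(student, DECODING_COMPONENTS)            # rounded mean of 5, 4, 2
comprehension = aggregate(student, COMPREHENSION_COMPONENTS)  # rounded mean of 5, 5, 3
```

A class-level matrix, as described above, would then simply collect these per-student factor scores for every student in the class.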
Teaching recommendations: The recommended focus areas for individual students were computer-generated and added to the reading profiles. They were generated according to logic derived from the reading proficiency assessments. For example, students who had not yet achieved automatized decoding were mainly, but not only, recommended to practice that component. Eight distinctly different recommendations followed from the assessments, based on combinations of the assessed levels on the decoding, reading comprehension and language comprehension measures. The most common recommendation in the program focused on decoding, as this skill constitutes the basis of reading in the early school years, but other recommendations also targeted fluency, vocabulary and reading comprehension.
In the following section, we provide two examples of these recommendations. In the first example, the teacher was informed of a student who showed strengths in language comprehension but needed to develop automatized word reading.
Focus area for the student: Alphabetic decoding
Individual recommendation: The student has a well-developed understanding of language, but needs to develop the decoding ability. Focus on developing the student's word decoding ability in order to get the student to start practicing his/her own reading.
In the second example, the student had not yet developed reading skills and the teacher was encouraged to focus on pre-reading skills such as letter knowledge and phonological awareness.
Focus area for the student: Phonological awareness
Individual recommendation: The student needs to develop both decoding skills and language comprehension. The teacher can promote both language and comprehension development in teaching where the student does not have to read, such as listening to oral reading and participating in film discussions. But the focus should now be on developing the student's pre-reading skills and decoding in order to get the student started with his/her own reading.
Teaching material: Teaching material, in the form of texts and articles, was available as part of the program and covered topics relevant both to preventing reading difficulties and to teaching rapidly advancing readers. The material was written by Swedish researchers and educators of reading instruction and reading development. Parts of the material were included in the mandatory course that the teachers participated in. The material was in line with the recommendations, covering for example how to teach letter knowledge, fluency or reading comprehension. It also contained chapters on, for example, reading development, motivation, and special needs, as well as didactical content (altogether 12 chapters). The teachers were encouraged to make use of the material and adapt their teaching accordingly; however, no particular method or instruction, such as a manual for each reading skill, was provided (e.g., cooperative methods, repeated reading or peer-assisted learning). The material instead consisted of a set of educational tools the teachers could choose from.
Focus groups interviews
Preparations and context of the study
Interest in the study arose in connection with LL's implementation process, and at that point the research questions that formed the basis of the study were formulated. The research questions addressed how teachers used the assessment tool, recommendations and teaching aids for forward-looking purposes to improve classroom instruction. The choice of focus group interviews, conducted twice, over other methods was justified by their usefulness for illustrating the many individual perceptions that can be found within a group of teachers with a shared teaching experience. Since we intended to study teachers' use of a specific teaching tool that was relatively new to them, with the implementation process only just started, we considered it beneficial for the study to let the teachers talk about and discuss their experiences together in a social environment. It has been argued (e.g., by Breen, 2006) that the group dynamics in focus groups prompt participants to compare, reflect and develop reasoning about their practice over and above what can be achieved in a one-to-one setting. In addition, by conducting two subsequent interviews with different participants (each teacher participated in only one of the two interviews), we believe we were able to obtain as much information as needed to better understand the phenomenon under investigation.
Focus group interviews procedures
Both focus group interviews were conducted at approximately midterm of the spring semester, and took place in a meeting room located in a larger Swedish city, chosen to make it easy for the teachers to travel to the interview sessions. The room contained a conference table, on which the recording equipment was placed. Before the moderator (second author) began to actively engage the teachers on the topics under investigation, the teachers were introduced to, and reminded of, the aim of the program and their work with it, so that they would be "warmed up" and better prepared to share their experiences in the interviews, both retrieved retrospectively and from ongoing activities. Conducting the interviews in this manner introduced the teachers to the purpose of the meeting, in line with Krueger's (2000) guidelines for focus-group questioning (i.e., moving from general to specific questions).
The moderator followed a semi-structured interview guide with broad questions developed from the overall aim of the study and the research questions. The order of the interview questions varied depending on the path that the conversation took. The interview guide's content, described below, was used, and additional follow-up questions were asked when the moderator needed to clarify something or to deepen an issue. As we were interested in how the program support was used and how it affected the teachers, the semi-structured guide contained the following headings: teachers' understanding of forward-looking uses of assessments; uses of tests and reading profiles; adjusted teaching in accordance with individual and class recommendations; and limitations of using reading profiles to ensure high-quality teaching. To guard against the possibility that the teachers' self-reports did not reflect what they were actually doing in the classroom, the moderator encouraged the teachers to give concrete examples of how they used the program in their practice, and asked follow-up questions on, for example, supportive or hindering factors regarding the program usage.
The two interview sessions differed in terms of sound-recording method and number of participants. When asked to participate in the study, all teachers had agreed to be filmed with a computer. The purpose of filming the interviews was to facilitate the transcription work; however, it turned out that the transcription work did not depend on visual material. In addition, the quality of the film sound was low, which made the transcription work unnecessarily long. Therefore, only sound recorded with a dictaphone was used in the second interview. There was no evidence of any negative impact of using the sound-recording method only. The verbatim material, that is, the recorded material, along with notes on nonverbal behavior, such as laughter and indications of agreement or disagreement between participants, was transferred and transcribed into written material.
Both groups discussed and shared experiences, difficulties and good examples. The moderator felt that the participating teachers were engaged in the interviews and that the atmosphere was open and safe. This was confirmed after the interviews, when the participating teachers expressed that they had learned a lot during the interview, partly by putting their own thoughts into words but also by listening and talking to other teachers who shared their experiences of working with the LL-tool.
The reflections that emerged from the interviews were analyzed by all four authors following the guidelines of content analysis and peer debriefing (Creswell & Miller, 2000; Creswell & Poth, 2017; Lincoln & Guba, 1985) and were sentence-coded by the first author. To guard against researcher bias, the first author's codes were compared with the peer debriefing, using constant comparison, until themes were formed. In the analytic process, we had an inductive approach but searched for patterns in how the teachers explicitly stated changing beliefs about their practice and how they used the program in their teaching. We also used the six-step procedure of thematic analysis provided by Braun and Clarke (2006), meaning that the analytic process was recursive (i.e., searching, reviewing, defining and ultimately naming the themes). In addition, the research questions and the interview guide by and large steered much of the possible outcomes of the interviews, as well as the analytical process. However, as the interviews were conducted in a fairly semi-structured way, there were plenty of opportunities for the teachers to address topics that the research team had not thought about before the interviews.
The way the discussions developed differed between the two groups; however, the coded statements, as well as the themes, were found in both focus group meetings. Some reflections were coded as belonging to several different themes. The teachers were for the most part in agreement with each other. Teachers G and H (who, by the time of this study, were teaching a Grade 2 and a Grade 1 class respectively) made the most detailed contributions in terms of developed, quotable reasoning, although all the other teachers contributed to the interviews by interacting, developing ideas and sharing their reflections. The analysis of the two focus group interviews resulted in three themes: Awareness of student learning; Changes in the organization of teaching, but not regarding individualized content; and Strengthened teacher role, but modest professional growth.
This section describes the three themes extracted from the focus group interviews. Table 2 presents the themes with corresponding quotes to illustrate each theme.
Theme 1: Awareness of student learning
According to the reflections that emerged during the interviews, several teachers reported having discovered their students' skill levels and reading progression in much more detail through access to the reading profiles. The test results made their students' learning visible to the teachers, and they reported that recurrent testing not only confirmed what they intuitively knew before (but in less detail), but also gave them valuable knowledge of how differently students progress, enabling them to identify each student's development. One of the teachers stated: "I am strengthening both those students that are skilled, who already has a developed language, and those who need to practice more. They have received an educational form that provides improvements for the students."
The teachers realized the link between how they planned and carried out teaching with regard to students' learning progression and how that related to the future Grade 3 learning goals, illustrated with these words: "The advanced readers become motivated to learn, that they can get to the next level by working harder and to increase their efforts, so that they can reach the target level for Grade 3 early on."
The teachers said that they now had a tool to meet more students' needs, and that they believed they were in a position to positively influence their students' development, as they were able to monitor both rapidly advancing readers and those who progressed more slowly. In addition, the assessments gave the teachers a sense of relief, as the profiles gave them insight into, and control over, the process of bringing the students toward the learning goals. One of the teachers expressed this as follows: "It has been truly informing to having access to the target levels between test occasions, especially as I have students with difficulties learning how to read. After the second assessment, it turned out that they have improved their scores on vocabulary and language comprehension. Then you can catch your breath and know that the students are underway." Thus, the reflections acknowledged that the teachers were viewing students' reading development in relation to their own teaching in a systematic way, and that they were aware of individual students' learning needs.
Theme 2: Changes in the organization of teaching, but not regarding individualized content
The second theme illustrated the teachers' use of differentiated teaching, such as flexible student groupings, which also changed during the course of the school year (i.e., dynamic teaching). They reported consciously focusing on matching groups of students' achievement levels with various differentiated forms of teaching, and on ways of activating low and non-achievers in reading activities, as they emphasized that no student should be left behind, expressed in this quote: "In part based on the recommendations, we use skill level groupings. For students having trouble with silent reading (because they usually just browse through the books), we now have structured sessions of oral reading, so that students actually read during sessions". The focus of these activities was to improve comprehension and, at the same time, to practice word recognition skills through shared oral reading or by discussing books, as expressed in this quote from a Grade 1 teacher: "We created a group of nine students who had the same book. They got to read in groups of three students. Then they went together and discussed the book".
What did not come out much in the interviews was how the teachers planned instruction in specific focus areas for individual students, for example teaching pre-reading skills such as phonological awareness, or improving fluency through word recognition exercises, which the assessments and recommendations from the LL-tool allow. As such, the teachers displayed awareness of making informed decisions for differentiated teaching at the group level, affecting the organization of teaching, but the recommended teaching of single components for individual students was not taken up.
Theme 3: Strengthened teacher role, but modest professional growth
The third theme was that the teachers had been strengthened in their role as teachers: they emphasized increased knowledge of their students' reading progression, which led to an increased sense of accountability in relation to parents, including in communication, and they experienced being more confident and entrusted as teachers, as shown in this quote: "Using the recurrent assessments enhances my profession when I tell the parents about their children's reading development. It gives the parents a sense of security. They witness progression, even if they are worried". They also put forth an increased accountability towards other professionals at school, such as special education teachers, as they were now better informed about students' needs than before using the program. One teacher said: "The special education teacher in my school said I was so knowledgeable about the students that it has helped the special education teacher to identify students who have difficulties".
Nevertheless, although the teachers reported being strengthened in their role, they can be considered to be in the initial phases of efficient assessment use, as they pointed to its potential rather than having developed a professional practice around its use, as shown in this quote: "Assessments can be the basis for my own teaching, while it also can be used to distribute school resources differently and to organize ourselves better, when we know how different the classes are". As such, the teacher narratives revealed that they had not fully developed teaching that corresponded to the individual needs of students, nor had they fully developed a practice that enabled close teacher collaboration, but they expressed awareness of, and inspiration from, its potential to improve teaching practices at their schools.
A wealth of international research shows the effectiveness of monitoring students' reading progress and matching instruction to student needs (Connor et al., 2009; Förster & Souvignier, 2015; Stecker et al., 2005). However, as research has pointed out, teachers are seldom adequately equipped to use assessment data for instructional decisions (Hoogland et al., 2016). In this study, Swedish teachers who followed the educational program LegiLexi were aided in the process of making informed decisions about teaching, covering both content (i.e., specific focus areas for individual students) and form (employing differentiated teaching), which changed over the course of two assessment periods. The narratives from the focus group interviews were organized into three main themes. First, we discuss the first two themes, as they explicitly relate to research questions 1 and 2, followed by the third theme, which relates to the third research question.
The first and second research questions concerned how the teachers used student information and how they used that information to change their teaching practice. An observable effect of participating in the program was that the teachers' talk about assessment use reflected some of the key features of response to intervention, including data-driven teaching traditions (Means et al., 2011). Student learning is a central part of teaching within these frameworks, as long as assessments are used for additional and continuous teaching with regard to future learning goals. Although the teachers stated that they had already worked in the spirit of forward-looking teaching before entering the program, they said they had acquired more skills and showed more awareness of how to align learning with instructional decisions, as both learning and teaching had become visible to them.
The teachers held the belief that, in order to improve reading development for each student, teaching needs to match learning needs and to be differentiated. The recurring assessments also helped the teachers to evaluate their teaching efforts and informed them of students who developed poorly over time. Based on the excerpts in Theme 2 (Changes in the organization of teaching, but not regarding individualized content), the teachers viewed their teaching as dynamic, that is, instruction changed over the course of the school year and became more intense for some of the students. Teaching was also viewed as flexible, as the teachers said they used different and temporary constellations of students in learning activities and employed a variety of teaching methods based on informed decisions. The LL-tool allowed for additional adjustments in the form of specific and structured training sessions. On this basis, the teaching could also have included individual student tasks to train reading skills, supplementing the text reading in the reading groups. Reading lists at a suitable level are one such task (Wendick, 2015); concept and vocabulary training is another. Achieving teaching that corresponds to the individual needs of students is challenging, and similar results have been found in other studies (Schildkamp & Kuiper, 2010).
In terms of teacher development that promoted the ability to make informed decisions at the individual level, it was difficult to realize all the teaching possibilities the tool allowed for during the implementation of the program. However, it is very likely that grouping students by ability level is a first realistic adjustment when teachers are supported by an assessment tool. It is therefore plausible that teachers will manage to employ further individualization once they have evaluated the tool and integrated it as a natural part of how teaching is organized in their classrooms. Systematic student assessments seem to provide teachers with tools to evaluate their practice continuously, including how to organize students into groups, as well as to monitor student progress over the year.
The results point to the relative difficulty of achieving targeted instruction that corresponds to a comprehensive reading profile, including both content (specific focus areas) and ability levels (profile and recommendations), across different students in a class. This challenge is closely intertwined with the processes of the teachers' professional growth, discussed next.
The third research question concerned how the teachers were affected by being supported by a systematic assessment tool. Adapted teaching is seen as one of the most difficult professional levels teachers can acquire (Van de Grift, 2007). Earlier studies show that this level of professional competence takes years to develop, but it can be reached considerably sooner if teachers are supported in analyzing their instructional behavior and in using data feedback (Van den Hurk et al., 2016). This study showed that the participating teachers had started to make conscious efforts to improve students' reading development through the use of student data and by following the recommendations that stemmed from the assessment tool. Other studies of teachers' professional growth point, among other things, to the importance of teacher collaboration in building an efficient practice of assessment use for instruction (Blanc et al., 2010; Lai & McNaughton, 2016). Teacher collaboration means that teachers analyze data together and create solutions for classroom instruction. Blanc et al.'s (2010) study showed that teachers' skill building is shaped in their communities and, if given proper support by school leaders, can lead to greater use of assessment data. Our results indicated that the teachers in this study were in the early stages of such changes. The teachers expressed the benefits of planning learning activities and of allocating school resources through teacher collaboration and assessment use in their schools.
The LL-program and the teaching material were not designed as a protocol or manual for the teachers to follow; rather, the materials and suggested instructions were available for the teachers to choose from between the assessment periods. Given the relatively challenging task of implementing the assessments, providing additional targeted efforts might require further experience of working with assessments. Future studies of this program could reveal more positive results once teachers have evaluated the possibilities of the tool in relation to their professional competence. Based on the results of this study, future research should focus on how to further improve teachers' use of assessments for instruction, with a focus on promoting teacher collaboration, so that teachers are better able to align with, and fully explore, the tool's possibilities for individualized instruction. This should also include studying the school and classroom organization at large, in order to identify potential hindering factors concerning teachers' possibilities for enabling structured level-based adjustments at the individual level.
This study demonstrated that the teachers valued being informed about students' ability levels and progress; they also demonstrated that detailed assessments and teaching recommendations can be used to organize teaching in accordance with individual learning needs. This raises hopes of better meeting the learning needs of students in the early phases of learning how to read, as well as those of more advanced readers during primary school. The teachers in this study mostly used the assessments and recommendations to create reading groups of students, and the creation of these groups seemed to be based on informed decisions, utilizing the LL-tool. However, the LL-tool allowed for even more targeted efforts in addition to reading groups, taking into account specific focus areas for students, of which this study did not find evidence. Therefore, enabling teachers to carry out even more directed and targeted teaching efforts could be the next step after implementing a program like LL. This includes teacher collaboration in order to utilize all the possibilities of the tool, that is, the link between the variety of student assessments in class and corresponding teaching efforts, as well as studying and circumventing possible constraints or hindering factors in the school and classroom organization.
The knowledge gained from this study is based on focus group interviews. Although we are convinced that the teachers made honest contributions to the study, we did not have the opportunity to observe and monitor the teachers' everyday work in an applied setting. As the interviews were conducted at a single time point, we could not study possible changes in teachers' skills and beliefs, as in a pre-post design, and this limits our understanding of how teachers gradually develop the skills needed for effective assessment use. In addition, although steps were taken to ensure that the teacher self-reports represented actual behavior, it is still challenging to establish reliable conclusions about what the teachers did and did not do during the semester.
Two different procedures were used to record data during the focus group interviews: by video and audio, or by audio only. However, the video recordings were only used to help clarify the content of what was being said, not to study group interactions or actions. Our impression from the interview situation was that the teachers were not particularly affected by being recorded on video or audio; they seemed to express their opinions openly and freely. When the data were analyzed, the video recordings did not contribute much useful information to this study, and thus the difference in recording procedures was not considered important.
Not all invited teachers volunteered to participate, and there were some dropouts before the interviews. We did not have details about the circumstances that prevented some teachers from participating in the study, beyond reports of heavy workload or being ill at the time of the study. It is likely that the participating teachers were, on average, more motivated users than those who did not participate. This does not mean that they were more positive, but rather that they might have felt that they had something to say about the practical work with the LL-tool.
Furthermore, while it was evident that the participating teachers displayed professional awareness in using the LL-tool (e.g., they tried their best to use the tool to improve the quality of teaching), we did not have access to details of how experienced and skilled these teachers actually were, beyond information about how many years they had worked as teachers. Teachers less experienced than the participating ones could have had a different experience of using the tool in an applied setting. They could, for example, have found the tool even more useful than was evident, but they could also have found the tool complex to use if teaching was new to them, being unable to benefit from the rich feedback provided. More experienced, and potentially more skilled, teachers could also have experienced the benefits of the tool differently than the participating ones.
However, the somewhat small sample still represented some variation and provided an understanding of how teachers can use student information, as well as insight into the challenges of implementing a project like LegiLexi.
Qualitative data, as presented here, do not tell us anything about actual improvements in students' reading skills. A previous study (Gustafson, Nordström, Andersson, Fälth, & Ingvar, 2019) revealed that Grade 1 students benefited from having teachers who participated in the program, which demonstrated that paying attention to students' reading development, and acting accordingly, can positively affect it. The present study showed that the participating teachers took these considerations seriously; in particular, participation affected their awareness of how students develop their reading skills during the semester, and the teachers made efforts to employ teaching that corresponded to these skill levels.
Algozzine, B., & Anderson, K. M. (2007). Tips for teaching: Diﬀerentiating instruction to
include all students. Preventing School Failure, 51(3), 49–54.
Andersson, U. B., Löfgren, H., & Gustafson, S. (2019). Forward-looking assessments that
support students learning: A comparative analysis of two approaches. Studies in
Educational Evaluation, 60, 109–116.
Antoniou, F., & Souvignier, E. (2007). Strategy instruction in reading comprehension: An
intervention study for students with learning disabilities. Learning Disabilities: A
Contemporary Journal, 5(1), 41–57.
Black, P. (2015). Formative assessment – An optimistic but incomplete vision. Assessment in Education: Principles, Policy & Practice, 22, 37–41.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010).
Learning to learn from data: Benchmarks and instructional communities. Peabody Journal
of Education, 85(2), 205–225.
van den Bosch, R. M., Espin, C. A., Chung, S., & Saab, N. (2017). Data‐based decision-
making: Teachers’ comprehension of curriculum‐based measurement progress‐monitoring
graphs. Learning Disabilities Research and Practice, 32(1), 46–60.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research
in Psychology, 3(2), 77–101.
Breen, R. L. (2006). A practical guide to focus-group research. Journal of Geography in
Higher Education, 30(3), 463–475.
Connor, C. M., Piasta, S. B., Fishman, B., Glasney, S., Schatschneider, C., Crowe, E., et al.
(2009). Individualizing student instruction precisely: Eﬀects of child by instruction
interactions on ﬁrst graders’ literacy development. Child Development, 80(1), 77.
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124–130.
Creswell, J. W., & Poth, C. N. (2017). Qualitative inquiry and research design: Choosing
among ﬁve approaches. Sage publications.
Cunningham, A. E., & O’Donnell, C. R. (2015). Teachers’ knowledge about beginning
Reading development and instruction. In A. Pollatsek, & R. Treiman (Eds.). The Oxford
handbook of reading.
Davidson, K. L., & Frohbieter, G. (2011). District adoption and implementation of interim
and benchmark assessments. CRESST report 806. National Center for Research on
Evaluation, Standards, and Student Testing (CRESST).
Dunn, K. E., Airola, D. T., Lo, W. J., & Garrison, M. (2013). What teachers think about what
they can do with data: Development and validation of the data driven decision- making
eﬃcacy and anxiety inventory. Contemporary Educational Psychology, 38(1), 87–98.
Fälth, L., Gustafson, S., Kugelberg, E., & Nordström, T. (2017). LegiLexis formativa
Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small‐group
instruction promote reading success in all children. Learning Disabilities Research and
Practice, 16(4), 203–212.
Förster, N., & Souvignier, E. (2015). Eﬀects of providing teachers with information about
their students’ reading progress. School Psychology Review, 44(1), 60–75.
Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., et al. (2008). Assisting students struggling with reading: Response to Intervention and multi-tier intervention for reading in the primary grades. A practice guide (NCEE 2009-4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, US Department of Education.
Van de Grift, W. (2007). Quality of teaching in four European countries: A review of the
literature and application of an assessment instrument. Educational Research, 49(2), 127–
Grigorenko, E. L. (2008). Dynamic assessment and response to intervention: Two sides of one
coin. Journal of Learning Disabilities, 42(2), 111–132.
Grosche, M., & Volpe, R. J. (2013). Response-to-intervention (RTI) as a model to facilitate
inclusion for students with learning and behaviour problems. European Journal of Special
Needs Education, 28(3), 254–269.
Gustafson, S., Nordström, T., Andersson, U. B., Fälth, L., & Ingvar, M. (2019). Eﬀects of a
formative assessment system on early reading development. Education.
Guthrie, J. T., Wigﬁeld, A., Barbosa, P., Perencevich, K. C., Taboada, A., Davis, M. H., et al.
(2004). Increasing reading comprehension and engagement through concept-oriented
reading instruction. Journal of Educational Psychology, 96(3), 403.
Hatcher, P. J., Hulme, C., & Ellis, A. W. (1994). Ameliorating early reading failure by
integrating the teaching of reading and phonological skills: The phonological linkage
hypothesis. Child Development, 65(1), 41–57.
Hoogland, I., Schildkamp, K., van der Kleij, F., Heitink, M., Kippers, W., Veldkamp, B., et
al. (2016). Prerequisites for data-based decision making in the classroom: Research
evidence and practical illustrations. Teaching and Teacher Education, 60, 377–386.
Van den Hurk, H. T. G., Houtveen, A. A. M., & Van de Grift, W. J. C. M. (2016). Fostering
eﬀective teaching behavior through the use of data-feedback. Teaching and Teacher
Education, 60, 444–451.
Krueger, R. (2000). Focus groups: A practical guide for applied research. London: Sage.
Lai, M. K., & McNaughton, S. (2016). The impact of data use professional development on
student achievement. Teaching and Teacher Education, 60, 434–443.
Lam, E. A., & McMaster, K. L. (2014). Predictors of responsiveness to early literacy
intervention: A 10-year update. Learning Disability Quarterly, 37(3), 134–147.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Means, B., Chen, E., Debarger, A., & Padilla, C. (2011). Teachers’ ability to use data to
inform instruction: Challenges and supports. Oﬃce of Planning, Evaluation and Policy
Development, US Department of Education, 1–122.
Organisation for Economic Co-operation and Development. (2010). PISA 2009 results: Learning trends: Changes in student performance since 2000 (Vol. 5). Paris, France: OECD Publishing.
Organisation for Economic Co-operation and Development. (2013). PISA 2012 results in focus: What 15-year-olds know and what they can do with what they know. Paris, France: Author.
Van der Scheer, E. A., & Visscher, A. J. (2016). Eﬀects of an intensive data-based decision
making intervention on teacher eﬃcacy. Teaching and Teacher Education, 60, 34–43.
Schildkamp, K., & Kuiper, W. (2010). Data-informed curriculum reform: Which data, what
purposes, and promoting and hindering factors. Teaching and Teacher Education, 26(3),
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to
improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–
Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. K., &
Conway, T. (2001). Intensive remedial instruction for children with severe reading
disabilities: Immediate and long-term outcomes from two instructional approaches.
Journal of Learning Disabilities, 34(1), 33–58.
Tran, L., Sanchez, T., Arellano, B., & Lee Swanson, H. (2011). A meta-analysis of the RTI
literature for children at risk for reading disabilities. Journal of Learning Disabilities, 44,
Vaughn, S., & Fuchs, L. S. (2003). Redeﬁning learning disabilities as inadequate response to
instruction: The promise and potential problems. Learning Disabilities Research and
Practice, 18(3), 137–146.
Vescio, V., Ross, D., & Adams, A. (2008). A review of research on the impact of professional
learning communities on teaching practice and student learning. Teaching and Teacher
Education, 24(1), 80–91.
Wayman, M. M., Wallace, T., Wiley, H. I., Tichá, R., & Espin, C. A. (2007). Literature
synthesis on curriculum-based measurement in reading. The Journal of Special Education,
Wendick, G. (2015). Wendickmodellen Intensivläsning (The Wendick model intensive reading).
Wolﬀ, U. (2016). Eﬀects of a randomized reading intervention study aimed at 9-Year-olds: A
5-Year follow-up. Dyslexia, 22(2), 85–100.
Zeuch, N., Förster, N., & Souvignier, E. (2017). Assessing teachers’ competencies to read and
interpret graphs from learning progress assessment: Results from tests and interviews.
Learning Disabilities Research and Practice, 32(1), 61–70.
Table 1. Details of participating teachers: years in profession and number of students in class.
Table 2. Extracted themes with distinguishing teacher quotes (grade level within parentheses).
Theme 1: Awareness of student learning
Teacher #H (Grade 1). [by working with assessments] I am strengthening both
those students that are skilled, who already has a developed language, and those
who need to practice more. They have received an educational form that provides
improvements for the students.
Teacher #G (Grade 2). It was perhaps not so very surprising that I had a few
students there at the beginner level. I had already identiﬁed them. But the tests
cover a relatively large portion of reading skills. It conﬁrms what you are sensing
of the students' reading development.
Teacher #H (Grade 1). When I examined the test results for the class, I was able to
ﬁnd two or three students with not so much progress. I saw that one of the students
rather backed than stood still. It made a bell ring - now we need to get started.
Teacher #A (Grade 1-3). It has been truly informing to have access to the target
levels between test occasions, especially as I have students with diﬃculties
learning how to read. After the second assessment, it turned out that they have
improved their scores on vocabulary and language comprehension. Then you can
catch your breath and know that the students are underway.
Teacher #H (Grade 1). It was exciting to see that there were so many competent students in the class. They should also be allowed to develop by adapted teaching.
Teacher #B (Grade 1). The advanced readers become motivated to learn, that they
can get to the next level by working harder and to increase their eﬀorts, so that
they can reach the target level for Grade 3 early on.
Theme 2: Changes in the organization of teaching, but not regarding individualized content
Teacher #H (Grade 1). Sometimes I put really advanced readers to read
together with students who are not advanced readers. They will then take
part in a diﬀerent kind of reading than had the group only consisted of
students who struggle with text.
Teacher #G (Grade 2). In part based on the recommendations, we use skill
level groupings. For students having trouble with silent reading (because
they usually just browse through the books), we now have structured
sessions of oral reading, so that students actually read during sessions.
Teacher #H (Grade 1). We created a group of nine students who had the same
book. They got to read in groups of three students. Then they went together
and discussed the book. It worked absolutely fantastic.
Theme 3: Strengthened teacher role, but modest professional growth
Teacher #B (Grade 1). Using the recurrent assessments enhances my profession
when I tell the parents about their children's reading development. It gives the
parents a sense of security. They witness progression, even if they are worried.
Teacher #F (Grade 1-2). The special education teacher in my school said I was so
knowledgeable about the students that it has helped the special education teacher
to identify students who have diﬃculties.
Teacher #C (Grade 1-2). It can be interesting to learn from colleagues with better
results at school year-end. What are the reasons behind those results? What can I
get out of it?
Teacher #C (Grade 1-2). Assessments can be the basis for my own teaching, while
it also can be used to distribute school resources diﬀerently and to organize
ourselves better, when we know how diﬀerent the classes are.