Teacher inquiry of using assessments and recommendations in teaching
early reading
Thomas Nordströma,*,
Ulrika B. Anderssonb,
Linda Fältha,
Stefan Gustafsonb
a Linnaeus University
b Linköping University
* Corresponding author. E-mail: thomas.nordstrom@lnu.se
Abstract
Previous research points to difficulties for teachers in interpreting reading assessment data with regard to instructional decisions. This study explored Swedish primary teachers' use of assessments and recommendations to target individual needs. Eight teachers participated in a reading program and were interviewed in focus-group meetings. The analysis of teacher narratives stemming from assessment use resulted in three themes: Awareness of student learning; Changes in the organization of teaching, but not regarding individualized content; and Strengthened teacher role, but modest professional growth. The themes indicated that the teachers had become aware of their students' learning, had employed teaching based on informed decisions, and showed initial professional growth. However, the assessment details and the recommendations allowed for more adjustments than were evident in the teachers' narratives. The results point to the relative difficulty of targeting individual needs in general classroom education, and to the challenges of changing teaching practices.
Introduction
There is a global as well as a national Swedish desire among educators, researchers and policy makers to increase students' reading performance, including preventing reading difficulties and stimulating reading development for those who advance quickly. The aim of this study is to explore primary school teachers' use of assessment data, including teaching recommendations, in order to target individual students' needs.
There is now evidence that non-fluent readers need to practice the components underlying word recognition, for example, phonological awareness, letter knowledge and letter-sound correspondence (Foorman & Torgesen, 2001; Hatcher, Hulme, & Ellis, 1994; Wolff, 2016), whereas fluent readers need to focus on reading strategies that promote reading comprehension, including how to understand pragmatic facets of written language (Antoniou & Souvignier, 2007; Guthrie et al., 2004). Teaching based on assessments of these components is not a new phenomenon; it has characterized the 'literacy movements' throughout the modern school era. A convincing body of evidence shows that providing teachers with knowledge of their students' level of reading achievement and growth in reading affects student reading outcomes (Förster & Souvignier, 2015; Stecker, Fuchs, & Fuchs, 2005). In this tradition, the challenge for the teacher is to become knowledgeable about student progress and to act in accordance with student needs relative to learning goals (Hoogland et al., 2016).
Response to intervention (RTI) is perhaps the most notable framework regarding the use of assessments and teaching adjustments. In this tradition, the goal is to prevent early reading difficulties by providing increasingly adjusted and more intense reading instruction interventions (Vaughn & Fuchs, 2003). Starting with Tier 1, which usually covers ordinary classroom teaching, students who are behind in the reading developmental process are detected through assessments that can vary in scope, ranging from standardized assessments of basic skills to more comprehensive efforts of mapping reading components. Students who respond poorly to this "generally effective" Tier 1 classroom instruction are transferred to additional interventions, usually within a framework of three tiers in total. Research on Tier 2 and Tier 3 interventions is comprehensive (Grosche & Volpe, 2013; Tran, Sanchez, Arellano, & Lee Swanson, 2011), and includes close monitoring of student progress in relation to more intense instruction conducted in small groups and individual teaching. However, Tier 1 procedures conducted as part of ordinary classroom education are often not explicitly evaluated in RTI research (Lam & McMaster, 2014).
However, there are strong reasons to also improve ordinary classroom education, both to better detect those who need additional support and to facilitate reading skills among normally developing students. If teachers do not provide generally effective classroom instruction, this will especially affect students in need of more support (Grigorenko, 2008).
Using assessment data to inform teaching is also a critical aspect of research on data-driven decision making (DDDM). This tradition usually covers both how teachers manage to use assessments for instructional decisions (Hoogland et al., 2016; Means, Chen, Debarger, & Padilla, 2011; Schildkamp & Kuiper, 2010) and the effect of providing targeted instruction to improve students' reading skills (Förster & Souvignier, 2015). The effect of targeting instruction based on student reading data has, for example, been studied by Connor et al. (2009). They found that the effect of instruction upon differentiated reading profiles depended on the match between the student's skill level and the type of instruction used: "precisely providing recommended amounts of instruction, which change systematically over the course of the school year, as well as in response to students' changing skills, is associated with stronger student reading outcomes" (p. 16). This means that there is great potential in providing teachers with matched teaching recommendations in order to improve learning outcomes. The concept of differentiation is used here in line with the definition proposed by Algozzine and Anderson (2007), which involves taking into account students' differences, how they learn and, possibly, the individual's interests. One of the most important aspects for teachers who work with differentiation is to create learning environments that do not exclude any child. Research in this field also points to the difficulties teachers have in using assessments for instructional decisions. This could especially be the case for Swedish teachers, as systematic assessment use, directly and explicitly linked to instruction, is a relatively uncommon practice in Swedish schools. If teachers are not provided with clear recommendations of what to consider for an individual student or for the class, assessment data could become redundant, student information might not be properly or responsibly used, or information might be used only for decision making at the classroom level (Lai & McNaughton, 2016; Schildkamp & Kuiper, 2010). Teachers need to be able to
adequately interpret assessment data and transform it into instructional decisions (Gersten et al., 2008) if the potential demonstrated by Connor et al. (2009) is to be realized. A study by van den Bosch, Espin, Chung, and Saab (2017) showed that, although teachers may be good at understanding what assessment data reveal (i.e., reading the data), they often failed to link the data to instruction, that is, what the student should practice and what the teaching should consist of. Another study, by Zeuch, Förster, and Souvignier (2017), showed that teachers tend to skip much of the assessment information when provided with thorough assessment data, and focused more on how student achievements related to grades than on how to structure the information into instructional needs.
A review by Wayman, Wallace, Wiley, Tichá, and Espin (2007) of using CBM (Curriculum-Based Measurement) reading data to promote reading development shows that little is known about teachers' understanding of how to use reading test scores and how they use that information to make instructional decisions in the classroom. In fact, most international studies show that teachers lack the skills to use data adequately (Hoogland et al., 2016; Means et al., 2011; Schildkamp & Kuiper, 2010) and that teachers seldom have access to support regarding how to use data for instruction (Davidson & Frohbieter, 2011). There is little reason to believe that the situation would be any different for Swedish teachers. Dunn, Airola, Lo and Garrison (2013) argue that we need to better understand how teachers can be supported in the decision-making processes of teaching considerations.
LegiLexi, a Swedish assessment and instruction program initiative
Research has provided us with evidence of successful educational reading instruction (Cunningham & O'Donnell, 2015; Gustafson, Nordström, Andersson, Fälth, & Ingvar, 2019; Torgesen et al., 2001). However, much remains to be known about how teachers in Sweden make use of detailed information about student reading development and teaching recommendations, and how they use that information to promote future learning. In this study, we explored through teachers' narratives how teachers can use information and be supported in these processes, and how they make changes in their teaching, through the educational reading program LegiLexi (LL henceforth), a Swedish program run pro bono and free of charge for schools, with the aim of developing all students' reading ability (www.LegiLexi.org). We also investigated how the teachers were professionally affected by being supported by the systematic assessment tool.
The LL-program was developed as a result of public and academic debates on declining reading performance among Swedish youth, evident from several PISA reports (OECD, 2010, 2013). The program was built on the supposition that recurrent monitoring of the multiple components underlying reading development, along with supporting teachers to act upon students' reading performance patterns, positively affects reading development for all students in a teacher's class. In order to compensate for the identified difficulties associated with teachers' use of assessment data (Dunn, Airola, Lo, & Garrison, 2013; Hoogland et al., 2016; Schildkamp & Kuiper, 2010; Zeuch et al., 2017), the LL-tool was explicitly designed to provide teachers with reading assessments, and individual and class teaching recommendations with regard to learning goals for Grade 3 and for end-of-term performance in Grades 1 and 2 (see the method section for details of the recommendations). Reading proficiency data were available at both student and class level to inform teachers how to plan instruction at each grade level. Assessments are carried out three times a school year, at the beginning of the semester, at the middle of the year and at end-of-term, and as such translate to the features of Tier 1 procedures within the RTI framework.
Furthermore, the LL-tool, which was developed by reading researchers, teachers and special education teachers, is based on the Simple View of Reading model (SVR), Response to intervention (RTI) and Formative assessment (FA) (Black, 2015; Black & Wiliam, 1998). Both FA and RTI are forward-looking assessment approaches, aimed at developing students' learning and supporting teachers to teach forward (Andersson, Löfgren, & Gustafson, 2019). The LL-tool draws on the systematized testing of RTI theory and on the more loosely structured assessment practice of FA, where teachers are freer to make decisions about teaching efforts. Teacher knowledge of these approaches, including their merits and limitations, can be used to develop an efficient assessment practice. In this study, we refer to a forward-looking teaching approach to using assessments, aimed at letting teachers make more refined assessments by combining elements of FA (i.e., forward-looking approaches in relation to learning goals) and RTI (i.e., using test results, comprehensive assessments including recommendations, and individual adjustments).
In a previous study (Gustafson, Nordström, Andersson, Fälth, & Ingvar, 2019), we investigated the effect of the program on students' reading skills in Grade 1. Teachers were randomly assigned to three conditions: 1) teachers having full access to the program, including the teacher course, profiles and recommendations; 2) teachers having access to profiles and recommendations only (and not the course material); and 3) no access, which served as a control group. Results were positive for students participating in the program. The largest gains in word reading and reading comprehension were obtained in the full access group, as assessed between the initial assessments at school start and the third assessment at the end of the semester.
The present study
In the present study, we included teachers from the whole of primary school (Grades 1–3) and explored how the teachers made use of the program guidelines, as we do not know enough about how teachers in Sweden make use of such procedures for instructional change in the classroom, or how teachers think about and act regarding assessment use for targeting individual needs. To study how teachers used the tool, we chose focus group interviews as the data collection method (Breen, 2006), asking the teachers to tell us about their work with the tool. This included studying what information teachers receive and what support the tool provides. This method also enabled us to steer the conversation with interview questions, listen to the teachers' stories and take part in the in-depth conversations that arose between the teachers based on their experiences as LL users. This study took place after the second assessment period during the pilot year of implementing LL. Since this way of teaching was new for the participating teachers, we were also interested in any self-reported change in professional growth, a known effect of teacher participation in interventions that aim to increase instructional effectiveness (Van der Scheer & Visscher, 2016; Vescio, Ross, & Adams, 2008).
Adapted teaching that corresponds to individual needs has been found to be one of the most advanced skills of teachers' professional competence, and is usually mastered only after several years of teaching (Van de Grift, 2007). Studies show, however, that it is possible to support teachers to achieve effective instructional change (Van den Hurk, Houtveen, & Van de Grift, 2016). This study aims to explore what can be learned when primary teachers use a comprehensive assessment tool, including recommendations and teaching material, in their own teaching practice.
Research questions
The following research questions were asked. For what purposes were the assessments of students' reading ability and the recommendations used (RQ1)? How did the teachers use this information to make changes in their teaching practice (RQ2)? How were the teachers professionally affected by being supported by a systematic assessment tool (RQ3)?
Method
Participants
To recruit active LL-teachers, the project leaders of LL were contacted, and they provided us with teachers who were also responsible for the teaching of reading in their respective classes. Initially, twenty-five teachers were invited to participate in the study, of which ten responded positively. The main reason for not volunteering was reported high workload. At the time of the data collection, one teacher reported being ill, and a second teacher was prevented from attending for other reasons. Because of the challenge of recruiting active LL-teachers who would fit the requirements of the study, we decided to carry through the second focus group with fewer teachers, and let the quality of the collected data, rather than the number of participants, determine its usability for the analyses.

Eight female teachers working in Grades 1–3 at different schools participated in the study. All teachers had entered the LegiLexi program the same year the program was introduced to selected schools. Years of teaching varied: one teacher had only two years of teaching experience, whereas the remaining teachers had 10–22 years of experience, as can be seen in Table 1.
[Table 1]
Typical class size was between 20 and 28 students, although one teacher had only 13 students and one teacher worked in a class with 39 students (the students were divided into smaller units during different school activities). As such, the teachers were both homogeneous as a group (as all shared the experience of being part of the program) and heterogeneous (in terms of grade-level teaching and teaching experience). Some of the teachers had joined the program together with colleagues, working either in the same or in other classes, although none was accompanied by a colleague in the interviews. The schools were located in several municipalities adjacent to a large city in Sweden, with variation between the schools in terms of size and socioeconomic status. The teachers had joined the program well before the start of the autumn semester the previous year. Six to eight months had passed between the first assessment sessions at school start (in August) and the interviews (which were conducted in late January and in April). During this time the teachers had participated in a mandatory teacher course (described below), had carried out two sessions of assessments (with provided reading profiles and recommendations), and had been encouraged to use a variety of teaching methods and reading instructions found in the course material for teachers. The teachers were all familiar with the aim of the program: to improve all students' reading development by using the program features to enhance the quality of teaching.
LegiLexi
Background: LegiLexi (https://www.LegiLexi.org/) evolved as a program to counteract declining reading performance in Swedish schools. It is intended to be used alongside ordinary teaching and to support teachers with the explicit aim of making all students better readers. The program includes an assessment tool used to promote informed data-driven decisions about teaching with regard to students' learning needs, and teaching material intended to promote teachers' professional growth by improving competence in teaching early literacy. The LL-tool provides assessments and recommendations for teachers, along with material for each area of recommendation. It does not, however, provide specific instructional sessions for each recommendation. The material consists of already established evidence-based instructions and theories of reading and forward-looking assessment for teachers to use and develop in their practice (see also Teaching material).
The assessment tool: The assessment tool was, at the time of this study, a pen-and-paper version, although the teachers received computer-generated compilations that contained test scores, reading profiles and teaching recommendations. Assessments are carried out three times per school year (August, December and May) during each year of primary school (Grades 1–3). The teachers followed an extensive test manual (Fälth, Gustafson, Kugelberg, & Nordström, 2017). The assessment tool provided differentiated reading profiles on single components of reading proficiency and aggregated scores in line with the Simple View of Reading model. The differentiation makes the different foci of the reading profiles visible to the teacher, while the whole of the student's reading development (according to the Simple View of Reading) is also supported. Scores from single components were aggregated to form factors higher up in the model, namely decoding and reading comprehension, with successively higher order skill assessments in decoding and comprehension as the student advances through primary school.
Test scores for individual students were available for each of the nine tests, divided into pre-reading skills and automatic word decoding, language comprehension and reading comprehension. This information was also embedded in the class profiles, where each reading ability and aggregated score was displayed for the entire class in the form of matrices. Visual representations of growth in reading ability over time were also available, including graphics that displayed the gap between current achievement and expected Grade 3 learning goals. Each reading ability, as well as the aggregated scores, was calculated on a six-point scale (where scale level 6 represented achievement beyond the learning goals for Grade 3). In contrast to most other tests found in Swedish schools, the assessment tool targeted the achievement level for Grade 3 and end-of-year performance in Grades 1 and 2. Thus, the assessments of reading proficiency were not relative to other students, but were used as benchmarks, developed to be in line with the curriculum and learning goals for primary school.
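The aggregation described above, from single-component scores to the two Simple View of Reading factors, can be illustrated with a short sketch. This is not LegiLexi's actual implementation: the component names, the aggregation rule (a rounded mean) and the example scores are illustrative assumptions only.

```python
# Illustrative sketch: aggregating single-component scores (scale 1-6)
# into the two SVR-style factor scores. Component names and the rounded-mean
# aggregation rule are hypothetical, not taken from the LegiLexi tool.

DECODING_COMPONENTS = ["phonological_awareness", "letter_knowledge", "word_decoding"]
COMPREHENSION_COMPONENTS = ["vocabulary", "language_comprehension", "reading_comprehension"]

def aggregate(scores, components):
    """Aggregate available single-component scores (1-6) into one factor score (1-6)."""
    values = [scores[c] for c in components if c in scores]
    return round(sum(values) / len(values))

def profile(scores):
    """Return an SVR-style profile: one decoding and one comprehension factor score."""
    return {
        "decoding": aggregate(scores, DECODING_COMPONENTS),
        "comprehension": aggregate(scores, COMPREHENSION_COMPONENTS),
    }

# Hypothetical student: strong pre-reading skills, weak word decoding.
student = {
    "phonological_awareness": 5, "letter_knowledge": 6, "word_decoding": 2,
    "vocabulary": 5, "language_comprehension": 5, "reading_comprehension": 3,
}
print(profile(student))  # {'decoding': 4, 'comprehension': 4}
```

A sketch like this also shows why the profiles matter: two students with the same aggregated factor score can have very different component patterns underneath.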
Teaching recommendations: The recommended focus areas for individual students were computer generated and added to the reading profiles. They were generated according to logic derived from the reading proficiency assessments. For example, students who had not yet achieved automatized decoding ability were mainly, but not only, recommended to practice that component. There were eight distinctly different recommendations that followed from the assessments, and from combinations of the assessed levels of the decoding, reading comprehension and language comprehension measures. The most common recommendation in the program focused on decoding, as this reading skill constitutes the basis of reading in the early school years, but other recommendations included fluency, vocabulary and reading comprehension strategies.
In the following, we provide two of these recommendations. In the first example, the teacher was informed of a student who showed strengths in language comprehension skills but needed to develop automatized word reading.

Focus area for the student: Alphabetic decoding
Individual recommendation: The student has a well-developed understanding of language, but needs to develop the decoding ability. Focus on developing the student's word decoding ability in order to get the student to start practicing his/her own reading.

In the second example, the student had not yet developed reading skills and the teacher was encouraged to focus on pre-reading skills such as letter knowledge and phonological awareness.

Focus area for the student: Phonological awareness
Individual recommendation: The student needs to develop both decoding skills and language comprehension. The teacher can promote both language and comprehension development in teaching where the student does not have to read, such as listening to oral reading and participating in film discussions. But the focus should now be on developing the student's pre-reading skills and decoding in order to get the student started with his/her own reading.
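The way recommendations follow from combinations of assessed levels can be sketched as a simple decision rule. The cut-off value, the category names and the branch ordering below are assumptions made for illustration; they mirror the two example recommendations but are not the program's actual rules.

```python
# Hypothetical decision rule mapping two factor scores (1-6) to a focus area,
# mirroring the two example recommendations above. The cut-off (3) and the
# returned category names are illustrative assumptions, not LegiLexi's logic.

def focus_area(decoding, language_comprehension):
    """Pick a recommended focus area from assumed factor scores on a 1-6 scale."""
    low = 3  # assumed cut-off below which a skill needs focused practice
    if decoding < low and language_comprehension < low:
        # Neither skill developed yet: start with pre-reading skills.
        return "Phonological awareness"
    if decoding < low:
        # Comprehension is fine but decoding lags: practice word decoding.
        return "Alphabetic decoding"
    if language_comprehension < low:
        return "Vocabulary and language comprehension"
    return "Fluency and reading comprehension strategies"

# First example above: strong language comprehension, weak decoding.
print(focus_area(decoding=2, language_comprehension=5))  # Alphabetic decoding
# Second example above: both skills still to be developed.
print(focus_area(decoding=1, language_comprehension=2))  # Phonological awareness
```

With eight distinct recommendations in the actual program, the real rule set is finer-grained than this two-way split, but the principle of branching on combinations of assessed levels is the same.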
Teaching material: Teaching material, texts and articles, were available as part of the program and covered topics to be used for preventing reading difficulties as well as for teaching rapidly advancing readers. The material was written by Swedish researchers and educators of reading instruction and reading development. Parts of the material were included in the mandatory course that the teachers participated in. The material was in line with the recommendations, covering, for example, how to teach letter knowledge, fluency or reading comprehension. The material also covered chapters on, for example, reading development, motivation and special needs, as well as didactical content (altogether 12 chapters). The teachers were encouraged to make use of the material and adapt their teaching accordingly; however, no particular method or instruction, such as a manual for each reading skill, was provided (e.g., cooperative methods, repeated reading or peer-assisted learning). The material instead consisted of a set of educational tools the teachers could choose from.
Focus group interviews
Preparations and context of the study
Interest in the study arose in connection with LL's implementation process, and in connection with this the research questions that formed the basis of the study were formulated. The research questions addressed how teachers used the evaluation tool, recommendations and teaching aids for forward-looking purposes to improve classroom instruction. The rationale for choosing focus group interviews, conducted twice, over other methods was their usefulness for illustrating the many individual perceptions that can be found within a group of teachers with a shared experience in teaching. Since we intended to study teachers' use of a specific teaching tool that was relatively new to them, with the implementation process just started, we considered it beneficial for the study to let the teachers talk about and discuss their experiences together in a social environment. It has been argued (e.g., by Breen, 2006) that the group dynamics in focus groups prompt participants to compare, reflect and develop reasoning about their practice over and above what can be achieved in a one-to-one setting. In addition, by having two subsequent interviews with different participants (each teacher participated in only one of the two interviews), we believe we were able to obtain as much information as needed in order to better understand the phenomenon under investigation.
Focus group interview procedures
Both focus group interviews were conducted at approximately mid-term of the spring semester, and took place in a meeting room located in a larger Swedish city, because it was easy for the teachers to travel to the interview sessions. The room contained a conference table, and the recording equipment was located on the table. Before the moderator (second author) began to actively engage the teachers on the topics under investigation, the teachers were introduced to and reminded of the aim of, and the work with, the program, so that they were "warmed up" and better prepared to share their experiences in the interviews, both retrieved retrospectively and from on-going activities. By conducting the interviews in this manner, the teachers were introduced to the purpose of the meeting, in line with Krueger's (2000) guidelines for focus-group questioning (i.e., from general to specific questioning).
The moderator followed a semi-structured interview guide with broad questions that were developed from the overall aim of the study and the research questions. The order of the interview questions varied depending on the path that the conversation took. The interview guide's content, described below, was used, and additional follow-up questions were asked when the moderator needed to get something clarified or to deepen an issue. As we were interested in how the program support was used and how it affected the teachers, the semi-structured guide contained the following headings: teachers' understanding of forward-looking uses of assessments; uses of tests and reading profiles; adjusted teaching in accordance with individual and class recommendations; and limitations of using reading profiles to ensure high-quality teaching. To guard against the possibility that the teachers' self-reports did not reflect what the teachers were actually doing in the classroom, the moderator encouraged the teachers to give concrete examples of how they used the program in their practice, and asked follow-up questions on, for example, supportive or hindering factors regarding program usage.
The two interview sessions differed in terms of sound recording method and number of participants. When requesting participation in the study, all teachers had agreed to be filmed with a computer. The purpose of filming the interviews was to facilitate the transcription work; however, it turned out that the transcription work did not depend on visual material. In addition, the quality of the film sound was low, which made the transcription work unnecessarily long. Therefore, only sound recorded with a dictaphone was used in the second interview. There was no evidence of any negative impact of using the sound-recording method only. The verbatim material, that is, the recorded material, along with notes on nonverbal behavior, such as laughter and indications of agreement or disagreement between participants, was transcribed into written material.
Both groups discussed and shared experiences, difficulties and good examples. The moderator felt that the participating teachers were involved in the interviews and that the atmosphere was open and safe. This was confirmed after the interviews, when the participating teachers expressed that they had learned a lot during the interview, partly by putting their own thoughts into words but also by listening and talking to other teachers who shared their experiences of working with the LL-tool.
The reflections that emerged from the interviews were analyzed by all four authors following the guidelines of content analysis and peer debriefing (Creswell & Miller, 2000; Creswell & Poth, 2017; Lincoln & Guba, 1985) and were sentence coded by the first author. To guard against researcher bias, the first author's codes were compared with the peer debriefing, using constant comparison, until themes were formed. In the analytic process, we had an inductive approach but searched for patterns in how the teachers explicitly stated changing beliefs about their practice and how they used the program in their teaching. We also used the six-step procedure of thematic analysis provided by Braun and Clarke (2006), meaning that the analytic process was recursive (i.e., searching, reviewing, defining and ultimately naming the themes). In addition, the research questions and the interview guide by and large steered much of the possible outcomes of the interviews, as well as the analytical process. However, as the interviews were conducted in a fairly semi-structured way, there were plenty of opportunities for the teachers to address topics that the research team had not thought about before the interviews.
The way the discussions developed differed between the two groups; however, the coded statements, as well as the themes, were found in both focus group meetings. Some reflections were coded as belonging to several different themes. The teachers were for the most part in agreement with each other. Teachers G and H (who, by the time of this study, were teaching a Grade 2 and a Grade 1 class, respectively) made the most detailed contributions in terms of developed quotable reasoning, although all the other teachers contributed to the interviews by interacting, developing and sharing their reflections. The analysis of the two focus group interviews resulted in three themes: Awareness of student learning; Changes in the organization of teaching, but not regarding individualized content; and Strengthened teacher role, but modest professional growth.
Results
This section describes the three themes extracted from the focus group interviews. Table 2 presents the themes with corresponding quotes to illustrate each theme.
[Table 2]
Theme 1: Awareness of student learning
According to the reflections that emerged during the interviews, several teachers reported having
discovered their students' skill levels and reading progression in much more detail through
access to the reading profiles. The test results made their students' learning visible to the
teachers, and they reported that recurrent testing not only confirmed what they intuitively
knew before (though in less detail), but also gave them valuable knowledge of how differently
students progress, including the ability to identify each student's development. One of the
teachers stated: "I am strengthening both those students that are skilled, who already has a
developed language, and those who need to practice more. They have received an educational
form that provides improvements for the students".
The teachers recognized the link between how they planned and carried out teaching with regard
to students' learning progression and how that related to the future Grade 3 learning goals,
illustrated in these words: "The advanced readers become motivated to learn, that they can get
to the next level by working harder and to increase their efforts, so that they can reach the
target level for Grade 3 early on".
The teachers said that they now had a tool to meet more students' needs, and that they believed
they were in a position to positively influence their students' development, as they were able to
monitor both rapidly advancing readers and those who progressed more slowly. In addition, the
assessments gave the teachers a sense of relief, as the profiles gave them insight into, and
control over, the process of bringing the students towards the learning goals. One of the
teachers expressed this as: "It has been truly informing to have access to the target levels
between test occasions, especially as I have students with difficulties learning how to read.
After the second assessment, it turned out that they have improved their scores on vocabulary
and language comprehension. Then you can catch your breath and know that the students are
underway". Thus, the reflections showed that the teachers were viewing students' reading
development in relation to their own teaching, in a systematic way, and that they were aware of
individual students' learning needs.
Theme 2: Changes in the organization of teaching, but not regarding individualized content
The second theme illustrated the teachers' use of differentiated teaching, such as flexible
groups of students, which also changed during the course of the school year (i.e., dynamic
teaching). They reported that they consciously focused on matching groups of students'
achievement levels with various differentiated forms of teaching, and on ways of activating low
and non-achievers in reading activities, as they emphasized that no student should be left
behind, expressed in this quote: "In part based on the recommendations, we use skill level
groupings. For students having trouble with silent reading (because they usually just browse
through the books), we now have structured sessions of oral reading, so that students actually
read during sessions". The focus of these activities was to improve comprehension and, at the
same time, to practice word recognition skills through shared oral reading or by discussing
books, as expressed by a Grade 1 teacher: "We created a group of nine students who had the same
book. They got to read in groups of three students. Then they went together and discussed the
book".
What did not come out much in the interviews was how the teachers planned instruction in
specific focus areas for individual students, for example, teaching pre-reading skills such as
phonological awareness, or improving fluency through word recognition exercises, which the
assessments and recommendations from the LL-tool allow. As such, the teachers displayed
awareness of making informed decisions for differentiated teaching at the group level, affecting
the organization of teaching, but the recommended teaching of single components for individual
students was not used.
Theme 3: Strengthened teacher role, but modest professional growth
The third emerging theme was that the teachers felt strengthened in their role as teachers,
emphasizing increased knowledge of their students' reading progression. This brought an
increased sense of accountability in relation to parents, including in communication, as they
experienced being more confident and trusted as teachers, as shown in this quote: "Using the
recurrent assessments enhances my profession when I tell the parents about their children's
reading development. It gives the parents a sense of security. They witness progression, even
if they are worried". They also put forth an increased accountability towards other professions
at school, such as special education teachers, as they were now better informed about students'
needs than before using the program. One teacher said: "The special education teacher in my
school said I was so knowledgeable about the students that it has helped the special education
teacher to identify students who have difficulties".
Nevertheless, although the teachers reported being strengthened in their role as teachers, they
can be considered to be in the initial phases of efficient assessment use, as they spoke of
potential rather than of a developed professional practice around its use, as shown in this
quote: "Assessments can be the basis for my own teaching, while it also can be used to
distribute school resources differently and to organize ourselves better, when we know how
different the classes are". As such, the teacher narratives revealed that they had not fully
developed teaching that corresponded to the individual needs of students, nor a practice that
enabled close teacher collaboration, but they expressed awareness of, and inspiration from, its
potential to improve teaching practices at their schools.
Discussion
A wealth of international research shows the effectiveness of monitoring students' reading
progress and matching instruction to student needs (Connor et al., 2009; Förster & Souvignier,
2015; Stecker et al., 2005). However, as research has pointed out, teachers are seldom
adequately equipped to use assessment data for instructional decisions (Hoogland et al., 2016).
In this study, Swedish teachers who followed the educational program LegiLexi were aided in the
process of making informed teaching decisions, covering both content (i.e., specific focus areas
for individual students) and form (employing differentiated teaching) that changed over the
course of the two assessment periods. The narratives from the focus-group interviews were
organized in three main themes. First, we discuss the first two themes, as they explicitly
relate to research questions 1 and 2, followed by the third theme, which relates to the third
research question.
Teaching practices
The first and the second research questions concerned how the teachers used student information
and how they used that information to change their teaching practice. An observable effect of
participating in the program was that the teachers' talk about assessment use reflected some of
the key features of response to intervention, including data-driven teaching traditions (Means
et al., 2011). Student learning is a central part of teaching within these frameworks, as long
as assessments are used for additional and continuous teaching with regard to future learning
goals. Although the teachers stated that they already worked in the spirit of forward-looking
teaching before entering the program, they said they had acquired more skills and showed more
awareness of how to align learning with instructional decisions, as both learning and teaching
had become visible to them.
The teachers held the belief that, in order to improve reading development for each student,
teaching needs to match learning needs and to be differentiated. The recurring assessments also
helped the teachers to evaluate their teaching efforts and informed them of students who
developed poorly over time. Based on the excerpts in Theme 2 (Changes in the organization of
teaching, but not regarding individualized content), the teachers viewed their teaching as
dynamic, that is, instruction changed over the course of the school year and became more
intense for some of the students. Teaching was also viewed as flexible, as the teachers said
they were using different and temporary constellations of students in learning activities and a
variety of teaching methods based on informed decisions. The LL-tool allowed for additional
adjustments in the form of specific and structured training sessions. Based on this, the
teaching could also have included individual student tasks to train reading skills,
supplementing the text reading in the reading groups. Reading lists at a suitable level are one
such task (Wendick, 2015); concept and vocabulary training is another. There are challenges
with achieving teaching that corresponds to the individual needs of students, and similar
results have been found in other studies (Schildkamp & Kuiper, 2010).
In terms of teacher development that promoted the ability to make informed decisions at the
individual level, it was difficult to realize all the teaching possibilities the tool allowed
for during the implementation of the program. However, grouping students by ability level is
very likely a first realistic adjustment when being supported by an assessment tool. It is
therefore plausible that teachers will manage to employ further individualization once they have
evaluated the tool and integrated it as a natural part of how teaching is organized in their
classrooms. Systematic student assessments seem to provide teachers with tools to evaluate
their practice continuously, including how to organize students into groups, as well as to
monitor student progress over the year.
The results point to the relative difficulty of achieving targeted instruction that corresponds
to a comprehensive reading profile, one that includes both content (specific focus areas) and
ability levels (profile and recommendations) across different students in a class. This
challenge is closely intertwined with the processes of the teachers' professional growth,
discussed below.
Professional growth
The third research question concerned how the teachers were affected by being supported by a
systematic assessment tool. Adapted teaching is seen as one of the most difficult professional
levels teachers can acquire (Van de Grift, 2007). Earlier studies show that this level of
professional competence takes years to develop, but can be reached much sooner if teachers are
supported in analyzing instructional behavior and in using data feedback (Van den Hurk et al.,
2016). This study showed that the participating teachers had started to make conscious
considerations about how to improve student reading development through the use of student data
and to follow the recommendations that stemmed from the assessment tool. Other studies of
teachers' professional growth point, among other things, to the importance of teacher
collaboration in building an efficient practice for assessment use in instruction (Blanc et al.,
2010; Lai & McNaughton, 2016). Teacher collaboration means that teachers work together to
analyze data and to create solutions for classroom instruction. The study by Blanc et al.
(2010) showed that teachers' skill building is shaped in their communities and, if given proper
support by school leaders, can lead to greater use of assessment data. Our results indicated
that the teachers in this study were in the early stages of such changes. The teachers
expressed the benefits of planning learning activities and allocating school resources through
teacher collaboration and assessment use in their respective schools.
The LL-program and the teaching material were not designed as a protocol or manual for the
teachers to follow; rather, the materials and suggested instructions were available for the
teachers to choose from between the assessment periods. Given the relatively challenging task
of implementing the assessments, providing additional targeted efforts might require further
experience of working with assessments. Future studies of this program could reveal more
positive results once teachers have evaluated the possibilities of the tool in relation to their
professional competence. Based on the results of this study, future research should focus on
how to further improve teachers' use of assessments for instruction, with a focus on promoting
teacher collaboration, so that teachers are better able to align instruction and to explore all
the possibilities of the tool regarding individualized instruction. This should also include
studying school and classroom organization at large, in order to identify factors that may
hinder teachers from making structured, level-based adjustments at the individual level.
Conclusions
This study demonstrated that the teachers valued being informed about students' ability levels
and progress. It also demonstrated that detailed assessments and teaching recommendations can
be used to organize teaching in accordance with individual learning needs. This raises hopes of
better meeting the learning needs of students in the early phases of learning how to read, as
well as of more advanced readers, during primary school. The teachers in this study mostly used
the assessments and recommendations for creating reading groups of students, and the creation
of these groups seemed to be based on informed decisions, utilizing the LL-tool. However, the
LL-tool allowed for even more targeted efforts in addition to reading groups, taking into
account specific focus areas for students, of which this study did not find evidence.
Therefore, enabling teachers to carry out even more directed and targeted teaching efforts
could be the next step after implementing a program like LL. This includes teacher
collaboration in order to utilize all the possibilities of the tool, that is, the link between a
variety of student assessments in class and corresponding teaching efforts, as well as studying
and avoiding possible constraints or hindering factors in the school and classroom environment.
Limitations
The knowledge gained from this study is based on focus-group interviews. Although we are
convinced that the teachers made honest contributions to the study, we did not have the
opportunity to observe and monitor the teachers' everyday work in an applied setting. As the
interviews were conducted at a single time point, we could not study possible changes in
teachers' skills and beliefs, as in a pre-post design, and this limits our understanding of how
teachers gradually develop the skills needed for effective assessment use. In addition,
although steps were taken to ensure that the teacher self-reports represented actual behavior,
it is still challenging to draw reliable conclusions about what the teachers did and did not do
during the semester.
Two different procedures were used to record data during the focus-group interviews: video and
audio, or audio only. However, the video recordings were only used to help clarify the content
of what was being said, not to study group interactions or actions. Our impression from the
interview situation was that the teachers were not particularly affected by being recorded on
video or audio; they seemed to express their opinions openly and freely. When the data were
analyzed, the video recordings did not contribute much useful information for this study, so
the difference in recording procedures was not important.
Not all invited teachers volunteered to participate, and there were some dropouts before the
interviews. We did not have details about the circumstances that prevented some teachers from
participating in the study, beyond reports of heavy workload or illness at the time of the
study. It is likely that the participating teachers were, on average, more motivated users than
those who did not participate. This does not mean that they were more positive, but rather that
they might have felt that they had something to say about the practical work with the tool.
Furthermore, while it was evident that the participating teachers displayed professional
awareness in using the LL-tool (e.g., they tried their best to use the tool to improve the
quality of teaching), we did not have access to details of how experienced and skilled these
teachers actually were, beyond how many years they had worked as teachers. Less experienced
teachers could have had a different experience of using the tool in an applied setting. They
could, for example, have found the tool even more useful than was evident here, but they could
also have found it complex to use if teaching was new to them, being unable to benefit from the
rich feedback provided. More experienced, and potentially more skilled, teachers could also
have experienced the benefits of the tool differently than the participating ones. However, the
somewhat small sample still represented some variation and provided an understanding of how
teachers can use student information, as well as insight into the challenges of implementing a
project like LegiLexi.
Qualitative data as presented here do not tell us anything about actual improvements in
students' reading skills. A previous study (Gustafson, Nordström, Andersson, Fälth, & Ingvar,
2019) revealed that Grade 1 students benefited from having teachers who participated in the
program, which demonstrated that paying attention to students' reading development, and acting
accordingly, can impact students' reading development positively. The present study showed that
the participating teachers took these considerations seriously; in particular, the program
affected their awareness of how students develop their reading skills during the semester, and
they made efforts to employ teaching that corresponded to these skill levels.
References
Algozzine, B., & Anderson, K. M. (2007). Tips for teaching: Differentiating instruction to
include all students. Preventing School Failure, 51(3), 49–54.
Andersson, U. B., Löfgren, H., & Gustafson, S. (2019). Forward-looking assessments that
support students' learning: A comparative analysis of two approaches. Studies in
Educational Evaluation, 60, 109–116.
Antoniou, F., & Souvignier, E. (2007). Strategy instruction in reading comprehension: An
intervention study for students with learning disabilities. Learning Disabilities: A
Contemporary Journal, 5(1), 41–57.
Black, P. (2015). Formative assessment: An optimistic but incomplete vision. Assessment in
Education: Principles, Policy & Practice, 22(February), 37–41.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in
Education: Principles, Policy & Practice, 5(1), 7–74.
Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010).
Learning to learn from data: Benchmarks and instructional communities. Peabody Journal
of Education, 85(2), 205–225.
van den Bosch, R. M., Espin, C. A., Chung, S., & Saab, N. (2017). Data-based decision-making:
Teachers' comprehension of curriculum-based measurement progress-monitoring graphs.
Learning Disabilities Research and Practice, 32(1), 46–60.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research
in Psychology, 3(2), 77–101.
Breen, R. L. (2006). A practical guide to focus-group research. Journal of Geography in
Higher Education, 30(3), 463–475.
Connor, C. M., Piasta, S. B., Fishman, B., Glasney, S., Schatschneider, C., Crowe, E., et al.
(2009). Individualizing student instruction precisely: Effects of child by instruction
interactions on first graders’ literacy development. Child Development, 80(1), 77.
Creswell, J. W., & Miller, D. (2000). Determining validity in qualitative inquiry. Theory
Into Practice, 39(3), 125–130.
Creswell, J. W., & Poth, C. N. (2017). Qualitative inquiry and research design: Choosing
among five approaches. Sage publications.
Cunningham, A. E., & O'Donnell, C. R. (2015). Teachers' knowledge about beginning
reading development and instruction. In A. Pollatsek & R. Treiman (Eds.), The Oxford
handbook of reading. Oxford University Press.
Davidson, K. L., & Frohbieter, G. (2011). District adoption and implementation of interim
and benchmark assessments. CRESST report 806. National Center for Research on
Evaluation, Standards, and Student Testing (CRESST).
Dunn, K. E., Airola, D. T., Lo, W. J., & Garrison, M. (2013). What teachers think about what
they can do with data: Development and validation of the data-driven decision-making
efficacy and anxiety inventory. Contemporary Educational Psychology, 38(1), 87–98.
Fälth, L., Gustafson, S., Kugelberg, E., & Nordström, T. (2017). LegiLexis formativa
bedömningsverktyg – Testmanual [LegiLexi's formative assessment tool: Test manual].
Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small‐group
instruction promote reading success in all children. Learning Disabilities Research and
Practice, 16(4), 203–212.
Förster, N., & Souvignier, E. (2015). Effects of providing teachers with information about
their students' reading progress. School Psychology Review, 44(1), 60–75.
Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., et
al. (2008). Assisting students struggling with reading: Response to intervention and multi-
tier intervention for reading in the primary grades. A practice guide (NCEE 2009-4045).
Washington, DC: National Center for Education Evaluation and Regional Assistance,
Institute of Education Sciences, US Department of Education. Retrieved from
http://ies.ed.gov/ncee/wwc/publications/practiceguides.
Van de Grift, W. (2007). Quality of teaching in four European countries: A review of the
literature and application of an assessment instrument. Educational Research, 49(2), 127–
152.
Grigorenko, E. L. (2008). Dynamic assessment and response to intervention: Two sides of one
coin. Journal of Learning Disabilities, 42(2), 111–132.
Grosche, M., & Volpe, R. J. (2013). Response-to-intervention (RTI) as a model to facilitate
inclusion for students with learning and behaviour problems. European Journal of Special
Needs Education, 28(3), 254–269.
Gustafson, S., Nordström, T., Andersson, U. B., Fälth, L., & Ingvar, M. (2019). Effects of a
formative assessment system on early reading development. Education.
Guthrie, J. T., Wigfield, A., Barbosa, P., Perencevich, K. C., Taboada, A., Davis, M. H., et al.
(2004). Increasing reading comprehension and engagement through concept-oriented
reading instruction. Journal of Educational Psychology, 96(3), 403.
Hatcher, P. J., Hulme, C., & Ellis, A. W. (1994). Ameliorating early reading failure by
integrating the teaching of reading and phonological skills: The phonological linkage
hypothesis. Child Development, 65(1), 41–57.
Hoogland, I., Schildkamp, K., van der Kleij, F., Heitink, M., Kippers, W., Veldkamp, B., et
al. (2016). Prerequisites for data-based decision making in the classroom: Research
evidence and practical illustrations. Teaching and Teacher Education, 60, 377–386.
Van den Hurk, H. T. G., Houtveen, A. A. M., & Van de Grift, W. J. C. M. (2016). Fostering
effective teaching behavior through the use of data-feedback. Teaching and Teacher
Education, 60, 444–451.
Krueger, R. (2000). Focus groups: A practical guide for applied research. London: Sage.
Lai, M. K., & McNaughton, S. (2016). The impact of data use professional development on
student achievement. Teaching and Teacher Education, 60, 434–443.
Lam, E. A., & McMaster, K. L. (2014). Predictors of responsiveness to early literacy
intervention: A 10-year update. Learning Disability Quarterly, 37(3), 134–147.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Means, B., Chen, E., Debarger, A., & Padilla, C. (2011). Teachers’ ability to use data to
inform instruction: Challenges and supports. Office of Planning, Evaluation and Policy
Development, US Department of Education, 1–122.
OECD. PISA 2009 results: Learning trends: Changes in student performance since 2000
(Vol. 5). OECD Publishing.
Organisation for Economic Co-operation and Development. (2013). PISA 2012 results in
focus: What 15-year-olds know and what they can do with what they know. Paris, France:
Author.
Van der Scheer, E. A., & Visscher, A. J. (2016). Effects of an intensive data-based decision
making intervention on teacher efficacy. Teaching and Teacher Education, 60, 34–43.
Schildkamp, K., & Kuiper, W. (2010). Data-informed curriculum reform: Which data, what
purposes, and promoting and hindering factors. Teaching and Teacher Education, 26(3),
482–496.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to
improve student achievement: Review of research. Psychology in the Schools, 42(8),
795–819.
Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. K., &
Conway, T. (2001). Intensive remedial instruction for children with severe reading
disabilities: Immediate and long-term outcomes from two instructional approaches.
Journal of Learning Disabilities, 34(1), 33–58.
Tran, L., Sanchez, T., Arellano, B., & Lee Swanson, H. (2011). A meta-analysis of the RTI
literature for children at risk for reading disabilities. Journal of Learning Disabilities, 44,
283–295.
Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to
instruction: The promise and potential problems. Learning Disabilities Research and
Practice, 18(3), 137–146.
Vescio, V., Ross, D., & Adams, A. (2008). A review of research on the impact of professional
learning communities on teaching practice and student learning. Teaching and Teacher
Education, 24(1), 80–91.
Wayman, M. M., Wallace, T., Wiley, H. I., Tichá, R., & Espin, C. A. (2007). Literature
synthesis on curriculum-based measurement in reading. The Journal of Special Education,
41(2), 85–120.
Wendick, G. (2015). Wendickmodellen: Intensivläsning [The Wendick model: Intensive
reading].
Wolff, U. (2016). Effects of a randomized reading intervention study aimed at 9-Year-olds: A
5-Year follow-up. Dyslexia, 22(2), 85–100.
Zeuch, N., Förster, N., & Souvignier, E. (2017). Assessing teachers’ competencies to read and
interpret graphs from learning progress assessment: Results from tests and interviews.
Learning Disabilities Research and Practice, 32(1), 61–70.
Table 1 Details of participating teachers

Teacher   Grade   Years in profession   Number of students in class   Age
Interview 1
A         1–3     10                    39                            58
B         1       17                    23                            59
C         1–2     10                    22                            N/A
D         3       2                     20                            32
E         1       22                    12                            47
Interview 2
F         2       12                    25                            56
G         2       12                    26                            38
H         1       13                    28                            56
Table 2 Extracted themes with distinguished teacher quotes (grade level within parentheses)

Theme 1: Awareness of student learning

Teacher #H (Grade 1). [By working with assessments] I am strengthening both those students
that are skilled, who already has a developed language, and those who need to practice more.
They have received an educational form that provides improvements for the students.

Teacher #G (Grade 2). It was perhaps not so very surprising that I had a few students there at
the beginner level. I had already identified them. But the tests cover a relatively large
portion of reading skills. It confirms what you are sensing of the students' reading
development.

Teacher #H (Grade 1). When I examined the test results for the class, I was able to find two or
three students with not so much progress. I saw that one of the students rather backed than
stood still. It made a bell ring - now we need to get started.

Teacher #A (Grade 1-3). It has been truly informing to have access to the target levels between
test occasions, especially as I have students with difficulties learning how to read. After the
second assessment, it turned out that they have improved their scores on vocabulary and
language comprehension. Then you can catch your breath and know that the students are underway.

Teacher #H (Grade 1). It was exciting to see that there were so many competent students in the
class. They should also be allowed to develop by adapted teaching and challenges.

Teacher #B (Grade 1). The advanced readers become motivated to learn, that they can get to the
next level by working harder and to increase their efforts, so that they can reach the target
level for Grade 3 early on.

Theme 2: Changes in the organization of teaching, but not regarding individualized content

Teacher #H (Grade 1). Sometimes I put really advanced readers to read together with students
who are not advanced readers. They will then take part in a different kind of reading than had
the group only consisted of students who struggle with text.

Teacher #G (Grade 2). In part based on the recommendations, we use skill level groupings. For
students having trouble with silent reading (because they usually just browse through the
books), we now have structured sessions of oral reading, so that students actually read during
sessions.

Teacher #H (Grade 1). We created a group of nine students who had the same book. They got to
read in groups of three students. Then they went together and discussed the book. It worked
absolutely fantastic.

Theme 3: Strengthened teacher role, but modest professional growth

Teacher #B (Grade 1). Using the recurrent assessments enhances my profession when I tell the
parents about their children's reading development. It gives the parents a sense of security.
They witness progression, even if they are worried.

Teacher #F (Grade 1-2). The special education teacher in my school said I was so knowledgeable
about the students that it has helped the special education teacher to identify students who
have difficulties.

Teacher #C (Grade 1-2). It can be interesting to learn from colleagues with better results at
school year-end. What are the reasons behind those results? What can I get out of it?

Teacher #C (Grade 1-2). Assessments can be the basis for my own teaching, while it also can be
used to distribute school resources differently and to organize ourselves better, when we know
how different the classes are.
... The assessment process is another point to be considered in the teaching-learning process. The teacher has a wide range of chances to carry out various assessment instruments in the form of either formative or summative assessment (Nordström et al., 2019). Several ongoing assessments can also be carried out more easily in face-to-face instruction (Rawas et.al, 2019). ...
Article
Full-text available
Situated in a disadvantaged condition in Lombok, Indonesia, the present study looks at the enactment of an emergency EFL course after earthquake and aftershock circumstances in a public university in the region. For such a purpose, forty-two non-English department students who attended the course in four face-to-face and nine asynchronous meetings were recruited using a convenience sampling technique. A set of questionnaire was disseminated to document participants’ responses on the course implementation. Observation and semi-structured interviews were also conducted to portray the pedagogical praxis. The findings suggest that the course delivery did not utterly reflect an effective teaching-learning process accordingly due to various factors. Barriers to using the WhatsApp tool also existed. Interestingly, the students positively reflected the course as the best way to learn in a disadvantaged condition. However, they were not confident with their attainment in English skills and components. Further considerations on how to design materials and assessment instruments and build a decent interaction are needed in learning under such disadvantaged condition. HIGHLIGHTS • The findings evince that the course delivery did not utterly reflect an effective teaching-learning process accordingly due to various factors. Barriers to using the WhatsApp tool also existed. • The students positively reflected the course as the best way to learn in an emergency learning milieu. The teacher appeared to be able to maintain students’ motivation. • Further considerations on how to design materials and assessment instruments and build a decent interaction are needed in learning under such disadvantaged conditions.
Article
Anxiety is one of the most prevalent mental health problems; it is known to impede cognitive functioning. It is believed to alter preferences for feedback-based learning in anxious and non-anxious learners. Thus, the present study measured feedback processing in adults (N = 30) with and without anxiety symptoms using a probabilistic learning task. Event-related potential (ERP) measures were used to assess how the bias for either positive or negative feedback learning is reflected by the feedback-related negativity component (FRN), an ERP extracted from the electroencephalogram. Anxious individuals, identified by means of the Penn State Worry Questionnaire, showed a diminished FRN and increased accuracy after negative compared to positive feedback. Non-anxious individuals exhibited the reversed pattern with better learning from positive feedback, highlighting their preference for positive feedback. Our ERP results imply that impairments with feedback-based learning in anxious individuals are due to alterations in the mesolimbic dopaminergic system. Our finding that anxious individuals seem to favor negative as opposed to positive feedback has important implications for teacher–student feedback communication.
Article
The present paper reports on a 5-year follow-up of a randomized reading intervention in grade 3 in Sweden. An intervention group (n = 57) received daily training for 12 weeks in phoneme/grapheme mapping, reading comprehension and reading speed, whereas a control group (n = 55) participated in ordinary classroom activities. The main aim was to investigate if there were remaining effects of the intervention on reading-related skills. Previous analyses showed that the intervention group performed significantly better than the control group on spelling, reading speed, reading comprehension and phoneme awareness at the immediate post-test, with sustained effects 1 year later. Results from the 5-year follow-up show that the only significant difference between the intervention (n = 47) and the control group (n = 37) was on word decoding. There was also a significant interaction effect of group assignment and initial word decoding, such that the lowest-performing students benefitted the most from the intervention. Another aim was to examine if the children identified in a screening (n = 2212) as poor readers in grade 2 still performed worse than typical readers. The analyses showed that the typically developing students (n = 66) outperformed the students identified as poor readers in grade 2 on working memory, spelling, reading comprehension and word decoding.
Article
In this theoretical paper we describe and compare two forward-looking approaches to assessment, Formative assessment (FA) and Response to intervention (RTI). The purpose is to provide researchers and practitioners with insights that enable more informed decisions regarding forward-looking assessments that support students’ learning. In the descriptions of FA and RTI a number of key aspects were identified. For FA: classroom context, students’ involvement, feedback to students, qualitative focus; and for RTI: systematic organisation, data-based decision making, evidence-based instruction and standardized tests. In a subsequent analysis we use a Venn diagram to highlight the unique key aspects of FA and RTI and common key aspects termed: forward-looking assessment, goals and criteria, and assessment competence. Finally, we argue for a pragmatic view on assessments, that a wide scope of adjustments needs to be considered and that teachers need a comprehensive knowledge about assessment practices and to carefully follow students’ progress.
Teachers have difficulty using data from Curriculum-based Measurement (CBM) progress graphs of students with learning difficulties for instructional decision-making. As a first step in unraveling those difficulties, we studied teachers’ comprehension of CBM graphs. Using think-aloud methodology, we examined 23 teachers’ ability to read, interpret, and link CBM data to instruction for fictitious graphs and their own students’ graphs. Additionally, we examined whether graph literacy—measured with a self-report question and graph-reading skills test—affected graph comprehension. To provide a framework for understanding teachers’ graph comprehension, we also collected data from “gold-standard” experts. Results revealed that teachers were reasonably proficient at reading the data, but had more difficulty with interpreting and linking the data to instruction. Graph literacy was related to some but not all aspects of teachers’ CBM graph-comprehension ability. Implications for training teachers to comprehend and use CBM progress data for decision-making are discussed. © 2017 The Division for Learning Disabilities of the Council for Exceptional Children
Learning progress assessment (LPA) provides formative information about the effectiveness of instructional decisions. Learning curves are usually presented as graphical illustrations. However, little is known about teachers’ understanding and interpretation of graphically presented information. An instrument to measure competencies in reading graphs from learning progress assessment (LPA-test) was developed. One hundred and twenty-four student teachers and 36 teachers completed the LPA-test and a second test that assessed reading graphical information from diagrams. In addition, interviews with 10 teachers provided information about thinking processes while dealing with graphs on learning progress information. Technical adequacy of the LPA-test proved to be satisfactory. Results on the LPA-test underline the capacity that most teachers generally have when reading learning graphs. Interviews reveal that teachers skip a thorough description of the graphical illustration, concentrate on marginal details, and tend to immediately translate students’ achievement into school grades without structuring the information. © 2017 The Division for Learning Disabilities of the Council for Exceptional Children
Article
Research into the effects of interventions on teacher efficacy is scarce. In this study, the long-term effects of an intensive data-based decision-making intervention on the teacher efficacy of mainly grade 4 teachers were investigated by means of a delayed-treatment control-group design (62 teachers). The findings showed significant, strong intervention effects on teachers' efficacy for instructional strategies and student engagement in both treatment groups. No significant effects were found for teacher efficacy regarding classroom management. Improved teacher efficacy in the first treatment group persisted throughout the second school year. Suggestions for future research are presented.
Article
This paper describes the data use professional development (PD) component of a whole-school intervention that has been replicated in 53 schools over eight years. Quasi-experimental designs were used to test for intervention impact. The intervention improved achievement in reading comprehension, writing and high school qualifications. Effect sizes were generally higher than international comparisons. The data use PD involved collaboratively analyzing data to determine the achievement problems; identifying and testing the causes of the problems using theory evaluation principles; and co-creating solutions. The relative contribution of the data use PD to the intervention and the importance of content knowledge are discussed.
Article
Data-based decision making can lead to increased student learning. The desired effects of increased student learning can only be realized if data-based decision making is implemented successfully. Therefore, a systematic literature review was conducted to identify prerequisites of such successful implementation. Furthermore, focus group meetings were conducted with experts and practitioners to verify and illustrate the findings from the review. Several prerequisites of successful data use in the classroom that are supported by a substantial evidence base were identified, including teacher collaboration around the use of data, data literacy, and leadership.
Article
In this study, data feedback in a cyclic model of data-driven teaching was used to enhance the teaching behavior of students registered in a master's course for teachers. Differences between pre- and post-test measures in a simple one-group pre-test/post-test design proved to be significant, with effect sizes ranging from d = 0.29 to d = 0.76. Improving teaching behavior in a time span of only six weeks on average is remarkable, since earlier studies indicated that it takes over 15 years to master complex teaching skills, with a 'natural development' of teaching skills of about 25% of a standard deviation.