Frontiers in Education 01 frontiersin.org
Pre-service teachers evaluating
online texts about learning styles:
there is room for improvement in
justifying the credibility
Pirjo Kulju1*, Elina K. Hämäläinen2, Marita Mäkinen1, Eija Räikkönen3 and Carita Kiili1
1 Faculty of Education and Culture, Tampere University, Tampere, Finland, 2 Department of Teacher
Education, University of Jyväskylä, Jyväskylä, Finland, 3 Faculty of Education and Psychology,
University of Jyväskylä, Jyväskylä, Finland
Teachers’ abilities to critically evaluate the credibility of online information are
fundamental when they educate critical online readers. This study examined pre-
service teachers’ abilities to evaluate and justify the credibility of online texts on
learning styles. Pre-service teachers (N = 169) read and evaluated two more and
two less credible online texts on learning styles in a web-based environment.
Most pre-service teachers were able to differentiate the more credible texts from
the less credible ones but struggled with justifying the credibility. Pre-service
teachers’ inaccurate prior beliefs about learning styles impeded questioning the
less credible texts. Implications for teacher education are discussed.
KEYWORDS
credibility evaluation, online reading, sourcing, critical reading, misinformation,
pre-service teachers, teacher education
1 Introduction
Critical online reading skills, which involve analyzing, evaluating, and interpreting online
texts that provide conflicting information, are crucial for teachers. First, teachers often use the
internet to build professional knowledge (Andreassen and Bråten, 2013; Bougatzeli et al., 2017)
to understand new demands for learning, develop their teaching practices, and solve
pedagogical problems (Zimmermann et al., 2022). However, the prevalence of misinformation
on the internet is a major challenge (Ecker et al., 2022; Lewandowsky et al., 2017), and the
educational field is not immune to this phenomenon (Sinatra and Jacobson, 2019).
Consequently, teachers must be able to evaluate the credibility of information when perusing
educational topics on the internet. Lack of sufficient credibility evaluation skills may lead them
to rely on unverified information instead of basing their classroom practices on evidence-based information (Dekker et al., 2012; List et al., 2022; Zimmerman and Mayweg-Paus, 2021).
Second, teachers are responsible for educating critical online readers (e.g., ACARA, 2014;
NCC, 2016) and preparing their students to evaluate conflicting information they may
encounter on the internet. However, previous research suggests that education systems have
not fully succeeded in this task, as many students evaluate online information superficially
(Coiro et al., 2015; Fraillon et al., 2020; Hämäläinen et al., 2020). Since primary and lower
secondary school students, in particular, require explicit instruction to learn how to justify
their evaluations (Abel et al., 2024), teachers must be able to model various ways to justify
credibility and scaffold students toward in-depth reasoning.
However, little research has been conducted on pre-service teachers' ability to evaluate the
credibility of online texts or, more specifically, of online educational texts. Rather, the focus has
OPEN ACCESS
EDITED BY
Noela Rodriguez Losada,
University of Malaga, Spain
REVIEWED BY
Keiichi Kobayashi,
Shizuoka University, Japan
Florian Schmidt-Borcherding,
University of Bremen, Germany
*CORRESPONDENCE
Pirjo Kulju
pirjo.kulju@tuni.fi
RECEIVED 18 June 2024
ACCEPTED 07 October 2024
PUBLISHED 05 November 2024
CITATION
Kulju P, Hämäläinen EK, Mäkinen M,
Räikkönen E and Kiili C (2024) Pre-service
teachers evaluating online texts about
learning styles: there is room for
improvement in justifying the credibility.
Front. Educ. 9:1451002.
doi: 10.3389/feduc.2024.1451002
COPYRIGHT
© 2024 Kulju, Hämäläinen, Mäkinen,
Räikkönen and Kiili. This is an open-access
article distributed under the terms of the
Creative Commons Attribution License
(CC BY). The use, distribution or reproduction
in other forums is permitted, provided the
original author(s) and the copyright owner(s)
are credited and that the original publication
in this journal is cited, in accordance with
accepted academic practice. No use,
distribution or reproduction is permitted
which does not comply with these terms.
TYPE Original Research
been on higher education students in general (e.g., Barzilai et al.,
2020b; Kammerer et al., 2021; see also Anmarkrud et al., 2021). Our
study seeks to fill this important gap by examining pre-service
teachers' credibility evaluations and their underlying reasoning when
reading more and less credible online texts on learning styles—the
existence of which current scientific knowledge does not support
(Sinatra and Jacobson, 2019).
2 Theoretical framework
This study is situated in the multiple-document reading context,
whose requirements are depicted in the Documents Model framework
(Britt et al., 1999; Perfetti et al., 1999). Building a coherent mental
representation from multiple texts that complement or contradict each
other requires readers to consider and evaluate the source information
(e.g., the expertise and intentions of the source); connect the source
information to its related content; and compare, contrast, and weigh
the views of the sources.
As this study focuses on the credibility evaluation of multiple
online texts, it was further guided by the bidirectional model of first- and
second-hand evaluation strategies by Barzilai et al. (2020b). According
to this model, readers can use first-hand evaluation strategies to judge
the validity and quality of information and second-hand evaluation
strategies to judge the source's trustworthiness (see also Stadtler and
Bromme, 2014). First-hand evaluation strategies consist of knowledge-based validation, discourse-based validation, and corroboration.
When employing knowledge-based validation, readers evaluate the
quality of information by comparing it to their prior knowledge or
beliefs about the topic (see Section 2.1 for more details). In discourse-based validation, readers focus on how knowledge is justified and
communicated. A crucial aspect to consider is the quality of evidence
and how it is produced (e.g., whether the evidence is based on research
or personal experience; Chinn et al., 2014; Nussbaum, 2020).
Importantly, readers can also evaluate information quality through
corroboration, which involves using other texts to verify the
information's accuracy.
When employing second-hand evaluation strategies, readers
engage in sourcing, defined as attending to, evaluating, and
using the available information about a source (Bråten et al., 2018).
When evaluating the trustworthiness of a source, readers can consider
whether the author has the expertise to provide accurate information
on the topic in question (Bråten et al., 2018; Stadtler and Bromme,
2014). It is also essential to evaluate the author's benevolence, that is,
the willingness to provide accurate information in the readers' best
interest (Hendriks et al., 2015; Stadtler and Bromme, 2014). In
addition to the author's characteristics, readers can pay attention to
the publication venue—for example, whether it serves as a gatekeeper
and monitors the accuracy of the information published on the
website (Braasch et al., 2013). While the bidirectional model of
first- and second-hand evaluation strategies separates these two sets of
strategies, it also accentuates their reciprocity (Barzilai et al.,
2020b). In essence, the evaluation of the validity and quality of
information is reflected in the evaluation of the trustworthiness of a
source, and vice versa.
Following this theoretical framing, we created a multiple-text
reading task comprising four online texts on learning styles. In these
texts, we manipulated the expertise and benevolence of the source, the
quality of evidence, and the publication venue. As we aimed
to understand how pre-service teachers employ first- and second-hand
evaluation strategies, we asked them to evaluate the expertise of the
author, the benevolence of the author, the publication practices of the
venue, and the quality of evidence; they were also required to justify
their evaluations. We also asked pre-service teachers to rank the four
online texts according to their credibility.
2.1 Prior topic beliefs in credibility
evaluation
Several cognitive, social, and affective processes may affect
people's acquisition of inaccurate information (Ecker et al., 2022).
Prior topic beliefs are one such cognitive factor, whose function has
been explained in the two-step model of validation in multiple-text
comprehension by Richter and Maier (2017). According to this
model, in the first validation step, belief
consistency is used as a heuristic when processing information across
multiple texts, including the evaluation of information (see also
Metzger and Flanagin, 2013). Relying on heuristic processing leads
readers to judge belief-consistent information as more plausible,
process it more deeply, and comprehend
belief-consistent texts better. If readers identify an inconsistency
between their prior beliefs and the information in a text and are motivated
and capable of resolving the conflict, they may engage in strategic
processing to resolve this inconsistency. This strategic processing,
which is the second step in the validation process, may engender a
balanced mental model that incorporates the different views of
the texts.
Anmarkrud et al. (2021), in their review of individual differences
in sourcing, concluded that approximately half of the studies
investigating belief constructs provided empirical support for
relationships between sourcing and beliefs. For instance, Tarchi (2019)
found that university students' (N = 289) prior beliefs about vaccines
correlated with trustworthiness judgments of five of the six online texts,
with the direction of the association depending on each text's stance toward vaccination.
Positive prior beliefs about vaccines were positively associated with
trustworthiness judgments of texts that had a neutral or positive stance toward vaccines and
negatively with those of texts that had a negative stance toward vaccines.
Similarly, van Strien et al. (2016) found that the stronger university
students' (N = 79) prior attitudes about organic food were, the lower they
judged the credibility of attitude-inconsistent websites.
3 Previous research on pre-service
teachers’ credibility evaluation
Despite the importance of the credibility evaluation of online texts
in teachers' professional lives, only a few studies have examined
pre-service teachers' credibility evaluation. In Anmarkrud et al.'s
(2021) review of 72 studies on individual differences in sourcing (i.e.,
attending to, evaluating, and using sources of information),
participants represented an educational program in only nine studies.
Research suggests that although pre-service teachers tend to judge
educational researchers as experts, they often judge educational
practitioners as more benevolent than researchers (Hendriks et al., 2021;
Merk and Rosman, 2021). However, Hendriks et al. (2021) showed
that epistemic aims may influence how pre-service teachers evaluate
different aspects of source trustworthiness. In their study,
pre-service teachers (N = 389) were asked to judge the trustworthiness
(i.e., expertise, integrity, and benevolence) of educational psychology
researchers and of teachers in two situations: when their epistemic aim
was to seek an explanation for a case in the school context or to obtain
practical advice for everyday school life. When the aim was to seek
theoretical explanations, researchers were evaluated as possessing
more expertise and integrity, but less benevolence, than teachers.
However, when the aim was to gain practical advice, teachers were
seen as possessing more competence, integrity, and benevolence
than researchers.
In contrast to the studies by Hendriks et al. (2021) and Merk and
Rosman (2021), which used researcher-designed inventories,
Zimmerman and Mayweg-Paus (2021) examined pre-service teachers'
credibility evaluations with an authentic online research task. They
asked pre-service teachers (N = 83) to imagine themselves as teachers
while searching for online information about mobile phone use in
class. Pre-service teachers conducted the search individually;
afterward, they were asked to either individually or collaboratively
justify their methods and rationale for selecting the most relevant
websites. Notably, only 34% of the selected websites were science-related (e.g., journal articles, scientific blogs). While collaborative
reflection on the selections yielded more elaborate reasoning than
individual reflection, the groups referred equally often to different
types of criteria, such as criteria related to information (e.g.,
scientificness, two-sided arguments), source (e.g., expertise,
benevolence), or media (e.g., layperson or expert forum). A rather
prominent finding was that sources and media were mentioned
relatively rarely when pre-service teachers justified their selections.
Furthermore, research suggests that pre-service teachers may
perceive anecdotal evidence as more useful or trustworthy than
research evidence when seeking information for professional decision-making (Ferguson and Bråten, 2022; Kiemer and Kollar, 2021; Menz
et al., 2021). However, when examining pre-service teachers' (N = 329)
profiles regarding evidence evaluation, Reuter and Leuchter (2023)
found that most pre-service teachers (81%) belonged to the profile in
which both strong and limited scientific evidence was rated higher
than anecdotal evidence. The rest of the pre-service teachers (19%)
belonged to the profile in which all three evidence types (strong
scientific evidence, limited scientific evidence, and anecdotal evidence) were
rated equally high. Reuter and Leuchter (2023) concluded that
pre-service teachers might value strong scientific evidence but do not
automatically evaluate anecdotal evidence as inappropriate. Finally,
List et al. (2022) examined the role of source and content bias in
pre-service teachers' (N = 143) evaluations of educational app reviews.
A total of four educational app reviews were manipulated in terms of
content bias (one-sided or two-sided content) as well as source bias
and commercial motivations (third-party review: objective or
sponsored; commercial website: with or without a teacher testimonial).
The authors found that, while pre-service teachers paid attention to
source bias in their app review ratings and purchase recommendations,
their considerations of source and content bias were not sufficiently
prevalent. For instance, they rated the commercial website with a
teacher testimonial and the objective third-party review site as
equally trustworthy.
In sum, pre-service teachers may lack the skills to conceptualize the
nuances of commercial motivations, and even if they value scientific
evidence, they do not necessarily rely on scientific sources when selecting
online information for practice. It also seems that evaluations of evidence and
of source trustworthiness depend on readers' epistemic aims.
4 Present study
The present study sought to understand pre-service teachers'
abilities to evaluate and justify the credibility of four researcher-designed online texts on learning styles, a topic about which both accurate (i.e.,
in line with current scientific knowledge) and inaccurate information
(i.e., not in line with current scientific knowledge) spreads online
(Dekker et al., 2012; McAfee and Hoffman, 2021; Sinatra and
Jacobson, 2019). The two more credible online texts (a popular science
news text and a researcher's blog) contained accurate information, while
the two less credible texts (a teacher's blog and a commercial text) contained
inaccurate information. While reading, pre-service teachers evaluated
four credibility aspects of each text (the author's expertise, the author's
benevolence, the venue's publication practices, and the quality of evidence),
justified their evaluations, and ranked the texts according to their
credibility. Before reading and evaluating the online texts, pre-service
teachers' prior beliefs about learning styles were measured.
We examined differences in how pre-service teachers
evaluated the author's expertise, the author's benevolence, the venue's
publication practices, and the quality of evidence across the four online
texts (RQ1). Furthermore, we investigated how pre-service teachers
justified their evaluations (RQ2) and ranked the texts according to
their credibility (RQ3). Finally, we examined the contribution of
pre-service teachers' prior beliefs about learning styles to the
credibility evaluation of the more and less credible online texts when the
reading order of the texts was controlled for (RQ4).
Figure 1 presents the hypothetical model of the associations
between pre-service teachers' prior beliefs about the topic and their
credibility evaluations. The structure of the credibility evaluation was
based on our previous study (Kiili et al., 2023), in which students'
credibility evaluations loaded onto four factors according to the online
texts. These four first-order factors further formed two second-order
factors: confirming the credibility of the more credible online texts and
questioning the credibility of the less credible online texts (see also Fendt
et al., 2023).
Based on theoretical considerations and empirical findings
(Barzilai et al., 2020b; Richter and Maier, 2017), we assumed that
pre-service teachers' prior beliefs about learning styles (i.e., beliefs in
accordance with inaccurate information about learning styles) would
be negatively associated with confirming and questioning the
credibility of the online texts. Finally, as the reading order of controversial
texts has been shown to influence reading comprehension (e.g., Maier
and Richter, 2013), the reading order of the texts was controlled for in
the analysis.
5 Method
5.1 Teacher education in the Finnish
context
Basic education in Finland comprises Grades 1–9 and is meant for
students between the ages of 7 and 16. In primary school (Grades
1–6), class teachers teach various subjects; in lower secondary school
(Grades 7–9), subject teachers are responsible for teaching their
respective subjects (Finnish National Agency for Education, 2022).
Finnish teacher education has an academic basis that presumes
pre-service teachers have the competence to apply research-based
knowledge in their daily work (Tirri, 2014; see, e.g., Niemi, 2012 for
more details on Finnish teacher education). Accordingly, both class
teachers and subject teachers must complete a master’s degree, which
requires a minimum of 300 ECTS credits (The European Credit Transfer and
Accumulation System; one ECTS credit equals 27 h of study).
Class teachers study education as their major, while subject
teachers have their own majors, such as mathematics or English. Both
degrees include teacher’s pedagogical studies (60 ECTS); moreover, the
class teacher degree includes multidisciplinary studies in subjects and
cross-curricular themes (60 ECTS). Subject teachers who supplement
their studies by completing these multidisciplinary studies will have
dual qualifications. The data collection of the present study was
integrated into these multidisciplinary studies.
5.2 Participants
This study was integrated into an obligatory literacy course
arranged at two Finnish universities. Of the 174 pre-service teachers
taking this course, 169 gave informed consent for a specific course
task to be used for research purposes. The ages of the participating pre-service
teachers varied from 19 to 43 years (M = 23.33; SD = 4.54). Most of the
participants were female (81.2%), which is in line with the gender
distribution in class teacher education. In 2021, 80.6% of the students
accepted to class teacher programs were female (Vipunen – Education
Statistics Finland, 2022). Due to the Finnish language requirements of
teacher education programs, all pre-service teachers were fluent
in Finnish.
The majority of the participants were enrolled in a class teacher
program (86.5%), whereas the rest (13.5%) were
enrolled in a subject teacher program targeting dual qualifications.
Most of the participants (65.9%) had earned 100 or fewer ECTS credits.
5.3 Credibility evaluation task
The credibility evaluation task was created using the Critical Online
Reading Research Environment, a web-based environment in
which researchers can design critical online reading tasks for students
(Kiili et al., 2023). We created four online texts on learning styles, each
of which had five paragraphs and approximately 200 words (see
Table 1). The two more credible online texts (Text 2: Researcher's blog and
Text 4: Popular science news text) contained accurate information about
learning styles, whereas the two less credible texts (Text 1: Teacher's blog
and Text 3: Commercial text) contained inaccurate information. Text
1 claimed that students have a specific learning style, whereas Text 2
claimed the opposite. Text 3 claimed that customizing teaching
according to students' learning styles improves learning outcomes, and
Text 4 claimed that such improvement had not been observed. In each
text, the main claim was presented in the second paragraph.
The more credible texts were based on research literature,
providing evidence against the existence of specific learning styles.
We wrote the researcher's blog by drawing on several articles
FIGURE 1. Hypothetical model of the associations between prior beliefs about the topic and credibility evaluations, with the reading order of the texts as a control variable.
(Howard-Jones, 2014; Kirschner, 2017; Krätzig and Arbuthnott, 2006;
Sinatra and Jacobson, 2019), whereas the popular science news text
summarized a review that examined the impact of aligning
instruction to modality-specific learning styles on learning outcomes
(Aslaksen and Lorås, 2018). These references were also listed in the
online texts that pre-service teachers read. While the texts were
fictitious, existing websites served as loose templates (cf. Hahnel et al.,
2020). To increase authenticity, a graphic designer created logos
for the web pages, and each web page included decorative photos (see
Figure 2).
We manipulated the credibility aspects of the texts in terms of the
author's expertise, the author's benevolence, the quality of evidence, the
publication venue, and the text genre (see Table 1). Pre-service teachers
were asked to read and evaluate each text in terms of the author's
expertise (Evaluate how much expertise the author has about learning
styles), the author's benevolence (Evaluate how willing the author is to
provide accurate information), the publishing practices of the venue
(Evaluate how well the publication venue can ensure that the website
includes accurate information), and the quality of evidence (Evaluate
how well the author can support his/her main claim) on a 6-point
scale (e.g., author's expertise: 1 = very little; 6 = very much). Before
evaluating the author's expertise, pre-service teachers were asked to
identify the author, and before evaluating the quality of evidence, they
were asked to identify the main claim and its supporting evidence. The
purpose of these identification items was to direct pre-service
teachers' attention to the aspects they were asked to evaluate. After the
evaluations, pre-service teachers were also asked to justify their
evaluations by responding to open-ended items. Their final task was
to order the texts according to their credibility. Pre-service teachers
were informed that the texts had been designed for the task; however,
they were asked to evaluate the online texts as if they were authentic.
Pre-service teachers were randomly assigned to read the texts in
two different reading orders (Reading Order 1 = Text 1, Text 2, Text 4,
and Text 3; Reading Order 2 = Text 2, Text 1, Text 3, and Text 4).
The reading orders were based on the text pairs. The main claims of the first
text pair (Texts 1 and 2) concerned the existence of specific learning
styles. The main claims of the second text pair (Texts 3 and 4)
concerned the impact of customizing teaching according to students'
learning styles. If pre-service teachers read the less credible text first
in the first text pair, they read the more credible text first in the second
pair; if they read the more credible text first in the first
text pair, they read the less credible text first in the second pair. Text
order was dummy coded for the analyses (0 = Reading Order 1,
1 = Reading Order 2).
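The counterbalancing and dummy coding described above can be sketched as follows. This is a minimal illustration of the assignment logic only, not the software used in the study; the function name and data structure are ours.

```python
import random

# Dummy coding matches the text: 0 = Reading Order 1, 1 = Reading Order 2.
READING_ORDERS = {
    0: ["Text 1", "Text 2", "Text 4", "Text 3"],  # Reading Order 1
    1: ["Text 2", "Text 1", "Text 3", "Text 4"],  # Reading Order 2
}

def assign_reading_order(rng: random.Random) -> tuple[int, list[str]]:
    """Randomly assign a participant to one reading order.

    Returns the dummy code (used as a control variable in the
    analyses) together with the sequence of texts to present.
    """
    code = rng.randint(0, 1)
    return code, READING_ORDERS[code]
```

Note how each order swaps which member of a text pair comes first: a participant who reads the less credible text first in one pair reads the more credible text first in the other.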
5.4 Prior topic beliefs measure
Prior to completing the evaluation task, the participants' prior
beliefs about learning styles were measured with three Likert-scale items
(from 1 = highly disagree to 7 = highly agree). Each item addressed a
common misconception about learning styles: Items 1 and 3 state
misconceptions, while Item 2 provides accurate information that,
if reversed, reflects a common misconception. The items were
as follows: (1) students can be classified into auditive, kinesthetic, and
visual learners; (2) teaching according to students' learning styles does
TABLE 1. The evaluated online texts.

Text 1. Learning styles bring color to your teaching (less credible)
Text genre: Teacher's blog
The text is written by a classroom teacher who is passionate about learning styles. She has 15 years of experience in teaching, and she has employed learning styles ever since hearing about them in her initial teacher education. She intends to share teaching tips on her personal blog. She claims that all students have their own characteristic learning style. She describes how she classifies her own students according to learning styles and gives illustrative examples of how learning styles are considered in her classroom practice. She uses her own experiences and observations in the classroom as evidence. The text is published on a public blog platform.

Text 2. Learning styles – nothing but a myth? (more credible)
Text genre: Researcher's blog
The text is written by a researcher whose research focuses on learning and learning processes. Her intention is to refute misinformation on learning styles. The author argues that students do not have a particular learning style and uses the results of several scientific studies as evidence. The text draws on empirical research and on the neurological basis of the brain. The author also explains that learners benefit from carefully designed learning materials that use different sensory channels, as long as the materials do not overload learners' working memory. The text is published on the Education Research Centre's website in the Researchers' blog section.

Text 3. Teacher, implement learning styles in your classroom (less credible)
Text genre: Commercial text
The text is written by a consultant who is a communications specialist. He has commercial intentions, as his purpose is to promote the teacher training courses offered by his company, which provides tailored educational courses. He claims that customizing teaching according to students' learning styles improves learning outcomes. The evidence is based on a customer survey with positive testimonials from teachers. The text is published on the company's website.

Text 4. Interpreting research on learning styles requires criticality (more credible)
Text genre: Popular science news text
The text is written by an associate professor of psychology specializing in personality psychology. His intention is to inform the public on scientific issues. He claims that teaching according to learning styles does not support learning. He uses a scientific review as evidence and describes the methods and results of the review in the text. The cited review was based on ten carefully selected intervention studies, and the statistical analysis supports the authors' claim. The text is published on the Science News website. Science News is a scientific newspaper that publishes summaries of interesting scientific articles. The paper has an editorial board consisting of experts from different fields.
not improve how well they acquire knowledge; and (3) students'
learning can be facilitated through learning materials that are
adjusted to their learning styles. Before computing a sum variable of
the items (maximum score: 21), we reversed the second item.
McDonald's omega for the total score was 0.71.
5.5 Procedure
The literacy course in which the data were collected focused on
pedagogical aspects of language, literacy, and literature, but not
particularly on the credibility evaluation of online texts. The data were
collected in two courses, one of which was taught by the first author.
Owing to the COVID-19 pandemic, the course was arranged online,
and the study was conducted during a 90-min online class using
Zoom. A week before the meeting, an information letter about the
study was sent to the pre-service teachers. The online meeting began with
a short introduction to the task, during which the students watched a
video tutorial on how to work in the task environment. Furthermore,
pre-service teachers were informed that the online texts had been designed
for study purposes but that they simulated the resources people
encounter on the internet.
After the introduction, pre-service teachers entered the online
task environment using a code. First, they filled in a questionnaire on
background information and indicated whether their responses could
be used for research purposes. Following this, they completed the
credibility evaluation task at their own pace. Once all pre-service
teachers had completed the task, they discussed their experiences of
the task. They also shared their ideas about teaching credibility
evaluation to their students.
5.6 Data analysis
5.6.1 Qualitative analysis
Each pre-service teacher responded to 16 justification items
(justifications for credibility evaluations) during the task, resulting in
2,704 justifications. We employed qualitative content analysis (Cohen
et al., 2018; Elo and Kyngäs, 2008; Weber, 1990) to examine the quality
of pre-service teachers' justifications for their credibility evaluations.
The unit of analysis was a justification together with the related
response to the identification item.
The qualitative data analysis was performed in two phases. In
Phase 1, four authors were involved in creating a scoring schema that
could be applied to all credibility aspects. We employed both deductive
and inductive analysis (Elo and Kyngäs, 2008). Previous knowledge of
sourcing (e.g., Anmarkrud et al., 2013; Hendriks et al., 2015) and of the
quality of evidence (Chinn et al., 2014; Nussbaum, 2020) served as a
lens for analyzing the written justifications. This phase was also
informed by the scoring schema developed for a similar credibility
evaluation task completed by upper secondary school students (Kiili
et al., 2022). Inductive analysis was also utilized: reading through
the data allowed us to become immersed in it and to consider the
relevance and depth of the justifications for each credibility aspect in
relation to the respective text. Thus, we read through the data,
formulated the initial scoring schema, and tested it; following this,
FIGURE 2. Screenshot from the task environment. The text is on the left-hand side, and the questions are on the right-hand side.
we discussed the scoring schema and revised and modified it based
on the discussions.
The final scoring schema included four levels (Levels 0–3)
depicting differences in the quality of pre-service teachers'
justifications (Table 2). The lowest level (Level 0) represented
inadequate justification. A response was regarded as inadequate in
two cases: first, when pre-service teachers referred to the wrong author,
venue, or evidence; second, when they did not provide any relevant source
information that would reveal the author's area of expertise, nor
present relevant information about the author's intentions, the
venue's publication practices, or the quality of evidence.
At the remaining levels, relevant reasoning, whether strengthening
or weakening the credibility, was considered in determining the level
of the justification. For example, pre-service teachers considered the
domain of personality psychology (associate professor, popular science
news text) as either strengthening (learning styles could be considered
part of personality psychology; see Table 3 for an example) or weakening the
credibility (his research focuses on personality, not learning). At Level
1, justifications were limited, including one piece of information on
the credibility aspect in question: accurate information that specified
the author's key expertise, the author's intention, the venue's publication
practices, or the quality of evidence. Level 2 represented elaborated
justification, in which pre-service teachers provided two or more
accurate pieces of information or elaborated on one piece of
information. Finally, Level 3 represented advanced justification,
namely justifications that included scientific reasoning or additionally
considered, for example, the domain of the author's expertise in
relation to the text topic, multiple perspectives, or counterarguments.
Authentic examples representing the quality levels of the justifications
are presented in Table 2.
In Phase 2, we applied the scoring schema to the whole data set. To
examine the inter-rater reliability of the scoring, two of the authors
independently scored 10.7% of the responses (i.e., 289
responses). The kappa values for expertise, benevolence, publication
venue, and evidence varied from 0.72 to 0.84, 0.68 to 0.84, 0.65 to 1.00,
and 0.66 to 0.86, respectively.
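For readers less familiar with the statistic, Cohen's kappa corrects raw rater agreement for agreement expected by chance. The following is a minimal sketch of unweighted kappa for two raters scoring responses on the 0–3 scale; the article does not state whether weighting was used, and the rating vectors below are fabricated for illustration only.

```python
import numpy as np

def cohens_kappa(r1, r2, levels=4):
    """Unweighted Cohen's kappa for two raters using integer labels 0..levels-1."""
    conf = np.zeros((levels, levels))
    for a, b in zip(r1, r2):
        conf[a, b] += 1          # confusion matrix: rows = rater 1, cols = rater 2
    conf /= conf.sum()
    p_o = np.trace(conf)         # observed agreement (diagonal proportion)
    p_e = conf.sum(0) @ conf.sum(1)  # chance agreement from the marginals
    return (p_o - p_e) / (1 - p_e)

# Fabricated 0-3 scores from two hypothetical raters
rater1 = [0, 1, 1, 2, 3, 0, 2, 2, 1, 3]
rater2 = [0, 1, 2, 2, 3, 0, 2, 1, 1, 3]
print(round(cohens_kappa(rater1, rater2), 2))
```

With 8 of 10 exact agreements and symmetric marginals, the chance-corrected value lands in the same "substantial agreement" range as the kappas reported above.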
From these scored credibility justification responses, we formed a
sum variable labeled the Credibility Justification Ability Score. The
reliability was assessed with McDonald's omega, which was 0.72.
5.6.2 Statistical analyses
The statistical analyses were conducted using SPSS 27 and
Mplus (Version 8.0; Muthén and Muthén, 1998–2017).
We conducted four Friedman's tests to examine whether pre-service
teachers' evaluations of the author's expertise, the author's benevolence,
the venue's publication practices, and the quality of evidence differed across
the online texts (RQ1). For the post hoc comparisons, we used the
Wilcoxon signed-rank test. Non-parametric methods were used because
a few of the variables were skewed. We used the correlation coefficient
r as the effect size measure: a value of 0.10 indicates a small effect, 0.30
a medium effect, and 0.50 a large effect (Cohen, 1988).
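The pipeline described above (Friedman test, Wilcoxon signed-rank post hoc comparisons, and r = Z/√N as the effect size) can be sketched as follows. This is an illustration with simulated ratings, not the authors' SPSS workflow; the means loosely mimic the expertise ratings in Table 4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 169  # number of pre-service teachers

# Simulated 1-6 expertise ratings for the four texts (RB, SN, TB, CT)
ratings = np.clip(rng.normal([5.6, 4.0, 4.0, 2.4], 1.0, (n, 4)).round(), 1, 6)

# Friedman test across the four repeated ratings
chi2, p = stats.friedmanchisquare(*ratings.T)
print(f"Friedman chi2(3) = {chi2:.2f}, p = {p:.4g}")

# Post hoc: pairwise Wilcoxon signed-rank tests with effect size r = Z / sqrt(N)
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]:
    res = stats.wilcoxon(ratings[:, i], ratings[:, j])
    z = abs(stats.norm.ppf(res.pvalue / 2))  # recover Z from the two-sided p-value
    print(f"texts {i + 1} vs {j + 1}: p = {res.pvalue:.4g}, r = {z / np.sqrt(n):.2f}")
```

Recovering Z from the two-sided p-value is a convenience here; statistical packages such as SPSS report Z directly.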
We examined the associations of pre-service teachers' prior beliefs
about learning styles with their credibility evaluations (RQ4) using
structural equation modeling (SEM). Prior to SEM, a second-order
measurement model for pre-service teachers' credibility evaluation
(see Kiili et al., 2023) was constructed using confirmatory factor
analysis (CFA). The measurement model (see Figure 1) included four
first-order factors based on the evaluated online texts that represented
different genres (researcher's blog, popular Science News text, teacher's
blog, and commercial text). These first-order factors were used to
define two second-order factors: confirming the credibility of the more
credible texts (researcher's blog and popular Science News text) and
questioning the credibility of the less credible texts (teacher's blog and
commercial text). From the teacher's blog, we excluded two items,
namely the evaluations of expertise and benevolence, because these did not
require questioning. The teacher could be considered an expert who
genuinely wanted to share information with her colleagues without
realizing that her blog included information not aligned with current
scientific knowledge. In the analysis, the scores for evaluations related
to the teacher's blog and the commercial text (less credible texts)
were reversed.
The CFA for the structure of the credibility evaluation
indicated that the model fit the data well [χ2(84) = 121.16; p = 0.005;
RMSEA = 0.05; CFI = 0.97; TLI = 0.96; SRMR = 0.06], and all
parameter estimates were statistically significant (p < 0.001). Thus, the
credibility evaluation structure was used in further analyses.
The model fit for the CFA and SEM models was evaluated using
multiple indices: the chi-square test (χ2), root mean square error of
approximation (RMSEA), comparative fit index (CFI), Tucker–Lewis
index (TLI), and standardized root mean square residual (SRMR). The
cut-off values indicating a good model fit were set as follows: p > 0.05
for the chi-square test, RMSEA value < 0.06, CFI and TLI values >
0.95, and SRMR value < 0.08 (Hu and Bentler, 1999). These analyses
were conducted using the Mplus software (Version 8.6; Muthén and
Muthén, 1998–2017).
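The Hu and Bentler (1999) cut-offs listed above can be expressed as a simple programmatic check. Applying it to the CFA indices reported earlier shows that every criterion except the chi-square test is met, which is consistent with judging the fit as good since the chi-square test is known to be sensitive to sample size.

```python
def good_fit(chi2_p, rmsea, cfi, tli, srmr):
    """Check reported fit indices against the Hu and Bentler (1999) cut-offs."""
    return {
        "chi2 p > .05": chi2_p > 0.05,
        "RMSEA < .06": rmsea < 0.06,
        "CFI > .95": cfi > 0.95,
        "TLI > .95": tli > 0.95,
        "SRMR < .08": srmr < 0.08,
    }

# CFA values reported in the text
print(good_fit(chi2_p=0.005, rmsea=0.05, cfi=0.97, tli=0.96, srmr=0.06))
```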
6 Results
6.1 Credibility evaluation
Table 4 presents the descriptive statistics for pre-service teachers'
evaluations of the author's expertise, the author's benevolence, the quality of
the venue's publication practices, and the quality of evidence for each
online text. Pre-service teachers' evaluations of the author's expertise
[χ2(3) = 337.02, p < 0.001] and the author's benevolence [χ2(3) = 157.99,
p < 0.001] differed across the online texts. Post hoc comparisons
showed that pre-service teachers evaluated the researcher specialized
in learning sciences as possessing the most expertise on learning styles
and being the most benevolent author; furthermore, she was evaluated
as having considerably higher expertise than the associate professor in
psychology (Z = 10.19, p < 0.001; r = 0.78). The associate professor in
psychology and the classroom teacher were evaluated as having a similar
level of expertise. Furthermore, 28.4% of pre-service teachers did not
question (i.e., used 5 or 6 on the scale) the benevolence of the
consultant who served as a company's communication expert.
Similarly, pre-service teachers' evaluations of the quality of the
venue's publication practices [χ2(3) = 314.65, p < 0.001] and the
quality of evidence [χ2(3) = 335.79, p < 0.001] differed across the
online texts. Regarding the publication practices, the Center for
Educational Research and the blog service site were evaluated as
having the highest and lowest quality, respectively, compared to
the other venues (Table 4). Cited research literature (four
references) was evaluated considerably higher in terms of the
quality of evidence than the review of 10 studies (Z = 9.86,
p < 0.001; r = 0.76). In essence, research-based evidence (i.e., cited
TABLE2 Justification examples representing dierent quality levels of
justifications.
Quality of justification Example
Level 0
Inadequate justication
He is reporting on research done by
others, so youcannot infer the author’s
own expertise from the text.
(Expertise: Science News)
Level 1
Limited justication
e author wishes to share
information about the spread of
misinformation.
(Benevolence: Researcher’s blog)
Level 2
Elaborated justication
e numbers in the follow-up survey
seem decent, but of course the sales
text does not highlight bad feedback.
No research-based evidence used in
the text.
(Evidence: Commercial text)
Level 3
Advanced justication
I do not think blog platforms in
general have any kind of maintenance
to ensure the accuracy of the
information. e role of maintenance
is probably mainly to curb disruptive
be havior.
(Venue: Teacher’s blog)
e examples are translated from Finnish.
research literature and the summary of the research review) was
evaluated higher than the teachers' own experiences and customer
feedback: the effect sizes (r) ranged from 0.62 to 0.85. Furthermore,
11.8% of pre-service teachers regarded the summary of the
research review as unreliable evidence (i.e., used 1 or 2 on the
scale). Customer feedback was evaluated as the most unreliable
evidence. However, 7.7% of pre-service teachers were not critical
toward the use of customer feedback as evidence (i.e., used 5 or 6
on the scale).
6.2 Credibility justification
e average total credibility justication score was 17.28
(SD = 6.04) out of 48. Table 5 presents the scores for pre-service
teachers’ credibility justications, arranged by the justied credibility
aspect. In general, several pre-service teachers’ justications were
shallow, as the mean scores varied from 0.47 to 1.62 (out of 3).
However, some pre-service teachers engage in high quality reasoning
(advanced justications). Depending on the online text, 8 to 20% of
pre-service teachers’ justications for the author’s expertise reached the
highest level. e corresponding numbers for justifying benevolence
were 2–3%, venue’s publication practices 5 to 8% and for the quality of
evidence 4–8%. Table3 presents examples of the advanced justications.
As suggested by the score means (Table 5), pre-service teachers
struggled more with some credibility aspects than others. Across the
texts, approximately a fifth of pre-service teachers failed to meet the
minimum requirements in justifying the author's expertise (i.e., scored
0 points); they did not consider the domain of the author's expertise
or referred to it inaccurately. In addition, four pre-service teachers
(2%) referred to the publisher or the entire editorial board as the
author; thus, even in higher education, students may mix up the
author and the publisher or editors.
While the researcher and the associate professor were evaluated
as benevolent (see Table 4), pre-service teachers struggled to justify
their benevolent intentions. Notably, 47 and 59% of pre-service
teachers failed to provide any relevant justification for their
benevolence evaluation of the researcher and the associate professor,
respectively. Several pre-service teachers considered issues related to
the evidence rather than the author's intentions. In contrast, recognizing
the less benevolent, commercial intentions of the consultant with
marketing expertise seemed easier, as 78% of pre-service teachers
received at least one point for their response.
Furthermore, pre-service teachers encountered difficulties in
justifying the quality of the publication practices of the company website;
72% of pre-service teachers were unable to provide any relevant
justification for their evaluation of these publication practices. They often
referred to the author's commercial intentions instead of explicating the
publication practices. A few pre-service teachers (6.4%) also confused
the publication venue (popular Science News) with the publication
forum of the article that was summarized in the text.
Regarding the quality of evidence, pre-service teachers performed
better when justifying the quality of the cited research compared to
the results of the review (Z = 5.35; p < 0.001; r = 0.41). Pre-service
teachers' responses suggest that they did not recognize the value of the
research review as evidence. For example, approximately 10% of
pre-service teachers claimed that the author relied on only one study
despite the text explicitly stating that the research review was based on
10 empirical studies. The number of references (four in the researcher's
blog vs. one in the text on the popular Science News website) was
seemingly valued over the nature of the references.
Moreover, when justifying the customer survey as evidence,
pre-service teachers exhibited similar patterns as when justifying the
commercial website’s publication practices: some pre-service teachers’
responses focused on the author’s commercial intentions rather than
the quality of evidence.
6.3 Credibility ranking
The majority of pre-service teachers (90%) were able to differentiate
the texts with accurate information on learning styles from those with
inaccurate information by ranking the credibility of the research-based
texts (researcher's blog and popular Science News text) in the two
highest positions. Notably, some pre-service teachers ranked the
teacher's blog (7%) or the commercial text (3%) in the second position. The
researcher's blog was ranked as the most credible online
text more often (80%) than the popular Science News text (20%), which
summarized the results of ten experimental studies on learning styles.
6.4 Associations between prior topic beliefs
and credibility evaluations
A Spearman correlation matrix for the variables used in the statistical
analyses is presented in the Supplementary Appendix. We used structural
equation modeling (SEM) to examine whether pre-service teachers' prior
beliefs about learning styles predicted their abilities to confirm the more
credible texts and question the less credible ones. The fit indices were
acceptable or approaching the cut-off values (see Figure 3); thus, the SEM
model presented in Figure 3 was considered the final model.
Pre-service teachers endorsed the inaccurate conception of
learning styles (M = 16.29; SD = 2.46; the maximum value was 21).
Regarding the role of prior beliefs, our assumptions were partly
confirmed. The stronger a pre-service teacher's prior beliefs in the
existence of learning styles, the less able they were to question the
credibility of the online texts that included inaccurate information
about learning styles. However, pre-service teachers' prior beliefs were
not associated with confirming the credibility of online texts
containing accurate information.
7 Discussion
This study sought to extend our knowledge of pre-service
teachers' ability to evaluate the credibility of online texts concerning
learning styles, a topic on which accurate and inaccurate information
spread online. We created four online texts that either supported or
opposed the idea of learning styles and manipulated these texts in
terms of the author's expertise, the author's benevolence, the publication
venue, and the quality of evidence. This study provides unique
insights into pre-service teachers' ability to justify the credibility of
more and less credible online texts from different perspectives and,
thus, their preparedness to explicate and model credibility
evaluation in their own classrooms. The study also demonstrates the
associations between pre-service teachers' prior beliefs about
learning styles and their credibility evaluations. Overall, the study offers
insights that can aid in developing teacher education to ensure that
pre-service teachers are equipped with sufficient credibility
evaluation skills while preparing for their profession as educators.
7.1 Pre-service teachers as credibility evaluators
Our results showed that the majority of pre-service teachers
evaluated the credibility of the more credible texts to be higher than
that of the less credible texts. The researcher's blog and the popular
science news text were evaluated as more credible than the teacher's
blog and the commercial text in all the credibility aspects, with one
exception: the associate professor's expertise was evaluated as highly
as the teacher's expertise. Overall, these results suggest that pre-service
teachers valued scientific expertise and research-based evidence; this
is somewhat contradictory to previous findings showing that
pre-service teachers may perceive researchers as less benevolent than
practitioners (Hendriks et al., 2021; Merk and Rosman, 2021),
particularly when seeking practical advice (Hendriks et al., 2021). In
the present study, pre-service teachers valued research-based evidence
considerably higher than the educators' testimonials on the teacher's
blog and the commercial website. However, in different contexts, such
as evaluating educational app reviews, pre-service teachers may value
testimonials, especially if they are presented by teachers (List et al., 2022).
TABLE 3 Examples of advanced justifications.

Evaluated aspect | Justification
Author's expertise | Popular science news text: He is an associate professor in psychology. In psychology, for example, there is a lot of research on learning and different learning styles, depending, of course, on one's area of research. Vaaraholma's research focuses on personality psychology, and different learning styles could be included in this area.
 | Teacher's blog: The teacher has practical experience, as she has been a classroom teacher for 15 years, but has apparently not been involved in scientific research. The theoretical knowledge she has acquired about learning styles dates back to her time as a student. Categorizing students according to their learning styles also sounded pretty straightforward.
Author's benevolence | Researcher's blog: I presume that there is a willingness to produce credible information. The text is on a website that aims to support teachers. In addition, this article is based on science, seems transparent, and considers both sides of the issue. However, the author can subconsciously push his point of view, but this article considers things from many angles without making strong arguments.
 | Commercial text: The text is a sort of sales pitch for teachers to attend training courses organized by the company. Therefore, the author is not very motivated to ensure that the information behind the training courses is accurate. On the other hand, the author knows that the training courses must be based on at least some accurate information.
Venue's publication practices | Popular science news text: I believe this website checks the articles it publishes. The site's credibility is enhanced because the editorial board is mentioned, and they probably check each other's texts before publication. In addition, all the editorial board members are academics (researchers, associate professors, and professors), which also adds credibility. They all certainly have an understanding of academic writing. Further, providing references adds credibility because it allows the reader to check whether the text is accurate.
 | Teacher's blog: On this venue, anyone can create their own website and produce the texts and content they wish. The venue may have specific rules that users must follow. However, it is unlikely that all content is actively monitored. Clearly wrong and sensitive content could be restricted and even removed, but I do not think this text would be the first that the administration would intervene for.
Quality of evidence | Researcher's blog: He bases his argument on several studies, not just one. So the reader is informed by the perspectives of several researchers. In the end, the author has also compiled his sources so that the reader can still check the information for himself if he wishes.
 | Commercial text: People who enroll in the course are likely those who already have a positive attitude toward learning styles. Everyone likes their ideas to be confirmed and supported. Thus, when the course reinforces the idea that participants "are right," of course they like it. However, the fact that the participants apply the course contents does not say anything about their effectiveness.

For each credibility aspect, one example relates to a more credible text and one to a less credible text.
TABLE 5 Descriptive statistics of credibility justifications, arranged by online texts.

Justified credibility aspect | M | SD | Skewness | Kurtosis
Author’s expertise
1. Researcher in learning sciences (RB) 1.53 0.92 −0.61 −0.73
2. Associate professor in psychology (SN) 1.36 0.98 0.26 −0.91
3. Classroom teacher (TB) 1.62 1.03 −0.35 −1.01
4. Consultant and communication expert (CT) 1.29 0.89 0.06 −0.82
Author’s benevolence
1. Researcher in learning sciences (RB) 0.72 0.79 0.76 −0.34
2. Associate professor in psychology (SN) 0.54 0.76 1.39 1.51
3. Teacher (TB) 0.83 0.72 0.66 0.50
4. Consultant and communication expert (CT) 1.03 0.56 1.04 3.75
Quality of publication practices
1. Center for Educational Research (RB) 1.20 0.89 0.28 −0.68
2. Science News (SN) 1.15 0.89 0.10 −1.04
3. Blog service site (TB) 1.22 0.90 0.16 −0.84
4. Company website (CT) 0.47 0.87 1.85 2.37
Quality of evidence
1. Cited research literature (RB) 1.41 0.78 0.23 −0.28
2. Summary of the research review (SN) 0.80 0.86 0.73 −0.42
3. Own experiences in the classroom (TB) 1.20 0.85 0.43 −0.31
4. Customer feedback (CT) 0.90 0.77 0.65 0.24
TABLE 4 Descriptive statistics for the evaluations of the author's expertise, author's benevolence, quality of the venue's publication practices, and quality of evidence across the online texts, with pairwise comparisons.

 | M | SD | Md | Pairwise comparisons
Author’s expertise
1. Researcher in learning sciences (RB) 5.60 0.72 6 1 > 2, 3, 4
2. Associate professor in psychology (SN) 3.96 1.16 4 2 > 4
3. Classroom teacher (TB) 3.98 1.03 4 3 > 4
4. Consultant and communication expert (CT) 2.37 1.12 2
Author’s benevolence
1. Researcher in learning sciences (RB) 5.53 0.73 6 1 > 2, 3, 4
2. Associate professor in psychology (SN) 4.88 1.13 5 2 > 3, 4
3. Classroom teacher (TB) 4.58 1.13 5 3 > 4
4. Consultant and communication expert (CT) 3.52 1.54 3
Quality of publication practices
1. Center for Educational Research (RB) 4.92 0.95 5 1 > 2, 3, 4
2. Science News (SN) 4.63 1.13 5 2 > 3, 4
3. Blog service (TB) 2.12 1.29 2
4. Company website (CT) 2.71 1.24 3 4 > 3
Quality of evidence
1. Cited research literature (RB) 5.50 0.65 6 1 > 2, 3, 4
2. Summary of the research review (SN) 4.13 1.22 4 2 > 3, 4
3. Own experiences in the classroom (TB) 2.80 1.09 3 3 > 4
4. Customer feedback (CT) 2.53 1.17 2
RB, researcher’s blog; SN, popular Science News text; TB, teacher’s blog; CT, commercial text.
Despite these encouraging results, over a fourth of pre-service
teachers did not question the benevolence of the consultant who
authored the text published on the commercial website. Furthermore,
12% of pre-service teachers perceived the summary of the research
review as unreliable evidence, and 10% of pre-service teachers ranked
either the teacher’s blog or the commercial text among the two most
credible texts.
Moreover, several pre-service teachers struggled with justifying
their credibility evaluations. The average total credibility justification
score was 17.26 out of 48 points, and 10 to 70% of pre-service teachers'
justifications were inadequate, depending on the target of the
evaluation. These findings are in accordance with previous research,
which has shown that even pre-service teachers who attend to the
source may do so in a superficial manner (List et al., 2022). This is
illustrated by a fifth of pre-service teachers failing to explicate the
domain of the author's expertise in their justifications, despite it being
essential to consider how well the author's expertise is aligned with the
domain of the text (Hendriks et al., 2015; Stadtler and Bromme, 2014).
Paying attention to the domain of expertise is essential in today's
society, where the division of cognitive labor is accentuated (Scharrer
et al., 2017). This may be further relevant in education because many
people, regardless of their background, participate in educational
discussions, and even well-intended educational resources may share
information that is inaccurate or incomplete (Kendeou et al., 2019).
There may be several reasons behind pre-service teachers'
struggles to justify the different aspects of credibility. First, the problems
may have stemmed from pre-service teachers' lack of experience in
considering different credibility aspects separately. Several pre-service
teachers considered the evidence an indicator of the scientists'
benevolence or commercial intentions. Second, pre-service teachers
may have lacked adequate evidence or source knowledge. For example,
our results suggest that several pre-service teachers did not have
sufficient knowledge of research reviews and, therefore, did not fully
recognize their value as evidence. This is understandable, as most
pre-service teachers were at the beginning of their university studies.
It would be worth examining pre-service teachers' justification
abilities at later stages of their studies as well.
Notably, a small proportion of pre-service teachers' responses
showed an advanced understanding of the target of credibility
evaluation. Such an understanding equips pre-service teachers with a
firm basis for developing practices to educate critical online readers
when providing thinking tools for their students (cf. Andreassen and
Bråten, 2013; Leu et al., 2017).
7.2 Prior beliefs as predictors of credibility evaluation
Our assumption regarding the association between pre-service
teachers' prior beliefs about learning styles and their credibility evaluations
was only partially confirmed. In line with previous research (Eitel
et al., 2021), several pre-service teachers believed in learning styles;
the stronger their beliefs, the more they struggled with questioning
the credibility of the less credible texts, whose contents adhered to their
prior beliefs. This is in line with previous findings showing that prior
beliefs are reflected in readers' evaluation of information (Richter and
Maier, 2017; see also van Strien et al., 2016). However, we did not find
an association between pre-service teachers' prior beliefs and
confirming the credibility of the more credible texts. It is possible that
pre-service teachers resolved the conflict between their prior beliefs
and the text content by trusting the expertise of the source (Stadtler and
Bromme, 2014).
FIGURE 3
Structural equation model with standardized estimates for the associations between credibility evaluations and prior topic beliefs, with the reading order as a control variable. All paths shown are statistically significant (at least p < 0.05).
7.3 Limitations
This study has certain notable limitations. First, the online texts
evaluated by pre-service teachers were researcher-designed, and the
texts were consistently constructed so that all credibility aspects included
indicators of either higher or lower credibility, with the sole exception
of two aspects of the teacher's blog. Thus, the task did not fully
represent credibility evaluation in an authentic online setting, which
would have entailed additional complexities. For example, researchers
can have biased intentions, or commercial websites may share
research-based information.
Second, our credibility evaluation task was not contextualized (cf.
List et al., 2022; Zimmerman and Mayweg-Paus, 2021). This is a
considerable flaw, as pre-service teachers' evaluations seem to depend
on their epistemic aims (Hendriks et al., 2021). Furthermore, the lack
of contextualization in teachers' professional lives may have decreased
readers' engagement (Herrington and Oliver, 2000). Third, while the
data were collected as part of an obligatory literacy course, pre-service
teachers' responses were not assessed. Consequently, some pre-service
teachers may have lacked the motivation to respond to the items to the
best of their ability (see List and Alexander, 2018). Moreover, some
pre-service teachers may have found the number of items (16 open-ended
responses) overwhelming, leading to a decrease
in engagement.
7.4 Instructional implications
Teachers are key players in educating critical online readers. To
scaffold their students beyond superficial evaluation practices,
pre-service teachers require adequate source and evidence knowledge
as well as effective evaluation strategies. Our findings suggest that this
is not necessarily the case at present, especially in the early stage of
teacher education. Therefore, teacher education should provide
opportunities for pre-service teachers to discuss what constitutes
expertise in different domains, the limits of such expertise, the criteria
for what constitutes good evidence in a particular domain, and the
publication practices and guidelines that can ensure high-quality
information. This would allow prospective teachers to model
evaluation strategies for their students, provide them with constructive
feedback, and engage in high-quality reasoning about credibility with
their students.
As pre-service teachers struggled with justifying their credibility
evaluations, they would benefit from concrete models; these models
could be justification examples by peers that illustrate advanced
reasoning. We collected pre-service teachers' advanced responses,
which can be used for modeling (see Table 3). Furthermore, the
examples illustrate the level teacher educators can target when
fostering credibility evaluation among pre-service teachers.
To engage pre-service teachers in considering the credibility
of online information, they could select different types of online
texts that concern a topical educational issue debated in public,
such as using mobile phones in classrooms. The pre-service
teachers could explore whose voices are heard in public
debates, what kind of expertise they represent, and how strong the
evidence is that the authors provide to support their claims. Before
analyzing and evaluating the selected texts, pre-service teachers
can be asked to record their own prior beliefs on the topic. This
would allow pre-service teachers to consider how their prior
beliefs are reflected in their credibility evaluations. Activation and
reflection could also concern pre-service teachers' beliefs about
the justification of knowledge claims: whether knowledge claims
need to be justified by personal beliefs, authority, or multiple
sources (see Bråten et al., 2022).
Furthermore, collaborative reflection is a promising practice
to support pre-service teachers' ability to evaluate the credibility
of online educational texts (Zimmerman and Mayweg-Paus,
2021). Such reflection should cover not only different types of
credible scientific texts but also less credible texts on complex
educational issues. Teacher educators could encourage discussions
that cover different credibility aspects. These discussions could
be supported with digital tools designed to promote critical
and deliberative reading of multiple conflicting texts (Barzilai
et al., 2020a; Kiili et al., 2016).
8 Conclusion
Although pre-service teachers could differentiate the more
credible texts from the less credible ones, they struggled with
justifying the credibility. Credibility evaluation in the current online
information landscape is complex, requiring abilities to understand
what makes one web resource more credible than another. The
complexities are further amplified by readers' prior beliefs, which
may include inaccurate information. Therefore, readers must also
be critical of their own potentially inaccurate beliefs in order to
overcome them. It is apparent, as well as supported by the findings
of this study, that pre-service teachers require theoretical and
practical support to become skillful online evaluators. Consequently,
teacher education must ensure that prospective teachers master the
rapidly changing developments in literacy environments, thereby
enabling them to base their classroom practices on scientific,
evidence-based information and be prepared to educate critical
online readers.
Data availability statement
The raw data supporting the conclusions of this article will
be made available by the authors, without undue reservation.
Ethics statement
Ethical approval was not required for the study involving human
participants since no sensitive or personal data were collected. The
study was conducted in accordance with the local legislation and
institutional requirements. The participants provided their written
informed consent to participate in this study.
Author contributions
PK: Investigation, Methodology, Project administration,
Resources, Writing – original draft, Writing – review & editing. EH:
Investigation, Methodology, Resources, Writing – review & editing.
MM: Methodology, Resources, Writing – review & editing. ER:
Formal analysis, Methodology, Writing – review & editing. CK:
Conceptualization, Formal analysis, Funding acquisition,
Methodology, Resources, Writing – original draft, Writing – review &
editing.
Funding
The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was
supported by the Strategic Research Council (CRITICAL:
Technological and Societal Innovations to Cultivate Critical Reading
in the Internet Era: No. 335625, 335727, 358490, 358250) and the
Research Council of Finland (No. 324524).
Acknowledgments
A preprint version of this article is available on OSF (Kulju et al., 2022).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors
and do not necessarily represent those of their affiliated organizations,
or those of the publisher, the editors and the reviewers. Any product
that may be evaluated in this article, or claim that may be made by its
manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online
at: https://www.frontiersin.org/articles/10.3389/feduc.2024.1451002/
full#supplementary-material
References
Abel, R., Roelle, J., and Stadtler, M. (2024). Whom to believe? Fostering source
evaluation skills with interleaved presentation of untrustworthy and trustworthy
social media sources. Discourse Process. 61, 233–254. doi:
10.1080/0163853X.2024.2339733
ACARA (2014). Australian curriculum. Senior secondary curriculum. English
(Version 8.4). Available at: https://www.australiancurriculum.edu.au/senior-secondary-
curriculum/english/ (Accessed June 17, 2024).
Andreassen, R., and Bråten, I. (2013). Teachers’ source evaluation self-efficacy predicts
their use of relevant source features when evaluating the trustworthiness of web sources
on special education. Br. J. Educ. Technol. 44, 821–836. doi:
10.1111/j.1467-8535.2012.01366.x
Anmarkrud, Ø., Bråten, I., Florit, E., and Mason, L. (2021). The role of individual differences in sourcing: a systematic review. Educ. Psychol. Rev. 34, 749–792. doi:
10.1007/s10648-021-09640-7
Anmarkrud, Ø., Bråten, I., and Strømsø, H. I. (2013). Multiple-documents literacy:
strategic processing, source awareness, and argumentation when reading multiple
conflicting documents. Learn. Individ. Differ. 30, 64–76. doi: 10.1016/j.
lindif.2013.01.007
Aslaksen, K., and Lorås, H. (2018). The modality-specific learning style hypothesis: a
mini-review. Front. Psychol. 9:1538. doi: 10.3389/fpsyg.2018.01538
Barzilai, S., Mor-Hagani, S., Zohar, A. R., Shlomi-Elooz, T., and Ben-Yishai, R.
(2020a). Making sources visible: promoting multiple document literacy with digital
epistemic scaffolds. Comput. Educ. 157:103980. doi: 10.1016/j.compedu.2020.103980
Barzilai, S., Thomm, E., and Shlomi-Elooz, T. (2020b). Dealing with disagreement: the roles of topic familiarity and disagreement explanation in evaluation of conflicting expert
claims and sources. Learn. Instr. 69:101367. doi: 10.1016/j.learninstruc.2020.101367
Bougatzeli, E., Douka, M., Bekos, N., and Papadimitriou, F. (2017). Web reading
practices of teacher education students and in-service teachers in Greece: a descriptive
study. Preschool Prim. Educ. 5, 97–109. doi: 10.12681/ppej.10336
Braasch, J. L. G., Bråten, I., Strømsø, H. I., Anmarkrud, Ø., and Ferguson, L. E. (2013).
Promoting secondary school students' evaluation of source features of multiple
documents. Contemp. Educ. Psychol. 38, 180–195. doi: 10.1016/j.cedpsych.2013.03.003
Bråten, I., Brandmo, C., Ferguson, L. E., and Strømsø, H. I. (2022). Epistemic
justification in multiple document literacy: a refutation text intervention. Contemp.
Educ. Psychol. 71:102122. doi: 10.1016/j.cedpsych.2022.102122
Bråten, I., Stadtler, M., and Salmerón, L. (2018). “The role of sourcing in discourse comprehension” in Handbook of discourse processes. eds. M. F. Schober, D. N. Rapp and
M. A. Britt. 2nd ed (New York: Routledge), 141–166.
Britt, M. A., Perfetti, C. A., Sandak, R. S., and Rouet, J.-F. (1999). Content integration
and source separation in learning from multiple texts. In Narrative comprehension,
causality, and coherence. Essays in honor of Tom Trabasso, eds. S.R. Goldman, A.C.
Graesser and P. van den Broek (Mahwah, NJ: Lawrence Erlbaum Associates, Inc),
209–233.
Chinn, C. A., Rinehart, R. W., and Buckland, L. A. (2014). “Epistemic cognition and
evaluating information: applying the AIR model of epistemic cognition” in Processing
inaccurate information: Theoretical and applied perspectives from cognitive science and
the educational sciences. eds. D. Rapp and J. Braasch (Cambridge, Massachusetts: MIT
Press), 425–453.
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.).
Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Cohen, L., Manion, L., and Morrison, K. (2018). Research methods in education. 8th
Edn. London: Routledge, Taylor & Francis Group.
Coiro, J., Coscarelli, C., Maykel, C., and Forzani, E. (2015). Investigating criteria that
seventh graders use to evaluate the quality of online information. J. Adolesc. Adult. Lit.
59, 287–297. doi: 10.1002/jaal.448
Dekker, S., Lee, N. C., Howard-Jones, P., and Jolles, J. (2012). Neuromyths in
education: prevalence and predictors of misconceptions among teachers. Front. Psychol.
3. doi: 10.3389/fpsyg.2012.00429
Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., et al.
(2022). The psychological drivers of misinformation belief and its resistance to
correction. Nat. Rev. Psychol. 1, 13–29. doi: 10.1038/s44159-021-00006-y
Eitel, A., Prinz, A., Kollmer, J., Niessen, L., Russow, J., Ludäscher, M., et al. (2021). The
misconceptions about multimedia learning questionnaire: an empirical evaluation study
with teachers and student teachers. Psychol. Learn. Teach. 20, 420–444. doi:
10.1177/14757257211028723
Elo, S., and Kyngäs, H. (2008). The qualitative content analysis process. J. Adv. Nurs.
62, 107–115. doi: 10.1111/j.1365-2648.2007.04569.x
Fendt, M., Nistor, N., Scheibenzuber, C., and Artmann, B. (2023). Sourcing against
misinformation: effects of a scalable lateral reading training based on cognitive
apprenticeship. Comput. Hum. Behav. 146:107820. doi: 10.1016/j.chb.2023.107820
Ferguson, L. E., and Bråten, I. (2022). Unpacking pre-service teachers’ beliefs and
reasoning about student ability, sources of teaching knowledge, and teacher-efficacy: a scenario-based approach. Front. Educ. 7:975105. doi: 10.3389/feduc.2022.975105
Finnish National Agency for Education (2022). Finnish National Agency for
Education's decisions on eligibility for positions in the eld of education and training.
Available at: https://www.oph.fi/en/services/recognition-teaching-qualifications-and-
teacher-education-studies (Accessed June 17, 2024).
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., and Duckworth, D. (2020). Preparing
for life in a digital world. IEA international computer and information literacy study
2018 international report. Cham: Springer.
Hahnel, C., Eichmann, B., and Goldhammer, F. (2020). Evaluation of online
information in university students: development and scaling of the screening instrument
EVON. Front. Psychol. 11:562128. doi: 10.3389/fpsyg.2020.562128
Hämäläinen, E. K., Kiili, C., Marttunen, M., Räikkönen, E., González-Ibáñez, R., and
Leppänen, P. H. T. (2020). Promoting sixth graders’ credibility evaluation of web pages:
an intervention study. Comput. Hum. Behav. 110:106372. doi: 10.1016/j.chb.2020.106372
Hendriks, F., Kienhues, D., and Bromme, R. (2015). Measuring laypeople’s trust in
experts in a digital age: the muenster epistemic trustworthiness inventory (METI). PLoS
One 10:e0139309. doi: 10.1371/journal.pone.0139309
Hendriks, F., Seifried, E., and Menz, C. (2021). Unraveling the “smart but evil”
stereotype: Pre-service teachers’ evaluations of educational psychology researchers
versus teachers as sources of information. Zeitschrift für Pädagogische Psychologie 35, 157–171. doi:
10.1024/1010-0652/a000300
Herrington, J., and Oliver, R. (2000). An instructional design framework for authentic
learning environments. Educ. Technol. Res. Dev. 48, 23–48. doi: 10.1007/BF02319856
Howard-Jones, P. A. (2014). Neuroscience and education: myths and messages. Nat.
Rev. Neurosci. 15, 817–824. doi: 10.1038/nrn3817
Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance
structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model.
Multidiscip. J. 6, 1–55. doi: 10.1080/10705519909540118
Kammerer, Y., Gottschling, S., and Bråten, I. (2021). The role of internet-specific justification beliefs in source evaluation and corroboration during web search on an unsettled
socio-scientic issue. J. Educ. Comput. Res. 59, 342–378. doi: 10.1177/0735633120952731
Kendeou, P., Robinson, D. H., and McCrudden, M. (2019). Misinformation and fake
news in education. Charlotte, NC: Information Age Publishing.
Kiemer, K., and Kollar, I. (2021). Source selection and source use as a basis for
evidence-informed teaching. Do pre-service teachers’ beliefs regarding the utility of
(non)scientific information sources matter? Zeitschrift für Pädagogische Psychologie 35,
127–141. doi: 10.1024/1010-0652/a000302
Kiili, C., Bråten, I., Strømsø, H. I., Hagerman, M. S., Räikkönen, E., and Jyrkiäinen, A.
(2022). Adolescents’ credibility justifications when evaluating online texts. Educ. Inf. Technol. 27, 7421–7450. doi: 10.1007/s10639-022-10907-x
Kiili, C., Coiro, J., and Hämäläinen, J. (2016). An online inquiry tool to support the
exploration of controversial issues on the internet. J. Literacy Technol. 17, 31–52.
Kiili, C., Räikkönen, E., Bråten, I., Strømsø, H. I., and Hagerman, M. S. (2023).
Examining the structure of credibility evaluation when sixth graders read online texts.
J. Comput. Assist. Learn. 39, 954–969. doi: 10.1111/jcal.12779
Kirschner, P. A. (2017). Stop propagating the learning styles myth. Comput. Educ. 106,
166–171. doi: 10.1016/j.compedu.2016.12.006
Krätzig, G. P., and Arbuthnott, K. D. (2006). Perceptual learning style and learning
proficiency: a test of the hypothesis. J. Educ. Psychol. 98, 238–246. doi:
10.1037/0022-0663.98.1.238
Kulju, P., Hämäläinen, E., Mäkinen, M., Räikkönen, E., and Kiili, C. (2022). Pre-
service teachers evaluating online texts about learning styles: there is room for
improvement in justifying the credibility. OSF [Preprint]. doi: 10.31219/osf.io/3zwk8
Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., and Henry, L. A. (2017). New literacies: a
dual-level theory of the changing nature of literacy, instruction, and assessment. J. Educ.
197, 1–18. doi: 10.1177/002205741719700202
Lewandowsky, S., Ecker, U. K. H., and Cook, J. (2017). Beyond misinformation:
understanding and coping with the “post-truth” era. J. Appl. Res. Mem. Cogn. 6, 353–369.
doi: 10.1016/j.jarmac.2017.07.008
List, A., and Alexander, P. A. (2018). “Cold and warm perspectives on the cognitive
affective engagement model of multiple source use” in Handbook of multiple source use.
eds. J. L. G. Braasch, I. Bråten and M. T. McCrudden (New York: Routledge), 34–54.
List, A., Lee, H. Y., Du, H., Campos Oaxaca, G. S., Lyu, B., Falcon, A. L., et al. (2022).
Preservice teachers’ recognition of source and content bias in educational application
(app) reviews. Comput. Hum. Behav. 134:107297. doi: 10.1016/j.chb.2022.107297
Maier, J., and Richter, T. (2013). Text belief consistency effects in the comprehension
of multiple texts with conicting information. Cogn. Instr. 31, 151–175. doi:
10.1080/07370008.2013.769997
McAfee, M., and Hoffman, B. (2021). The morass of misconceptions: how unjustified beliefs influence pedagogy and learning. Int. J. Scholarship Teach. Learn. 15:4. doi:
10.20429/ijsotl.2021.150104
Menz, C., Spinath, B., and Seifried, E. (2021). Where do pre-service teachers'
educational psychological misconceptions come from? The roles of anecdotal versus scientific evidence. Zeitschrift für Pädagogische Psychologie 35, 143–156. doi:
10.1024/1010-0652/a000299
Merk, S., and Rosman, T. (2021). Smart but evil? Student-teachers’ perception of
educational researchers’ epistemic trustworthiness. AERA Open 5:2332858419868158. doi: 10.1177/2332858419868158
Metzger, M. J., and Flanagin, A. J. (2013). Credibility and trust of information in
online environments: the use of cognitive heuristics. J. Pragmat. 59, 210–220. doi:
10.1016/j.pragma.2013.07.012
Muthén, L. K., and Muthén, B. O. (1998–2017). Mplus user’s guide. 8th Edn. Los
Angeles, CA: Muthén & Muthén.
NCC (2016). National core curriculum for basic education 2014. Helsinki: Finnish
National Board of Education.
Niemi, H. (2012). “The societal factors contributing to education and schooling in Finland” in Miracle of education. eds. H. Niemi, A. Toom and A. Kallioniemi
(Rotterdam: Sense Publishers), 19–38.
Nussbaum, E. M. (2020). Critical integrative argumentation: toward complexity in
students’ thinking. Educ. Psychol. 56, 1–17. doi: 10.1080/00461520.2020.1845173
Perfetti, C. A., Rouet, J. F., and Britt, M. A. (1999). Toward a theory of documents
representation. In The construction of mental representation during reading, eds.
H. van Oostendorp and S.R. Goldman. Mahwah, New Jersey, London:
Erlbaum, 99–122.
Reuter, T., and Leuchter, M. (2023). Pre-service teachers’ latent profile transitions in the evaluation of evidence. Teach. Teach. Educ. 132:104248. doi: 10.1016/j.tate.2023.104248
Richter, T., and Maier, J. (2017). Comprehension of multiple documents with
conflicting information: a two-step model of validation. Educ. Psychol. 52, 148–166. doi:
10.1080/00461520.2017.1322968
Scharrer, L., Rupieper, Y., Stadtler, M., and Bromme, R. (2017). When science becomes
too easy: science popularization inclines laypeople to underrate their dependence on
experts. Public Underst. Sci. 26, 1003–1018. doi: 10.1177/0963662516680311
Sinatra, G. M., and Jacobson, N. (2019). “Zombie concepts in education: why they
won’t die and why you can’t kill them” in Misinformation and fake news in education.
eds. P. Kendeou, D. H. Robinson and M. T. McCrudden (Charlotte, NC: Information
Age Publishing), 7–27.
Stadtler, M., and Bromme, R. (2014). “The content–source integration model: a taxonomic description of how readers comprehend conflicting scientific information”
in Processing inaccurate information: theoretical and applied perspectives from
cognitive science and the educational sciences. eds. D. N. Rapp and J. L. G. Braasch
(Cambridge, Massachusetts: MIT Press), 379–402.
Tarchi, C. (2019). Identifying fake news through trustworthiness judgements of
documents. Cult. Educ. 31, 369–406. doi: 10.1080/11356405.2019.1597442
Tirri, K. (2014). The last 40 years in Finnish teacher education. J. Educ. Teach. 40,
1–10. doi: 10.1080/02607476.2014.956545
van Strien, J. L. H., Kammerer, Y., Brand-Gruwel, S., and Boshuizen, H. P. A. (2016).
How attitude strength biases information processing and evaluation on the web.
Comput. Hum. Behav. 60, 245–252. doi: 10.1016/j.chb.2016.02.057
Vipunen – Education Statistics Finland (2022). Applicants and those who accepted a
place. Available at: https://vipunen.fi/en-gb/university/Pages/Hakeneet-ja-
hyv%C3%A4ksytyt.aspx (Accessed June 17, 2024).
Weber, R. P. (1990). Basic content analysis. Newbury Park, CA: Sage Publications, Inc.
Zimmerman, M., and Mayweg-Paus, E. (2021). e role of collaborative
argumentation in future teachers’ selection of online information. Zeitschrift für
Pädagogische Psychologie 35, 185–198. doi: 10.1024/1010-0652/a000307
Zimmermann, M., Engel, O., and Mayweg-Paus, E. (2022). Pre-service teachers’
search strategies when sourcing educational information on the internet. Front. Educ.
7:976346. doi: 10.3389/feduc.2022.976346