PRESENTING SLAT-THINKING SECOND VERSION AND ITS CONTENT VALIDITY
Cristiano Mauro Assis Gomes and Diogo Ferreira do Nascimento
Laboratório de Investigação da Arquitetura Cognitiva (LAICO), Universidade Federal de Minas Gerais, Brazil
ABSTRACT
Students’ Approaches to Learning is an important theory in Educational Psychology that
investigates the interaction of students with objects of knowledge and how this interaction affects
learning outcomes. Recently, the Students’ Learning Approach Test (SLAT-Thinking) was
proposed as a pioneer methodology to investigate approaches to learning through performance in a
given task. Nevertheless, this test has presented some issues regarding the high probability of answering correctly by chance. This paper presents a new version of this performance test, SLAT-
Thinking 2. This new version solves the aforementioned issues, adds a theoretical framework to
explain the incorrect answers given by respondents, and presents two test forms. This study
presents the content validity of SLAT-Thinking 2, which is the first step to investigate the test
validity. The analysis was performed by nine judges, four of whom had an Educational Psychology background. It led to changes in the wording of the test instructions, the wording of the two texts given in the test task, the wording of three items, the wording of the response options of four items, and to the change of one answer key. This analysis certified the content validity of the new version of the test, which is expected to become a useful tool for researchers and practitioners.
Copyright © 2021, Cristiano Mauro Assis Gomes and Diogo Ferreira do Nascimento. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
The Students’ Approaches to Learning theory studies how students
interact with objects of knowledge (Biggs & Tang, 2011). This theory
assumes that there is a deep approach and a surface approach to
learning which characterize the way students interact with the objects
of knowledge. In short, a deep approach occurs when students
interact actively with the objects of knowledge, both in terms of
strategy and motivation and in turn, a surface approach characterizes
the passive interaction of students with the objects of knowledge.
Therefore, the Students’ Approaches to Learning theory assumes that
the deep approach is positively correlated with student achievement,
whereas the surface approach is negatively correlated with this
outcome (Contreras et al., 2017).
Despite the promising constructs of this theory, two meta-analyses
(Richardson, Abraham & Bond, 2012; Watkins, 2001) showed that
the deep and surface approaches have weak correlations with student
achievement. Important predictors, such as intelligence (Alves,
Gomes, Martins, & Almeida, 2016, 2017, 2018; Golino & Gomes,
2019; Gomes, 2010b, 2011b, 2012b; Gomes & Borges, 2007, 2008c,
2009b, 2009c; Gomes, de Araújo, Ferreira & Golino, 2014; Gomes &
Golino, 2012b,
2015; Muniz, Gomes, & Pasian, 2016; Valentini et al., 2015), meta-
cognition (Golino & Gomes, 2014a; Golino, Gomes, & Andrade,
2014; Gomes & Golino, 2014; Gomes, Golino, & Menezes, 2014;
Pires & Gomes, 2018), self-regulation (Cardoso, Seabra, Gomes, &
Fonseca, 2019; Dias et al., 2015; Golino, Gomes, Commons &
Miller, 2014; Gomes, 2007, 2010a; Gomes & Borges, 2009a; Gomes,
Golino, Santos, & Ferreira, 2014; Pereira, Golino, M. T. S., &
Gomes, 2019; Reppold et al., 2015), and socioeconomic variables
(Gomes & Almeida, 2017; Gomes, Amantes & Jelihovschi, 2020;
Gomes, Fleith, Marinho-Araujo, & Rabelo, 2020; Gomes &
Jelihovschi, 2019; Gomes, Lemos, & Jelihovschi, 2020; Pazeto, Dias,
Gomes & Seabra, 2019) are much more important than students'
approaches to predict academic achievement. However, the deep and
surface approaches seem to have incremental validity (Gomes,
2011a), which sustains their importance. In short, students’
approaches to learning are a secondary predictor (Gomes, 2010c,
2011a, 2013; Gomes, Araujo, & Jelihovschi, 2020; Gomes & Golino,
2012c; Gomes, Golino, Pinheiro, Miranda, & Soares, 2011), similar
to motivational and self-reference variables such as personality
(Gomes, 2012a; Gomes & Gjikuria, 2017; Gomes & Golino, 2012a),
students’ beliefs on teaching-learning processes (Alves, Flores,
Gomes & Golino, 2012; Gomes & Borges, 2008a), learning styles
(Gomes, Marques, & Golino, 2014; Gomes & Marques, 2016),
motivation for learning (Gomes & Gjikuria, 2018), and academic
self-reference (Costa, Gomes, & Fleith, 2017). The mainstream argument of researchers about the weak prediction of academic achievement by students' approaches to learning is that the educational assessment system does not
promote the deep approach and, in certain aspects, reinforces the
surface approach (Contreras et al., 2017). An alternative interpretation is that only self-report questionnaires have been available to measure the students' approaches. It is possible that
the exclusive use of self-report instruments to measure these
approaches produces considerable bias, generating scores with high
noise, diminishing the correlation between the approaches and
students’ achievement. Interested readers can find a detailed
argumentative exposition about this in the article by Gomes, Linhares,
Jelihovschi, and Rodrigues (2020).
Taking all that into account, Gomes and Linhares created the
Students' Learning Approach Test - Identification of Thinking
Contained in Texts (SLAT-Thinking). This test is the first
measurement of students’ approaches to learning based on the
performance of respondents. The test measures the approaches of a
person in identifying the thinking of an author in a given text
(Linhares & Gomes, 2018). While measuring approaches through
performance, SLAT-Thinking is guided by the assumption that the
measurement of the approaches based on performance in a test should
focus on a specific ability or domain, since the students’ approaches
occur in many contexts. For example, deep and surface approaches
can be measured through the ability to transfer knowledge learned in
a context to another context or the ability to seek information and
select what is important and what is noise. SLAT-Thinking measures
students’ approaches through their ability to identify the author’s
thinking in a specific text. This ability was chosen since it is a
strategic tool for critical reasoning in the internet age, that is, it
is an appropriate context to measure the approaches to learning in the
21st century.
SLAT-Thinking has two similar texts and 12 items related to each of
them. Each item is composed of a statement which can represent the
author’s thinking in a given text. Thus, the respondent must read the
text and answer each item related to it, marking one out of three
options. Option one affirms that the item's statement represents the
author's thinking, option two states that the item's statement does not represent
the author's thinking, and option three informs that it is not possible
to answer whether or not the item represents the author's thinking in
that text because it does not provide enough information. An example of an item that follows this structure is shown in Figure 1. This item is part of the test instructions. A detailed description of the assumptions that guide SLAT-Thinking, as well as its structure, can be found in Gomes et al. (2020).
STATEMENT: 1) Real Madrid is the best soccer team in the world. ( ) E ( ) N ( ) Z
E = this statement represents the author's thinking; N = this statement does not represent the author's thinking; Z = it is not possible to answer whether or not this statement represents the author's thinking.

Figure 1. Example of an item that follows the SLAT-Thinking structure
Despite the advances of SLAT-Thinking in the measurement of
students’ approaches to learning, the test showed some relevant
issues. It presents a high probability of respondents answering an
item correctly by chance. In practical effect, SLAT-Thinking allows a probability of nearly 50% for this occurrence. Although
SLAT-Thinking has three answer options, the third option is not
plausible, since it contradicts the test instructions: the respondents must read the text and infer whether each statement represents the author's thinking or not, considering only the text they have read. Therefore, the statement of each item either represents or does not represent the author's thinking in the specific text read by the respondent.
As a consequence of this high probability of answering each item
correctly by chance, the test tends to produce many false-positive
responses, which support the erroneous inference that some
respondents have a strong deep approach when, in fact, they have a
weak or a moderate deep approach. To solve this issue, Gomes,
Nascimento and Araujo created the SLAT-Thinking Second Version
(SLAT-Thinking 2).
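To make this false-positive argument concrete, the sketch below contrasts the chance of a pure guesser reaching a high score under the two designs, assuming a simple binomial model of independent guessing. The item counts (12 for SLAT-Thinking, 16 for form A of SLAT-Thinking 2) come from the test descriptions; the cut-off of 9 correct answers is a hypothetical threshold chosen only for illustration.

```python
from math import comb

def p_score_at_least(n_items: int, k: int, p_guess: float) -> float:
    """Probability that pure guessing yields at least k correct answers
    out of n_items, assuming independent items (binomial model)."""
    return sum(comb(n_items, i) * p_guess**i * (1 - p_guess)**(n_items - i)
               for i in range(k, n_items + 1))

# SLAT-Thinking: 12 items per text; with the third option implausible,
# guessing between the two remaining options succeeds with p of about 1/2.
print(p_score_at_least(12, 9, 1/2))   # ~0.073: a non-negligible false-positive rate

# SLAT-Thinking 2 (form A): 16 items with 7 plausible options, so p of about 1/7.
print(p_score_at_least(16, 9, 1/7))   # ~0.0001: false positives nearly vanish
```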
In short, this new test is very similar to the first version, but some
modifications have been made: the two texts and all their items were
revised, new items were created, and the answer options were
increased from 3 to 7, diminishing the probability of respondents
answering correctly by chance. The creation of the answer options
was guided by theoretical processes related to motivational and
strategic aspects of the surface approach in the ability of identifying
the author’s thinking in a given text. The new items enabled the
creation of two forms of the test. Form A is composed of one of the
revised texts and a set of items and form B comprises the other
revised text and another set of items. This structure enables SLAT-Thinking 2 to be used in interventions that intend to promote
students’ approaches to learning by applying a pre- and post-test
design. Therefore, SLAT-Thinking 2 has achieved three
improvements in relation to the original version. First, it diminishes
the probability of respondents answering correctly by chance.
Second, it enables clinicians and educators to assess qualitative
processes that inform the errors produced by respondents. Thus,
SLAT-Thinking 2 is a promising tool for educational diagnostics.
This is very important because, despite the advances in studies
addressing the internal and external validity of evaluation instruments (Golino &
Gomes, 2014c, 2016; Gomes & Almeida, 2017; Gomes & Borges,
2008b; Gomes, Golino, & Peres, 2016, 2018, 2020; Gomes &
Jelihovschi, 2019; Gomes, Lemos, & Jelihovschi, 2020; Pires &
Gomes, 2017), there are very few tests that enable proper analysis of
the processes related to the learning and development of students in
the educational field.
This scarcity is a huge issue, since the creation of relevant sets of instruments that measure processes tends to promote scientific advances, such as in music therapy (André, Gomes, & Loureiro,
2017, 2018, 2020a, 2020b, 2020c; Rosário, Gomes, & Loureiro,
2019). Third, SLAT-Thinking 2 allows the assessment of
interventions on students’ approaches to learning and on the
development of cognitive abilities (Gomes, 2007; Gomes, Golino,
Santos, & Ferreira, 2014), in addition to making the evaluation of students more feasible (Ferreira & Gomes, 2017; Gomes, Araujo,
Nascimento, & Jelihovschi, 2018; Gomes, de Araujo, Ferreira, &
Golino, 2014; Gomes & Golino, 2015; Jelihovschi & Gomes, 2019).
In summary, the objective of this study is to present SLAT-Thinking
2 to the scientific community and show evidence of its content
validity. This paper is the first part of a series of necessary studies on
the internal and external validity of SLAT-Thinking 2 that aims to
make this test available to psychologists and educators.
Presenting the Rationality of SLAT-Thinking 2: As previously
mentioned, SLAT-Thinking 2 differs from its first version in five main aspects: (1) the number of options for answering each item has
been largely increased; (2) presence of new items; (3) existence of
form A and form B; (4) the response options are theoretically based
on processes that are attributed to the surface approach in the ability
of identifying the author’s thinking in a given text; (5) the two texts
have been revised and slightly changed. In addition to these
modifications, there is a sixth change in relation to the first version of
the test. In SLAT-Thinking, respondents only had to choose among the options "represents", "does not represent", or "it is not possible to know whether or not the statement represents the author's thinking".
In SLAT-Thinking 2, the seven options are composed of three options that affirm that the statement of the item represents the author's thinking in a given text, while three other options affirm that the statement does not represent the author's thinking. Beyond the terms "represents" or "does not represent", these six options include an argument that sustains why the statement does or does not represent the author's thinking in a given text. These arguments were created through a theoretical postulate that assumes the existence of different processes related to the surface approach in identifying the author's thinking. They allow a suitable assessment of
processes that drive the respondent to make errors, enabling further
understanding about the causes of these errors. Besides the six
aforementioned answer options, there is a seventh option which
claims that none of the six previous options are correct.
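A minimal sketch of this item structure, in Python, may help to fix the design: each option carries a claim, a supporting argument, and, for the incorrect options, the error process it is meant to mark. The names, the example content, and the error-process assignment are hypothetical illustrations (only two of the seven options are sketched), not the actual test material.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AnswerOption:
    claim: str                    # "REPRESENTS", "DOES NOT REPRESENT", or "NONE OF THE PREVIOUS"
    argument: str                 # justification shown to the respondent
    error_process: Optional[str]  # target error process; None for the keyed (correct) option

@dataclass
class Item:
    statement: str
    options: List[AnswerOption]   # 3 "represents" + 3 "does not represent" + 1 "none"
    key_index: int                # position of the single correct option

    def is_correct(self, marked_index: int) -> bool:
        return marked_index == self.key_index

# Hypothetical item loosely based on the instructions' example (Figure 2):
item = Item(
    statement="Maria likes chocolate",
    options=[
        AnswerOption("REPRESENTS",
                     "Maria ate chocolate as a child, therefore she likes it.",
                     "False causality"),   # hypothetical error-process assignment
        AnswerOption("DOES NOT REPRESENT",
                     "Phrase 2, 'She does not like chocolate', denies it.",
                     None),                # keyed answer
    ],
    key_index=1,
)
assert item.is_correct(1)
```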
Figure 2 shows the instructions of SLAT-Thinking 2 and an example of an item. This item is different from the items of the test that will be answered by the respondent, since it has only four answer options.
The smaller number of options in this item was considered adequate
and sufficient by the authors to explain to the respondents the
structure of the test and how it should be performed. SLAT-Thinking
2 postulates the presence of seven error processes related to the
surface approach in identifying the author’s thinking in a given text.
Each answer option was created to be a marker of one of these seven
error processes. The list of these error processes, their descriptions
and examples are shown in Table 1. It is important to highlight that
the items do not have a balanced number of answer options in terms
of error processes. Certain items have more answer options related to
the error process of “The reader does not capture the meaning of the
terms in the text”, while other items have more answer options related
to the error process of “False causality” or “Projection of thought”,
and so on. It is worth highlighting that, even though SLAT-Thinking 2 assumes that each answer option was created to be a marker of a certain error process, it is possible that the respondent marks a certain answer option guided by another error process or by guessing. As previously mentioned, SLAT-
Thinking 2 comprises form A and form B. The selection of
error processes that would be used to create incorrect answer
options for each item varied according to the test form. This
variation was due to specific characteristics of the text of each
form and the items related to it.
This test aims at investigating your ability to identify whether the given statements represent or not the author's thinking contained in a given text. Below the text there is a sequence of items, each showing an assertion and a set of statements that support that the item assertion represents or not the author's thinking contained in the text. Read the text and answer the items that refer to it. Each statement is followed by an argument that may or may not be able to support why the assertion represents or not the author's thinking. If you think the answer option is correct and that the argument that follows it is able to support this stand, then you should place an X in the parentheses associated with such option. Here is an example:

Text X. Maria is a young adult (Phrase 1). She does not like chocolate because her father forced her to eat chocolate when she was a child (Phrase 2).

Assertion 1. Maria likes chocolate.
( ) REPRESENTS: Maria used to eat chocolate when she was a child, therefore, she likes chocolate.
( ) REPRESENTS: If Maria did not like chocolate she would not have received chocolate from her father.
( X ) DOES NOT REPRESENT: The excerpt from phrase 2 "She does not like chocolate" denies Assertion 1.
( ) DOES NOT REPRESENT: Both phrase 1 and phrase 2 are necessary to reach the conclusion shown in Assertion 1.

Suppose you are answering Assertion 1 and agree that it does not represent the author's thinking because the excerpt from phrase 2 "She does not like chocolate" denies Assertion 1; you then place an "X" in the parentheses of the statement that represents this answer option. There is only one correct answer per assertion.

Figure 2. Instructions of SLAT-Thinking 2
Table 1. Error processes used as the basis for creating the incorrect answer options

1. The reader does not capture the meaning of the terms. Description: The reader does not decode the meanings of the terms, which prevents a logical analysis. In many cases in which the meaning of terms is not captured, the reader scans the text for the explicit presence of a certain term and does not recognize the presence of another term that has the same meaning. Example 1: "Everyone likes strawberries. John is a friend of Charles." The lack of understanding that John and Charles are people prevents the reader from concluding that they both like strawberries. Example 2: "It is very warm today" and "It is very hot today" express the same meaning. However, the reader can understand that these sentences are different only because the words "hot" and "warm" are different.

2. The reader does not differentiate the meaning of the terms. Description: It occurs when the reader assumes that terms with different semantics express the same meaning. While in error process 1 the reader does not understand what the terms mean, in this process they confuse the meaning of the terms. Example: "John likes cold things". When reading this sentence, the reader assumes that John likes "ice cream", indicating that he does not differentiate the meaning of "ice cream" from that of "cold things".

3. Projection of thought. Description: It occurs when the reader projects their thought onto the author's thought. Example: "Maria likes chocolate and popsicles". Since the reader believes that those who like chocolate and popsicles are addicted to sweets, they conclude that "Maria is addicted to sweets" and that this is the author's thought.

4. Refinement of argument. Description: The reader unconsciously adds new arguments, seeking to support or improve some logical relation supposedly presented by the author. This addition is understood by the reader as an argument of the author. Although every refinement of argument includes a projection of thought, the refinement process differs from the previous one because in this process the reader correctly recognizes the relations presented by the author and enhances their argument. Example: The reader reads the phrase "Men are sexists" and interprets that the author means that most men are sexists, but not all. After all, the reader understands that stating that all men are sexist is a very strong and perhaps inappropriate statement.

5. False causality. Description: It occurs when the reader assigns a relation of causality when only an association is established. This error process also encompasses the non-differentiation of the meaning of the terms, since the reader confuses the terms that establish the causality relation with those that define the relation of association in order to commit this error. Example: "People who frequently eat chocolate are happier". The reader concludes that eating chocolate frequently causes happiness.

6. The reader does not identify some relations. Description: It occurs when some relation (other than causality) presented by the author is not identified by the reader, resulting in inadequate logical conclusions. Example: "Maria does not like ice cream; Maria thinks ice cream tastes bad". The reader believes that without the first sentence it is not possible to conclude whether Maria likes ice cream or not.

7. Wrong logical conclusion. Description: The reader correctly identifies the terms, but establishes an illogical conclusion. Example: "All men are mortal. Socrates is a man". The reader articulates the assumptions wrongly and concludes that Socrates is immortal, which would not be logically possible.
The "False causality" error process was used only in answer options of form B. Table 2 shows the frequency of error processes in the whole test, as well as in form A and form B. Only the target error process of each answer option has been counted and categorized.
Table 2. Frequency of error processes

Error process                                                Form A   Form B   Total
The reader does not capture the meaning of the terms              8        6      14
The reader does not differentiate the meaning of the terms       10        2      12
Projection of thought                                            42       29      71
Refinement of argument                                            1        8       9
False causality                                                   0       10      10
The reader does not identify some relations                       8        9      17
Wrong logical conclusion                                         11       22      33
METHODS
Participants: Nine judges (56% male) aged 21 to 69 years evaluated the content validity of SLAT-Thinking 2. Four of them were psychologists, while the others were an economist, an educator, a statistician, an engineer, and an undergraduate student. Seven of these judges held or were pursuing master's or doctoral degrees.
Instrument
SLAT-Thinking 2: The Students’ Learning Approach Test 2 -
Identification of Thinking Contained in Texts (SLAT-Thinking 2) is
an assessment based on performance used to measure students’
approaches to learning in identifying the author's thinking contained
in a given text. It was developed by C. M. A. Gomes, D. Nascimento,
and J. Araujo, at the Laboratory for Cognitive Architecture Mapping
(Laboratório de Investigação da Arquitetura Cognitiva – LAICO) of
the Federal University of Minas Gerais, Brazil, in 2020. The test
comprises two forms: A and B. Each of these forms contains a
specific reference text. Form A has 16 items while form B has 17
items. Each item has a statement that may represent the author’s
thinking in a given text, as well as seven answer options, three of
them justifying that the statement presented by the item represents the
author's thinking in a given text, three justifying that the statement
presented by the item does not represent the author's thinking, and
one option claiming that “none of the previous options” are correct.
The respondent's task is to read the text presented by the test, as well
as each item related to it and its answer options, and mark one answer
option per item. If the respondent answers an item correctly, the item
is scored as 1; otherwise, the item is scored as 0. Higher raw scores are expected to indicate a stronger deep approach.
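As a sketch of this scoring rule, the snippet below sums one point per item whose marked option matches the answer key. The key and response vectors are invented for illustration only and do not reproduce the actual test key.

```python
from typing import List

def score_slat2(responses: List[int], answer_key: List[int]) -> int:
    """One point per item whose marked option (indices 0-6) matches the keyed option."""
    assert len(responses) == len(answer_key)
    return sum(int(r == k) for r, k in zip(responses, answer_key))

# Hypothetical form A key and one respondent's answers (16 items, 7 options each):
key_form_a = [1, 4, 0, 6, 2, 3, 5, 1, 0, 2, 4, 3, 6, 5, 1, 0]
marked     = [1, 4, 0, 6, 2, 3, 5, 1, 0, 2, 4, 0, 6, 5, 2, 0]
print(score_slat2(marked, key_form_a))  # 14: higher raw scores suggest a stronger deep approach
```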
Data collection: SLAT-Thinking 2 was sent to the judges by email
together with a content validity protocol which contained both forms
of the test and a description of the error processes that guided the
creation of the incorrect answer options to the items. The protocol
asked the judges to evaluate: (1) the description of each error process;
(2) whether the instructions of SLAT-Thinking 2 were clear and easy
to understand; (3) whether the texts contained some ambiguity; (4)
whether they agreed with the answer key, as well as with the error
process attributed to each incorrect answer option. The judges were
instructed to take the test first and then complete the tasks of the
content validity protocol. After that, two authors of the test scheduled
a meeting with each judge. At these meetings, the judges were asked to present suggestions to improve the test. Regarding the points of disagreement between the judge and the authors, the judge exposed the arguments that supported their point of view and listened to the arguments of the test authors. If a disagreement pointed out by
the judge remained unsolved after the individual meeting with the
two authors of the test, the issue was discussed at a later meeting by
the full team of test authors and, if necessary, modifications were
made.
RESULTS AND DISCUSSION
To summarize the issues raised by the judges, the authors created six
categories. Four of these categories represent suggestions given by
the judges to reformulate the wording of some part of the test or the
description of the error processes. The other two categories represent
the disagreements of the judges regarding the answer options of the
items. Table 3 presents these categories and the quantification of suggestions and disagreements presented by each judge.
Table 3. Suggestions and disagreements presented by the judges

Judge   Instructions   Error processes   Texts   Items   Disagreements   After discussion
1                  1                 1       1       3              24                  8
2                  1                 0       0       2              54                  3
3                  1                 0       1       1              27                  1
4                  1                 2       0       5              25                 14
5                  0                 0       1       2               3                  1
6                  1                 1       0       1               1                  0
7                  0                 0       0       1               2                  0
8                  0                 0       0       3              13                  6
9                  1                 0       0       0              18                  1
Total              6                 4       3      18             167                 34

Note: the first four numeric columns refer to suggestions on the wording of the instructions, the error processes, the texts, and the items, respectively; "After discussion" refers to disagreements retained after discussion with the authors.

The category
“Suggestions: wording of the instructions” shows whether the judge
made suggestions to reformulate the test instructions. It has a binary
score, in which “no suggestion was presented” is 0 and “suggestions
were presented" is 1. Six judges gave some suggestions to reformulate
the test instructions. Most of these suggestions referred to the
example item pertaining to the instructions. This example item had
only 2 answer options: one that sought to justify that the statement
that comprised the item represented the author's thought contained in
the example text and one that sought to justify that the statement did
not represent it. Since the actual test items had seven answer options,
some judges believed that the example item was too simple and not
able to clearly represent the task to be performed on the test. To solve
this issue, two more answer options were added. The final version of
the example item can be seen in the test instructions presented in
Figure 2. Other suggestions referred to the phrasing preferences of
certain judges and, therefore, did not represent relevant suggestions to
the wording of the test instructions.

The category "Suggestions: wording of the error processes" shows the number of text reformulation
suggestions presented by each judge regarding the description or
exemplification of the error processes. Each judge could make from 0
to 7 suggestions, one for each error process, and, considering the
number of judges, the total of suggestions could vary from 0 to 63. In
total, the judges made only four suggestions. The three test authors
reviewed each of the proposed suggestions and, if there was a
consensus among them that a given proposal would make the
description or exemplification of an error process clearer, it was
accepted. The final version of the error processes descriptions and
exemplification has already been presented in Table 1.

The category "Suggestions: wording of the texts" shows the
suggestions to reformulate the texts in form A and form B of the test.
It represents an ordinal distribution, where 0 is equivalent to “there
were no suggestions to reformulate any of the texts”, 1 is equivalent
to “there were suggestions to reformulate one of the two texts”, and 2
is equivalent to “there were suggestions to reformulate both texts”.
There was one suggestion to reformulate the text in form A and two
suggestions to reformulate the text in form B. Regarding the text in
form A, one of the judges suggested adding an adjunct to a noun in
the text in order to avoid ambiguity. Regarding the text in Form B,
two judges suggested adding an adjunct to a noun in the text, since
that noun could convey a feeling of incompleteness to the reader.
Both suggestions were accepted and the texts were changed.

The category "Suggestions: wording of the items" shows the number of items for which a given judge presented reformulation suggestions. These suggestions were restricted to the item statements and did not
cover the answer options. Each judge could present one suggestion
per item and, considering the two forms of the test and the number of
judges, the total number of suggestions could range from 0 to 297. In
total, the judges presented only 18 reformulation suggestions. The
suggestions that demonstrated the presence of terms in the item that
could invalidate the correct answer option led to a change of the item.
Based on them, items 4, 5 and 10 of form A of the test were changed.
Suggestions related to the clarity of items that reflected particular
preferences of certain judges or that would make the item
considerably easier were not considered sufficiently relevant by the
test authors to justify a change of the items.

The category "Disagreements" represents the number of disagreements of a judge
in relation to the answer options before the judge discussed their
responses with the test authors. The category “Disagreements after
discussion with the authors” represents the number of disagreements
of a given judge that remained even after discussion with two test
authors. Each judge could present one disagreement per answer
option, so the total number of possible disagreements per judge could
vary between 0 and 198. In total, 167 disagreements were presented before the discussion between judges and test authors, and only 34
disagreements remained after the discussion. Considering the two
forms of the test after discussion, 18 items did not retain any
disagreement, 10 items retained disagreements of one judge, one item
retained disagreements of two judges, and four items retained
disagreements of three judges. In other words, of the total of 33
items, 15 retained disagreements of at least one judge. The latter were
individually reviewed by the three test authors after the meetings with
the judges. The group of disagreements retained in each item was
characterized by the test authors in one of the following categories:
(1) “The term used in the wording of the item changes the correct
answer option”; (2) “The correct answer proposed in the answer key
is inadequate"; (3) "Lack of clarity"; (4) "Judge's own conception";
and (5) "Erroneous meaning attributed by the judges to one of the
terms of the item or the text". The first category occurred when a term
used in the wording of an answer option could invalidate the correct
answer originally proposed by the authors. It covered three of the
items that retained disagreements and these items had their answer
options reformulated. The second category occurred when the answer
option proposed as correct in the answer key was inadequate because
it was incorrect. It covered just one of the items that retained
disagreements and the correct answer for this item was changed. The
third category occurred when a demonstrative pronoun used in one of
the answer options to the item was not easily interpretable according
to the judge. It covered only one of the items that retained
disagreements and, to make the answer option clearer, the wording
that caused confusion was rewritten. The fourth category occurred
when the judge presented a personal conception that biased their
analysis, that is, the judge's own analysis presented a projection of
thought as described in this article. It covered seven of the items that retained disagreements. The fifth category occurred when the judge attributed an inappropriate meaning to one of the terms of the item or the text, which corrupted their analysis. It covered four of the items that retained disagreements¹. Items related to the fourth and fifth categories were not changed.

¹ One item that retained disagreements was covered by both the category "Judge's own conception" and the category "Erroneous meaning attributed by the judges to one of the terms of the item or the text", since it contained disagreements pertinent to both categories. Therefore, the total frequency of categories related to disagreements is 16, one point higher than the number of items that retained disagreements.
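As a minimal sketch of the bookkeeping behind these per-item counts, the snippet below uses an invented item-by-judge tally; the specific items assigned to each count are hypothetical, and only the resulting distribution matches the one reported above.

```python
from collections import Counter

# Hypothetical map: item id -> number of judges whose disagreement was retained.
retained = {f"A{i}": 0 for i in range(1, 17)}        # form A: 16 items
retained.update({f"B{i}": 0 for i in range(1, 18)})  # form B: 17 items
for item_id in ["A2", "A4", "A5", "A7", "A10", "B1", "B3", "B6", "B9", "B12"]:
    retained[item_id] = 1                            # 10 items: one judge
retained["B14"] = 2                                  # 1 item: two judges
for item_id in ["A12", "B2", "B8", "B16"]:
    retained[item_id] = 3                            # 4 items: three judges

print(Counter(retained.values()))                    # Counter({0: 18, 1: 10, 3: 4, 2: 1})
print(sum(v > 0 for v in retained.values()))         # 15 items retained >= 1 disagreement
```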
CONCLUSION
This paper presented SLAT-Thinking 2 and evidence concerning its
content validity. SLAT-Thinking 2 brings many improvements to the
field of students’ approaches to learning.
First, it makes available to researchers a measurement of approaches to learning based on achievement whose items have a low probability of being answered correctly by chance. Second, since the
answer options are guided by theoretical error processes regarding the
surface approach, SLAT-Thinking 2 enables clinicians and educators
to assess qualitative processes that inform the errors produced by
respondents, being a promising tool for Educational Psychology
diagnosis. Third, SLAT-Thinking 2 allows the assessment of
interventions on students’ approaches to learning and on the
development of cognitive abilities, since this test is composed of two
forms (A and B). This paper is the first part of a series of necessary studies regarding the construct validity of SLAT-Thinking 2. Further studies should investigate the structural validity of this test, as well as its invariance and external validity. We hope this presentation encourages researchers to use tests of approaches to learning based on achievement so that the exclusive use of self-report assessments can be avoided in this area.
REFERENCES
Alves, A. F., Gomes, C. M. A., Martins, A., & Almeida, L. S. 2016.
Social and cultural contexts change but intelligence persists as
incisive to explain children's academic achievement. PONTE:
International Scientific Researches Journal, 729, 70-89. doi:
10.21506/j.ponte.2016.9.6
Alves, A. F., Gomes, C. M. A., Martins, A., & Almeida, L. S. 2017.
Cognitive performance and academic achievement: How do
family and school converge? European Journal of Education and
Psychology, 102, 49-56. doi: 10.1016/j.ejeps.2017.07.001
Alves, A. F., Gomes, C. M. A., Martins, A., & Almeida, L. S. 2018.
The structure of intelligence in childhood: age and socio-familiar
impact on cognitive differentiation. Psychological Reports, 1211,
79-92. doi: 10.1177/0033294117723019
Alves, F. A., Flores, R. P., Gomes, C. M. A., & Golino, H. F. 2012.
Preditores do rendimento escolar: inteligência geral e crenças
sobre ensino-aprendizagem. Revista E-PSI, 1, 97-117. Retrieved
from https://revistaepsi.com/artigo/2012-ano2-volume1-artigo5/
André, A. M., Gomes, C. M. A., & Loureiro, C. M. V. 2017.
Equivalência de itens, semântica e operacional da versão
brasileira da Escala Nordoff Robbins de Comunicabilidade
Musical. OPUS, 232, 153. doi:10.20504/opus2017b2309.
André, A. M., Gomes, C. M. A., & Loureiro, C. M. V. 2018.
Reliability Inter-Examiners Of The Nordoff Robbins Musical
Communicativeness Scale Brazilian Version. 11th International
Conference of Students of Systematic Musicology, 101–105.
Retrieved from http://musica.ufmg.br/sysmus2018/wp-
content/uploads/2018/07/Reliability-Inter-examiners-of-the-
Nordoff-Robbins-Musical-Communicativeness-Scale-Brazilian-
Version.pdf
André, A. M. B., Gomes, C. M. A., & Loureiro, C. M. V. 2020a.
Confiabilidade Inter-examinadores da Escala de Relação
Criança-Terapeuta na Experiência Musical Coativa para validação
no contexto brasileiro. Hodie, 20e64243, 1–18.
doi:10.5216/mh.v20.64243
André, A. M. B., Gomes, C. M. A., & Loureiro, C. M. V. 2020b.
Confiabilidade Interexaminadores da versão brasileira da Escala
Nordoff Robbins de Comunicabilidade Musical. In Estudos
Latino-americanos em Música vol.2 pp. 152–163. Artemis.
doi:10.37572/EdArt_13210092015
André, A. M. B., Gomes, C. M. A., & Loureiro, C. M. V. 2020c.
Equivalência de itens, semântica e operacional da “Escala de
Musicabilidade: Formas de Atividade, Estágios e Qualidades de
Engajamento.” Orfeu, 52, 1–22.
doi:10.5965/2525530405022020e0010
Biggs, J., & Tang, C. 2011. Teaching for Quality Learning at
University. Maidenhead, UK: Open University Press
Cardoso, C. O., Seabra, A. G., Gomes, C. M. A., & Fonseca, R. P.
2019. Program for the neuropsychological stimulation of
cognition in students: impact, effectiveness, and transfer effect on
student cognitive performance. Frontiers in Psychology, 10, 1-16.
doi: 10.3389/fpsyg.2019.01784
Contreras, M. S., Salgado, F. C., Hernández-Pina, F., & Hernández,
F. M. 2017. Enfoques de aprendizaje y enfoques de enseñanza:
Origen y evolución. Educación y Educadores, 201, 65-88. doi: 10.5294/edu.2017.20.1.4
Costa, B. C. G., Gomes, C. M. A., & Fleith, D. S. 2017. Validade da
Escala de Cognições Acadêmicas Autorreferentes: autoconceito,
autoeficácia, autoestima e valor. Avaliação Psicológica, 161, 87-
97. doi: 10.15689/ap.2017.1601.10
Dias, N. M., Gomes, C. M. A., Reppold, C. T., Fioravanti-Bastos, A.,
C., M., Pires, E. U., Carreiro, L. R. R., & Seabra, A. G. 2015.
Investigação da estrutura e composição das funções executivas:
análise de modelos teóricos. Psicologia: teoria e prática, 172, 140-
152. doi: 10.15348/1980-6906/psicologia.v17n2p140-152
Ferreira, M. G., & Gomes, C. M. A. 2017. Intraindividual analysis of
the Zarit Burden Interview: a Brazilian case study. Alzheimer's & Dementia, 13, P1163-P1164. doi: 10.1016/j.jalz.2017.06.1710
Golino, H. F., & Gomes, C. M. A. 2014a. Four Machine Learning
methods to predict academic achievement of college students: a
comparison study. Revista E-Psi, 1, 68-101. Retrieved from
https://revistaepsi.com/artigo/2014-ano4-volume1-artigo4/
Golino, H.F., & Gomes, C. M. A. 2014b. Psychology data from the
“BAFACALO project: The Brazilian Intelligence Battery based
on two state-of-the-art models – Carroll’s Model and the CHC
model”. Journal of Open Psychology Data, 21, p.e6.
doi:10.5334/jopd.af
Golino, H. F., & Gomes, C. M. A. 2014c. Visualizing random forest’s
prediction results. Psychology, 5, 2084-2098. doi:
10.4236/psych.2014.519211
Golino, H. F., & Gomes, C. M. A. 2016. Random forest as an
imputation method for education and psychology research: its
impact on item fit and difficulty of the Rasch model. International
Journal of Research & Method in Education, 394, 401-421. doi:
10.1080/1743727X.2016.1168798
Golino, H. F., Gomes, C. M. A., & Andrade, D. 2014. Predicting
academic achievement of high-school students using machine
learning. Psychology, 5, 2046-2057.
doi:10.4236/psych.2014.518207
Golino, H. F., Gomes, C. M. A., Commons, M. L., & Miller, P. M. 2014. The construction and validation of a developmental test for stage identification: Two exploratory studies. Behavioral Development Bulletin, 193, 37-54. doi: 10.1037/h0100589
Gomes, C. M. A. 2007. Softwares educacionais podem ser instrumentos psicológicos. Psicologia Escolar e Educacional, 112, 391-401. doi: 10.1590/S1413-85572007000200016
Gomes, C. M. A. 2010a. Avaliando a avaliação escolar: notas
escolares e inteligência fluida. Psicologia em Estudo, 154, 841-
849. Retrieved from
http://www.redalyc.org/articulo.oa?id=287123084020
Gomes, C. M. A. 2010b. Estrutura fatorial da Bateria de Fatores
Cognitivos de Alta-Ordem BaFaCalo. Avaliação Psicológica, 93,
449-459. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712010000300011&lng=pt.
Gomes, C. M. A. 2010c. Perfis de Estudantes e a relação entre
abordagens de aprendizagem e rendimento Escolar. Psico
PUCRS. Online, 414, 503-509. Retrieved from
http://revistaseletronicas.pucrs.br/ojs/index.php/revistapsico/articl
e/view/6336
Gomes, C. M. A. 2011a. Abordagem profunda e abordagem
superficial à aprendizagem: diferentes perspectivas do rendimento
escolar. Psicologia: Reflexão e Crítica, 243, 438-447. doi:
10.1590/S0102-79722011000300004
Gomes, C. M. A. 2011b. Validade do conjunto de testes da habilidade
de memória de curto-prazo CTMC. Estudos de Psicologia Natal,
163, 235-242. doi:10.1590/S1413-294X2011000300005
Gomes, C. M. A. 2012a. A estrutura fatorial do inventário de
características da personalidade. Estudos de Psicologia Campinas,
292, 209-220. doi:10.1590/S0103-166X2012000200007
Gomes, C. M. A. 2012b. Validade de construto do conjunto de testes
de inteligência cristalizada CTIC da bateria de fatores cognitivos
de alta-ordem BaFaCAlO. Gerais : Revista Interinstitucional de
Psicologia, 52, 294-316. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S198
3-82202012000200009&lng=pt&tlng=pt.
Gomes, C. M. A. 2013. A Construção de uma Medida em
Abordagens de Aprendizagem. Psico PUCRS. Online, 442, 193-
203. Retrieved from
http://revistaseletronicas.pucrs.br/ojs/index.php/revistapsico/articl
e/view/11371
Gomes, C. M. A., & Almeida, L. S. 2017. Advocating the broad use
of the decision tree method in education. Practical Assessment,
Research & Evaluation, 2210, 1-10. Retrieved from
https://pareonline.net/getvn.asp?v=22&n=10
Gomes, C.M.A., Amantes, A., & Jelihovschi, E.G. 2020. Applying
the regression tree method to predict students’ science
achievement. Trends in Psychology. doi: 10.9788/s43076-019-
00002-5
Gomes, C. M. A., Araujo, J., Nascimento, E., & Jelihovisch, E. 2018.
Routine Psychological Testing of the Individual Is Not Valid.
Psychological Reports, 1224, 1576-1593. doi:
10.1177/0033294118785636
Gomes, C. M. A., Araujo, J., & Jelihovschi, E. G. 2020. Approaches
to learning in the non-academic context: construct validity of
learning approaches test in video game lat-video game.
International Journal of Development Research, 1011, 41842-
41849. doi: 10.37118/ijdr.20350.11.2020
Gomes, C. M. A., & Borges, O. N. 2007. Validação do modelo de
inteligência de Carroll em uma amostra brasileira. Avaliação
Psicológica, 62, 167-179. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712007000200007&lng=en&tlng=pt.
Gomes, C. M. A., & Borges, O. N. 2008a. Avaliação da validade e
fidedignidade do instrumento crenças de estudantes sobre ensino-
aprendizagem CrEA. Ciências & Cognição UFRJ, 133, 37-50.
Retrieved from
http://www.cienciasecognicao.org/revista/index.php/cec/article/vi
ew/60
Gomes, C. M. A., & Borges, O. 2008b. Limite da validade de um
instrumento de avaliação docente. Avaliação Psicológica, 73,
391-401. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712008000300011&lng=pt&tlng=pt.
Gomes, C. M. A., & Borges, O. 2008c. Qualidades psicométricas de
um conjunto de 45 testes cognitivos. Fractal: Revista de
Psicologia, 201, 195-207. doi:10.1590/S1984-
02922008000100019
Gomes, C. M. A., & Borges, O. N. 2009a. O ENEM é uma avaliação
educacional construtivista? Um estudo de validade de construto.
Estudos em Avaliação Educacional, 2042, 73-88. doi:
10.18222/eae204220092060
Gomes, C. M. A., & Borges, O. N. 2009b. Propriedades
psicométricas do conjunto de testes da habilidade visuo espacial.
PsicoUSF, 141, 19-34. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S141
3-82712009000100004&lng=pt&tlng=pt.
Gomes, C. M. A., & Borges, O. 2009c. Qualidades psicométricas do
conjunto de testes de inteligência fluida. Avaliação Psicológica,
81, 17-32. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712009000100003&lng=pt&tlng=pt.
Gomes, C. M. A., de Araújo, J., Ferreira, M. G., & Golino, H. F.
2014. The validity of the Cattel-Horn-Carroll model on the
intraindividual approach. Behavioral Development Bulletin, 194,
22-30. doi: 10.1037/h0101078
Gomes, C. M. A., Fleith, D. S., Marinho-Araujo, C. M., & Rabelo, M.
L. 2020. Predictors of students’ mathematics achievement in
secondary education. Psicologia: Teoria e Pesquisa, 36, e3638.
doi: 10.1590/0102.3772e3638
Gomes, C. M. A., & Gjikuria, J. 2017. Comparing the ESEM and
CFA approaches to analyze the Big Five factors. Avaliação
Psicológica, 163, 261-267. doi:10.15689/ap.2017.1603.12118
Gomes, C. M. A., & Gjikuria, E. 2018. Structural Validity of the
School Aspirations Questionnaire SAQ. Psicologia: Teoria e
Pesquisa, 34, e3438. doi:10.1590/0102.3772e3438
Gomes, C. M. A., & Golino, H. F. 2012a. Relações hierárquicas entre
os traços amplos do Big Five. Psicologia: Reflexão e Crítica, 253,
445-456. doi:10.1590/S0102-7972201200030000422
Gomes, C. M. A., & Golino, H. F. 2012b. O que a inteligência prediz:
diferenças individuais ou diferenças no desenvolvimento
acadêmico? Psicologia: teoria e prática, 141, 126-139. Retrieved
from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S151
6-36872012000100010&lng=pt&tlng=pt.
Gomes, C. M. A., & Golino, H. F. 2012c. Validade incremental da
Escala de Abordagens de Aprendizagem EABAP. Psicologia:
Reflexão e Crítica, 254, 400-410. doi:10.1590/S0102-
79722012000400001
Gomes, C. M. A., & Golino, H. F. 2014. Self-reports on students'
learning processes are academic metacognitive knowledge.
Psicologia: Reflexão e Crítica, 273, 472-480. doi: 10.1590/1678-
7153.201427307
Gomes, C. M. A., & Golino, H. 2015. Factor retention in the intra-
individual approach: Proposition of a triangulation strategy.
Avaliação Psicológica, 142, 273-279. doi:
10.15689/ap.2015.1402.12
Gomes, C. M. A., Golino, H. F., & Menezes, I. G. 2014. Predicting
School Achievement Rather than Intelligence: Does
Metacognition Matter? Psychology, 5, 1095-1110.
doi:10.4236/psych.2014.59122
Gomes, C. M. A., Golino, H. F., & Peres, A. J. S. 2016. Investigando
a validade estrutural das competências do ENEM: quatro
domínios correlacionados ou um modelo bifatorial. Boletim na
Medida INEP-Ministério da Educação, 510, 33-30. Retrieved
from
http://portal.inep.gov.br/documents/186968/494037/BOLETIM+
NA+MEDIDA+-+N%C2%BA+10/4b8e3d73-d95d-4815-866c-
ac2298dff0bd?version=1.1
Gomes, C. M. A. Golino, H. F., & Peres, A. J. S. 2018. Análise da
fidedignidade composta dos escores do enem por meio da análise
fatorial de itens. European Journal of Education Studies, 58, 331-
344. doi:10.5281/zenodo.2527904
Gomes, C. M. A., Golino, H. F., & Peres, A. J. S. 2020.
Fidedignidade dos escores do Exame Nacional do Ensino Médio
Enem. Psico RS, 542, 1-10. doi: 10.15448/1980-
8623.2020.2.31145.
Gomes, C. M. A., Golino, H. F., Pinheiro, C. A. R., Miranda, G. R.,
& Soares, J. M. T. 2011. Validação da Escala de Abordagens de
Aprendizagem EABAP em uma amostra Brasileira. Psicologia:
Reflexão e Crítica, 241, 19-27. doi: 10.1590/S0102-
79722011000100004
Gomes, C. M. A., Golino, H. F., Santos, M. T., & Ferreira, M. G.,
2014. Formal-Logic Development Program: Effects on Fluid
Intelligence and on Inductive Reasoning Stages. British Journal of
Education, Society & Behavioural Science, 49, 1234-1248.
Retrieved from http://www.sciencedomain.org/review-
history.php?iid=488&id=21&aid=4724
Gomes, C. M. A., & Jelihovschi, E. 2019. Presenting the regression
tree method and its application in a large-scale educational
dataset. International Journal of Research & Method in Education.
doi: 10.1080/1743727X.2019.1654992
Gomes, C. M. A., Lemos, G. C., & Jelihovschi, E. G. 2020.
Comparing the predictive power of the CART and CTREE
algorithms. Avaliação Psicológica, 191, 87-96. doi:
10.15689/ap.2020.1901.17737.10
Gomes, C. M. A., Linhares, I. S., Jelihovschi, E. G., & Rodrigues, M.
N. S. 2020. Introducing rationality and content validity of SLAT-
Thinking. International Journal of Development Research, 1010.
Gomes, C. M. A., & Marques, E. L. L. 2016. Evidências de validade
dos estilos de pensamento executivo, legislativo e judiciário.
Avaliação Psicológica, 153, 327-336. doi:
10.15689/ap.2016.1503.05
Gomes, C. M. A., Marques, E. L. L., & Golino, H. F. 2014. Validade
Incremental dos Estilos Legislativo, Executivo e Judiciário em
Relação ao Rendimento Escolar. Revista E-Psi, 2, 31-46.
Retrieved from https://revistaepsi.com/artigo/2013-2014-ano3-
volume2-artigo3/
Jelihovschi, E. G., & Gomes, C. M. A. 2019. Proposing an
achievement simulation methodology to allow the estimation of
individual in clinical testing context. Revista Brasileira de
Biometria, 374, 1-10. doi: 10.28951/rbb.v37i4.423
Linhares, I. & Gomes, C. M. A. 2020. Investigação da validade de
conteúdo do TAP-Pensamento. Pôster. I Encontro Anual da Rede
Nacional de Ciência para Educação CPE. doi:
10.13140/RG.2.2.31110.40006
Muniz, M., Gomes, C. M. A., & Pasian, S. R. 2016. Factor structure
of Raven's Coloured Progressive Matrices. Psico-USF, 212, 259-
272. doi: 10.1590/1413-82712016210204
Pazeto, T. C. B., Dias, N. M., Gomes, C. M. A., & Seabra, A. G.
2019. Prediction of arithmetic competence: role of cognitive
abilities, socioeconomic variables and the perception of the
teacher in early childhood education. Estudos de Psicologia, 243,
225-236. doi: 10.22491/1678-4669.20190024
Pereira, B. L. S., Golino, M. T. S., & Gomes, C. M. A. 2019.
Investigando os efeitos do Programa de Enriquecimento
Instrumental Básico em um estudo de caso único. European
Journal of Education Studies, 67, 35-52. doi:
10.5281/zenodo.3477577
Pires, A. A. M., & Gomes, C. M. A. 2017. Three mistaken procedures
in the elaboration of school exams: explicitness and discussion.
PONTE International Scientific Researches Journal, 733, 1-14.
doi: 10.21506/j.ponte.2017.3.1
Pires, A. A. M., & Gomes, C. M. A. 2018. Proposing a method to
create metacognitive school exams. European Journal of
Education Studies, 58, 119-142. doi:10.5281/zenodo.2313538
Reppold, C. T., Gomes, C. M. A., Seabra, A. G., Muniz, M.,
Valentini, F., & Laros, J.A. 2015. Contribuições da psicometria
para os estudos em neuropsicologia cognitiva. Psicologia: teoria e
prática, 172, 94-106. doi: 10.15348/1980-
6906/psicologia.v17n2p94-106
Richardson, M., Abraham, C., & Bond, R. 2012. Psychological
correlates of university students’ academic performance: a
systematic review and meta-analysis. Psychological Bulletin, 138(2), 353–387. doi: 10.1037/a0026838
Rosário, V. M., Gomes, C. M. A., & Loureiro, C. M. V. 2019.
Systematic review of attention testing in allegedly "untestable"
populations. International Journal of Psychological Research and
Reviews, 219, 1-21. doi: 10.28933/ijprr-2019-07-1905
Valentini, F., Gomes, C. M. A., Muniz, M., Mecca, T. P., Laros, J. A.,
& Andrade, J. M. 2015. Confiabilidade dos índices fatoriais da
Wais-III adaptada para a população brasileira. Psicologia: teoria
e prática, 172, 123-139. doi: 10.15348/1980-
6906/psicologia.v17n2p123-139
Watkins, D. 2001. Correlates of Approaches to Learning: A Cross-
Cultural Meta-Analysis. In R. J. Sternberg & L. F. Zhang Eds.,
Perspectives on thinking, learning and cognitive styles pp. 132–
157. Mahwah, NJ: Lawrence Erlbaum Associates.