PRESENTING SLAT-THINKING SECOND VERSION AND ITS CONTENT VALIDITY
*Cristiano Mauro Assis Gomes and Diogo Ferreira do Nascimento
Laboratório de Investigação da Arquitetura Cognitiva (LAICO), Universidade Federal de Minas Gerais, Brazil
ABSTRACT

Students' Approaches to Learning is an important theory in Educational Psychology that investigates the interaction of students with objects of knowledge and how this interaction affects learning outcomes. Recently, the Students' Learning Approach Test (SLAT-Thinking) was proposed as a pioneering methodology to investigate approaches to learning through performance in a given task. Nevertheless, this test has presented some issues regarding the high probability of answering correctly by chance. This paper presents a new version of this performance test, SLAT-Thinking 2. This new version solves the aforementioned issues, adds a theoretical framework to explain the incorrect answers given by respondents, and presents two test forms. This study presents the content validity of SLAT-Thinking 2, which is the first step in investigating the test's validity. The analysis was performed by nine judges, four of whom had an Educational Psychology background. It led to changes in the wording of the test instructions, the wording of the two texts given in the test task, the wording of three items, the wording of the response options of four items, and to the change of one answer key. This analysis certified the content validity of the new version of the test, which is expected to become a useful tool for researchers and practitioners.
Copyright © 2021, Cristiano Mauro Assis Gomes and Diogo Ferreira do Nascimento. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
The Students’ Approaches to Learning theory studies how students
interact with objects of knowledge (Biggs & Tang, 2011). This theory
assumes that there is a deep approach and a surface approach to
learning which characterize the way students interact with the objects
of knowledge. In short, a deep approach occurs when students interact actively with the objects of knowledge, in terms of both strategy and motivation; in turn, a surface approach characterizes a passive interaction of students with the objects of knowledge. Accordingly, the Students' Approaches to Learning theory assumes that the deep approach is positively correlated with student achievement, whereas the surface approach is negatively correlated with this outcome (Contreras et al., 2017).
Despite the promising constructs of this theory, two meta-analyses
(Richardson, Abraham & Bond, 2012; Watkins, 2001) showed that
the deep and surface approaches have weak correlations with student
achievement. Important predictors, such as intelligence (Alves,
Gomes, Martins, & Almeida, 2016, 2017, 2018; Golino & Gomes,
2019; Gomes, 2010b, 2011b, 2012b; Gomes & Borges, 2007, 2008c,
2009b, 2009c; Gomes, de Araújo, Ferreira & Golino, 2014; Gomes &
Golino, 2012b,
2015; Muniz, Gomes, & Pasian, 2016; Valentini et al., 2015), metacognition (Golino & Gomes, 2014a; Golino, Gomes, & Andrade,
2014; Gomes & Golino, 2014; Gomes, Golino, & Menezes, 2014;
Pires & Gomes, 2018), self-regulation (Cardoso, Seabra, Gomes, &
Fonseca, 2019; Dias et al., 2015; Golino, Gomes, Commons &
Miller, 2014; Gomes, 2007, 2010a; Gomes & Borges, 2009a; Gomes,
Golino, Santos, & Ferreira, 2014; Pereira, Golino, & Gomes, 2019; Reppold et al., 2015), and socioeconomic variables
(Gomes & Almeida, 2017; Gomes, Amantes & Jelihovschi, 2020;
Gomes, Fleith, Marinho-Araujo, & Rabelo, 2020; Gomes &
Jelihovschi, 2019; Gomes, Lemos, & Jelihovschi, 2020; Pazeto, Dias,
Gomes & Seabra, 2019) are much more important than students'
approaches to predict academic achievement. However, the deep and
surface approaches seem to have incremental validity (Gomes,
2011a), which sustains their importance. In short, students’
approaches to learning are a secondary predictor (Gomes, 2010c,
2011a, 2013; Gomes, Araujo, & Jelihovschi, 2020; Gomes & Golino,
2012c; Gomes, Golino, Pinheiro, Miranda, & Soares, 2011), similar
to motivational and self-reference variables such as personality
(Gomes, 2012a; Gomes & Gjikuria, 2017; Gomes & Golino, 2012a),
students’ beliefs on teaching-learning processes (Alves, Flores,
Gomes & Golino, 2012; Gomes & Borges, 2008a), learning styles
(Gomes, Marques, & Golino, 2014; Gomes & Marques, 2016),
motivation for learning (Gomes & Gjikuria, 2018), and academic
self-reference (Costa, Gomes, & Fleith, 2017). The mainstream
argumentation of researchers about the low prediction of students’
approaches to learning, concerning academic achievement, is that this
is caused by the educational assessment system, which does not
promote the deep approach and, in certain aspects, reinforces the
surface approach (Contreras et al., 2017). An alternative
interpretation for this is the exclusive existence of self-report
questionnaires to measure the students’ approaches. It is possible that
the exclusive use of self-report instruments to measure these
approaches produces considerable bias, generating scores with high
noise, diminishing the correlation between the approaches and
students' achievement. Interested readers can find a detailed argumentative exposition about this in the article by Gomes, Linhares, Jelihovschi, and Rodrigues (2020).
Taking all that into account, Gomes and Linhares created the
Students' Learning Approach Test - Identification of Thinking
Contained in Texts (SLAT-Thinking). This test is the first
measurement of students’ approaches to learning based on the
performance of respondents. The test measures the approaches of a
person in identifying the thinking of an author in a given text
(Linhares & Gomes, 2018). While measuring approaches through
performance, SLAT-Thinking is guided by the assumption that the
measurement of the approaches based on performance in a test should
focus on a specific ability or domain, since the students’ approaches
occur in many contexts. For example, deep and surface approaches
can be measured through the ability to transfer knowledge learned in
a context to another context or the ability to seek information and
select what is important and what is noise. SLAT-Thinking measures
students’ approaches through their ability to identify the author’s
thinking in a specific text. This ability was chosen because it is a strategic tool for critical reasoning in the internet age; that is, it is an appropriate context in which to measure approaches to learning in the 21st century.
SLAT-Thinking has two similar texts and 12 items related to each of
them. Each item is composed of a statement which can represent the
author’s thinking in a given text. Thus, the respondent must read the
text and answer each item related to it, marking one out of three
options. Option one affirms that the item's statement represents the
author's thinking, option two states that the item does not represent
the author's thinking, and option three informs that it is not possible
to answer whether or not the item represents the author's thinking in
that text because the text does not provide enough information. An example of an item that follows this structure is shown in Figure 1. This item is part of the instructions of the test. A detailed description of the assumptions that guide SLAT-Thinking, as well as its structure, can be found in Gomes et al. (2020).
| STATEMENT | E | N | Z |
| 1) Real Madrid is the best soccer team in the world. |   |   |   |

E = this statement represents the author's thinking; N = this statement does not represent the author's thinking; Z = it is not possible to answer whether or not this statement represents the author's thinking.

Figure 1. Example of an item that follows the SLAT-Thinking structure
Despite the advances of SLAT-Thinking in the measurement of
students’ approaches to learning, the test showed some relevant
issues. It presents a high probability of respondents answering an item correctly by chance. In practical terms, SLAT-Thinking tends to allow a probability of nearly 50% for this occurrence. Although
SLAT-Thinking has three answer options, the third option is not
plausible, since it is against the test instructions, that is, the
respondents must read the text and infer whether each statement
represents or not the author’s thinking only considering the text they
have read. Therefore, the statement of each item should represent or
not the author’s thinking in the specific text read by the respondent.
As a consequence of this high probability to answer each item
correctly by chance, the test tends to produce many false-positive
responses, which support the erroneous inference that some
respondents have a strong deep approach when, in fact, they have a
weak or a moderate deep approach. To solve this issue, Gomes,
Nascimento and Araujo created the SLAT-Thinking Second Version
(SLAT-Thinking 2).
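To make the contrast concrete, the sketch below (our illustration, not part of the test materials) computes the expected number of items answered correctly by pure guessing under the two formats, treating each item as an independent Bernoulli trial: roughly p = 1/2 per item in SLAT-Thinking (the implausible third option leaves two viable choices) versus p = 1/7 in SLAT-Thinking 2. The 16-item count is taken from form A, described later in this paper.

```python
# Illustrative sketch: chance-level performance under the two item formats.
from math import comb

def expected_correct_by_chance(n_items: int, p_guess: float) -> float:
    """Expected number of correct answers from pure guessing (binomial mean)."""
    return n_items * p_guess

def prob_at_least(k: int, n_items: int, p: float) -> float:
    """Probability of guessing at least k of n_items correctly (binomial tail)."""
    return sum(comb(n_items, i) * p**i * (1 - p)**(n_items - i)
               for i in range(k, n_items + 1))

n = 16  # items in form A
print(expected_correct_by_chance(n, 1/2))  # ~8.0 expected by chance, old format
print(expected_correct_by_chance(n, 1/7))  # ~2.3 expected by chance, new format
print(prob_at_least(8, n, 1/2))            # guessing half the items right is likely
print(prob_at_least(8, n, 1/7))            # near zero under the seven-option format
```

Under these assumptions, a spuriously high score (say, half the items correct) becomes extremely unlikely in the seven-option format, which is precisely the false-positive problem the revision targets.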
In short, this new test is very similar to the first version, but some
modifications have been made: the two texts and all their items were
revised, new items were created, and the answer options were
increased from 3 to 7, diminishing the probability of respondents
answering correctly by chance. The creation of the answer options
was guided by theoretical processes related to motivational and
strategic aspects of the surface approach in the ability of identifying
the author’s thinking in a given text. The new items enabled the
creation of two forms of the test. Form A is composed of one of the
revised texts and a set of items and form B comprises the other
revised text and another set of items. This structure enables SLAT-Thinking 2 to be used in interventions which intend to promote
students’ approaches to learning by applying a pre- and post-test
design. Therefore, SLAT-Thinking 2 has achieved three
improvements in relation to the original version. First, it diminishes
the probability of respondents answering correctly by chance.
Second, it enables clinicians and educators to assess qualitative
processes that inform the errors produced by respondents. Thus,
SLAT-Thinking 2 is a promising tool for educational diagnostics.
This is very important because, despite the advances in studies
addressing the internal and external validity of evaluation (Golino &
Gomes, 2014c, 2016; Gomes & Almeida, 2017; Gomes & Borges,
2008b; Gomes, Golino, & Peres, 2016, 2018, 2020; Gomes &
Jelihovschi, 2019; Gomes, Lemos, & Jelihovschi, 2020; Pires &
Gomes, 2017), there are very few tests that enable proper analysis of
the processes related to the learning and development of students in
the educational field.
This scarcity is a huge issue, since the creation of relevant sets of
instruments that measure processes tend to promote scientific
advances, such as in music therapy (André, Gomes, & Loureiro,
2017, 2018, 2020a, 2020b, 2020c; Rosário, Gomes, & Loureiro,
2019). Third, SLAT-Thinking 2 allows the assessment of
interventions on students’ approaches to learning and on the
development of cognitive abilities (Gomes, 2007; Gomes, Golino,
Santos, & Ferreira, 2014), in addition to making the evaluation of students more feasible (Ferreira & Gomes, 2017; Gomes, Araujo,
Nascimento, & Jelihovschi, 2018; Gomes, de Araujo, Ferreira, &
Golino, 2014; Gomes & Golino, 2015; Jelihovschi & Gomes, 2019).
In summary, the objective of this study is to present SLAT-Thinking
2 to the scientific community and show evidence of its content
validity. This paper is the first part of a series of necessary studies on
the internal and external validity of SLAT-Thinking 2 that aims to
make this test available to psychologists and educators.
Presenting the Rationality of SLAT-Thinking 2: As previously mentioned, SLAT-Thinking 2 differs from its first version in five main aspects: (1) the number of options for answering each item has
been largely increased; (2) presence of new items; (3) existence of
form A and form B; (4) the response options are theoretically based
on processes that are attributed to the surface approach in the ability
of identifying the author’s thinking in a given text; (5) the two texts
have been revised and slightly changed. In addition to these
modifications, there is a sixth change in relation to the first version of
the test. In SLAT-Thinking, respondents only had to choose between
the options “represent”, “does not represent”, or “it is not possible to
know whether the statement represents or not the authors’ thinking”.
In SLAT-Thinking 2, the seven options are composed of three options that affirm that the statement of the item represents the author's thinking in a given text, while three other options affirm that the statement does not represent the author's thinking. Beyond the terms "represents" or "does not represent", each of these six options carries an argument that sustains why the statement does or does not represent the author's thinking in a given text. These arguments were created through a theoretical postulate that assumes the existence of different processes related to the surface approach in identifying the author's thinking. They allow a suitable assessment of the processes that drive the respondent to make errors, enabling further understanding of the causes of these errors. Besides the six aforementioned answer options, there is a seventh option which claims that none of the six previous options is correct.
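For readers who implement or analyze the test computationally, the sketch below (our illustration; the names, option texts, and error-process tags are assumptions, not the authors' materials) shows one way to represent a SLAT-Thinking 2 item: a statement, argued options each tagged with the surface-approach error process it is intended to mark, and a final "none of the previous options" choice.

```python
# Illustrative data structure for a seven-option SLAT-Thinking 2 item.
# Option texts and error-process tags below are hypothetical examples.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AnswerOption:
    stance: str                          # "REPRESENTS", "DOES NOT REPRESENT", or "NONE"
    argument: str                        # justification shown to the respondent
    error_process: Optional[str] = None  # targeted error process; None for the keyed option

@dataclass
class Item:
    statement: str
    options: List[AnswerOption]          # six argued options plus the "none" option
    key_index: int                       # position of the correct option in `options`

# Hypothetical item loosely modeled on the example in Figure 2
item = Item(
    statement="Maria likes chocolate",
    options=[
        AnswerOption("REPRESENTS",
                     "Maria used to eat chocolate when she was a child, therefore she likes it.",
                     error_process="False causality"),  # illustrative tag
        AnswerOption("DOES NOT REPRESENT",
                     'The excerpt "She does not like chocolate" denies the statement.'),
        AnswerOption("NONE", "None of the previous options is correct."),
        # ...remaining argued options omitted for brevity
    ],
    key_index=1,
)
```

A representation of this kind makes it straightforward to map each incorrect response to the error process it marks, which is the qualitative diagnostic use the authors describe.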
Figure 2 shows the instructions of SLAT-Thinking 2 and an example
of item. This item is different from the items of the test that will be
answered by the respondent, since it has only four answer options.
The smaller number of options in this item was considered adequate
and sufficient by the authors to explain to the respondents the
structure of the test and how it should be performed. SLAT-Thinking
2 postulates the presence of seven error processes related to the
surface approach in identifying the author’s thinking in a given text.
Each answer option was created to be a marker of one of these seven
error processes. The list of these error processes, their descriptions
and examples are shown in Table 1. It is important to highlight that
the items do not have a balanced number of answer options in terms
of error processes. Certain items have more answer options related to
the error process of “The reader does not capture the meaning of the
terms in the text”, while other items have more answer options related
to the error process of “False causality” or “Projection of thought”,
and so on. It is worth highlighting that, even though SLAT-Thinking 2 assumes that each answer option was created to be a marker of a certain error process, it is possible that the respondent marks a certain answer option guided by another error process or by guessing. As previously mentioned, SLAT-
Thinking 2 comprises form A and form B. The selection of
error processes that would be used to create incorrect answer
options for each item varied according to the test form. This
variation was due to specific characteristics of the text of each
form and the items related to it.
This test aims at investigating your ability to identify whether the given statements represent or not the author's thinking contained in a given text.
Below the text there is a sequence of items, each showing an assertion and a set of statements that argue that the assertion represents or does not represent the author's thinking contained in the text.
Read the text and answer the items that refer to it.
Each statement is followed by an argument that may or may not be able to support why the assertion represents or not the author's thinking.
If you think the answer option is correct and that the argument that follows is able to support this stand, then you should place an X in the parentheses associated with that option.
Here is an example:
Text X.
Maria is a young adult (Phrase 1). She does not like chocolate because her father forced her to eat chocolate when she was a child (Phrase 2).
Assertion 1. Maria likes chocolate
( ) REPRESENTS: Maria used to eat chocolate when she was a child, therefore, she likes chocolate.
( ) REPRESENTS: If Maria did not like chocolate she would not have received chocolate from her father.
( X ) DOES NOT REPRESENT: The excerpt from phrase 2 "She does not like chocolate" denies Assertion 1.
( ) DOES NOT REPRESENT: Both phrase 1 and phrase 2 are necessary to reach the conclusion shown in Assertion 1.
Suppose you are answering Assertion 1 and agree that it does not represent the author's thinking because the excerpt from phrase 2 "She does not like chocolate" denies Assertion 1; you then place an "X" in the parentheses of the statement that represents this answer option. There is only one correct answer per assertion.

Figure 2. Instructions of SLAT-Thinking 2
Table 1. Error processes used as the basis for creating the incorrect answer options

1. The reader does not capture the meaning of the terms
Description: The reader does not decode the meanings of the terms, which prevents a logical analysis. In many cases in which the meaning of terms is not captured, the reader scans the text for the explicit presence of a certain term and does not recognize the presence of another term that has the same meaning.
Example 1: "Everyone likes strawberries. John is a friend of Charles." The lack of understanding that John and Charles are people prevents the reader from concluding that they both like strawberries.
Example 2: "It is very warm today" and "It is very hot today" express the same meaning. However, the reader can understand that these sentences are different only because the words "hot" and "warm" are different.

2. The reader does not differentiate the meaning of the terms
Description: It occurs when the reader assumes that terms with different semantics express the same meaning. While in error process 1 the reader does not understand what the terms mean, in this process they confuse the meaning of the terms.
Example: "John likes cold things". When reading this sentence, the reader assumes that John likes "ice cream", indicating that he does not differentiate the meaning of "ice cream" from that of "cold things".

3. Projection of thought
Description: It occurs when the reader projects their own thought onto the author's thought.
Example: "Maria likes chocolate and popsicles". Since the reader believes that those who like chocolate and popsicles are addicted to sweets, they conclude that "Maria is addicted to sweets" and that this is the author's thought.

4. Refinement of argument
Description: The reader unconsciously adds new arguments, seeking to support or improve some logical relation supposedly presented by the author. This addition is understood by the reader as an argument of the author. Although every refinement of argument includes a projection of thought, the refinement process differs from the previous one because, in this process, the reader correctly recognizes the relations presented by the author and enhances their argument.
Example: The reader reads the phrase "Men are sexists" and interprets that the author means that most men are sexists, but not all. After all, the reader understands that stating that all men are sexist is a very strong and perhaps inappropriate statement.

5. False causality
Description: It occurs when the reader assigns a relation of causality where only an association is established. This error process also encompasses the non-differentiation of the meaning of the terms, since, to commit this error, the reader confuses the terms that establish a causality relation with those that define a relation of association.
Example: "People who frequently eat chocolate are happier". The reader concludes that eating chocolate frequently causes happiness.

6. The reader does not identify some relations
Description: It occurs when some relation (other than causality) presented by the author is not identified by the reader, resulting in inadequate logical conclusions.
Example: "Maria does not like ice cream; Maria thinks ice cream tastes bad." The reader believes that without the first sentence it is not possible to conclude whether Maria likes ice cream or not.

7. Wrong logical conclusion
Description: The reader correctly identifies the terms but establishes an illogical conclusion.
Example: "All men are mortal. Socrates is a man". The reader articulates the premises wrongly and concludes that Socrates is immortal, which is not logically possible.
The "False causality" error process was used only in answer options of form B. Table 2 shows the frequency of error processes in the whole test, as well as in form A and form B. Only the target error process of each answer option was counted and categorized.
Table 2. Frequency of error processes

| Error process | Form A | Form B | Total |
| The reader does not capture the meaning of the terms | 8 | 6 | 14 |
| The reader does not differentiate the meaning of the terms | 10 | 2 | 12 |
| Projection of thought | 42 | 29 | 71 |
| Refinement of argument | 1 | 8 | 9 |
| False causality | 0 | 10 | 10 |
| The reader does not identify some relations | 8 | 9 | 17 |
| Wrong logical conclusion | 11 | 22 | 33 |
METHODS
Participants: Nine judges (56% male) aged 21 to 69 years evaluated
the content validity of SLAT-Thinking 2. Four of them were
psychologists while the others were an economist, an educator, a
statistician, an engineer, and an undergraduate student. Seven of these judges already held or were pursuing master's or doctoral degrees.
Instrument
SLAT-Thinking 2: The Students’ Learning Approach Test 2 -
Identification of Thinking Contained in Texts (SLAT-Thinking 2) is
an assessment based on performance used to measure students’
approaches to learning in identifying the author's thinking contained
in a given text. It was developed by C. M. A. Gomes, D. Nascimento,
and J. Araujo, at the Laboratory for Cognitive Architecture Mapping
(Laboratório de Investigação da Arquitetura Cognitiva - LAICO) of
the Federal University of Minas Gerais, Brazil, in 2020. The test
comprises two forms: A and B. Each of these forms contains a
specific reference text. Form A has 16 items while form B has 17
items. Each item has a statement that may represent the author’s
thinking in a given text, as well as seven answer options, three of
them justifying that the statement presented by the item represents the
author's thinking in a given text, three justifying that the statement
presented by the item does not represent the author's thinking, and one option claiming that none of the previous options is correct.
The respondent's task is to read the text presented by the test, as well
as each item related to it and its answer options, and mark one answer
option per item. If the respondent answers an item correctly, the item
is scored as 1; otherwise, the item is scored as 0. It is expected that
higher raw scores indicate greater deep approach.
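Under the scoring rule just described, the raw score is simply the count of matches against the answer key. The snippet below is a minimal sketch of that rule; the key and responses shown are hypothetical placeholders, not the real answer key.

```python
# Dichotomous scoring: 1 if the marked option matches the key, 0 otherwise.
ANSWER_KEY = {1: "D2", 2: "R1", 3: "N"}  # item -> keyed option label (hypothetical)

def raw_score(responses: dict, key: dict) -> int:
    """Sum of item scores; higher raw scores suggest a greater deep approach."""
    return sum(1 for item, marked in responses.items() if key.get(item) == marked)

print(raw_score({1: "D2", 2: "R3", 3: "N"}, ANSWER_KEY))  # -> 2
```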
Data collection: SLAT-Thinking 2 was sent to the judges by email
together with a content validity protocol which contained both forms
of the test and a description of the error processes that guided the
creation of the incorrect answer options to the items. The protocol
asked the judges to evaluate: (1) the description of each error process;
(2) whether the instructions of SLAT-Thinking 2 were clear and easy
to understand; (3) whether the texts contained some ambiguity; (4)
whether they agreed with the answer key, as well as with the error
process attributed to each incorrect answer option. The judges were
instructed to take the test first and then complete the tasks of the
content validity protocol. After that, two authors of the test scheduled
a meeting with each judge. At these meetings, the judges presented suggestions to improve the test. Regarding the points of disagreement between the judge and the authors, the judge exposed the arguments that supported their point of view and listened to the arguments of the test authors. If a disagreement pointed out by
the judge remained unsolved after the individual meeting with the
two authors of the test, the issue was discussed at a later meeting by
the full team of test authors and, if necessary, modifications were
made.
RESULTS AND DISCUSSION
To summarize the issues raised by the judges, the authors created six
categories. Four of these categories represent suggestions given by
the judges to reformulate the wording of some part of the test or the
description of the error processes. The other two categories represent
the disagreements of the judges regarding the answer options of the
items. Table 3 presents these categories and the quantification of
suggestions and disagreements presented by each judge. The category
“Suggestions: wording of the instructions” shows whether the judge
made suggestions to reformulate the test instructions. It has a binary score, in which "no suggestion was presented" is 0 and "suggestions were presented" is 1. Six judges gave some suggestions to reformulate
the test instructions. Most of these suggestions referred to the
example item pertaining to the instructions. This example item had
only 2 answer options: one that sought to justify that the statement
that comprised the item represented the author's thought contained in
the example text and one that sought to justify that the statement did
not represent it. Since the actual test items had seven answer options,
some judges believed that the example item was too simple and not
able to clearly represent the task to be performed on the test. To solve
this issue, two more answer options were added. The final version of
the example item can be seen in the test instructions presented in
Figure 2. Other suggestions referred to the phrasing preferences of
certain judges and, therefore, did not represent relevant suggestions to
the wording of the test instructions. The category “Suggestions:
wording of error processes” shows the number of text reformulation
suggestions presented by each judge regarding the description or
exemplification of the error processes. Each judge could make from 0
to 7 suggestions, one for each error process, and, considering the
number of judges, the total of suggestions could vary from 0 to 63. In
total, the judges made only four suggestions. The three test authors
reviewed each of the proposed suggestions and, if there was a
consensus among them that a given proposal would make the
description or exemplification of an error process clearer, it was
accepted. The final version of the error processes descriptions and
exemplification has already been presented in Table 1.
The category “Suggestions: wording of the texts” shows the
suggestions to reformulate the texts in form A and form B of the test.
It represents an ordinal distribution, where 0 is equivalent to “there
were no suggestions to reformulate any of the texts”, 1 is equivalent
to “there were suggestions to reformulate one of the two texts”, and 2
is equivalent to “there were suggestions to reformulate both texts”.
There was one suggestion to reformulate the text in form A and two
suggestions to reformulate the text in form B. Regarding the text in
form A, one of the judges suggested adding an adjunct to a noun in
the text in order to avoid ambiguity. Regarding the text in Form B,
two judges suggested adding an adjunct to a noun in the text, since
that noun could convey a feeling of incompleteness to the reader.
Both suggestions were accepted and the texts were changed.
The category “Suggestions: wording of the items” shows the number
of items for which a given judge presented reformulation suggestions.
These suggestions were restricted to the item statements and did not cover the answer options. Each judge could present one suggestion per item and, considering the two forms of the test (33 items in total) and the nine judges, the total number of suggestions could range from 0 to 297. In
total, the judges presented only 18 reformulation suggestions. The
suggestions that demonstrated the presence of terms in the item that
could invalidate the correct answer option led to a change of the item.
Based on them, items 4, 5 and 10 of form A of the test were changed.
Suggestions related to the clarity of items that reflected particular
preferences of certain judges or that would make the item
considerably easier were not considered sufficiently relevant by the
test authors to justify a change of items. The category
“Disagreements” represents the number of disagreements of the judge
in relation to the answer options before the judge discussed their
responses with the test authors. The category “Disagreements after
discussion with the authors” represents the number of disagreements
of a given judge that remained even after discussion with two test
authors. Each judge could present one disagreement per answer option, so the total number of possible disagreements per judge could vary between 0 and 198 (33 items with six argued answer options each). Across the nine judges, only 167 disagreements were presented before the discussion between judges and test authors and only 34 disagreements remained after the discussion. Considering the two
forms of the test after discussion, 18 items did not retain any
disagreement, 10 items retained disagreements of one judge, one item
retained disagreements of two judges, and four items retained
disagreements of three judges. In other words, of the total of 33
items, 15 retained disagreements of at least one judge. The latter were
individually reviewed by the three test authors after the meetings with
the judges. The group of disagreements retained in each item was
characterized by the test authors in one of the following categories:
(1) “The term used in the wording of the item changes the correct
answer option”; (2) “The correct answer proposed in the answer key
is inadequate"; (3) "Lack of clarity"; (4) "Judge's own conception";
and (5) "Erroneous meaning attributed by the judges to one of the
terms of the item or the text". The first category occurred when a term
used in the wording of an answer option could invalidate the correct
answer originally proposed by the authors. It covered three of the
items that retained disagreements and these items had their answer
options reformulated. The second category occurred when the answer
option proposed as correct in the answer key was inadequate because
it was incorrect. It covered just one of the items that retained
disagreements and the correct answer for this item was changed. The
third category occurred when a demonstrative pronoun used in one of
the answer options to the item was not easily interpretable according
to the judge. It covered only one of the items that retained
disagreements and, to make the answer option clearer, the wording
that caused confusion was rewritten. The fourth category occurred
when the judge presented a personal conception that biased their
analysis, that is, the judge's own analysis presented a projection of
thought as described in this article. It covered seven of the items that
retained disagreements. The fifth category occurred when the judge
attributed an inappropriate meaning to one of the terms of the item or
the text which corrupted their analysis. It covered four of the items
that retained disagreements.¹ Items related to the fourth and fifth categories were not changed.
CONCLUSION
This paper presented SLAT-Thinking 2 and evidence concerning its
content validity. SLAT-Thinking 2 brings many improvements to the
field of students’ approaches to learning.
¹ One item that retained disagreements was covered both in the category "Judge's own conception" and in the category "Erroneous meaning attributed by the judges to one of the terms of the item or the text", since it contained disagreements pertinent to both categories. Therefore, the total frequency of categories related to disagreements is 16, one point higher than the number of items that retained disagreements.
First, it makes available to the researchers a measurement of
approaches to learning based on achievement whose items have low
probability to be correctly answered by chance. Second, since the
answer options are guided by theoretical error processes regarding the
surface approach, SLAT-Thinking 2 enables clinicians and educators
to assess qualitative processes that inform the errors produced by
respondents, being a promising tool for Educational Psychology
diagnosis. Third, SLAT-Thinking 2 allows the assessment of
interventions on students’ approaches to learning and on the
development of cognitive abilities, since this test is composed of two
forms (A and B). This paper is the first part of a series of necessary studies regarding the construct validity of SLAT-Thinking 2. Further studies should investigate the structural validity of this test, as well as its invariance and external validity. We hope this presentation encourages researchers to use tests of approaches to learning based on achievement so that the exclusive use of self-report assessments can be avoided in this area.
REFERENCES
Alves, A. F., Gomes, C. M. A., Martins, A., & Almeida, L. S. 2016.
Social and cultural contexts change but intelligence persists as
incisive to explain children's academic achievement. PONTE:
International Scientific Researches Journal, 729, 70-89. doi:
10.21506/j.ponte.2016.9.6
Alves, A. F., Gomes, C. M. A., Martins, A., & Almeida, L. S. 2017.
Cognitive performance and academic achievement: How do
family and school converge? European Journal of Education and
Psychology, 102, 49-56. doi: 10.1016/j.ejeps.2017.07.001
Alves, A. F., Gomes, C. M. A., Martins, A., & Almeida, L. S. 2018.
The structure of intelligence in childhood: age and socio-familiar
impact on cognitive differentiation. Psychological Reports, 1211,
79-92. doi: 10.1177/0033294117723019
Alves, F. A., Flores, R. P., Gomes, C. M. A., & Golino, H. F. 2012.
Preditores do rendimento escolar: inteligência geral e crenças
sobre ensino-aprendizagem. Revista E-PSI, 1, 97-117. Retrieved
from https://revistaepsi.com/artigo/2012-ano2-volume1-artigo5/
André, A. M., Gomes, C. M. A., & Loureiro, C. M. V. 2017.
Equivalência de itens, semântica e operacional da versão
brasileira da Escala Nordoff Robbins de Comunicabilidade
Musical. OPUS, 232, 153. doi:10.20504/opus2017b2309.
André, A. M., Gomes, C. M. A., & Loureiro, C. M. V. 2018.
Reliability Inter-Examiners Of The Nordoff Robbins Musical
Communicativeness Scale Brazilian Version. 11th International
Conference of Students of Systematic Musicology, 101105.
Retrieved from http://musica.ufmg.br/sysmus2018/wp-
content/uploads/2018/07/Reliability-Inter-examiners-of-the-
Nordoff-Robbins-Musical-Communicativeness-Scale-Brazilian-
Version.pdf
André, A. M. B., Gomes, C. M. A., & Loureiro, C. M. V. 2020a.
Confiabilidade Inter-examinadores da Escala de Relação
Criança-Terapeuta na Experiência Musical Coativa para validação
no contexto brasileiro. Hodie, 20e64243, 118.
doi:10.5216/mh.v20.64243
Table 3. Suggestions and disagreements presented by the judges

| Judge | Suggestions: wording of the instructions | Suggestions: wording of the error processes | Suggestions: wording of the texts | Suggestions: wording of the items | Disagreements | Disagreements after discussion with authors |
| 1 | 1 | 1 | 1 | 3 | 24 | 8 |
| 2 | 1 | 0 | 0 | 2 | 54 | 3 |
| 3 | 1 | 0 | 1 | 1 | 27 | 1 |
| 4 | 1 | 2 | 0 | 5 | 25 | 14 |
| 5 | 0 | 0 | 1 | 2 | 3 | 1 |
| 6 | 1 | 1 | 0 | 1 | 1 | 0 |
| 7 | 0 | 0 | 0 | 1 | 2 | 0 |
| 8 | 0 | 0 | 0 | 3 | 13 | 6 |
| 9 | 1 | 0 | 0 | 0 | 18 | 1 |
| Total | 6 | 4 | 3 | 18 | 167 | 34 |
André, A. M. B., Gomes, C. M. A., & Loureiro, C. M. V. 2020b.
Confiabilidade Interexaminadores da versão brasileira da Escala
Nordoff Robbins de Comunicabilidade Musical. In Estudos
Latino-americanos em Música vol.2 pp. 152163. Artemis.
doi:10.37572/EdArt_13210092015
André, A. M. B., Gomes, C. M. A., & Loureiro, C. M. V. 2020c.
Equivalência de itens, semântica e operacional da “Escala de
Musicabilidade: Formas de Atividade, Estágios e Qualidades de
Engajamento.” Orfeu, 52, 1–22.
doi:10.5965/2525530405022020e0010
Biggs, J., & Tang, C. 2011. Teaching for Quality Learning at
University. Maidenhead, UK: Open University Press
Cardoso, C. O., Seabra, A. G., Gomes, C. M. A., & Fonseca, R. P.
2019. Program for the neuropsychological stimulation of
cognition in students: impact, effectiveness, and transfer effect on
student cognitive performance. Frontiers in Psychology, 10, 1-16.
doi: 10.3389/fpsyg.2019.01784
Contreras, M. S., Salgado, F. C., Hernández-Pina, F., & Hernández,
F. M. 2017. Enfoques de aprendizaje y enfoques de enseñanza:
Origen y evolución. Educación y Educadores, 201, 65-88. DOI:
10.5294/edu.2017.20.1.4
Costa, B. C. G., Gomes, C. M. A., & Fleith, D. S. 2017. Validade da
Escala de Cognições Acadêmicas Autorreferentes: autoconceito,
autoeficácia, autoestima e valor. Avaliação Psicológica, 161, 87-
97. doi: 10.15689/ap.2017.1601.10
Dias, N. M., Gomes, C. M. A., Reppold, C. T., Fioravanti-Bastos, A.,
C., M., Pires, E. U., Carreiro, L. R. R., & Seabra, A. G. 2015.
Investigação da estrutura e composição das funções executivas:
análise de modelos teóricos. Psicologia: teoria e prática, 172, 140-
152. doi: 10.15348/1980-6906/psicologia.v17n2p140-152
Ferreira, M. G., & Gomes, C. M. A. 2017. Intraindividual analysis of
the Zarit Burden Interview: a Brazilian case study. Alzheimers &
Dementia, 13, P1163-P1164. doi: 0.1016/j.jalz.2017.06.1710
Golino, H. F., & Gomes, C. M. A. 2014a. Four Machine Learning
methods to predict academic achievement of college students: a
comparison study. Revista E-Psi, 1, 68-101. Retrieved from
https://revistaepsi.com/artigo/2014-ano4-volume1-artigo4/
Golino, H.F., & Gomes, C. M. A. 2014b. Psychology data from the
“BAFACALO project: The Brazilian Intelligence Battery based
on two state-of-the-art models Carroll’s Model and the CHC
model”. Journal of Open Psychology Data, 21, p.e6.
doi:10.5334/jopd.af
Golino, H. F., & Gomes, C. M. A. 2014c. Visualizing random forest’s
prediction results. Psychology, 5, 2084-2098. doi:
10.4236/psych.2014.519211
Golino, H. F., & Gomes, C. M. A. 2016. Random forest as an
imputation method for education and psychology research: its
impact on item fit and difficulty of the Rasch model. International
Journal of Research & Method in Education, 394, 401-421. doi:
10.1080/1743727X.2016.1168798
Golino, H. F., Gomes, C. M. A., & Andrade, D. 2014. Predicting
academic achievement of high-school students using machine
learning. Psychology, 5, 2046-2057.
doi:10.4236/psych.2014.518207
Golino, H. F., Gomes, C. M. A., Commons, M. L., & Miller, P. M. 2014. The construction and validation of a developmental test for stage identification: Two exploratory studies. Behavioral Development Bulletin, 193, 37-54. doi: 10.1037/h0100589
Gomes, C. M. A. 2007. Softwares educacionais podem ser instrumentos psicológicos. Psicologia Escolar e Educacional, 112, 391-401. doi: 10.1590/S1413-85572007000200016
Gomes, C. M. A. 2010a. Avaliando a avaliação escolar: notas
escolares e inteligência fluida. Psicologia em Estudo, 154, 841-
849. Retrieved from
http://www.redalyc.org/articulo.oa?id=287123084020
Gomes, C. M. A. 2010b. Estrutura fatorial da Bateria de Fatores
Cognitivos de Alta-Ordem BaFaCalo. Avaliação Psicológica, 93,
449-459. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712010000300011&lng=pt.
Gomes, C. M. A. 2010c. Perfis de Estudantes e a relação entre
abordagens de aprendizagem e rendimento Escolar. Psico
PUCRS. Online, 414, 503-509. Retrieved from
http://revistaseletronicas.pucrs.br/ojs/index.php/revistapsico/articl
e/view/6336
Gomes, C. M. A. 2011a. Abordagem profunda e abordagem
superficial à aprendizagem: diferentes perspectivas do rendimento
escolar. Psicologia: Reflexão e Crítica, 243, 438-447. doi:
10.1590/S0102-79722011000300004
Gomes, C. M. A. 2011b. Validade do conjunto de testes da habilidade
de memória de curto-prazo CTMC. Estudos de Psicologia Natal,
163, 235-242. doi:10.1590/S1413-294X2011000300005
Gomes, C. M. A. 2012a. A estrutura fatorial do inventário de
características da personalidade. Estudos de Psicologia Campinas,
292, 209-220. doi:10.1590/S0103-166X2012000200007
Gomes, C. M. A. 2012b. Validade de construto do conjunto de testes
de inteligência cristalizada CTIC da bateria de fatores cognitivos
de alta-ordem BaFaCAlO. Gerais : Revista Interinstitucional de
Psicologia, 52, 294-316. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S198
3-82202012000200009&lng=pt&tlng=pt.
Gomes, C. M. A. 2013. A Construção de uma Medida em
Abordagens de Aprendizagem. Psico PUCRS. Online, 442, 193-
203. Retrieved from
http://revistaseletronicas.pucrs.br/ojs/index.php/revistapsico/articl
e/view/11371
Gomes, C. M. A., & Almeida, L. S. 2017. Advocating the broad use
of the decision tree method in education. Practical Assessment,
Research & Evaluation, 2210, 1-10, 2017. Recuperado de
https://pareonline.net/getvn.asp?v=22&n=10
Gomes, C.M.A., Amantes, A., & Jelihovschi, E.G. 2020. Applying
the regression tree method to predict students’ science
achievement. Trends in Psychology. doi: 10.9788/s43076-019-
00002-5
Gomes, C. M. A., Araujo, J., Nascimento, E., & Jelihovisch, E. 2018.
Routine Psychological Testing of the Individual Is Not Valid.
Psychological Reports, 1224, 1576-1593. doi:
10.1177/0033294118785636
Gomes, C. M. A., Araujo, J., & Jelihovschi, E. G. 2020. Approaches
to learning in the non-academic context: construct validity of
learning approaches test in video game lat-video game.
International Journal of Development Research, 1011, 41842-
41849. doi: 10.37118/ijdr.20350.11.2020
Gomes, C. M. A., & Borges, O. N. 2007. Validação do modelo de
inteligência de Carroll em uma amostra brasileira. Avaliação
Psicológica, 62, 167-179. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712007000200007&lng=en&tlng=pt.
Gomes, C. M. A., & Borges, O. N. 2008a. Avaliação da validade e
fidedignidade do instrumento crenças de estudantes sobre ensino-
aprendizagem CrEA. Ciências & Cognição UFRJ, 133, 37-50.
Retrieved from
http://www.cienciasecognicao.org/revista/index.php/cec/article/vi
ew/60
Gomes, C. M. A., & Borges, O. 2008b. Limite da validade de um
instrumento de avaliação docente. Avaliação Psicológica, 73,
391-401. Recuperado de
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712008000300011&lng=pt&tlng=pt.
Gomes, C. M. A., & Borges, O. 2008c. Qualidades psicométricas de
um conjunto de 45 testes cognitivos. Fractal: Revista de
Psicologia, 201, 195-207. doi:10.1590/S1984-
02922008000100019
Gomes, C. M. A., & Borges, O. N. 2009a. O ENEM é uma avaliação
educacional construtivista? Um estudo de validade de construto.
Estudos em Avaliação Educacional, 2042, 73-88. doi:
10.18222/eae204220092060
Gomes, C. M. A., & Borges, O. N. 2009b. Propriedades
psicométricas do conjunto de testes da habilidade visuo espacial.
PsicoUSF, 141, 19-34. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S141
3-82712009000100004&lng=pt&tlng=pt.
Gomes, C. M. A., & Borges, O. 2009c. Qualidades psicométricas do
conjunto de testes de inteligência fluida. Avaliação Psicológica,
81, 17-32. Retrieved from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S167
7-04712009000100003&lng=pt&tlng=pt.
Gomes, C. M. A., de Araújo, J., Ferreira, M. G., & Golino, H. F.
2014. The validity of the Cattel-Horn-Carroll model on the
intraindividual approach. Behavioral Development Bulletin, 194,
22-30. doi: 10.1037/h0101078
Gomes, C. M. A., Fleith, D. S., Marinho-Araujo, C. M., & Rabelo, M.
L. 2020. Predictors of students’ mathematics achievement in
secondary education. Psicologia: Teoria e Pesquisa, 36, e3638.
doi: 10.1590/0102.3772e3638
Gomes, C. M. A., & Gjikuria, J. 2017. Comparing the ESEM and
CFA approaches to analyze the Big Five factors. Avaliação
Psicológica, 163, 261-267. doi:10.15689/ap.2017.1603.12118
Gomes, C. M. A., & Gjikuria, E. 2018. Structural Validity of the
School Aspirations Questionnaire SAQ. Psicologia: Teoria e
Pesquisa, 34, e3438. doi:10.1590/0102.3772e3438
Gomes, C. M. A., & Golino, H. F. 2012a. Relações hierárquicas entre
os traços amplos do Big Five. Psicologia: Reflexão e Crítica, 253,
445-456. doi:10.1590/S0102-7972201200030000422
Gomes, C. M. A., & Golino, H. F. 2012b. O que a inteligência prediz:
diferenças individuais ou diferenças no desenvolvimento
acadêmico? Psicologia: teoria e prática, 141, 126-139. Retrieved
from
http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S151
6-36872012000100010&lng=pt&tlng=pt.
Gomes, C. M. A., & Golino, H. F. 2012c. Validade incremental da
Escala de Abordagens de Aprendizagem EABAP. Psicologia:
Reflexão e Crítica, 254, 400-410. doi:10.1590/S0102-
79722012000400001
Gomes, C. M. A., & Golino, H. F. 2014. Self-reports on students'
learning processes are academic metacognitive knowledge.
Psicologia: Reflexão e Crítica, 273, 472-480. doi: 10.1590/1678-
7153.201427307
Gomes, C. M. A., & Golino, H. 2015. Factor retention in the intra-
individual approach: Proposition of a triangulation strategy.
Avaliação Psicológica, 142, 273-279. doi:
10.15689/ap.2015.1402.12
Gomes, C. M. A., Golino, H. F., & Menezes, I. G. 2014. Predicting
School Achievement Rather than Intelligence: Does
Metacognition Matter? Psychology, 5, 1095-1110.
doi:10.4236/psych.2014.59122
Gomes, C. M. A., Golino, H. F., & Peres, A. J. S. 2016. Investigando
a validade estrutural das competências do ENEM: quatro
domínios correlacionados ou um modelo bifatorial. Boletim na
Medida INEP-Ministério da Educação, 510, 33-30. Retrieved
from
http://portal.inep.gov.br/documents/186968/494037/BOLETIM+
NA+MEDIDA+-+N%C2%BA+10/4b8e3d73-d95d-4815-866c-
ac2298dff0bd?version=1.1
Gomes, C. M. A. Golino, H. F., & Peres, A. J. S. 2018. Análise da
fidedignidade composta dos escores do enem por meio da análise
fatorial de itens. European Journal of Education Studies, 58, 331-
344. doi:10.5281/zenodo.2527904
Gomes, C. M. A., Golino, H. F., & Peres, A. J. S. 2020.
Fidedignidade dos escores do Exame Nacional do Ensino Médio
Enem. Psico RS, 542, 1-10. doi: 10.15448/1980-
8623.2020.2.31145.
Gomes, C. M. A., Golino, H. F., Pinheiro, C. A. R., Miranda, G. R.,
& Soares, J. M. T. 2011. Validação da Escala de Abordagens de
Aprendizagem EABAP em uma amostra Brasileira. Psicologia:
Reflexão e Crítica, 241, 19-27. doi: 10.1590/S0102-
79722011000100004
Gomes, C. M. A., Golino, H. F., Santos, M. T., & Ferreira, M. G.,
2014. Formal-Logic Development Program: Effects on Fluid
Intelligence and on Inductive Reasoning Stages. British Journal of
Education, Society & Behavioural Science, 49, 1234-1248.
Retrieved from http://www.sciencedomain.org/review-
history.php?iid=488&id=21&aid=4724
Gomes, C. M. A., & Jelihovschi, E. 2019. Presenting the regression
tree method and its application in a large-scale educational
dataset. International Journal of Research & Method in Education.
doi: 10.1080/1743727X.2019.1654992
Gomes, C. M. A., Lemos, G. C., & Jelihovschi, E. G. 2020.
Comparing the predictive power of the CART and CTREE
algorithms. Avaliação Psicológica, 191, 87-96. doi:
10.15689/ap.2020.1901.17737.10
Gomes, C. M. A., Linhares, I. S., Jelihovschi, E. G., & Rodrigues, M.
N. S. 2020. Introducing rationality and content validity of SLAT-
Thinking. International Journal of Development Research, 10 10.
Gomes, C. M. A., & Marques, E. L. L. 2016. Evidências de validade
dos estilos de pensamento executivo, legislativo e judiciário.
Avaliação Psicológica, 153, 327-336. doi:
10.15689/ap.2016.1503.05
Gomes, C. M. A., Marques, E. L. L., & Golino, H. F. 2014. Validade
Incremental dos Estilos Legislativo, Executivo e Judiciário em
Relação ao Rendimento Escolar. Revista E-Psi, 2, 31-46.
Retrieved from https://revistaepsi.com/artigo/2013-2014-ano3-
volume2-artigo3/
Jelihovschi, E. G., & Gomes, C. M. A. 2019. Proposing an
achievement simulation methodology to allow the estimation of
individual in clinical testing context. Revista Brasileira de
Biometria, 374, 1-10. doi: 10.28951/rbb.v37i4.423
Linhares, I. & Gomes, C. M. A. 2020. Investigação da validade de
conteúdo do TAP-Pensamento. Pôster. I Encontro Anual da Rede
Nacional de Ciência para Educação CPE. doi:
10.13140/RG.2.2.31110.40006
Muniz, M., Gomes, C. M. A., & Pasian, S. R. 2016. Factor structure
of Raven's Coloured Progressive Matrices. Psico-USF, 212, 259-
272. doi: 10.1590/1413-82712016210204
Pazeto, T. C. B., Dias, N. M., Gomes, C. M. A., & Seabra, A. G.
2019. Prediction of arithmetic competence: role of cognitive
abilities, socioeconomic variables and the perception of the
teacher in early childhood education. Estudos de Psicologia, 243,
225-236. doi: 10.22491/1678-4669.20190024
Pereira, B. L. S., Golino, M. T. S., & Gomes, C. M. A. 2019.
Investigando os efeitos do Programa de Enriquecimento
Instrumental Básico em um estudo de caso único. European
Journal of Education Studies, 67, 35-52. doi:
10.5281/zenodo.3477577
Pires, A. A. M., & Gomes, C. M. A. 2017. Three mistaken procedures
in the elaboration of school exams: explicitness and discussion.
PONTE International Scientific Researches Journal, 733, 1-14.
doi: 10.21506/j.ponte.2017.3.1
Pires, A. A. M., & Gomes, C. M. A. 2018. Proposing a method to
create metacognitive school exams. European Journal of
Education Studies, 58, 119-142. doi:10.5281/zenodo.2313538
Reppold, C. T., Gomes, C. M. A., Seabra, A. G., Muniz, M.,
Valentini, F., & Laros, J.A. 2015. Contribuições da psicometria
para os estudos em neuropsicologia cognitiva. Psicologia: teoria e
prática, 172, 94-106. doi: 10.15348/1980-
6906/psicologia.v17n2p94-106
Richardson, M., Abraham, C., & Bond, R. 2012. Psychological
correlates of university students’ academic performance: a
systematic review and metaanalysis. Psychol. Bull, 138 2, 353
387. doi: 10.1037/a0026838.
Rosário, V. M., Gomes, C. M. A., & Loureiro, C. M. V. 2019.
Systematic review of attention testing in allegedly "untestable"
populations. International Journal of Psychological Research and
Reviews, 219, 1-21. doi: 10.28933/ijprr-2019-07-1905
Valentini, F., Gomes, C. M. A., Muniz, M., Mecca, T. P., Laros, J. A.,
& Andrade, J. M. 2015. Confiabilidade dos índices fatoriais da
Wais-III adaptada para a população brasileira. Psicologia: teoria
e prática, 172, 123-139. doi: 10.15348/1980-
6906/psicologia.v17n2p123-139
Watkins, D. 2001. Correlates of Approaches to Learning: A Cross-
Cultural Meta-Analysis. In R. J. Sternberg & L. F. Zhang Eds.,
Perspectives on thinking, learning and cognitive styles pp. 132
157. Mahwah, NJ: Lawrence Erlbaum Associates.
Presentation
Full-text available
Trabalho elaborado para o Exame de Qualificação de Doutoramento do Programa de pós-graduação de Neurociências da Universidade Federal de Minas Gerais
Thesis
Full-text available
The development of performance-based tests is a relevant agenda for metacognition studies, since it reduces biases present in conventional measures, allowing the acquisition of new evidence regarding the quality of empirical identification and validation of metacognitive components. The Meta-Text, a test designed to assess the components of cognition regulation through the respondent performance, brought significant contribution to the analysis of the validity of these metacognitive components. However, it is necessary to retest the initial evidence using a larger and more diverse sample. Therefore, the aim of this study is to analyze the structural validation of the Meta-Text in a larger and more diverse sample, incorporating a new set of participants to the original sample. The complete sample consists of 1046 university students and graduates from Honduras and Brazil. Different models were tested using item confirmatory factor analysis. The results indicated that the bifactorial model (CFI=0.981; RMSEA=0.035) best represents the factorial structure of the Meta-Text. This model assumes that both the cognition regulation domain and the specific metacognitive abilities, such as planning, judgment and monitoring directly explain people's performance on the test items. Furthermore, the model factors showed statistically significant variance, which is relevant for analyzing the validity of the metacognitive components themselves. The results indicate that the previous evidence was influenced by sample characteristics, such as size and homogeneity. By including a new sample, evidence is observed to support the validity of all metacognition components analyzed and corroborate with what is suggested by the area of metacognitive studies.
Presentation
Full-text available
A inteligência artificial é um tema que fascina o ser humano. Com o desenvolvimento dos computadores nos anos de 1940 a 1960, foram surgindo propostas para se criar agentes virtuais inteligentes. Diferentemente de um mero banco de dados sobre determinado conhecimento, o agente inteligente, ou inteligência artificial, possui uma série de algoritmos que o permite interagir com o seu conhecimento prévio, de forma a gerar novos conhecimentos. É possível identificar agentes inteligentes em muitas áreas. Na área da medicina, por exemplo, eles são capazes de fornecer diagnósticos bastante sofisticados. De um ponto de vista ideal, a inteligência artificial pode ser uma grande parceira do ser humano e não seu substituto. Até relativamente pouco tempo, as inteligências artificiais eram muito caras ou restritas a certas práticas. No entanto, esse cenário vem mudando de forma relevante e, talvez, em relativamente pouco tempo estaremos usando a inteligência artificial para melhorar nossas práticas profissionais. No que diz respeito à psicometria, os agentes inteligentes têm o potencial de auxiliar o psicometrista em uma série de práticas da área. Nesta apresentação, mostrarei uma prática psicométrica auxiliada por um agente inteligente de fácil acesso para um conjunto vasto de pessoas, o ChatGPT. A despeito de ser ainda um agente em fase beta, ou seja, ainda em “crescimento”, ele demonstra pleno potencial e nos permite vislumbrar o que poderá vir a ser em um futuro relativamente próximo.
Article
Full-text available
A teoria das abordagens de aprendizagem afirma que os estudantes interagem de forma superficial ou profunda com o conteúdo de ensino. Aqueles que adotam a abordagem profunda apresentam motivação intrínseca e usam estratégias cognitivas que favorecem a integração dos conhecimentos. Já os que adotam a abordagem superficial se motivam de forma extrínseca e usam estratégias de aprendizagem mecânica. Por isso, os alunos de abordagem profunda aprendem melhor. As evidências sobre as abordagens de aprendizagem têm sido sustentadas por medidas de testes baseados em autorrelato, os quais são suscetíveis a vieses que prejudicam a medida desses construtos e suas evidências. Diante dessa limitação, o Teste Abordagem-em-Processo Versão 2 foi criado para medir as abordagens do aluno pelo seu desempenho. Por meio de seis itens abertos, o teste demanda ao aluno desempenhar seis comportamentos de abordagem profunda no contexto da aprendizagem de um conteúdo de ensino. As respostas aos itens abertos são corrigidas pelo professor por intermédio do Guia de Correção do Teste Abordagem-em-Processo Versão 2. Esse guia é composto por cinco seções que devem ser preenchidas pelo professor, com a finalidade de auxiliá-lo na aplicação e correção do teste. No guia, o professor define o conteúdo de ensino envolvido na aplicação do teste, nomeia os conceitos fundamentais do conteúdo, apresenta a estrutura conceitual, entre outros elementos pedagógicos importantes. Por isso, o guia tem sido utilizado complementarmente como uma ferramenta de reflexão da prática pedagógica. Neste trabalho, o template do Guia de Correção é apresentado e suas seções são comentadas.
Article
The theory of learning approaches distinguishes two fundamental ways of constructing knowledge: the surface approach, based on memorization without meaning-making, and the deep approach, aimed at building integrated and consistent knowledge. The first assessments of learning approaches used the phenomenographic method, which required qualitative analysis by judges. As the field developed, self-report questionnaires were adopted. Despite the advances provided by both methodologies, they introduce biases that compromise the assessment of approaches. Performance-based tests are an alternative for assessing approaches without the biases of the phenomenographic method and of self-report. For this reason, the Laboratório de Investigação da Arquitetura Cognitiva (LAICO) started an agenda of developing performance-based tests to measure approaches. The Approach-in-Process Test (Version 2) is the most recent product of that agenda, developed so that teachers themselves can assess their students' approaches to specific content. The performance-based measure of the Approach-in-Process Test (Version 2) relies on open-ended items, and the Correction Guide was created to guide the teacher in scoring these items. Although the Correction Guide had previously been applied in university courses, it had not yet been used with high school content. The Brazilian Ministry of Education recently proposed an educational reform called Novo Ensino Médio (New High School), aiming to create a teaching environment suited to students' needs and to promote higher-quality learning. Applying the Approach-in-Process Test (Version 2) is relevant in this context, as it could be used to assess whether the quality of students' learning matches the reform's expectations. Accordingly, this article presents the complete filling-in of the Correction Guide of the Approach-in-Process Test (Version 2) for the content "A adolescência como construção social" (adolescence as a social construction) in the Projeto de Vida (Life Project) course of the 1st year of high school at a state public school in Minas Gerais, Brazil.
Article
The theory of learning approaches defines two distinct forms of interaction between the subject and objects of knowledge: the deep approach and the surface approach. This theory has made relevant contributions to the field of education, such as predicting academic performance and helping teachers improve their pedagogical practices and, thereby, the student's learning process. Although the theory has provided several contributions, one limitation needs to be overcome for its further development: until very recently, as far as we know, measures of approaches were produced exclusively by self-report instruments. The Approach-in-Process Test (Version 2) is part of the Laboratório de Investigação da Arquitetura Cognitiva's (LAICO) agenda of designing performance-based tests to measure learning approaches. The test assesses approaches, in an unprecedented way, through student performance while learning specific school/academic content. It has six questions with one open-ended item per question; this item is what assesses the approaches through performance. A Correction Guide for the open-ended items was created at LAICO to guide teachers in scoring them. Some works have already presented the Correction Guide applied to higher education content, but none to content from High School Physics. This article presents the application of the Correction Guide to the electric current content of the High School Physics course. We show that the Approach-in-Process Test (Version 2) can be applied to the electric current content and possibly to all High School Physics content. In addition, we show that filling in the Correction Guide was, for the teacher, a moment of reflection and self-assessment about her pedagogical practices.
Article
The students' learning approaches theory investigates a significant topic: the interaction between the subject and the object of knowledge and its impact on learning. Nevertheless, the exclusive use of self-report instruments for its measurement has become a fundamental limitation of the field. In this article, we introduce the rationale and content validity of SLAT-Thinking (Students' Learning Approach Test), the first test to measure learning approaches by means of performance, and present its conceptual basis, construction strategies, and structure. We report the content validity assessment performed by four construct experts, one expert in Portuguese, and 10 people from the target audience. A new category was created to classify the items, the answer key of two items was changed, and the statement of one item was reformulated. The experts certified the content validity of the test, and the target audience stated that the test was easy to understand and to perform.
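For readers who want to quantify judge-based content validity of this kind, one common index (not necessarily the procedure used in the study above) is Lawshe's content validity ratio, sketched below in Python with an invented example.

    # Lawshe's content validity ratio (CVR), a common index for judge-based
    # content validity; shown for illustration, not as this study's own procedure.
    def cvr(n_essential: int, n_judges: int) -> float:
        """CVR = (n_e - N/2) / (N/2); ranges from -1 to 1."""
        half = n_judges / 2
        return (n_essential - half) / half

    # Example: 4 of 5 judges rate an item as essential.
    print(cvr(4, 5))  # 0.6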
Article
Acknowledging the relevance of mathematics education, as well as the evidence about predictors of achievement in this domain, the present study performed a predictive analysis of students' mathematics achievement in the National Exam for Secondary Education, employing the Regression Tree Method with a model of 53 predictors. Results indicated that the model explained 29.97% of the variance in mathematics achievement. Certain variables were related to worse achievement in mathematics: a family monthly income equal to or smaller than two minimum wages, being female, not having attended private schools in Primary and Secondary Education, living in the North, Northeast, and Center-West regions of Brazil, and being highly motivated to take the exam to obtain a Secondary Education certificate or a scholarship. These results highlight the role of individual, school, and family variables as predictors of mathematics achievement.
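A minimal sketch of this kind of regression-tree analysis, using scikit-learn's CART-style trees; the data file, the outcome column name, and the hyperparameters are hypothetical stand-ins for the study's actual setup.

    # Regression-tree prediction of an achievement score (hypothetical data).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    df = pd.read_csv("enem_math.csv")                 # hypothetical dataset
    X, y = df.drop(columns=["math_score"]), df["math_score"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
    print(tree.score(X_te, y_te))  # R^2: share of score variance explained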
Article
The subject-object interaction of knowledge studied by the field of learning approaches has been evaluated exclusively in the school/academic context. However, the field does not assume that these interactions manifest only in this context. This article studies the validity of the Video Game Approach Test (LAT-Video Game), which makes the novel proposal of evaluating approaches in a non-academic context. The structural validity and its generality were investigated, as well as the predictive and divergent validity of the LAT-Video Game, in two independent samples. Three models were tested in the first sample, and the constrained bifactor model showed the best fit and parsimony. When the two samples were compared, this model proved invariant up to the scalar level. The LAT-Video Game predicts people's self-identification as gamers or non-gamers with 84% accuracy [65%-100%]. The deep approach in video games does not correlate with approaches in the academic context, as measured by the Learning Approaches Scale (LAS). The motivation related to playing video games correlates positively with the surface approach and negatively with the deep approach. The LAT-Video Game thus shows structural validity, invariance, and predictive and divergent validity.
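An interval estimate such as the 84% [65%-100%] reported above can be obtained, for example, with a bootstrap percentile interval over classification accuracy; the sketch below uses invented toy labels, not the LAT-Video Game data.

    # Bootstrap percentile interval for classification accuracy (toy example).
    import numpy as np

    rng = np.random.default_rng(0)
    y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])   # toy gamer/non-gamer labels
    y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1])   # toy model predictions

    accs = []
    for _ in range(2000):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        accs.append(np.mean(y_true[idx] == y_pred[idx]))

    # point estimate (0.9 here) and 95% percentile interval
    print(np.mean(y_true == y_pred), np.percentile(accs, [2.5, 97.5]))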
Article
The "Escala de Relação Criança-Terapeuta na Experiência Musical Coativa" (Child-Therapist Relationship in Coactive Musical Experience Scale) has been used in the United States since the 1960s. The scale rates, on seven levels, the degree of participation and the quality of resistiveness observable during a Music Therapy session. For it to be used in Brazil, a validation process is required, for which we chose the Universalist Model of Validation. On that basis, we assessed the measurement equivalence of the scale through an inter-rater reliability test. The inter-rater scores showed strong mean (Spearman) correlations, providing reliability evidence for the Brazilian version of the scale.
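A minimal sketch of the inter-rater reliability computation described above, assuming two raters scoring the same sessions on the seven-level scale; the ratings below are invented for illustration.

    # Inter-rater reliability via Spearman correlation between two raters.
    from scipy.stats import spearmanr

    rater_a = [3, 5, 4, 6, 2, 7, 5, 4]  # invented ratings on the 7-level scale
    rater_b = [3, 6, 4, 5, 2, 7, 4, 4]

    rho, p = spearmanr(rater_a, rater_b)
    print(rho, p)  # strong positive rho suggests inter-rater agreement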
Article
In the 1960s, the researchers Nordoff and Robbins began developing scales for assessment in music therapy sessions, among them the "Escala de Musicabilidade: Formas de Atividade, Estágios e Qualidades de Engajamento" (Musicability Scale: Forms of Activity, Stages and Qualities of Engagement). This scale was developed to assess the "subtleties" present in a patient's musical production during a music therapy session. In Brazil, there is a great need for measurement instruments validated for our language. To contribute to validation in the Brazilian music therapy context, we aimed to evaluate the translation of this scale and of its explanatory manual. As methodology, we carried out three stages of the Universalist Model of Validation developed by Herdman, Fox-Rushby and Badia (1998): item equivalence, semantic equivalence, and operational equivalence. Six translators participated in the initial stage and nine evaluators in the translation assessment process. The instruments used were the "Escala de Musicabilidade: Formas de Atividade, Estágios e Qualidades de Engajamento" and its explanatory manual. A form for analyzing the translations and a Questionnaire for the Analysis of Item, Semantic and Operational Equivalence were developed for this study. According to the analysis of the evaluators' responses, the translation of this scale presents comprehensible language, its items are pertinent to the Brazilian context, and it can contribute to future research in music therapy and in music.
Article
The Exame Nacional do Ensino Médio (ENEM) produces a score for each domain it assesses: mathematics, languages, natural sciences, and human sciences. Recognizing the Exam's relevance for access to higher education and for other aspects of Brazilian students' practical lives, this study investigates the reliability of ENEM scores in its four domains. The sample comprised the scores of students who took the 2011 edition of the Exam. The analyses involved estimating the parameters of a four-correlated-factor model and of a bifactor model through confirmatory factor analysis, as well as estimating the composite reliability and the omega reliability of the four domains and, in the bifactor model, of the general achievement factor. The 30 competencies of each domain were used as observed variables. The results indicated high reliability only for the scores derived from the general factor.
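The omega reliability mentioned above is commonly computed from standardized factor loadings as omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses); a minimal sketch, with made-up loadings rather than the ENEM estimates.

    # McDonald's omega from standardized factor loadings (illustrative values).
    def omega(loadings):
        """Assumes standardized items, so each uniqueness is 1 - loading^2."""
        s = sum(loadings)
        uniq = sum(1 - l**2 for l in loadings)
        return s**2 / (s**2 + uniq)

    print(omega([0.7, 0.6, 0.8, 0.65]))  # ~0.78 for these made-up loadings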
Article
The study investigated, in a longitudinal cohort, predictive models of 1st-year arithmetic competence (AC) based on language and executive functions assessed at preschool age. A total of 71 children were evaluated in oral language skills, preliminary writing abilities, and executive functions. In the 1st year, the children were also evaluated in AC. Parents provided information on socioeconomic level, and teachers indicated children with difficulties. Oral language skills (phonological awareness and vocabulary) and preliminary writing abilities (letter knowledge, word reading and writing), together with the teacher's indications of difficulty in Early Childhood Education, explained on average 62% of the variability in AC in the 1st year. The findings reveal predictive variables for arithmetic performance at the initial stage of Elementary Education, which can assist in early identification and in the design of preventive intervention models.
Article
The CART algorithm has been extensively applied in predictive studies; however, researchers argue that CART produces variable selection bias. This bias is reflected in CART's preference for selecting predictors with large numbers of cutpoints. Considering this problem, this article compares the CART algorithm with an unbiased algorithm (CTREE) in terms of predictive power. Both algorithms were applied to the 2011 National Exam of High School Education, which includes many categorical predictors with a large number of categories and could therefore produce variable selection bias. A CTREE tree and a CART tree were generated, both with 16 leaves, from a predictive model with 53 predictors and the students' essay-writing achievement as the outcome. The CART algorithm yielded a tree with better outcome prediction. This result suggests that for large data sets, so-called big data, the CART algorithm might give better results than the CTREE algorithm.
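To make the comparison concrete: a CART tree with a fixed number of leaves can be grown with scikit-learn, as in the sketch below; CTREE has no scikit-learn implementation (it is typically R's partykit::ctree), so only the CART side is sketched. The data file and outcome name are hypothetical, and categorical predictors would first need numeric encoding.

    # CART tree constrained to 16 leaves, mirroring the comparison above.
    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    df = pd.read_csv("enem_2011.csv")                     # hypothetical dataset
    X, y = df.drop(columns=["essay_score"]), df["essay_score"]

    cart = DecisionTreeRegressor(max_leaf_nodes=16, random_state=0).fit(X, y)
    print(cart.get_n_leaves(), cart.score(X, y))          # 16 leaves; in-sample R^2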
Article
Science teaching is one of the most important tools for developing individuals' critical thinking skills. Therefore, studies about achievement predictors in science teaching are increasingly being performed to provide evidence of what could influence students' academic performance. As a contribution to those studies, this paper applies a predictive analysis to students' science achievement in the 2011 National Examination for Secondary Education (ENEM). The sample is composed of Brazilian students who took the two-day ENEM in 2011. The CART algorithm was applied through a model with 53 predictors. The model explained 24.50% of the variance in science achievement. The results showed lower achievement for students who (1) were not enrolled in a private school during Secondary Education; (2) are female; (3) live in the North, Northeast, and Center-West regions of Brazil; (4) were strongly motivated to take the exam to obtain a Secondary Education certificate or scholarship; (5) had not yet finished Secondary Education by 2011; and (6) whose family income was equal to or lower than 1.5 minimum wages, or equal to or lower than 5 minimum wages, depending on the type of school the student attended during Secondary Education.