USING BLOOM'S COGNITIVE DOMAIN IN WEB EVALUATION ENVIRONMENTS
Gustavo H. S. Alexandre 1, Simone C. dos Santos 1,2
1 C.E.S.A.R., Centro de Estudos e Sistemas Avançados do Recife, Bione Street, Recife, Brazil
gugahenrique@gmail.com, simone.santos@cesar.org.br
2 UPE, Universidade de Pernambuco, Av. Agamenon Magalhães, Recife, Brazil.
Patrícia C. A. R. Tedesco
CIn - Centro de Informática da UFPE, Universidade Federal de Pernambuco, Recife, Brazil
pcart@cin.ufpe.br
Keywords: Assessment process, Bloom Taxonomy, Web-based Information System, ICT in Education.
Abstract: This article proposes a web-based Information System grounded in Bloom's Taxonomy that aims to support the assessment and tracking of the learning process. Based on a defined assessment methodology, a prototype of this model, called Smart Education, was implemented with a focus on educational objectives, performance reports and feedback for students and teachers. A short experiment was run in a Software Engineering graduate course, yielding positive results regarding its use and application.
1 INTRODUCTION
Information and Communication Technology (ICT)
has been provoking notable cultural and educational
changes when used as an important resource for
research and academic renewal, benefiting
professors, researchers and students (Levy, 1993).
The Internet is one of the main actors in this change:
applied in the classroom context, it is an outstanding
support tool for teaching activities, offering a "virtual
extension of the actual classroom" (Gomes, 2005).
This new educational context provides greater
flexibility and accessibility to information; however,
it demands the construction of new pedagogical
practices and concepts that respond to the needs of
the students and professors who benefit from the use
of ICT. In particular, there is the challenge of
learning evaluation: the peculiarities of digital
learning environments must be incorporated into the
construction of instruments and evaluation strategies
that are appropriate for these new educational
contexts. In this process, it is essential to define
evaluation objectives correctly and to choose the
proper means and methods, making it possible to
evaluate with greater effectiveness (Bloom, 1977).
The elaboration of educational objectives can be
based on classification schemes. The Taxonomy of
Educational Objectives - Cognitive Domain,
elaborated by Bloom and his contributors (Bloom,
1977), is one of the most popular of these schemes.
Although Bloom's Taxonomy is divided into three
domains (Affective, Psychomotor and Cognitive),
the cognitive domain was selected as the centre of
this research, considering that the achievement of
cognitive objectives is an essential requirement for
the majority of educational and training programs.
Considering this context, this article proposes a
Web-based Information System model, grounded in
the Cognitive Domain of Bloom's Taxonomy, whose
purpose is to support the evaluation and
accompaniment of the learning process. A prototype
of this model, entitled Smart Education, was
implemented starting from the definition of an
evaluation methodology focused on questions
derived from educational objectives and on
accompaniment and
feedback reports for students and professors. Smart
Education works attached to the virtual learning
environment Moodle (free and open source)
[www.moodle.org], from which all the basic
information about courses, subjects, teachers and
students is extracted. A case study was carried out in
a post-graduate course in Software Engineering,
presenting satisfactory results regarding its
application.
This article is divided into six sections. Section 2
presents some of the concepts used in the definition
of the evaluation methodology, which is described in
Section 3. Smart Education, developed from this
evaluation methodology, is briefly described in
Section 4, and the experiment carried out with it is
presented in Section 5. Finally, the last section
presents the conclusions and final considerations.
2 EVALUATION IN THE LEARNING
PROCESS
The evaluation process, as part of the learning
process, must be based on clear and well-defined
purposes. In (Earl, 1998), six purposes of evaluation
are presented: (1) know the students, identifying the
level of previous knowledge they possess when
starting a course or discipline; (2) verify to what
degree the educational objectives have been reached;
(3) continuously improve the teaching and learning
process; (4) detect learning difficulties,
discriminating and characterizing their possible
causes; (5) promote students according to the
proficiency level obtained in the evaluation; and
(6) motivate and provide feedback to students. In
this context, the assessment of learning takes a
central position within the process of teaching and
learning, in a cycle that begins with knowledge of
the students and the definition of educational
objectives and proceeds with the choice of methods,
criteria and evaluation monitoring.
As already stated in the introduction of this article,
for the elaboration of educational objectives
professors can make use of classification schemes,
such as the Taxonomy of Educational Objectives -
Cognitive Domain, elaborated by Bloom and his
contributors. The cognitive domain is concerned
with information and knowledge; thus, the
achievement of cognitive objectives is the
fundamental activity of most educational and
training programs. According to Bloom, this domain
is subdivided into six main abilities:
Knowledge: defined as the student's ability to
memorize learned information. The evaluation of
this category verifies the student's capacity to retain
what was taught.
Comprehension: the student's capacity to reason in
order to understand the concepts and information
presented by the professor. At this level, the
evaluation verifies the student's capacity for
interpretation and explanation.
Application: the utilization of learned information in
real situations. Once a student knows a concept and
understands it, he is able to apply it. When a student
is able to apply a concept correctly, it can be said
that he has "learned", because he knows, understands
and uses the new concept to solve real problems.
Analysis: the capacity to decompose information in
order to relate and understand its formation and
organization. The evaluation of this cognitive ability
intends to assess the capacity for convergent
production.
Synthesis: the capacity to join two or more concepts
to form a new one. The evaluation of this ability
verifies creative and productive capacity.
Evaluation: the assessment of the importance of
information according to a set of norms and criteria.
Here the evaluation draws on all the other categories.
The hierarchy of these cognitive abilities runs, in this
order, from the simplest and most concrete
(Knowledge) to the most complex and abstract
(Evaluation).
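The ordered levels of the taxonomy lend themselves to a simple data model in which each exam question is tagged with the ability it targets. The sketch below (in Python) is illustrative only: the paper does not specify Smart Education's internal representation, and names such as CognitiveAbility and Question are hypothetical.

```python
from dataclasses import dataclass
from enum import IntEnum


class CognitiveAbility(IntEnum):
    """Bloom's cognitive domain, ordered from the simplest (Knowledge)
    to the most complex and abstract (Evaluation)."""
    KNOWLEDGE = 1
    COMPREHENSION = 2
    APPLICATION = 3
    ANALYSIS = 4
    SYNTHESIS = 5
    EVALUATION = 6


@dataclass
class Question:
    """An exam question tagged with the educational objective it assesses."""
    text: str
    subject: str
    topic: str
    ability: CognitiveAbility
    is_multiple_choice: bool = False


# Example: a question aimed at the Application level of the taxonomy.
q = Question(
    text="Design a test case that exercises the boundary values of the input range.",
    subject="Software Testing",
    topic="Boundary-value analysis",
    ability=CognitiveAbility.APPLICATION,
)

# The IntEnum ordering mirrors the hierarchy: Knowledge is simpler than Evaluation.
assert CognitiveAbility.KNOWLEDGE < CognitiveAbility.EVALUATION
```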
Bloom (1983) defines three modalities of evaluation
that can be carried out throughout this circular
evaluation process: Diagnostic, Formative and
Summative.
The Diagnostic evaluation is used to determine
whether the student has the necessary prerequisites
for the acquisition of new, specific knowledge. It is
recommended that this evaluation be carried out at
the beginning of the course, semester or unit of
instruction (Haydt, 2000).
The Formative evaluation is performed with the
intention of verifying whether the student is reaching
the established objectives during the course. This
evaluation aims, basically, at assessing whether the
student will be able to continue to a subsequent stage
of the course (Albuquerque, 1995). Formative
evaluation therefore allows: providing feedback to
the student on what he has learned and what he still
needs to learn; providing feedback to the professor,
identifying students' failures and which aspects of
instruction must be modified; and attending to the
individual differences among students and
prescribing alternative measures for recovering from
learning failures (Bloom, 1977).
Finally, the Summative evaluation, the evaluation
model most commonly used by educational
institutions, is used to classify students. Held at the
end of a school year or unit of instruction, it consists
of classifying the students according to previously
established achievement levels, generally aiming at
their promotion from one level to the next; it thus
totalizes the results of a concluded study. Through
this evaluation model it can be observed whether the
established objectives were reached by the students,
and data are provided to refine the teaching-learning
process (Haydt, 2000).
In (Santos, 2006), the author argues that these
evaluation functions should not be used separately,
because each one complements the others. The
diagnostic function is only meaningful when used at
the beginning of the didactic-pedagogical process,
where it indicates the direction to be followed in the
teaching-learning process. This process should then
be constantly reviewed using the data gathered from
the formative evaluations, in order to keep the
educational objectives on track, finally making it
possible to classify each student by the average
achievement obtained, according to the metrics
established by the educational institution.
3 AN EVALUATION
METHODOLOGY PROPOSAL
An effective evaluation methodology is one that is
not concerned only with the pass/fail condition, but
is concerned, above all, with monitoring the
student's behaviour in the face of an evaluation, also
providing resources that enable the student to
strengthen and improve his knowledge of the weak
points identified by the evaluation.
Aiming at a truly efficient evaluation process, one
that contemplates the main features and goals of
evaluation and thus allows a better use of the
different evaluation instruments, an evaluation
methodology was defined and systematized based on
Bloom's Taxonomy. Figure 1 illustrates this
methodology's stages and activities, divided into
three phases: Preparation, Formative Evaluations
and Summative Evaluation.
Figure 1: Proposed evaluation methodology.
In the Preparation phase, the questions that will form
the exams, both formative and summative, are
created. It is also in this phase that the cognitive
abilities the professor wishes to evaluate are defined.
The professor must be very careful during question
creation, mainly with regard to difficulty level and
the number of questions available for each level.
This precaution is vital to prevent the problem of
creating false expectations for the student. The
choice of which of Bloom's cognitive abilities to
evaluate must follow the professor's own criteria,
taking the evolution of the teaching and learning
process as reference. Each chosen ability must be
associated with one or more questions.
The second phase is dedicated to the elaboration and
application of formative evaluations, focused on
continuous assessment with the intention of
identifying learning gaps. The number of evaluations
applied in this phase is defined by the professor;
however, the number of formative evaluations must
always be equal to or greater than the number of
summative evaluations. The evaluations carried out
in this phase do not determine the approval or failure
of the students; the scores achieved by the students
in these evaluations serve only to measure their level
of knowledge acquisition.
Finally, in the third phase, the summative
evaluations are elaborated and applied, aiming at
verifying the learning results achieved by the
students in accordance with the previously
established achievement levels, which determine the
students' approval or failure.
The Formative and Summative Evaluation stages are
composed of four activities:
Activity 1 - Performance Prediction: in this stage
students answer a self-assessment exam that
measures the degree of confidence each student has
in answering questions related to the subjects and
topics that form the evaluation. The self-assessment
exam consists of a questionnaire to be filled out by
the student, answering with one of the options
"Yes", "Perhaps" or "No" about his ability to solve
questions related to the subjects and topics that will
form the exam.
Activity 2 - Exam Resolution: in this stage the exam
is applied to the students, who must try to solve the
questions, with the objective of identifying their
degree of knowledge in each subject or topic of the
disciplines.
Activity 3 - Exam Correction: in this stage the
professor corrects the students' exams, comments on
the answers given for each item and releases the
corrected exams so that the students can verify in
which questions they succeeded and in which they
erred. It is in this stage that the quantitative and
qualitative indices that support the next stage are
generated.
Activity 4 - Feedback and Orientation: in this stage
the professor elaborates and sends feedback to each
student, based on his performance. Using the
quantitative and qualitative indices generated by the
correction of the evaluations in the previous stage,
the professor analyzes them and sends his feedback
to the student. The indices help to indicate precisely
the aspects in which the students are performing
better or worse, making the creation of feedback
easier for the professor.
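To make the four-activity cycle concrete, the sketch below contrasts the confidence declared in the self-assessment (Activity 1) with the marks obtained after correction (Activity 3), producing the kind of per-topic index that feeds the feedback of Activity 4. This is a simplified illustration under assumed data shapes, not the tool's actual algorithm.

```python
# Compare declared confidence (Activity 1) with corrected results (Activity 3)
# per topic, highlighting where feedback (Activity 4) should focus.
# The data structures and weights below are assumptions for this sketch.

CONFIDENCE_WEIGHT = {"Yes": 1.0, "Perhaps": 0.5, "No": 0.0}

def feedback_indices(self_assessment, corrected_answers):
    """self_assessment: {topic: "Yes" | "Perhaps" | "No"}
    corrected_answers: {topic: [score in 0..1 for each question on that topic]}
    Returns, per topic, the declared confidence, the achieved average and their gap."""
    report = {}
    for topic, scores in corrected_answers.items():
        achieved = sum(scores) / len(scores)
        expected = CONFIDENCE_WEIGHT.get(self_assessment.get(topic, "Perhaps"), 0.5)
        report[topic] = {
            "expected": expected,
            "achieved": round(achieved, 2),
            "gap": round(expected - achieved, 2),  # positive gap suggests overconfidence
        }
    return report

# Illustrative usage with invented values.
print(feedback_indices(
    {"Test planning": "Yes", "Boundary-value analysis": "No"},
    {"Test planning": [1.0, 0.5], "Boundary-value analysis": [0.0, 1.0]},
))
```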
4 THE INFORMATION SYSTEM
SMART EDUCATION
With the purpose of validating the methodology
proposed in Section 3, an information system centred
on an effective evaluation process was implemented,
named Smart Education. Its purpose is to assist in the
management of questions and evaluations, as well as
to facilitate learning accompaniment and the
provision of feedback for students and professors.
This system is basically divided into two profiles:
professor and student. Professors and students go
through the login process, gaining access to system
features in accordance with their profile. Figure 2
presents the professor's profile interface.
Figure 2: Smart Education: Professor's profile UI.
Smart Education works attached to the virtual
learning environment Moodle (free, open source)
[www.moodle.org], from which all the basic
information about courses, subjects, teachers and
students is extracted. This way, content already
registered does not need to be migrated, nor does the
course structure already created within the virtual
learning environment, common nowadays in many
educational institutions, need to be replicated. Thus,
to start using the system, users (teachers or students)
must be previously registered in Moodle; it is
precisely with this registry that both teachers and
students log into the system. After a successful
authentication, a window is shown with content
related to the teacher or student, depending on the
profile registered in Moodle.
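The paper does not detail how the basic data are extracted from Moodle. A minimal sketch, assuming direct read access to Moodle's standard database tables (mdl_course, mdl_user) over MySQL, could look as follows; the connection parameters and this direct-SQL approach are assumptions, not a description of Smart Education's implementation.

```python
# Illustrative only: read basic course and user data from a Moodle database.
# mdl_course and mdl_user are standard Moodle tables; the credentials and the
# direct-SQL integration strategy are assumptions for this sketch.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="moodle", password="secret", database="moodle"
)
cur = conn.cursor(dictionary=True)

cur.execute("SELECT id, fullname, shortname FROM mdl_course")
courses = cur.fetchall()

cur.execute("SELECT id, username, firstname, lastname, email FROM mdl_user WHERE deleted = 0")
users = cur.fetchall()

conn.close()
print(len(courses), "courses and", len(users), "users imported from Moodle")
```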
In general, the professor can create exams for all
three methodology phases (Preparation, Formative
and Summative) and apply and correct them; create
questions of several formats and types associated
with Bloom's cognitive abilities; organize questions
by subjects and topics; consult reports with diverse
information regarding students' performance in
given subjects, topics and cognitive abilities; and
produce the follow-up of his students' learning. The
professor can also visualize the evaluation
methodology indicated by the tool.
One of the system's differentials is the Questões
feature, where professors find the Manter Questões
functionality, which allows them to register, modify,
delete, search and visualize questions, both
discursive (open) and objective (multiple choice), to
be used in exam creation. During the registration of
a new question, some information is requested by
the system, such as the difficulty level, subject, topic
and which of Bloom's cognitive abilities the question
is related to, as illustrated in Figure 3. Thus, when a
professor accesses the questions with the intention of
elaborating an exam, he can also check the difficulty
level of each one, automatically calculated by the
tool, and can be certain that the exam will contain
only questions related to the chosen subjects, topics
and cognitive abilities.
Figure 3: Smart Education: Professor UI.
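The paper states that the difficulty level shown to the professor is calculated automatically by the tool, without detailing the calculation. The sketch below illustrates one plausible scheme, estimating difficulty from the historical rate of correct answers and filtering the question bank by subject, topic and cognitive ability; the function names and the difficulty formula are assumptions, not Smart Education's actual rules.

```python
# Hypothetical sketch: estimate a question's difficulty from its answer history
# and filter the bank by subject, topic and Bloom ability when building an exam.

def estimated_difficulty(correct_answers, total_answers):
    """Return a difficulty in [0, 1]: the fraction of past attempts that failed.
    The actual formula used by Smart Education is not described in the paper."""
    if total_answers == 0:
        return 0.5  # no history yet: assume medium difficulty
    return 1.0 - correct_answers / total_answers

def select_questions(bank, subject, topic, ability, max_difficulty=1.0):
    """bank: list of dicts with keys subject, topic, ability, correct, attempts, text."""
    return [
        q for q in bank
        if q["subject"] == subject
        and q["topic"] == topic
        and q["ability"] == ability
        and estimated_difficulty(q["correct"], q["attempts"]) <= max_difficulty
    ]
```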
Another important feature is Acompanhamento,
which is responsible for automatically providing
professors with student and class performance
reports once the correction of all exams is concluded.
This report provides the qualitative indices referring
to the exam results (as illustrated in Figure 4). It also
contains performance charts divided by topics,
cognitive abilities and level of knowledge
acquisition, referring to the current exam or to
previous ones. Based on this information, the
professor can provide feedback to students,
complemented by his personal opinion if he believes
it to be necessary. The report is automatically stored
in the database as historical data of the student's
learning development.
Figure 4: Sample performance report of a student's assessments.
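As an illustration of the indices behind such a report, the sketch below aggregates a student's corrected answers by topic and by cognitive ability; the data shapes are assumptions, since the paper shows only the report itself.

```python
# Hypothetical sketch: aggregate a student's corrected answers into the indices
# shown in the performance report (per topic and per cognitive ability).
from collections import defaultdict

def performance_summary(answers):
    """answers: list of dicts with keys topic, ability, score (0..1).
    Returns average score per topic and per cognitive ability."""
    by_topic, by_ability = defaultdict(list), defaultdict(list)
    for a in answers:
        by_topic[a["topic"]].append(a["score"])
        by_ability[a["ability"]].append(a["score"])
    avg = lambda xs: round(sum(xs) / len(xs), 2)
    return (
        {t: avg(s) for t, s in by_topic.items()},
        {ab: avg(s) for ab, s in by_ability.items()},
    )
```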
For students, the available features include
answering exams; consulting accompaniment reports
containing the results achieved in the exams;
visualizing the corrected exam and the comments
made by the professor; and visualizing the grades
achieved in all exams of all disciplines.
5 EVALUATING SMART
EDUCATION TOOL
The Smart Education tool was used in the Software
Testing discipline of a Master's course at C.E.S.A.R.
(www.cesar.org.br), an ICT innovation institute, with
a group of four students and three exams to be taken:
two of formative character, each including a
self-assessment test, and one of summative character,
closing the evaluation cycle of the discipline. At the
beginning of the first two exams, students received
orientation regarding the evaluation methodology
and the discipline's related educational objectives.
Students and professors were registered in Moodle
so that they could obtain access to Smart Education.
The professors created the number of questions
needed for all the exams. Altogether, 30 questions
were developed, and for each one the professor was
asked to inform, besides the question itself, the
subject, topic and knowledge area related to it, also
registering the correct answers for the
multiple-choice questions. The system automatically
created the self-assessment tests in accordance with
the subjects of the chosen questions. After that, an
email was sent to the students informing the date and
the starting and ending times of the exam, followed
by the instructions and rules for taking it.
Multiple-choice questions were automatically
corrected by the system, whereas subjective
questions were corrected by the professor, who
added comments on each given response. After
corrections were concluded, the corrected exams
were sent by email to the students. Feedback reports
were generated by the system, then analysed and
commented on by the professor, and sent via email
to each student.
Finally, a research questionnaire containing 15
questions was sent to everyone (professors and
students) involved in the process, aiming to collect
opinions and impressions of the methodology
applied. Great acceptance was identified, with an
average grade of 8.4 given by those involved, who
stated that they preferred this evaluation format over
traditional evaluation methods.
For a better visualization of the results achieved
during the three exams, the graph displayed in
Figure 5 presents each student's performance. This
graph represents the NAI (Level of Acquisition of
Information) that the students achieved in each of
the exams. This metric, adapted from (Pimentel,
Omar, 2006), is used to measure and monitor the
student's degree of knowledge for each subject or
topic of the disciplines; thus, the score achieved in
each exam is a NAI.
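The exact NAI formula from (Pimentel, Omar, 2006) is not reproduced in the paper; for illustration, the sketch below treats the NAI of an exam as the percentage of the available points the student obtained, which matches the statement that the score achieved in each exam is a NAI. Treat it as an assumed simplification, and the sample values as invented, not the experiment's actual scores.

```python
# Assumed simplification: NAI (Level of Acquisition of Information) of an exam
# as the percentage of available points obtained by the student.

def nai(points_obtained, points_available):
    """Return the NAI as a percentage (0..100)."""
    return 100.0 * points_obtained / points_available

# Illustrative values only: tracking one student's NAI across three exams.
exams = [("formative 1", 55, 100), ("formative 2", 70, 100), ("summative", 85, 100)]
for name, obtained, available in exams:
    print(f"{name}: NAI = {nai(obtained, available):.0f}")
```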
Figure 5: Students' performance evaluation in the Software Testing discipline.
It can be observed in this graph that two students
improved their performance between the first and
second exams, while the other two presented a
performance decrease. It is important to explain that,
by following and doing all the activities foreseen by
the methodology, students were able to achieve a
significant improvement in their NAIs, since it was
possible to identify their learning difficulties with
precision and to act precisely to correct them. This
improvement can be noticed by comparing the
students' evolution throughout the whole evaluation
process, where three students (B, C and D) achieved
a better performance in the third exam compared to
the two previous ones. Student A practically kept his
excellent performance, with a reduction of only 2
points in relation to the previous exam.
It is worth mentioning that the performance report is
a very complete instrument (on average, four pages
long), consisting of performance graphs referring to
each exam and to the class, besides information on
the definition of the abilities and the professor's
opinion, not shown in this article due to space
limitations.
6 CONCLUSIONS
Nowadays there is a great variety of systems that
support student evaluation through the Web, such as
Sisa-Web, AvalWeb, WebTest, HotPotatoes, Net
Class, WebCT and Moodle itself, to which Smart
Education is attached. However, these tools ignore
important aspects of the learning evaluation process,
mainly regarding the creation of qualitative
evaluations focused on the accompaniment of the
student's learning, seeking to identify learning gaps
and allowing the generation of personalized,
individualized feedback. The proposal of a web
system that automates some of these tasks and
supports others represents an excellent alternative to
support the teaching and learning process. By
adopting Smart Education, the activities of
evaluating and following students' learning can
become more agile and less costly, without reducing
the responsibility of the professor as an educator,
and giving him more solid and precise information
for evaluating.
Regarding the experiment presented, the authors
know that it needs to be further explored, applying it
to larger groups and to a greater number of
disciplines. However, it was already possible to
notice that the definition of educational objectives
using Bloom's taxonomy constituted a basic element
of the evaluation process, since it made it possible
for professors to define and plan in advance the
results to be reached by their students, as well as to
establish which cognitive abilities would have to be
developed. With the definition of educational
objectives, the goals to be reached were made clear,
since it became possible to measure learning quality
and effectiveness. Additionally, it facilitated the
choice of the subjects to be taught in the disciplines,
listing those of greater relevance which, therefore,
should compose the exam according to the
professor's view.
REFERENCES
Albuquerque, I. M. (1995) Avaliação no Processo de
Ensino-Aprendizagem. Monografia, Especialização
em Planejamento Educacional, Universidade de
Fortaleza, Fortaleza.
Alexandre, G. H. S. (2008). Smart Education - Uma
ferramenta WEB para avaliação e acompanhamento
do aprendizado. Tese de Mestrado, C.E.S.A.R.,
Recife.
Bloom, Benjamim S. et al. (1983) Manual de avaliação
formativa e somativa do aprendizado escolar.
Pioneira, São Paulo, 1st edition.
Bloom, Benjamin S. et al., (1977), Taxionomia de
objetivos educacionais: domínio cognitivo. Globo,
Porto Alegre, 6th edition.
Earl, Shirley; McConnell, Mike; Middleton, Iain et
al. (1998). Assessing Student Performance: A
Course Booklet for the Postgraduate Certificate in
Tertiary-Level Teaching. Web course, The Robert
Gordon University, England.
Gomes, Maria João (2005). E-Learning: reflexões em
torno do conceito. In Paulo Dias e Varela de Freitas
(orgs.), Atas da IV Conferência Internacional de
Tecnologias de Informação e Comunicação na
Educação - Challenges05, Braga: Centro de
Competência da Universidade do Minho, pp. 229-
236, ISBN 972-87-46-13-05 [CD-ROM].
Haydt, Regina Cazux. (2000) Avaliação do processo
Ensino-Aprendizagem. Ática, São Paulo, 6th edition.
Levy, Pierre (1993). As tecnologias da inteligência: o
futuro do pensamento na era da informática. Rio
de Janeiro: Editora 34.
Pimentel, E. P.; Omar, Nizam. (2006) Métricas para o
Mapeamento do Conhecimento do Aprendiz em
Ambientes Computacionais de Aprendizagem. In:
XVII Simpósio Brasileiro de Informática na
Educação, 2006, Brasília. Anais do XVII
Simpósio Brasileiro de Informática na Educação,
2006. p. 247-256.
Santos, J. F. S. (2006) Avaliação no ensino a distância.
Revista iberoamericana de educacion (Online), Madrid, v.
38, n. 4.