Authentic Assessment in Software Engineering
Education Based on PBL Principles
A Case Study in the Telecom Market
Simone C. dos Santos
Informatics Center
Federal University of Pernambuco
Recife, Brazil
scs@cin.ufpe.br
Felipe S. F. Soares
C.E.S.A.R - Recife Center of Advanced Studies and Systems
Informatics Center - Federal University of Pernambuco
Recife, Brazil
furtado.fs@gmail.com
Abstract—The continuous growth in the use of Information and
Communication Technology in different sectors of the market
calls for software professionals with the qualifications needed
to solve complex and diverse problems. Innovative teaching
methodologies, such as the "Software Internship" model and
PBL teaching approaches, which are learner-centered and focus on
bringing market reality into the learning environment, have been
developed and implemented with a view to meeting this demand.
However, the effectiveness of these methods cannot always be
satisfactorily proved. Prompted by this, this paper proposes a
model for assessing students based on real market practices while
preserving the authenticity of the learning environment. To
evaluate this model, a case study on skills training for software
specialists for the Telecom market is discussed, presenting
important results that show the applicability of the proposed
model for teaching Software Engineering.
Index Terms—Assessment processes, Software Engineering
Education, PBL principles.
I. INTRODUCTION
The growing and continuous presence of software in
integrated products and services that are available and
consumed daily by society calls for the world's Information
and Communications Technology (ICT) infrastructure to
evolve continuously. In the Brazilian market in particular, the
outlook is one of likely growth, given that the country will
host major international events such as the World Cup in 2014
and the Olympics in 2016. Governmental plans that target
expanding broadband in Brazil have therefore created excellent
opportunities for companies in the Telecom sector. Recognizing
the demand for software from non-ICT companies, these
companies have been seeking to enhance the quality of their
products and services so as to meet the non-ICT sector's demands
for improvements to in-country telecommunications networks.
It is against this background that skills training for software
professionals becomes a critical requirement if the
telecommunications sector is to evolve: professionals
specialized in developing embedded software and network
management applications, whose specific complexities,
related to communication platforms and protocols, are very
rarely replicated in academic environments.
As in any area that applies ICT to solve complex problems,
what is needed in order to give software professionals
expertise in the Telecom sector is an effective model for
education that leads to technical and non-technical skills and
competences being developed, grounded on practices of real
projects which have complexities similar to those found in the
labor market.
One such alternative form of education based on real
problem-solving practices is Problem-Based Learning
(PBL) [1], which has been applied in market areas ranging
from the medical field, where it originated, to engineering and
technology. Tynjälä [2] stresses the main benefits of PBL,
defining it as a student-centered approach to teaching and
learning in which students are steeped in the practices of real
projects and solve problems in teams, thereby fostering the
development of skills and attitudes such as group work,
self-initiative, cooperation, and co-responsibility for one's
own learning.
Despite the obvious benefits of PBL, it is important to
emphasize that this approach is frequently confused with
practical experiments in which students receive little support
from teachers or tutors with professional experience in the area
of knowledge in question, and are backed only by subjects with
standard content and conventional assessment processes based
on tests and scored group work. An effective PBL
methodology, however, needs to preserve the approach's
principles by defining processes that ensure theory and practice
go hand in hand [3].
In [1], Savery & Duffy set out eight PBL principles and
stress the need to anchor all teaching and learning process
activities on a real, relevant and complex problem within a
collaborative learning environment similar to the work
environment. This is to enable students to develop the ability
to analyze possible solutions and to reflect on the learning
process. In practice, ensuring these principles are followed
requires a high investment in planning for and monitoring
PBL, which includes management time, effort, resources and
processes and therefore they are not always strictly adhered to.
For these reasons, many education programs choose to
preserve some PBL principles so as to maximize positive
results based on drawing the academic world and the market
closer to each other. It is important to emphasize that no
matter how "pure" the application of PBL is, planning and
monitoring processes are essential for evaluating its results
and thus cannot be set aside even in an education program that
adopts only some of its principles.
Prompted by this, this article puts forward a model for the
authentic assessment of students in PBL-based education
programs that preserve, at the very least, the "real-world"
characteristics of the problems and of the learning environment.
The main reference for defining this model is the authentic
assessment strategy set out by Herrington & Herrington [4].
In authentic assessment, students are involved in
learning environments in which activities are geared towards
applying their knowledge, stimulating their thinking and
critical insight in solving real problems and deploying
different ways of solving them. This proposal is therefore
fully aligned with PBL-based approaches.
This paper also sets out how the model was validated through
a real case study conducted on an education program that
seeks to train professionals in embedded software development
and network management for the Telecom market.
This education program was implemented by means of a
Software Internship [5], similar to a medical internship/
residency, in which students learn by doing. Conducted in a
partnership between an institute of technological innovation
and a company that manufactures solutions and equipment for
the telecommunications industry, the program first created a
learning environment based on a Software Factory, supported
by software development processes, the design of real
applications, and multidirectional interactions between the
intern students, specialized software professionals in the roles
of teachers and tutors, and the intensive participation of the
client, represented by the company from the
telecommunications industry.
II. AN AUTHENTIC ASSESSMENT MODEL
The concept of authentic assessment used as a reference for
the proposed assessment model was defined by Herrington &
Herrington in [4]. They draw attention to seven essential
elements of an authentic assessment:
1. The context needs to be real, thus reflecting the
conditions for assessing the students’ performance
within this context;
2. Students need to participate effectively in solving
problems, as doers, based on knowledge acquired
while being trained;
3. Students need to devote time and effort to
collaborating with others involved in solving
problems;
4. The problem needs to be real, and of relevant
complexity;
5. The assessment needs to be integrated with students’
activities;
6. The assessment should include multiple performance
indicators;
7. The indicators need to have well-defined and reliable
criteria.
These elements underscore the need for a real learning
environment, focused on problem solving and based on
collaborative work supported by well-defined processes that
cover different aspects. When these elements are taken to the
software industry, it is easy to relate them to the processes that
support software engineering, built on delivery schedules and
artifacts constructed iteratively.
Although these elements bring out the critical factors to be
considered in authentic assessment, they do not indicate how
this can be applied in a real learning environment, i.e. they do
not describe how assessment strategies should be used.
In [6], the author discusses some important points in
defining the evaluation process in PBL, highlighting the need
to define who does the assessment, which assessment tools
(oral or written) are best, which approach (formative or
summative) is best, and what type of indicators could be used.
Based on an experiment in which teachers were trained in PBL,
the conclusions emphasized the importance of involving
everyone concerned in the assessment process, and of
continuous feedback throughout the process, a characteristic of
the formative approach. In this context, the study presented in [7] reinforces
the importance of formative assessments in the PBL approach,
both in the assessments of groups of students, and in individual
assessment. Another important aspect discussed in [8] stresses
the need for alignment between the educational objectives and
the evaluation process: "Assessment of PBL needs to focus on
the objectives that PBL fosters in conjunction with the
educational course objectives". These studies discuss relevant
issues concerning the evaluation process in PBL, but none of
them proposes a model that can facilitate its implementation.
In [9], the authors define assessment strategies in the
context of PBL from three perspectives: Content, which
relates to the knowledge acquired by the student; Process,
which relates to the ability to apply the acquired knowledge
to solving problems; and Output, which relates to the
products generated as results. The combination of these
perspectives allows an assessment process that identifies not
only what the student understands with regard to the
fundamentals and concepts needed to solve problems (the
Content perspective), but also provides an analysis of the
problem-solving process, including procedures and the
analysis of alternatives, beyond the final solution proposed
for the problem.
Additionally, for an authentic assessment, it is important to
consider indicators compatible with each perspective within the
learning environment built. For example, if the learning
environment is a software factory, then for the Process
perspective, indicators related to the software development
process used by the factory, with clear criteria tied to the
development methodology adopted, will certainly reflect the
reality of the environment in which students are learning. For
the Output perspective, which may be associated with
producing software engineering documents or software code, it
is also necessary to establish indicators related to the quality of
these products, such as organization and clarity in the case of a
document, or compliance with architecture standards in the
case of software code.
Although the three perspectives of Content, Process and
Output assess much of the teaching and learning process in
PBL, they do not take account of interpersonal characteristics
developed by this approach, such as self-initiative, teamwork
and leadership in guidance towards solutions. Moreover, when
these assessment strategies are compared with assessment
solutions adopted by the software industry, what cannot be left
out is evaluation from the perspective of customer satisfaction,
which is normally related to characteristics such as
productivity, quality of the service and transparent
communication, which once more lie outside the perspectives
defined in [9]. In this context, the model for authentic
assessment put forward in this paper adds two more
perspectives to the assessment strategies: Performance, a
subjective analysis of the students' interpersonal
characteristics; and Client Satisfaction, an assessment based
on criteria defined from the client's point of view.
Once again it is important to emphasize the need to use
indicators (and methods) that the market has already adopted.
Various models for assessing performance have been adopted
by Human Resources departments in software companies that
can also be used in the context of the learning environment
[10]. As to evaluating satisfaction, it is essential that indicators
be defined with the effective participation of the client.
Finally, since this article aims to offer some guidance on
applying assessment strategies, it is important to relate them to
the two main types of evaluation defined in [11]: formative and
summative. The purpose of
formative assessment is to evaluate the student's performance
throughout the problem solving process, by encouraging
continuous feedback, whereas summative assessment evaluates
results at the end of the stages of learning [12]. In other words,
as the Handbook says [11]: "When the cook tastes the soup,
that's formative; When the guest tastes the soup, that's
summative."
Based on these definitions, Table I shows the relationship
between the assessment perspectives and the possible types of
assessment to be adopted. Perspectives related to Process and
Performance, which result from continuous monitoring and
feedback, are characterized as formative assessments, while
perspectives such as Output are inherently summative.
TABLE I. PERSPECTIVES AND TYPES OF ASSESSMENT

Perspective    Formative   Summative
Content        x           x
Process        x           -
Output         -           x
Performance    x           -
Client         -           x
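To make the mapping in Table I concrete, the short sketch below encodes it as a lookup structure. This is a minimal illustration in Python; the structure and names are ours, not something prescribed by the model.

```python
# Table I as a lookup structure: each assessment perspective mapped to the
# assessment types that apply to it. Illustrative names only.
PERSPECTIVE_TYPES = {
    "Content":     {"formative", "summative"},
    "Process":     {"formative"},
    "Output":      {"summative"},
    "Performance": {"formative"},
    "Client":      {"summative"},
}

def assessment_types(perspective: str) -> set:
    """Return the types of assessment that apply to a given perspective."""
    return PERSPECTIVE_TYPES[perspective]

# Example: Process is assessed formatively, via continuous monitoring.
assert assessment_types("Process") == {"formative"}
```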
It is worth noting that combining types of assessment
within a process of authentic assessment reflects the
assessment procedures of the labor market, which very often
fall short on formative assessment owing to resource
constraints and project timing.
However, despite the limitations inherent in real scenarios
of software development, it is believed that the application of
this evaluation model is suitable for any software development
environment aimed at solving problems based on groups of
people collaborating and cooperating with clear goals for
serving and meeting the demands of real clients.
III. CASE STUDY: SOFTWARE INTERNSHIP FOR THE TELECOM MARKET
The concept of a "Software Internship", just as in Medical
Internship, at bottom includes: (1) the formal teaching of
relevant knowledge by a teaching entity and; (2) going deeper
into practices within an area of specific knowledge, acquired
directly into an environment of developing real software, in an
analogous way to the hospital during Medical Internship [5].
To apply this concept, Internship in Embedded Software
and for Network Management was created with the goal of
training software professionals in the skills needed for jobs in
companies from the Telecom industry that make infrastructure
products, and covered the period from November 2011 to April
2012.
This was conducted under a partnership between C.E.S.A.R
(www.cesar.org.br), a technological innovation institution, and
Datacom Brazil (www.datacom.com.br), the leading Brazilian
manufacturer of equipment and solutions for the
telecommunications industry. Given the growth of
opportunities in this market and the need to hire professionals
with the skills needed for new projects, the program was
designed to train specialized professionals, whom Datacom
could then suitably hire, in a short period of time. For
simplicity, this program will henceforth be referred to as the
"Datacom Internship".
In order to have a better understanding of the program, this
section describes the structure of training, with reference to the
model proposed by Santos [13], which highlights five key
elements in the PBL approach to teaching: Learning
Environment, Content, Problems, Human Capital involved and
Control Processes.
A. Learning Environment
The Datacom Internship used the model of a Software
Factory as a practical learning environment, Datacom
employees as real clients, and C.E.S.A.R as a teaching
institution. It is worth pointing out that the Software Factory
concept adopted relates to structured and integrated
development units, with clear roles and responsibilities,
supported by well-defined tools and processes, as described
in [14]. Figure 1 illustrates the structure of the learning
environment.
The Datacom Internship consisted of 18 interns, organized
into 2 groups of 9: (1) Group A, whose focus was on
developing embedded software for Datacom products; and (2)
Group B, whose focus was on developing network
management software in Java. The interns had a technical
profile, being graduates of information technology courses,
but had no experience of Datacom products and platforms
and, in almost all cases, little experience in Java.
Fig. 1. Learning Environment in the Datacom Internship.
Interns were chosen through a selection process that
included technical tests, interviews, and group dynamics.
B. Content
The program lasted 5 months: the first month focused only
on taught subjects; the next two months combined practice
with subject teaching; and the last two months focused
exclusively on the practices of real projects.
The training included a set of taught subjects common to
both groups and a set of subjects specific to each group. Thus,
the entire class attended the courses on Agile Project
Management, Computer Networks, Datacom Products, Linux,
and Configuration and Change Management. The subjects
specific to Group A focused on C, C++ and Advanced Linux,
while Group B attended courses on Java and Software Testing.
It is worth pointing out that the way the program content
was delivered, containing taught subjects not necessarily
associated directly with solving a real problem, had a direct
impact on one of the main principles of PBL, which holds that
all tasks of the learning program must be anchored on a
problem. This format was justified by the client's need to
"level" the knowledge of all the student interns in some
subjects, given the specificity of the sector, within the time
limits and resources of the program. Thus, the teaching
approach adopted cannot be considered purely PBL, but only
"based" on some of its principles.
C. Real Problems
The real problems were driven by Datacom's needs. Each
group received specific software development demands:
initially, Group A received four demands for embedded
software, while Group B received two demands related to
developing and testing software in Java.
The initial demands were of low complexity; as the
tutor-led subjects were covered, the demands grew in
difficulty over the course of the program.
D. Human Capital
To create real applications to be offered to the market on
this platform, it was necessary to structure the Software
Factory with a team of professionals whose skills were
compatible with the goals of the training and who would
interact continuously with the teams formed by the interns.
Figure 2 gives the organizational chart of the program.
Fig. 2. Human Resources in the Datacom Internship.
Thus, each team of interns received continuous support
from a technical tutor whose skills and experience were
specific to the respective group, and together they interacted
continuously with two representatives from Datacom, the
client, one for each group (A and B). Interns also received
skills training from a group of teachers who were experts in
the respective training subjects.
To ensure the effectiveness of the teaching methodology,
an educational consultant (the first author of this article) took
part in the program, defining the teaching methodology so as
to preserve its principles within the resource constraints, and
implementing and monitoring the proposed authentic
assessment model. Finally, the Datacom Internship team was
completed by a management team comprising an academic
coordinator, responsible for monitoring the academic and
educational program, and a project manager (the second
author of this article), responsible for planning and monitoring
the projects and the delivery of applications.
E. Control Processes
For planning and monitoring the development of each
demand, the Scrum [15] agile project management technique
was used, enabling the tutors and managers involved to
monitor the interns continuously.
Scrum is an empirical, people-focused approach developed
for environments where requirements emerge and change
quickly. Its implementation is based on defining tasks and
setting priorities (the Product Backlog), with the client's
support, and grouping them into development units of short
duration (a maximum of 4 weeks) called Sprints. The process
is coordinated by an actor called the Scrum Master, who
concentrates the monitoring and feedback activities. These
activities are generally conducted in short daily meetings,
attended by the entire team, to pinpoint and correct any
shortcomings in and/or impediments to the development
process [15].
The Scrum method also uses a visual board, usually divided
into three columns: Backlog, Doing and Done. Each column
represents the current status of an activity. When a task is
begun, it is moved from the "Backlog" to the "Doing" column,
and its conclusion is evidenced by moving it to the "Done"
column. One Scrum visual board was created for each demand
of each team.
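To illustrate the board mechanics just described, the sketch below models a minimal Scrum board with its three columns and a move operation. It is a hypothetical illustration in Python; the class, task names and API are ours, not the tooling actually used in the internship.

```python
# A minimal Scrum visual board: three columns (Backlog, Doing, Done) and a
# move operation that records a task's current status. Illustrative only.
COLUMNS = ("Backlog", "Doing", "Done")

class ScrumBoard:
    def __init__(self, backlog):
        # All tasks start in the Backlog, prioritized with the client.
        self.columns = {name: [] for name in COLUMNS}
        self.columns["Backlog"] = list(backlog)

    def move(self, task, destination):
        """Move a task between columns, e.g. Backlog -> Doing -> Done."""
        if destination not in self.columns:
            raise ValueError(f"unknown column: {destination}")
        for tasks in self.columns.values():
            if task in tasks:
                tasks.remove(task)
                self.columns[destination].append(task)
                return
        raise ValueError(f"unknown task: {task}")

# One board per demand, as in the Datacom Internship; hypothetical tasks.
board = ScrumBoard(["parse SNMP trap", "unit tests for trap parser"])
board.move("parse SNMP trap", "Doing")
board.move("parse SNMP trap", "Done")
```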
The use of Scrum during the Datacom Internship made
some important contributions. The presence of a product
owner (the respective client), responsible for the application's
conception and validation, ensured that the requirements of
the demands were made clearer. The Scrum Master's role was
shared between the technical tutors and the project manager,
who prioritized activities and chaired the daily meetings in
order to monitor how the demands were developing and to
assess impediments. The Scrum visual board, which showed
the "Backlog", "Doing" and "Done" tasks, gave greater
visibility and transparency to the flow of tasks in each demand
and identified general issues that many of them shared.
Despite the various benefits of Scrum, it is worth pointing
out that one of the groups of interns had some trouble keeping
the Scrum board updated and, at times, considered using
another monitoring technique. These considerations are
discussed in more detail in Section IV, which describes how
the Authentic Assessment model was applied.
Also in the context of control processes, the full assessment
model proposed in Section II was applied in this internship,
comprising the following perspectives:
1. Content, based on conceptual, practical and
contextual assessments used in the disciplines;
2. Process, based on assessments conducted in project
status meetings and meetings required by the Scrum
methodology;
3. Output, based on assessing the demands from
Datacom;
4. Performance, based on assessing the individual
performance of each intern;
5. Client satisfaction, based on assessing criteria defined
by the clients, Datacom.
In the Content perspective, the teacher conducted the
assessment based on the content and practices covered in
his/her discipline, on the interactions with the interns, and on
monitoring whether demands were being met. This assessment
used a 5-point scale: "excellent" (5); "very good" (4);
"good" (3); "satisfactory" (2); "insufficient" (1).
The Process perspective was evaluated by the technical
tutors and the project manager while monitoring how the
applications were developed. This monitoring was conducted
through the meetings recommended by the Scrum
methodology, generally on a daily and weekly basis. Bearing
in mind that the Scrum agile method is already rather
prescriptive, the evaluation criteria for this perspective were
defined as follows: (1) compliance with the frequency of the
meetings recommended by Scrum; (2) diligence and
documentation in configuration and change management;
(3) undertaking unit testing. Each indicator could assume a
value from a simple 3-point scale: "considered" (100%),
"partly considered" (50%) and "not considered" (0%).
The Output perspective focused on analyzing the artifacts
of the applications produced by the interns. These analyses
were conducted throughout the development process under the
following criteria: adherence of the code to the defined
architecture; code within the Datacom coding standard; code
written in accordance with good programming practices
(clarity, documentation, reuse, etc.); quality of the documents
(form and content); and approval of the sprint (Scrum
iteration) by the client. When the deliverable was a document
rather than software code, the code-related criteria received
the value "not applicable". Once again, the same simple
3-point scale was used. The technical tutor of each team
conducted these assessments.
From the Performance perspective, seven competencies
were assessed: initiative; ease of understanding/learning;
teamwork; communication; flexibility; self-development; and
being results-oriented. Owing to the subjectivity of this
analysis, a 5-point scale was used for this perspective:
(1) "needs to develop a lot"; (2) "needs to develop"; (3) "meets
the needs of the function"; (4) "has a superior performance";
(5) "is an example for the others". This assessment was
conducted by the tutors and the project manager, and applied
in the forms of self-assessment, assessment by the technical
leader, and peer assessment.
Finally, the assessment of client satisfaction was based on
criteria normally used when evaluating software factories:
meeting deadlines and goals; team productivity;
communication and transparency; and the technical quality of
the service and of the final product. This assessment used a
5-point scale with values similar to those used for Content:
"excellent" (5); "good" (4); "satisfactory" (3);
"unsatisfactory" (2); "very poor" (1). It was conducted by
C.E.S.A.R's Project Management Officer together with the
clients' representatives.
IV. APPLYING THE AUTHENTIC ASSESSMENT MODEL
Based on the five perspectives of authentic assessment, it
was possible to evaluate the interns' performance in different
respects. The Content assessments enabled the subjects that
caused students the greatest difficulty to be identified, whether
because of how their content was approached, their degree of
complexity, or an inappropriate assessment method. The
perspectives of Process, Output and Client Satisfaction enabled
the teams to be analyzed, thus identifying the groups that
matured most in the development process and the applications
whose components had the greatest technical quality. Finally,
the Performance evaluation enabled an individual look at the
interpersonal skills that PBL-based learning approaches
generally develop. The following sections present some of
these results.
A. Assessment of Content
Table II shows the overall assessment of the interns from
the Content point of view for Groups A and B. The results of
these evaluations were announced at the end of each course,
thus following the summative approach shown in Table I.
TABLE II. SUBJECTS AND TEAM AVERAGES

Discipline                  Team A Average   Team B Average
Networks                    4.8              5.0
Agile Project Management    4.8              4.8
Datacom Products            4.3              4.4
Linux                       3.7              3.3
Configuration Management    3.9              3.9
C                           4.4              -
Linux (advanced)            5.0              -
OOP and C++                 3.6              -
OOP and Java                -                3.2
Java (advanced)             -                3.3
Software Testing            -                3.9

5 – Excellent | 4 – Very good | 3 – Good | 2 – Satisfactory | 1 – Insufficient.
Looking at Table II, note that the interns of both teams
performed less well in the Linux subject. Since these points for
improvement were identified at the start of the course and
discussed with the interns, remedial and reinforcement
measures were taken to minimize them. The effectiveness of
these measures was borne out by Group A's strong
performance in the subsequent Advanced Linux subject. A
similar situation occurred with the more technical and complex
programming-language subjects, which also required measures
to adapt the content for the interns.
Although outside the scope of this analysis, which focuses
on evaluating students, it is worth pointing out that the interns
also evaluated the teachers (on aspects such as knowledge,
experience, confidence and ethics) and the approach of each
subject (on aspects such as clarity of objectives, content,
relevance and good references), with resulting averages above
4 (Very Good). This assessment level remained high even in
the subjects in which the interns performed less well.
In general, despite occasional difficulties in some
disciplines, the interns' performance remained above level 3
(good), with an overall average of around 4 (Very Good), as
can be seen in Table II.
B. Assessment of Process
Figures 3 and 4 show the evolution of the assessments from
the Process perspective for Groups A and B, respectively,
conducted in a formative way. The frequency of these
assessments was guided by the development methodology
used, in this case Scrum; each group therefore received an
assessment under this perspective at the end of each Sprint.
The percentage for Process was calculated by summing the
values assigned to each criterion (Section III.E) and dividing
by the number of criteria evaluated in that Sprint. Thus, if all
three criteria received a value of 100% (fully satisfied), the
result would be 300% divided by three, i.e., 100%. The
assessments were conducted by the project manager in
conjunction with the technical tutors of each team.
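A minimal sketch of this calculation, assuming criterion values on the 3-point scale of Section III.E (100, 50 or 0) and, as in the Output perspective, excluding criteria marked "not applicable" from the average:

```python
# Per-Sprint score as described above: the average of the values (100, 50, 0)
# assigned to each criterion. None stands for "not applicable", which is
# excluded from the average, as in the Output perspective. Illustrative only.
def sprint_score(criterion_values):
    applicable = [v for v in criterion_values if v is not None]
    if not applicable:
        return None  # no applicable criteria in this Sprint
    return sum(applicable) / len(applicable)

# All three Process criteria fully satisfied: (100 + 100 + 100) / 3 = 100.0
assert sprint_score([100, 100, 100]) == 100.0
# One criterion only partly considered: (100 + 50 + 100) / 3 = 83.3...
assert round(sprint_score([100, 50, 100]), 1) == 83.3
# A document-only deliverable: code-related criteria are "not applicable".
assert sprint_score([None, None, None, 100, 50]) == 75.0
```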
Analyzing the behavior of Group A, shown in Figure 3,
some impacts on the process can be seen. The group had
communication problems in Sprint 1, which adversely affected
its performance. An attempt to use a Web-based monitoring
tool instead of the physical Scrum board also ended up hurting
this perspective in Sprint 4. The absence of Sprint Review
meetings in Sprints 5 and 6, due to the team's focus on
producing results, kept the score at that level.
Fig. 3. Evolution of the perspective of Process of Group A.
As for Group B, shown in Figure 4, the team's main
difficulty was incorporating the culture of keeping the Scrum
board up to date. This problem was not overcome in the first
three Sprints, but was gradually resolved in the sprints that
followed, through to the end of the project.
Fig. 4. Evolution of the perspective of Process in Group B.
As a general result of the assessment from this perspective,
note the high degree of compliance with processes: above 80%
in most of the sprints for both groups. It is also possible to
observe the natural difficulty of adopting processes in the early
stages of the projects, a consequence of the teams' period of
learning and acculturation.
C. Assessment of Output
The graph in Figure 5 shows that outputs improved
throughout Group A's project. As with the Process calculation,
the percentage for Output was obtained by summing the values
assigned to each criterion (Section III.E) and dividing by the
number of criteria evaluated in that Sprint. While the sprints
were being carried out, it was observed that the complexity of
Sprint 3 made planning difficult, which had an impact on the
final result even though its processes were correctly followed.
On the other hand, in the final sprints this perspective reached
100%, even at some cost to process compliance (the absence of
Sprint Review meetings).
Fig. 5. Evolution of the perspectives of Output and Process of Group A.
As for Group B (Figure 6), the Output perspective remained
highly satisfactory, the only variation being in Sprint 3 (due to
the complexity of the task, as mentioned above), and thus
showed a good alignment between Process and Output.
Fig. 6. Evolution of the perspectives of Output and Process of Group B.
The results of the assessments from the Process perspective
revealed a curious aspect of evaluating software professionals
in real working environments: a problem of compliance with
processes does not always have a negative impact on project
outcomes and client satisfaction. After all, processes exist to
achieve good results and high satisfaction, and if at some point
they impede this, they need to be reevaluated and modified.
This observation becomes evident when the two perspectives,
Process and Output, are evaluated together.
D. Assessment of Performance
Due to restrictions of time and effort, the assessment from
the Performance perspective was performed only once in this
program, in mid-March, by which point the teams were
already producing at a stable rate.
Table III, which shows each team's average for each
competency assessed, indicates that the interns demonstrated
good performance in all aspects, as they scored above 3,
indicating, at the least, that the competency developed met the
requirements of the function. This evaluation also pointed up
the strengths of each group with respect to the more subjective
criteria. Group A was outstanding in "self-development" and
"ease of understanding", while Group B showed itself to be
more "flexible", with greater competence in "teamwork" and
"self-initiative".
TABLE III. TYPES OF PERFORMANCE AND TEAM AVERAGES

Competency              Team A Average   Team B Average
Initiative              3.3              3.6
Ease of understanding   3.7              3.5
Teamwork                3.0              3.6
Communication           3.6              3.1
Flexibility             3.4              3.8
Self-development        4.4              3.4
Results-oriented        3.2              3.5

5 – Is an example for the others | 4 – Has a superior performance | 3 – Meets the needs of the function | 2 – Needs to develop | 1 – Needs to develop a lot.
Note that these characteristics align well with the demands
on each group. Group A, charged with developing embedded
software, needed to be more investigative and geared to
discovering solutions, while Group B, charged with developing
and testing Java software, needed to be more collaborative and
adaptable.
It is important to point out that this evaluation was
individual in character; the individual results are not recorded
in Table III for reasons of confidentiality, feedback having
been given personally to each intern. In this context, two
interns, one from each team, showed that they had developed
greatly with regard to performance, one of them obtaining a
score of 5 ("is an example for the others") in all aspects. This
type of evaluation thus also tends to highlight student
leadership within group activities.
E. Assessment of Client Satisfaction
From the perspective of client satisfaction, two rounds of
summative assessment were held: one at the beginning of the
practical activities and the other at the end of the program.
Table IV summarizes the results achieved by the two groups,
A and B.
TABLE IV. CLIENT SATISFACTION AND ASSESSMENTS OF THE GROUPS

Group     1st Assessment   2nd Assessment
Group A   4.4              4.4
Group B   4.0              4.3
5 – Excellent | 4 – Good | 3 – Satisfactory | 2 – Unsatisfactory | 1 – Very poor.
The evaluation results show a high rate of client
satisfaction, even at the beginning of the activities, when the
teams faced challenges related to adopting processes. One of
the reasons identified for this outcome, based on the
assessments from the other perspectives, was that the teams
constantly targeted project results, a concern that relates to
most of the aspects of client satisfaction described in
Section III. This characteristic is also reflected in the
Performance assessments.
It is important to emphasize that the assessment of client
satisfaction in this program had a connotation that was only
slightly academic, considering that the goal was for Datacom to
hire qualified professionals with the skills it needed. This is a
common feature of Software Residency programs, which are
almost always focused on specific outcomes for the clients
involved. This learning environment, more realistic than those
of other education programs, reinforces the importance of this
perspective within the proposed assessment model.
V. CONCLUSIONS
The model for assessing students put forward in this article
sought to assess a Software Engineering training course based
on practices used in the market, with a view to demonstrating
the degree of effectiveness of a teaching methodology that
preserves principles such as solving real problems within real
development environments. Given the difficulty and costs
associated with creating such an environment, the "Software
Internship" applied in this case study is a good alternative for
making such programs viable, since it respects the essential
elements of teaching approaches based on PBL principles.
When the various aspects of "Authentic Assessment" are
explored, the different points of view on assessment can be
cross-matched and analyzed, offering important information
both to those who lead the teaching and learning process and
manage training, and to clients who require professionals with
the expertise to meet their wants and needs. In particular,
applying the model through continuous monitoring, prompted
by feedback meetings throughout the program, not only
enabled points for improvement to be identified, but also
allowed strategies to be defined for overcoming the
weaknesses found and maximizing the strengths. Furthermore,
this model makes it possible to assess students both within
group work and individually, thus contributing to
improvements specific to each individual, within the criteria
evaluated in the labor market. Finally, the case study presented
successfully achieved its objectives: of the 18 interns trained,
94% were approved for hire by the client, with only one
declining the invitation on his own initiative. Currently, the
ex-interns form teams that fully develop new software projects
for the Telecom market, involving the partner companies of
this program.
ACKNOWLEDGMENT
The results presented here were developed as part of a joint
project between C.E.S.A.R (Recife Center of Advanced Studies
and Systems) and Datacom, and drew on resources from the
"Law on Informatics" (Law No. 8.248/91), which provides
incentives for Brazil's electronics industry. Additionally, this
program would not have achieved the results presented without
the involvement and commitment of the students and of all
members of the technical and academic teams, to whom the
authors are most grateful.
REFERENCES
[1] J. R. Savery and T. M. Duffy, "Problem based learning: An
instructional model and its constructivist framework",
Educational Technology, 1995.
[2] P. Tynjälä, "Towards expert knowledge? A comparison between
a constructivist and a traditional learning environment in the
university", Int. J. Educ. Res., vol. 31, pp. 357-442, 1999.
[3] S. C. Santos, M. C. M. Batista, A. P. C. Cavalcanti, J.
Albuquerque and S. R. L. Meira, "Applying PBL in Software
Engineering Education", CSEET 2009, Hyderabad, India, 2009.
[4] J. Herrington and A. Herrington, "Authentic assessment and
multimedia: How university students respond to a model of
authentic assessment", Higher Education Research and
Development, vol. 17, no. 3, pp. 305-322, 1998.
[5] A. Sampaio, C. Albuquerque, J. Vasconcelos, L. Cruz, L.
Figueiredo and S. Cavalcante, "Software Test Program: A
Software Residency Experience", International Conference on
Software Engineering (ICSE), 2005, pp. 611-612.
[6] R. Tuohi, "Assessment in Problem Based Learning connected
with IT Engineering Education", International Conference on
Engineering Education & Research, Melbourne, Australia,
December 2-7, 2007.
[7] F. Yin, "Applying methods of formative and summative
assessment to problem-based learning in computer courses", The
China Papers, November 2006.
[8] L. L. Elizondo-Montemayor, "Formative and Summative
Assessment of the Problem-Based Learning Tutorial Session
Using a Criterion-Referenced System", JIAMSE, vol. 14,
pp. 8-14, 2004.
[9] G. X. L. Tai and M. C. Yuen, "Authentic assessment strategies in
problem based learning", in ICT: Providing choices for learners
and learning, Proceedings ascilite, Singapore, 2007, pp. 983-993.
[10] J. Fitz-enz and B. Davison, "How to Measure Human Resource
Management", 3rd ed., McGraw-Hill, 2001.
[11] J. F. Westat, "The 2002 User Friendly Handbook for Project
Evaluation", The National Science Foundation, Contract
REC 99-12175, 2002.
[12] C. O. Figuêredo, S. C. Santos, G. H. S. Alexandre and P. H. M.
Borba, "Using PBL to develop Software Test Engineering",
CATE, Cambridge, UK, 2011.
[13] S. C. Santos and A. Pinto, "Assessing PBL with Software
Factory and Agile Processes", CATE, Naples, Italy, 2012.
[14] J. Greenfield, "Software Factories: Assembling Applications
with Patterns, Models, Frameworks and Tools", ACM Press,
New York, NY, USA, 2003.
[15] K. Schwaber, "Agile Project Management with Scrum",
Microsoft Press, 2004.
... The vast majority focused on the learning process, using information about the evaluation process to comment on the monitoring and results of this process. Among the studies focused on the assessment model, the [PS02], [PS10], [PS13], [PS20], [PS36] and [PS38] stand out. ...
... In [PS13], an Authentic Assessment Model is proposed that aims to assess the student from different perspectives, providing more relevant indicators. This study was a precursor to the study in [PS20], commented on in Section II, which proposes assessing the student based on the 5 aspects used as a reference in the current study (Content, Process, Results, Performance, and Client Satisfaction). ...
... For each identified aspect, studies that consider it within their respective study were associated, as shown in Table III. Confidence [PS29] Creativity [PS42, PS44,PS45] Dialectical thinking [PS07] Ease of learning [PS12,PS13] Flexibility [PS13] Focus on results [PS12,PS13] Presentation skills [PS25] Leadership [PS44] Group work [ Looking at Table IV, it is possible to clearly see the concern with the personal aspects of students in studies based on the PBL approach. Several soft skills have been mapped in this context, making evident the method's suitability for stimulating both technical and non-technical skills. ...
Conference Paper
This Research Full Paper presents an overview of student assessment proposals for Problem-Based Learning (PBL) in Computing Education. Computing teaching has many challenges, as it requires different skills from students, often subjective and difficult to assess. In fact, technical knowledge alone is not enough to fully understand what is being taught, but the interpretive and logical skills to deal with practical problems and non-technical skills such as group work, creativity, critical vision, ability to cooperate and communicate. Active learning methodologies as Problem-Based Learning (PBL) have been used to dealing with such challenges, broadly developing technical and non-techniques skills in students. However, despite the benefits of PBL, the student assessment process is one of the points that present its own adversities and, therefore, an aspect that deserves greater attention. To better understand the nuances of this process and how it can contribute to the teaching and learning process based on PBL, this study aimed to investigate primary studies in the last two decades, seeking answers to the following research questions: RQ1) What assessment models are being used?; RQ2) Which aspects are evaluated?; RQ3) What criteria and media have been defined?; RQ4) Who gets involved in the assessment process?; RQ5) What is the ideal frequency to conduct the evaluations?; RQ6) What can these models reveal? As a research method, this study used the Systematic Literature Review method proposed by Kitchenham. As main conclusions, it was possible to identify that: generally, computing education based on PBL occurs at the undergraduate level, having as main educational objective the teaching of technical content; in practice, the need for a diverse teaching team is not reflected, the traditional student-teacher remains; to evaluate students, it is necessary to consider several aspects, technical and non-technical, defining specific criteria for each one of them; the main benefits for students are related to changes in behavior, development of soft skills and better absorption of technical knowledge; as main challenges for students, the difficulty to understand the nuances of the proposed problem and to be the main responsible for devising a solution for it without the figure of a teacher to give a clear definition of how to do it stands out.
... Table III shows an overview of the general-purpose of the studies. Among these, three main concerns are related to the classic challenges of PBL: approaches to implementation of the method, through models, methodologies and frameworks [PS17], [PS23], [PS49], [PS57], [PS65]; assessment models, the most part focused on student assessment in several aspects, technical (student performance, grades) and personal (motivation, engagement, self-initiative, learning reflections) [PS21], [PS27], [PS43], [PS56], [PS73], [PS78], [PS85], [PS102]; and the use of virtual environments and tools to facilitate the PBL adoption [PS3]- [PS5], [PS19], [PS33], [PS42], [PS67], [PS83]. Regarding the proposals to implement the PBL method, it can be observed that in the first 5 years there were no studies with this objective. ...
... The development of "soft" skills (such as communication, negotiation, critical thinking, and leadership), as well as technical skills (such as fundamental software requirements, specification, validation, coding, and testing), is one of the main promises of the PBL approach. In this context, assessment models aim to collaborate with the PBL-based teaching process, based not only on the implementation of the approach but on its ongoing management, as highlighted in [PS43]: "application of the model based on continuous monitoring, prompted by feedback meetings throughout the program, not only enabled points of improvement to be identified, but strategies to be defined that might enable the weaknesses found to be overcome and strengths to be maximized". Assessment models can also offer more clarity on how to monitor student motivation and engagement, allowing the teaching team to adjust the PBL process for better results [PS51], [PS64], [PS94]. ...
... Concerning environment element, the main challenges stand out in the first decade was the difficulty of communication related to PBL dynamics [PS3], [PS6], [PS16], and the effort to stimulate individual learning in learning environments, warranting its effective use [PS20]. As the PBL became more popular and learning environments became more sophisticated, other challenges became evident in the studies, especially regarding complexity and usability of the technology that is used in specific learning environment [PS46], [PS83], the high cost involved in implementing an environment compatible with the reality of the labor market [PS43], and the limitations of environments that are not appropriate for the PBL approach [PS62]. Study [PS46] comments: "Mostly with technical and not educational issues. ...
Article
Contribution: This article adds to the results of previous systematic mapping study by addressing a more ample context of problem-based learning (PBL) in computing education. Background: PBL is defined as an instructional method of constructivist teaching that uses real problems as a motivating element for learning. Although PBL was born in medical education, it has been used in computing education to facilitate the students' engagement and learning capacity, contributing to developing skills, such as teamwork, holistic vision, critical thinking, and solving problem. Considering that approach much more descriptive than prescriptive, it favors the implementation of diverse methodologies on its behalf.
... Authentic assessment requires the educators to design tasks for students while simulating the challenges of real-life work environments in which they have to focus on problem-solving skills based on their previously gained knowledge and the management practices [34,35] [36]. However, it is also the challenging part that SE and RE educators face, how to bring the right balance of 'realism' within the constraints of the academic environment [37,38]. ...
... In this study we present the pedagogical design and implementation for a module on requirements inspection as part of a postgraduate RE course at <<Anonymous University>>. The assessments and tasks for this course are designed using the pedagogies of collaborative learning, authentic assessment [34,39] role-playing [38,40,41] and contributing student pedagogy [42]. The overall assessment in this course involved three sets of tasks i.e. ...
... Over the years, software engineering education researchers have proposed alternative approaches to industry-based learning, by designing curriculum and task activities based on project-based learning and authentic assessment principles [34,39,50]. These approaches stress the need for the design of activities to be based on 'realistic' problems that students have to solve in a collaborative environment, thus simulating the realworld environment within the classroom. ...
Conference Paper
Full-text available
The core aim of requirements inspection is to ensure the high quality of already elicited requirements in the Software Requirements Specification. Teaching requirements inspection to novices is challenging, as inspecting requirements needs several skills as well as knowledge of the product and process that is hard to achieve in a classroom environment. Published studies about pedagogical design specifically for teaching requirements inspection are scarce. Our objective is to present the design and evaluation of a pedagogical design for requirements inspection training. We conducted an empirical study with 138 postgraduate students, teamed up in 34 groups to conduct requirements inspection. We performed qualitative analysis on the data collected from students’ reflection reports to assess the effects of the pedagogical design in terms of benefits and challenges. We also quantitatively analyze the correlation between the students’ performance in conducting inspections and their ability of writing specifications. From the analysis of students’ reflections, several themes emerged such as their difficulty with working with limited in-formation, but also revealed the benefits of learning teamwork and writing good requirements. This qualitative analysis also provides recommendations for improving the related activities. The quantitative analysis revealed a moderate positive correlation between the performance in writing specification and inspection.
... After defined objectives, evaluation strategies were used, with the purpose of verifying their accomplishment. To define these strategies, it was used the authentic assessment model described in [25] and [26][27][28]. In [25], Tai & Yuen define authentic assessment strategies in the PBL context from three perspectives: Content, related to the knowledge acquired by students; Process, related to the ability to apply that knowledge to solve problems; and Output, related to the products and artifacts generated as a result. ...
... In [25], Tai & Yuen define authentic assessment strategies in the PBL context from three perspectives: Content, related to the knowledge acquired by students; Process, related to the ability to apply that knowledge to solve problems; and Output, related to the products and artifacts generated as a result. Santos and Soares [26][27][28] enhanced this proposal and added two dimensions to the assessment process: Performance, which refers to a subjective analysis of the student's interpersonal characteristics, developed in the PBL approach; and Client Satisfaction, based on assessment criteria in the client's perspective of the solution. ...
Chapter
The continuous advancement of Information Technology and the range of industries and services dependent on technology have required profound changes in the education of software professionals. In fact, the education of these professionals must include diverse skills (technical and non-technical), in order to enable them to solve real problems that impact the lives of companies and people. In this scenario, active learning approaches can make a lot of difference, when applied effectively, with well-defined educational goals and continuous follow-up and feedbacks. One of these approaches that are working well in Computer Education is the Problem-Based Learning (PBL) approach. PBL uses real problems as an instrument to develop skills such as holistic knowledge, business understanding, task management and group work, essential in the software professional. In this context, this paper describes a case of an undergraduate course in Information Systems, conducted in the PBL approach. In order to guarantee the application of PBL in an effective way, a Framework for PBL application in the teaching of Computing, described by Santos and Rodrigues (2016) was used. This framework systematizes the application of PBL in the four stages Plan, Do, Check and Act (based on the management cycle of Deming), which are repeated in learning cycles aligned to educational objectives. As the main results of this experience, the following stand out: a proposal for applying PBL in a managed way, based on a Framework for Computer Education; benefits of using the Framework; possibilities for improvements in this approach.
... The motivation to assess programming skills is definitely not to score students but to explore the ways that the skill developed and opportunities to optimize the development. The typical subjective evaluations might assess the presentation of the assignment, the work plan, the daily report, and design diagrams, etc [31] [32]. However, those traditional assessment methods do not work on solid measurements of program design and testing. ...
Article
Full-text available
This pilot study examines how students’ performance has evolved in an Object-oriented (OO) programming course and contributes to the learning analytic framework for similar programming courses in university curriculum. First, we briefly introduce the research background, a novel OO teaching practice with consecutive and iterative assignments consisting of programming and testing assignments. We propose a planned quantitative method for assessing students’ gains in terms of programming performance and testing performance. Based on real data collected from students who engaged in our course, we use trend analysis to observe how students’ performance has improved over the whole semester. By using correlation analysis, we obtain some interesting findings on how students’ programming performance correlates with testing performance, which provides persuasive empirical evidence in integrating software testing practices into an Object-oriented programming curriculum. Then, we conduct an empirical study on how students’ design competencies are represented by their program code quality changes over consecutive assignments by analyzing their submitted source code in the course course system and the GitLab repository. Three different kinds of profiles are found in the students’ program quality in the OO design level. The group analysis results reveal several significant differences in their programming performance and testing performance. Moreover, we conduct systematical explanations on how students’ programming skill improvement can be attributed to their object-oriented design competency. By performing principal component analysis on software statistical data, a predictive OO metrics suite for both students’ programming performance and their testing performance is proposed. The results show that these quality factors can serve as useful predictors of students’ learning performance and can provide effective feedback to the instructors in the teaching practices.
... Considering the student level, the focal point of this work, five aspects were defined: "content", covering the understanding of concepts and fundamentals of the knowledge area; "performance", covering interpersonal characteristics; "process", regarding the way a problem is solved; "output", considering the proposed solution; and "client satisfaction", with respect to the quality of the proposed solutions. This model has been applied to undergraduate courses in Computing over the last five years [3], [4], [5], [13], [14], [15]. ...
Conference Paper
This Research to Practice Full Paper presents a proposal for monitoring student progress in Problem-Based Learning (PBL). The adoption of the PBL approach has been growing in computing education, where problem-solving and group work are essential. Despite the compatibility and benefits of PBL, some challenges remain, particularly with respect to the assessment process. To be effective, the assessment process needs to be well defined and managed by both teachers and the students themselves, given that, in PBL, students are at the center of the teaching and learning process: they are active and self-regulating. In this context, this paper proposes an interface for monitoring student progress (a "student board") based on an authentic assessment model called PBL-SEE. Constructed using the Design Science Research (DSR) method, this interface was initially prototyped and validated by PBL specialists. The results showed good acceptance of the student board and yielded important recommendations for improvements.
... However, those traditional assessment methods do not provide solid measurements of program design and testing, so subjective evaluations are usually included to obtain a quantitative assessment. Typical subjective evaluations include the presentation of the assignment, the work plan, the daily report, design diagrams, etc. [22][18]. ...
Conference Paper
In this paper, we propose a novel teaching method practiced in our Object-oriented (OO) course and a quantitative method for assessing how students' programming skills have improved. We report an empirical study of 1956 data items collected from 249 students who took our course in the spring semester of 2018. A measurement model is proposed to assess students' programming skill in terms of programming performance and testing performance. How students' programming skills improve across assignments in our OO course is discussed through trend analysis and correlation analysis of the collected data. The empirical results show that students' testing performance has a significant positive correlation with their programming performance. Furthermore, the correlation analyses show that design quality correlates with both programming performance and testing performance. We therefore propose an early-warning metric suite for both programming performance and testing performance in our OO course, to indicate students' success in developing programming skills and to give code reviewers hints on conducting the necessary testing activities. This study can give meaningful and useful feedback to programming and testing course instructors when evaluating students' achievement in a programming course.
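The core of such an analysis is a correlation test plus a threshold rule. The sketch below shows, under stated assumptions, what that might look like; the scores and the cutoff value are hypothetical and are not taken from the study.

```python
# Illustrative sketch: a correlation check and a threshold-based early warning
# of the kind the study describes. All scores and the cutoff are hypothetical.
from scipy.stats import pearsonr

programming = [78, 55, 90, 62, 84, 47, 71]  # per-student programming scores
testing     = [74, 50, 88, 65, 80, 42, 69]  # per-student testing scores

r, p_value = pearsonr(programming, testing)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")

# A simple early-warning rule: flag students whose testing score falls below
# a threshold, since testing performance correlated with programming success.
THRESHOLD = 60
at_risk = [i for i, score in enumerate(testing) if score < THRESHOLD]
print("Students to flag for early intervention:", at_risk)
```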
Article
This paper presents a case of implementing a project-based learning organizational model in the Engineering School of Information Technologies, Telecommunications and Control Systems (IRIT-RTF) of Ural Federal University for undergraduate educational programs. The authors analyze three stages of the organizational model's transformation: initiating, piloting, and scaling. Within each stage, they discuss procedures for collecting applications for students' projects based on a role model of competences; organizational and technical support for students' project work in accordance with the activity classification of projects; and the number of supervisors, and the level of training, required to accompany the process. The presented case can provide the reader with practical recommendations for implementing project-based learning at any university.
Article
The purpose of this study was to determine to what extent the Problem-Based Learning method improved reading comprehension among second-year students of MA Muhajirin As'adiyah Kampiri. The study employed a pre-experimental approach. The researcher used a cluster random sampling technique with a sample of 24 students from class XII-A, and collected data through a pre-test and a post-test. The experimental class was taught using Problem-Based Learning, and the data were analyzed using standard formulas for educational data analysis. The pre-test and post-test results revealed a considerable improvement: the mean score of the students' post-test (77.9) was greater than the mean score of the pre-test (41.8), and the t-test significance value for the post-test was 0.02, less than α = 0.05. The treatment thus produced an improvement in the students' reading comprehension. Problem-Based Learning improves students' thinking skills and encourages them to apply their abilities when reading texts, so students participating in Problem-Based Learning could understand the texts more easily. It is therefore concluded that implementing Problem-Based Learning improves the reading comprehension of second-year students at MA Muhajirin As'adiyah Kampiri.
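For readers who want to reproduce this kind of analysis, here is a minimal sketch of a paired t-test on pre/post scores. The individual scores below are hypothetical; only the reported means (41.8 and 77.9) and the significance criterion (p = 0.02 < 0.05) come from the abstract above.

```python
# Illustrative sketch: a paired t-test comparing pre-test and post-test
# scores, as in the study summarized above. Scores are hypothetical.
from statistics import mean
from scipy.stats import ttest_rel

pre_test  = [40, 35, 48, 42, 39, 45, 44, 38]   # hypothetical pre-test scores
post_test = [75, 70, 85, 80, 72, 82, 79, 76]   # hypothetical post-test scores

t_stat, p_value = ttest_rel(post_test, pre_test)
print(f"pre mean = {mean(pre_test):.1f}, post mean = {mean(post_test):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant gain
```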
Conference Paper
Team exercises for software development project-based learning (SDPBL) adopting an agile development model have become popular for training and education worldwide. In the agile development model, the build process is an essential part. In this study, we investigated students' build errors in agile SDPBL projects by monitoring the build process and collecting its logs from 2013 to 2016. From 2013 to 2015, we categorized the build errors and discussed the resolution for each type of build error. In 2016, the instructors modified the SDPBL project based on the build error types and their corresponding causes and resolutions. As a result, in 2016 the number of build errors, and the time required to resolve them, decreased compared to previous years.
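Categorizing build errors from collected logs typically reduces to pattern matching over log lines. The sketch below shows one plausible way to do this; the categories, regular expressions, and sample log lines are hypothetical examples, not the study's actual taxonomy.

```python
# Illustrative sketch: categorizing build errors from build logs, analogous
# to the monitoring described above. Patterns and categories are hypothetical.
import re
from collections import Counter

CATEGORIES = {
    "compile":    re.compile(r"error: .*(cannot find symbol|expected)"),
    "dependency": re.compile(r"Could not resolve dependencies|package .* does not exist"),
    "test":       re.compile(r"Tests run: .*Failures: [1-9]"),
}

def categorize(log_lines):
    """Count how many log lines fall into each build-error category."""
    counts = Counter()
    for line in log_lines:
        for category, pattern in CATEGORIES.items():
            if pattern.search(line):
                counts[category] += 1
                break
    return counts

sample_log = [
    "[ERROR] /src/Main.java:[12,8] error: cannot find symbol",
    "[ERROR] Could not resolve dependencies for project demo:app",
    "Tests run: 10, Failures: 2, Errors: 0",
]
print(categorize(sample_log))  # Counter({'compile': 1, 'dependency': 1, 'test': 1})
```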
Article
Full-text available
The growing presence of software in the products and services consumed daily by society demands a level of quality that depends not only on the technology but also on its development process and on the professionals involved. Focusing on the professionals responsible for quality assurance, such as the Test Engineer, their skills and competences need to be developed on the basis of a very critical and detailed view of the problem. The Test Engineer needs to be an "explorer" of the solution, discovering hidden bugs and seeking to eliminate defects from applications. In this context, this article proposes a teaching approach focused on training in the "testing" discipline that makes use of problem-based learning to develop the real skills required, supported by planning and continuous assessment processes, in a computer-aided software factory. To demonstrate the applicability of this proposal, an empirical study was carried out, with positive results in teaching the discipline of "exploratory testing".
Article
Full-text available
Many medical schools have moved towards problem-based learning (PBL). Unfortunately, the use of PBL in many medical schools has not been followed by appropriate changes in the evaluation of students. Assessment of PBL needs to focus on the objectives that PBL fosters, in conjunction with the educational course objectives. In an effort to assess PBL sessions appropriately, the School of Medicine Tec de Monterrey uses a criterion-based system that includes three checklists: 1) tutor assessment of students, 2) self-assessment, and 3) peer-assessment. Each checklist contains criteria that correspond to the four objectives (rubrics) of PBL: knowledge application, critical thinking, self-directed study and collaboration, plus a fifth rubric for professionalism and attitude during the discussion. Course objectives are integrated within each of the rubrics. The three checklists are used for summative and formative purposes in all PBL core courses of the Basic Medical Sciences department and in the Gynecology PBL core clinical course. Although no quantifiable data have been obtained, the use of this criterion-based system has helped establish appropriate standards of performance. Additionally, it has assisted in identifying students who are having trouble developing critical thinking and decision-making skills and has greatly fostered feedback to students. If PBL assessment is consistent with curricular goals and course learning objectives, the validity of the assessment is enhanced and subjectivity across instructors' evaluations can be diminished.
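To illustrate how such a three-checklist system might be aggregated, here is a minimal sketch. The rubric names follow the abstract above, but the weights, scores, and the combine function are hypothetical; the abstract does not specify how (or whether) the checklists are numerically combined.

```python
# Illustrative sketch: combining tutor, self, and peer checklists across the
# five rubrics named above. Weights and scores are hypothetical.
RUBRICS = ["knowledge application", "critical thinking",
           "self-directed study", "collaboration", "professionalism"]

def combine(tutor, self_eval, peer, weights=(0.5, 0.2, 0.3)):
    """Weighted average of the three checklists, per rubric."""
    return {
        rubric: round(weights[0] * tutor[rubric]
                      + weights[1] * self_eval[rubric]
                      + weights[2] * peer[rubric], 2)
        for rubric in RUBRICS
    }

tutor     = dict(zip(RUBRICS, [4, 3, 4, 5, 4]))  # hypothetical 1-5 ratings
self_eval = dict(zip(RUBRICS, [4, 4, 3, 4, 5]))
peer      = dict(zip(RUBRICS, [3, 3, 4, 5, 4]))
print(combine(tutor, self_eval, peer))
```

A per-rubric breakdown like this supports the formative purpose (pinpointing, say, weak critical thinking) while the weighted totals can feed the summative grade.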
Article
Full-text available
A problem for educators and the developers of interactive multimedia is the apparent incongruity between the demands of authentic assessment and the deliverables of computer‐based assessment. Lecturers wishing to use interactive multimedia are commonly limited to assessment using multiple-choice tests which are easily marked by the computer. This article describes seven defining characteristics of authentic assessment which have been operationalized in a learning environment employing interactive multimedia. The article describes the multimedia program and its implementation with a class of pre‐service teachers. The implication of these findings for educational practice is that authentic assessment can be used within interactive multimedia learning environments, albeit not totally contained within the software itself. The qualitative study reported here showed that students responded favourably to the elements of authentic assessment; that they had a good understanding of the content of the interactive multimedia program; and that the assessment was corroborated by observation of the teaching strategies used by the students in their teaching practice.
Article
Full-text available
This handbook was developed to provide managers working with the National Science Foundation (NSF) with a basic guide for the evaluation of NSF's educational programs. It is aimed at people who need to learn more about what evaluation can do and how to do an evaluation rather than those who already have a solid base of experience in the field. This handbook builds on firmly established principles, blending technical knowledge and common sense to meet the special needs of NSF and its stakeholders. Quantitative and qualitative evaluation methods are discussed, suggesting ways in which they can be used as complements in an evaluation strategy. As a result of reading this handbook, it is expected that program managers will increase their understanding of the evaluation process and NSF's requirements for evaluation as well as gain knowledge that will help them communicate with evaluators and manage the actual evaluation. Several NSF program areas were selected to provide concrete examples of the evaluation issues discussed. The handbook is divided into four major sections: (1) "Evaluation and Types of Evaluation"; (2) "The Steps in Doing an Evaluation"; (3) "An Overview of Quantitative and Qualitative Data Collection Methods"; and (4) "Strategies That Address Culturally Responsive Evaluation." A glossary of commonly used terms, references for additional readings, and an appendix that presents some tips for finding an evaluator are also included.
Article
Problem-based learning (PBL) uses real-world problems and tasks as the starting point for constructing knowledge and enhancing the learning experience. This paper looks into authentic assessment strategies in problem-based learning, using an interactive multimedia project as the subject of investigation. Through the use of a range of authentic assessments, namely process assessment (consisting of students' self-reflection, peer evaluation and task completion reports), content assessment (consisting of a pretest and posttest), and portfolio assessment, this paper outlines strategies that have worked, as well as those that have not, in a PBL setting. The collected data showed positive feedback on the learning tasks, including problem-solving skills, team collaboration and knowledge enhancement.
Article
Problem-based learning (PBL) is now widely acknowledged and regarded for its educational and training objectives. Assessment plays an important role in PBL: it can be a multi-faceted activity and a key factor influencing the way students learn and respond to teaching. Teacher, peer, and self-assessment should all make an appropriate contribution to the final assessment. A comprehensive model of formative and summative assessment for use in computer courses is presented.
Article
This research monograph examines the potential of constructivist learning environments for developing the prerequisites of expert knowledge during university studies. Drawing on recent theories of the development of expert knowledge and on the constructivist view of learning, an experiment was conducted in an educational psychology course. The primary purpose of the study was to compare the learning outcomes of students who studied the course material in a constructivist learning environment with those of students who learned it under traditional teaching and studying conditions. Students in the constructivist learning environment acquired more diversified knowledge. In addition, a theory is presented about what actually changes when conceptual change occurs.