Authentic Assessment in Software Engineering
Education Based on PBL Principles
A Case Study in the Telecom Market
Simone C. dos Santos
Informatics Center
Federal University of Pernambuco
Recife, Brazil
scs@cin.ufpe.br
Felipe S. F. Soares
C.E.S.A.R - Recife Center of Advanced Studies and Systems
Informatics Center - Federal University of Pernambuco
Recife, Brazil
furtado.fs@gmail.com
Abstract—The continuous growth in the use of Information and Communication Technology across different market sectors calls for software professionals with the qualifications needed to solve complex and diverse problems. Innovative teaching methodologies, such as the "Software Internship" model and PBL teaching approaches, which are learner-centered and focus on bringing market reality to the learning environment, have been developed and implemented with a view to meeting this demand. However, the effectiveness of these methods cannot always be satisfactorily proved. Motivated by this, this paper proposes a model for assessing students that is based on real market practices while preserving the authenticity of the learning environment. To evaluate this model, a case study on skills training of software specialists for the Telecom market is discussed, presenting important results that show the applicability of the proposed model for teaching Software Engineering.
Index Terms—Assessment processes, Software Engineering
Education, PBL principles.
I. INTRODUCTION
The growing and continuous presence of software in integrated products and services, available and consumed daily by society, calls for the world's Information and Communications Technology (ICT) infrastructure to evolve continuously. The Brazilian market, in particular, has an outlook of likely growth, given that the country will host major international events such as the World Cup in 2014 and the Olympics in 2016. Governmental plans that target expanding broadband in Brazil have therefore created excellent opportunities for companies in the Telecom sector. Recognizing that non-ICT companies wish to use software, these companies have been seeking to enhance the quality of their products and services so as to meet the non-ICT sector's demands for improvements to in-country telecommunications networks.
Against this background, if the telecommunications sector is to evolve, a critical requirement is skills training for software professionals specialized in developing embedded software and network management applications, whose specific complexities, related to communication platforms and protocols, are very rarely replicated in academic environments.
As in any area that applies ICT to solve complex problems, giving software professionals expertise in the Telecom sector requires an effective education model, one that develops technical and non-technical skills and competences, grounded in the practices of real projects whose complexity is similar to that found in the labor market.
One such alternative form of education based on real problem-solving practices is Problem-Based Learning (PBL) [1], which has been applied in different market areas, ranging from the medical field, to which its origin is linked, to the areas of engineering and technology. Tynjälä [2] stresses the main benefits of PBL when defining it as a student-centered approach to teaching and learning in which students are steeped in the practices of real projects and work in teams to solve problems, fostering the development of skills and attitudes such as group work, self-initiative, cooperation, and co-responsibility for one's own learning.
Despite the obvious benefits of PBL, it is important to emphasize that this approach is frequently confused with practical experiments in which students receive little support from teachers/tutors with professional experience in the area of knowledge in question, and are backed only by subjects with standard content and conventional assessment processes based on tests and scored group work. However, an effective PBL methodology needs to preserve its principles by defining processes that ensure theory and practice go hand-in-hand [3].
In [1], Savery & Duffy set out eight PBL principles and
stress the need to anchor all teaching and learning process
activities on a real, relevant and complex problem within a
collaborative learning environment similar to the work
environment. This is to enable students to develop the ability
to analyze possible solutions and to reflect on the learning
process. In practice, ensuring these principles are followed requires a high investment in planning for and monitoring PBL, including management time, effort, resources and processes; therefore, they are not always strictly adhered to.
For these reasons, many education programs choose to
preserve some PBL principles so as to maximize positive
results based on drawing the academic world and the market
closer to each other. It is important to emphasize that no
matter how "pure" the application of PBL is, planning and
monitoring processes are essential for evaluating its results
and thus cannot be set aside even in an education program that
adopts only some of its principles.
Motivated by this, this article puts forward a model for the authentic assessment of students in education programs based on PBL which preserve, at the very least, the "real-world" characteristics of the problems and of the learning environment. As its main reference, the authentic assessment strategy set out by Herrington & Herrington [4] is the basis for defining this model. In authentic assessment, students are involved in learning environments in which activities are geared towards applying their knowledge, stimulating their thinking and critical insight into solving real problems, and deploying different ways of solving them. This strategy is therefore fully aligned with PBL-based approaches.
This paper also validates the model by means of a real case study conducted on an education program that seeks to train professionals in embedded software development and network management for the Telecom market.
This education program was implemented by means of a Software Internship [5], similar to a medical internship/residency, in which students learn by doing. Conducted in a partnership between an institute of technological innovation and a company that manufactures solutions and equipment for the telecommunications industry, the program was run after first creating a learning environment based on a Software Factory, supported by software development processes, the design of real applications, and multidirectional interactions between intern students, specialized software professionals in the role of teachers and tutors, and the intensive participation of the client, represented by the company from the telecommunications industry.
II. AN AUTHENTIC ASSESSMENT MODEL
The concept of authentic assessment used as a reference for the proposed assessment model was defined by Herrington & Herrington in [4]. They draw attention to seven essential elements of an authentic assessment:
1. The context needs to be real, thus reflecting the
conditions for assessing the students’ performance
within this context;
2. Students need to participate effectively in solving
problems, as doers, based on knowledge acquired
while being trained;
3. Students need to devote time and effort to
collaborating with others involved in solving
problems;
4. The problem needs to be real, and of relevant
complexity;
5. The assessment needs to be integrated with students’
activities;
6. The assessment should include multiple performance
indicators;
7. The indicators need to have well-defined and reliable
criteria.
These elements underscore the need for a real learning environment, focused on problem solving based on collaborative work and supported by well-defined processes that cover different aspects. When these elements are carried over to the software industry, it is easy to relate them to the processes that support software engineering, which rest on delivery schedules and artifacts built iteratively.
Although these elements bring out the critical factors to be
considered in authentic assessment, they do not indicate how
this can be applied in a real learning environment, i.e. they do
not describe how assessment strategies should be used.
In [6], the author discusses some important points when
defining the evaluation process in PBL, highlighting the need
to define who does the assessment, what the best assessment
tools (oral or written) are, what the best approach (formative or
summative) is and what type of indicators could be used. Based
on an experiment in which teachers were trained in PBL, some
conclusions emphasized the importance of all those involved in
the assessment process taking part, and there being continuous
feedback throughout the process, a characteristic of the formative approach. In this context, the study presented in [7] reinforces
the importance of formative assessments in the PBL approach,
both in the assessments of groups of students, and in individual
assessment. Another important aspect discussed in [8] stresses
the need for alignment between the educational objectives and
the evaluation process: "Assessment of PBL needs to focus on
the objectives that PBL fosters in conjunction with the
educational course objectives". These studies discuss relevant
issues concerning the evaluation process in PBL, but none of
them proposes a model that can facilitate its implementation.
In [9], the author defines assessment strategies in the context of PBL from three perspectives: Content, which is related to the knowledge acquired by the student; Process, which is related to the ability to apply the knowledge acquired to solving problems; and Output, which is related to the products generated as results. The combination of these perspectives allows an assessment process that not only identifies what the student understands with regard to the fundamentals and concepts needed to solve problems, from the perspective of Content, but also provides an analysis of the process of solving the problem, including procedures and the analysis of alternatives, beyond the end solution proposed for the problem.
Additionally, for an authentic assessment, it is important to
consider indicators compatible with each perspective within the
learning environment built. For example, if the learning
environment is represented by a software factory, on evaluating
the perspective of Process, indicators related to the process of
developing software used by the factory, with clear criteria
related to the development methodology adopted, will certainly
reflect the reality of the environment in which students are
learning. From the perspective of Output, which may be
associated with developing software engineering documents or
software code, it is also necessary to establish indicators related
to the quality of these products, such as organization and
clarity, in the case of a document, or meeting the architecture
standards in the case of software code.
Although the three perspectives of Content, Process and
Output assess much of the teaching and learning process in
PBL, they do not take account of interpersonal characteristics
developed by this approach, such as self-initiative, teamwork
and leadership in guidance towards solutions. Moreover, when
these assessment strategies are compared with assessment
solutions adopted by the software industry, what cannot be left
out is evaluation from the perspective of customer satisfaction,
which is normally related to characteristics such as
productivity, quality of the service and transparent
communication, which once more lie outside the perspectives
defined in [9]. In this context, the model for authentic
assessment put forward in this paper adds two more
perspectives to the assessment strategies: Performance, a subjective analysis of the students' interpersonal characteristics; and Client satisfaction, an assessment based on criteria defined for client satisfaction.
Once again it is important to emphasize the need to use
indicators (and methods) that the market has already adopted.
Various models for assessing performance have been adopted
by Human Resources departments in software companies that
can also be used in the context of the learning environment
[10]. As to evaluating satisfaction, it is essential that indicators
be defined with the effective participation of the client.
Finally, since this article aims to offer some guidance on applying assessment strategies, it is important to relate them to the two main types of evaluation defined in [11]: formative and summative. The purpose of
formative assessment is to evaluate the student's performance
throughout the problem solving process, by encouraging
continuous feedback, whereas summative assessment evaluates
results at the end of the stages of learning [12]. In other words,
as the Handbook says [11]: "When the cook tastes the soup,
that's formative; When the guest tastes the soup, that's
summative."
Based on these definitions, Table I shows the relationship between the assessment perspectives and the type of assessment to be adopted. This relationship shows that the perspectives related to Process and Performance, which result from continuous monitoring and feedback, are characterized as formative assessments, while perspectives such as Output are inherently summative.
TABLE I. PERSPECTIVES AND TYPES OF ASSESSMENT

Assessment Perspective  | Formative | Summative
Content                 |     x     |     x
Process                 |     x     |     -
Output                  |     -     |     x
Performance             |     x     |     -
Client satisfaction     |     -     |     x
It is worth noting that combining types of assessment in a process of authentic assessment is a solution that reflects the assessment procedures of the labor market, which very often fall short on formative assessments due to resource constraints and the timing of projects.
However, despite the limitations inherent in real scenarios
of software development, it is believed that the application of
this evaluation model is suitable for any software development
environment aimed at solving problems based on groups of
people collaborating and cooperating with clear goals for
serving and meeting the demands of real clients.
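To make the model concrete, the sketch below encodes the five perspectives and their formative/summative mapping (Table I) as a small data structure. This is only an illustrative encoding, with hypothetical names (AssessmentType, Perspective, MODEL); the paper prescribes the perspectives and indicators, not any particular implementation.

```python
# Illustrative encoding of the proposed assessment model; names are
# hypothetical, not from the paper.
from dataclasses import dataclass
from enum import Enum

class AssessmentType(Enum):
    FORMATIVE = "formative"   # continuous monitoring and feedback
    SUMMATIVE = "summative"   # evaluation of results at the end of a stage

@dataclass
class Perspective:
    name: str
    types: frozenset          # AssessmentType values, as in Table I
    indicators: tuple         # market-adopted indicators with defined criteria

MODEL = (
    Perspective("Content", frozenset({AssessmentType.FORMATIVE, AssessmentType.SUMMATIVE}),
                ("knowledge of fundamentals and concepts",)),
    Perspective("Process", frozenset({AssessmentType.FORMATIVE}),
                ("adherence to the development methodology",)),
    Perspective("Output", frozenset({AssessmentType.SUMMATIVE}),
                ("document organization and clarity", "meeting architecture standards")),
    Perspective("Performance", frozenset({AssessmentType.FORMATIVE}),
                ("interpersonal characteristics",)),
    Perspective("Client satisfaction", frozenset({AssessmentType.SUMMATIVE}),
                ("productivity", "service quality", "transparent communication")),
)
```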
III. CASE STUDY: SOFTWARE INTERNSHIP FOR THE TELECOM MARKET
The concept of a "Software Internship", just as in a Medical Internship, essentially includes: (1) the formal teaching of relevant knowledge by a teaching entity; and (2) going deeper into the practices of a specific area of knowledge, acquired directly in an environment where real software is developed, analogous to the hospital during a Medical Internship [5].
To apply this concept, the Internship in Embedded Software and Network Management was created with the goal of training software professionals in the skills needed for jobs in Telecom companies that make infrastructure products; it ran from November 2011 to April 2012.
This was conducted under a partnership between C.E.S.A.R
(www.cesar.org.br), a technological innovation institution, and
Datacom Brazil (www.datacom.com.br), the leading Brazilian
manufacturer of equipment and solutions for the
telecommunications industry. Given the growth of
opportunities in this market and the need to hire professionals
with the skills needed for new projects, the design of this
program was aimed at training specialized professionals in a
short period of time whom Datacom could suitably hire. For
simplicity, this program will henceforth be referred to as the "Datacom Internship".
In order to have a better understanding of the program, this
section describes the structure of training, with reference to the
model proposed by Santos [13], which highlights five key
elements in the PBL approach to teaching: Learning
Environment, Content, Problems, Human Capital involved and
Control Processes.
A. Learning Environment
The Datacom Internship used the model of a Software
Factory as a practical learning environment; Datacom
employees as real clients; and C.E.S.A.R as a teaching
institution. It is worth pointing out that the Software Factory
concept adopted is related to structured and integrated units of
development, with clear roles and responsibilities, supported by
well defined tools and processes, as described in [14]. Figure 1
illustrates the structure of the learning environment.
The Datacom Internship consisted of a group of 18 interns,
organized into 2 groups of 9 interns: (1) Group A, whose focus
was on developing embedded software for Datacom products;
(2) Group B, whose focus was on developing network management software in Java. The interns had a technical profile, being graduates of information technology courses, but they had no experience with Datacom products and platforms and, in almost all cases, little experience with Java.
Fig. 1. Learning Environment in the Datacom Internship.
Interns were chosen through a selection process that included technical tests, interviews and group dynamics.
B. Content
The program lasted 5 months: the first month focused exclusively on taught subjects; the next two months combined practice with subject teaching; and the last two months focused exclusively on the practices of real projects.
The training included a set of taught subjects common to the two groups and a set of subjects specific to each group. Thus, the entire class attended the courses on Agile Project Management, Computer Networks, Datacom Products, Linux, and Configuration and Change Management. The specific subjects for Group A focused on C, C++ and Advanced Linux, while Group B attended courses on Java and Software Testing.
It is worth pointing out that the way in which the content of the program was delivered, since it contained taught subjects not necessarily directly associated with solving a real problem, had a direct impact on one of the main principles of PBL, which holds that all tasks of the learning program need to be anchored on a problem. The decision to adopt this format was justified by the client's need to "level" knowledge among all the intern students in some subjects, given the specificity of the sector, within the time limits and resources of the program. Thus, the teaching approach adopted cannot be considered purely PBL, but only "based" on some of its principles.
C. Real Problems
The real problems were driven by Datacom's needs. Each group received specific demands for software development: initially, Group A received four demands for embedded software, while Group B received two demands related to developing and testing software in Java.
The initial demands were of low complexity and, as the tutor-supported subjects were progressively covered, the demands gained a higher degree of difficulty as the course progressed.
D. Human Capital
To create real applications to be offered to the market on
this platform, it was necessary to structure the Software
Factory including a team of professionals whose skills were
compatible with the goals of the training and who would
interact continuously with the teams formed by the interns.
Figure 2 gives the organization chart of the program.
Fig. 2. Human Resources in the Datacom Internship.
Thus, each team of interns received continuous support
from a technical tutor whose skills and experience were
specific to their respective group, and together they interacted
continuously with two representatives from Datacom, the
client, one for each group, A and B. Interns also received skills
training from a group of teachers who were experts in the respective training subjects.
To ensure the effectiveness of the teaching methodology,
an Educational consultant (the first author of this article) took
part in the program, by defining the teaching methodology in
order to preserve its principles, within the constraints on
resources, and to implement and monitor the model for
authentic assessment proposed. Finally, a management team
comprising an academic coordinator, who was responsible for
monitoring the academic and educational program, and a
project manager (the second author of this article), responsible
for planning and monitoring projects and the delivery of
applications, formed the Datacom Internship team.
E. Control Processes
For planning and monitoring the development of each demand, Scrum [15], an agile project management technique, was used, thus enabling the tutors and managers involved to monitor the interns continuously.
Scrum is an empirical approach focused on people,
developed for environments where requirements emerge and
change quickly. The implementation of Scrum is based on
defining tasks and setting priorities (Product Backlog), made
with the client’s support, and grouped in development units of
short duration (maximum of 4 weeks), called Sprints. The
process is governed by an actor called the Scrum Master, who coordinates monitoring and feedback activities. These activities
are generally conducted during short daily meetings, attended
by the entire team, to pinpoint and correct any shortcomings in
and/or impediments to the development process [15].
The Scrum method also uses a visual board, usually divided
into three columns: Backlog, Doing and Done. Each column
represents the current status of an activity. When a task is
begun, this is indicated by transferring it from the “Backlog” to
the “Doing” column, and its conclusion is evidenced by
moving it to the “Done” column.
One Scrum visual board was created for every demand of
each team.
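As an illustration of the board mechanics just described, the sketch below models the three columns and the movement of tasks between them. The class and method names (Board, Task, start, finish) are hypothetical; the internship itself used physical boards, one per demand.

```python
# Minimal sketch of the three-column Scrum visual board; names are
# hypothetical, and the internship used physical boards rather than software.
class Task:
    def __init__(self, description):
        self.description = description
        self.status = "Backlog"        # every task starts in the Backlog column

class Board:
    def __init__(self, demand):
        self.demand = demand           # one board per demand of each team
        self.tasks = []

    def add(self, description):
        task = Task(description)
        self.tasks.append(task)
        return task

    def start(self, task):
        task.status = "Doing"          # work on the task has begun

    def finish(self, task):
        task.status = "Done"           # conclusion is made visible to the team

    def column(self, status):
        # Tasks currently shown in the given column of the board.
        return [t for t in self.tasks if t.status == status]

# Example: one demand, one task moving Backlog -> Doing -> Done.
board = Board("Embedded software demand 1")
task = board.add("Implement protocol parser")
board.start(task)
board.finish(task)
assert [t.description for t in board.column("Done")] == ["Implement protocol parser"]
```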
The use of Scrum during the Datacom Internship provided
some important contributions. The presence of a product owner (the respective client), who was responsible for the conception of the application and its validation, ensured that the requirements of the demands were made clearer. The Scrum Master's role was shared between the technical tutors and the project manager, who prioritized activities and chaired the daily meetings to monitor how the demands were developing and to assess impediments.
The Scrum visual board, which showed the “Backlog”,
“Doing” and “Done” tasks, provided greater visibility and
transparency to the flow of tasks in each demand, and
identified general issues that many of them shared.
Despite the various benefits of Scrum, it is worth pointing out that one of the groups of interns had some trouble keeping the Scrum board updated and, at times, considered using another monitoring technique. These considerations are discussed in more detail in the section describing how the Authentic Assessment model was applied (Section IV).
Still in the context of control processes, the full model proposed in Section II was applied in this internship, comprising the following perspectives:
1. Content, based on conceptual, practical and
contextual assessments used in the disciplines;
2. Process, based on assessments conducted in project
status meetings and meetings required by the Scrum
methodology;
3. Output, based on assessing the demands from
Datacom;
4. Performance, based on assessing the individual performance of each intern;
5. Client satisfaction, based on assessing criteria defined
by the clients, Datacom.
In the Content perspective, the teacher conducted the assessment based on the content and practices discussed in his/her discipline, on the interactions with interns, and on monitoring that demands were being met. This assessment used a 5-point scale: "excellent" (5); "very good" (4); "good" (3); "satisfactory" (2); "insufficient" (1).
The Process perspective was evaluated by the technical tutors and project manager while monitoring how applications were developed. This monitoring was conducted by means of the meetings recommended by the Scrum methodology, in general on a daily and weekly basis. Bearing in mind that the Scrum agile method is already rather prescriptive, the criteria for evaluations from this perspective were defined as follows: (1) compliance with the frequency of the meetings recommended by Scrum; (2) assiduity and documentation in configuration and change management; (3) undertaking unit testing. Each indicator could assume a value from a simple 3-point scale: "considered" (100%), "partly considered" (50%) and "not considered" (0%).
As to the perspective of Output, the focus was on analyzing the artifacts of the applications produced by the interns. These analyses were conducted throughout the development process under the following criteria: adherence of the code to the defined architecture; code within the Datacom coding standard; code written in accordance with good programming practices (clarity, documentation, reuse, etc.); quality of the documents (form and content); and approval of the sprint (Scrum iteration) by the client. When a demand concerned drawing up documents rather than software code, the code-related criteria received the value "not applicable". Once again, the same simple 3-point scale was used. The technical tutor of each team conducted these assessments.
From the perspective of Performance, seven competencies were assessed: initiative; ease of understanding/learning; teamwork; communication; flexibility; self-development; and results orientation. Due to the subjectivity of this analysis, a 5-point scale was used for this perspective: (1) "needs to develop a lot"; (2) "needs to develop"; (3) "meets the needs of the function"; (4) "has a superior performance"; (5) "is an example for the others". This assessment was conducted by the tutors and project manager and applied in the form of self-assessment, assessment by the technical leader, and peer assessment.
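Purely as an illustration of how the three rating sources might be combined, the sketch below averages the 1-5 scores per competency. The paper does not specify an aggregation rule; the plain average and the names used here are assumptions.

```python
# Illustrative aggregation of self, technical-leader and peer ratings on the
# 5-point Performance scale; the averaging rule is an assumption, not the
# paper's prescribed method.
COMPETENCIES = ("initiative", "ease of understanding/learning", "teamwork",
                "communication", "flexibility", "self-development",
                "results orientation")

def combine_ratings(self_rating, leader_rating, peer_rating):
    """Average the 1-5 ratings per competency across the three sources."""
    return {c: round((self_rating[c] + leader_rating[c] + peer_rating[c]) / 3, 1)
            for c in COMPETENCIES}
```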
Finally, the assessment of client satisfaction was based on criteria normally used when evaluating software factories: meeting deadlines and goals; team productivity; communication and transparency; and technical quality and quality of the final product. This assessment used a 5-point scale similar to that used for Content: "excellent" (5); "good" (4); "satisfactory" (3); "unsatisfactory" (2); "very poor" (1). It was conducted by C.E.S.A.R's Project Management Officer together with the clients' representatives.
IV. APPLYING THE AUTHENTIC ASSESSMENT MODEL
Based on the five perspectives of authentic assessment, it was possible to evaluate the interns' performance in different respects. The Content assessments enabled the subjects of greatest difficulty for the students to be identified, whether due to how their content was approached, their degree of complexity, or an inappropriate assessment method. The perspectives of Process, Output and Client satisfaction enabled the teams to be analyzed, identifying the groups that matured most in the development process and the applications whose components had the greatest technical quality. Finally, the Performance evaluation enabled an individual look at the interpersonal skills that are, in general, developed by PBL-based learning approaches. The following sections present some of these results.
A. Assessment of Content
Table II shows the overall assessment of the interns from the point of view of Content for Groups A and B. The results of these evaluations were announced at the end of each course, thus following the summative approach shown in Table I.
TABLE II. SUBJECTS AND TEAM AVERAGES

Discipline                 | Team A Average | Team B Average
Computer Networks          | 4.8            | 5.0
Agile Project Management   | 4.8            | 4.8
Datacom Products           | 4.3            | 4.4
Linux                      | 3.7            | 3.3
Configuration Management   | 3.9            | 3.9
C                          | 4.4            | -
Linux (advanced)           | 5.0            | -
OOP and C++                | 3.6            | -
OOP and Java               | -              | 3.2
Java (advanced)            | -              | 3.3
Software Testing           | -              | 3.9

5 – Excellent | 4 – Very good | 3 – Good | 2 – Satisfactory | 1 – Insufficient.
Looking at Table II, note that the interns of both teams performed less well in the subject of Linux. Since these points for improvement were identified at the start of the course and discussed with the interns, remedial and reinforcement measures were taken to minimize them. The results of these measures were borne out by Group A's successful performance in the subsequent Advanced Linux subject. A similar situation occurred with the more technical and complex programming language subjects, which also required measures to adapt content for the interns.
Although outside the scope of this analysis, which focuses on evaluating students, it is important to point out that the interns also evaluated the teachers (on aspects such as knowledge, experience, confidence and ethics) and the approach of each subject (on aspects such as clarity of objectives, content, relevance and good references), which resulted in an average of more than 4 (Very Good). This assessment level remained high even in the subjects in which the interns performed less well.
In general, despite occasional difficulties in some
disciplines, the interns' performance remained above level 3
(good), with an overall average of around 4 (Very Good), as
can be seen in Table II.
B. Assessment of Process
Figures 3 and 4 show the evolution of the assessments from the perspective of Process for Groups A and B, respectively, in a formative way. The frequency of these assessments was guided by the development methodology used, in this case Scrum; therefore, each group received an assessment under this perspective at the end of each Sprint. The percentage for Process was calculated by summing the values assigned to each criterion (Section III.E) and dividing by the number of criteria evaluated in that Sprint. Thus, if all three criteria of this perspective received a value of 100% (fully satisfied), the result would be 300% divided by three, totaling 100%. The assessments were conducted by the project manager in conjunction with the technical tutors of each team.
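The calculation just described is simple enough to state as code. The sketch below sums the criterion values (100, 50 or 0, per Section III.E) and divides by the number of criteria actually evaluated; criteria marked "not applicable" (represented here as None) are excluded, as in the Output perspective. The function name is hypothetical.

```python
# Sketch of the per-Sprint percentage: sum of criterion values divided by the
# number of criteria evaluated; "not applicable" criteria (None) are excluded.
def sprint_percentage(criterion_values):
    evaluated = [v for v in criterion_values if v is not None]
    return sum(evaluated) / len(evaluated)

# All three Process criteria fully satisfied: 300 / 3 = 100%.
assert sprint_percentage([100, 100, 100]) == 100.0
# One criterion only partly considered: (100 + 50 + 100) / 3 = 83.3%.
assert round(sprint_percentage([100, 50, 100]), 1) == 83.3
```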
Analyzing the behavior of Group A, shown in Figure 3, some impacts on the process can be seen. This group had communication problems in Sprint 1, which adversely affected its performance. An attempt to use a Web-based monitoring tool instead of the physical Scrum board also ended up adversely affecting this perspective in Sprint 4. The absence of Sprint Review meetings in Sprints 5 and 6, due to the team focusing on producing results, kept the score at this level.
Fig. 3. Evolution of the perspective of Process of Group A.
As for Group B, represented in Figure 4, the team's main difficulty was incorporating the culture of keeping the Scrum board up to date. This problem was not overcome in the first three Sprints, but was gradually overcome in the sprints that followed, through to the end of the project.
Fig. 4. Evolution of the perspective of Process in Group B.
As a general result of the assessment from this perspective, note the high degree of process compliance: above 80% in most of the sprints of both groups. It is also possible to observe the natural difficulty of adopting processes in the early stages of the projects, a consequence of the teams' period of learning and acculturation.
C. Assessment of Output
The graph in Figure 5 shows how the outputs improved throughout Group A's project. Similar to the calculation for Process, the percentage for Output was calculated by summing the values assigned to each criterion (Section III.E) and dividing by the number of criteria evaluated in that Sprint. While the sprints were being carried out, it was observed that the complexity of Sprint 3 made planning difficult, which had an impact on the final result, even though its processes were correctly followed. On the other hand, in the final sprints, this perspective reached 100% and even had an impact on process compliance (the absence of Sprint Review meetings).
Fig. 5. Evolution of the perspectives of Output and Process of Group A.
As to Group B, the perspective of Output remained highly satisfactory, the only variation being in Sprint 3 (due to the complexity of the task, as mentioned above), thus showing a good alignment between Process and Output.
Fig. 6. Evolution of the perspectives of Output and Process of Group B.
The results of the assessments from the perspective of Process revealed a curious aspect of evaluating software professionals in real working environments: a problem of compliance with processes does not always have a negative impact on project outcomes and client satisfaction. After all, processes exist to achieve good results and high satisfaction, and if at some point they impede these, they need to be reevaluated and modified. This observation becomes evident when the two perspectives, Process and Output, are evaluated together.
D. Assessment of Performance
Due to restrictions of time and effort, the assessment from
the perspective of Performance was performed only once in
this program, in mid-March, at which point the teams were
already producing at a stable rate.
Analyzing Table III, which shows each team's average for each competency assessed, it can be seen that the interns demonstrated good performance in all aspects, as they scored higher than 3, indicating, at the least, that the competence developed met the requirements of the function. This evaluation also highlighted the strengths of each group with respect to the more subjective criteria. Group A was outstanding in "self-development" and "ease of understanding/learning", while Group B showed itself to be more "flexible", with greater competence in "teamwork" and "initiative".
TABLE III. TYPES OF PERFORMANCE AND TEAM AVERAGES

Competency                     | Team A Average | Team B Average
Initiative                     | 3.3            | 3.6
Ease of understanding/learning | 3.7            | 3.5
Teamwork                       | 3.0            | 3.6
Communication                  | 3.6            | 3.1
Flexibility                    | 3.4            | 3.8
Self-development               | 4.4            | 3.4
Results orientation            | 3.2            | 3.5

5 – Example for the others | 4 – Superior performance | 3 – Meets the needs of the function | 2 – Needs to develop | 1 – Needs to develop a lot.
Note that these features are well aligned to the demands of
each group. Group A, charged with developing embedded
software, needed to be more investigative and geared to
discovering solutions, while Group B, charged with developing
Java software and testing, needed to be more collaborative and
adaptable.
It is important to point out that this evaluation is individual in character; the individual results are not recorded in Table III for reasons of confidentiality, and feedback was given personally to each intern. In this context, two interns, one from each team, showed that they had developed greatly with regard to the performance aspects. One of them obtained a score of 5 ("is an example for the others") in all aspects. This type of evaluation thus also tends to highlight student leadership within group activities.
E. Assessment of Client Satisfaction
From the perspective of client satisfaction, two rounds of
summative assessment of character were held, one at the
beginning of the practical activities, and the other at the end of
the Program. Table IV summarizes the results achieved by the
two groups, A and B.
TABLE IV. CLIENT SATISFACTION AND ASSESSMENTS OF THE GROUPS

Client Satisfaction | 1st Assessment | 2nd Assessment
Group A             | 4.4            | 4.4
Group B             | 4.0            | 4.3

5 – Excellent | 4 – Good | 3 – Satisfactory | 2 – Unsatisfactory | 1 – Very poor.
The results of the evaluations show a high level of client satisfaction, even at the beginning of the activities, when the teams faced challenges related to adopting processes. One of the reasons identified for this outcome, based on the assessments from the other perspectives, was that the teams constantly targeted project results, which addresses most of the aspects of client satisfaction described in Section III. This trait is also reflected in the performance assessments.
It is important to emphasize that the assessment of client satisfaction in this program was only slightly academic in character, considering that the goal was for Datacom to hire qualified professionals with the skills it needs. This is a common feature of Software Residency programs, which are almost always focused on specific outcomes for the clients involved. This learning environment, more realistic than those of other education programs, reinforces the importance of this perspective within the proposed assessment model.
V. CONCLUSIONS
The model for assessing students put forward in this article sought to evaluate a training course in Software Engineering, based on practices used in the market, with a view to demonstrating the effectiveness of a teaching methodology that preserves principles such as solving real problems within real development environments. Given the difficulty and costs associated with creating such an environment, the "Software Internship" applied in this case study is a good alternative for making such programs viable, since it respects the essential elements of teaching approaches based on PBL principles.
When the various aspects of "Authentic Assessment" are explored, the different points of view for assessment can be matched and analyzed, offering important information both to those who lead the teaching and learning process and manage training, and to clients who require professionals with expertise that meets their wants and needs. In particular, applying the model based on continuous monitoring, prompted by feedback meetings throughout the program, not only enabled points for improvement to be identified, but also allowed strategies to be defined for overcoming the weaknesses found and maximizing the strengths. Furthermore, this model makes it possible to assess the student both from the perspective of group work and individually, thus contributing to improvements specific to each individual, within the criteria evaluated in the labor market. Finally, the case study presented successfully achieved its objectives: of the 18 interns trained, 94% were approved for hire by the client, with only one declining the invitation on his own initiative. Currently, the ex-interns form teams that fully develop new software projects for the Telecom market, involving the partner companies of this program.
ACKNOWLEDGMENT
The results presented here were developed as part of a joint project between C.E.S.A.R (Recife Center of Advanced Studies and Systems) and Datacom, drawing on resources from the "Informatics Law" (Law No. 8.248/91), which legislates for Brazil's electronics industry. Additionally, this program would not have obtained the results presented without the involvement and commitment of the students and of all members of the technical and academic teams, to whom the authors are most grateful.
REFERENCES
[1] J. R. Savery and T. M. Duffy, "Problem based learning: An instructional model and its constructivist framework", Educational Technology, 1995.
[2] P. Tynjälä, "Towards expert knowledge? A comparison between a constructivist and a traditional learning environment in the university", Int. J. Educ. Res., vol. 31, pp. 357-442, 1999.
[3] S. C. Santos, M. C. M. Batista, A. P. C. Cavalcanti, J. Albuquerque and S. R. L. Meira, "Applying PBL in Software Engineering Education", CSEET 2009, Hyderabad, India, 2009.
[4] J. Herrington & A. Herrington, “Authentic assessment and
multimedia: How university students respond to a model of
authentic assessment”, Higher Education Research and
Development, 17 (3), 1998, 305-22.
[5] A. Sampaio, C. Albuquerque, J. Vasconcelos, L. Cruz, L. Figueiredo and S. Cavalcante, "Software Test Program: A Software Residency Experience", International Conference on Software Engineering (ICSE), 2005, pp. 611-612.
[6] R. Tuohi, "Assessment in Problem Based Learning connected with IT Engineering Education", International Conference on Engineering Education & Research, December 2-7, 2007, Melbourne, Australia.
[7] F. Yin, “Applying methods of formative and summative
assessment to problem-based learning in computer courses”, The
China Papers, November 2006.
[8] L. L. Elizondo-Montemayor, "Formative and Summative
Assessment of the Problem-Based Learning Tutorial Session
Using a Criterion-Referenced System", JIAMSE, 2004 Volume
14, pages 8-14.
[9] G. X. L. Tai and M. C. Yuen, "Authentic assessment strategies in problem based learning", in ICT: Providing choices for learners and learning, Proceedings ascilite, Singapore, 2007, pp. 983-993.
[10] J. Fitz-enz and B. Davison, "How to Measure Human Resource Management", 3rd ed., McGraw-Hill, 2001.
[11] J. F. Westat, "The 2002 User Friendly Handbook for Project Evaluation", The National Science Foundation, Contract REC 99-12175, 2002.
[12] C. O. Figuêredo, S. C. Santos, G. H. S. Alexandre, and P. H. M.
Borba, “Using PBL to develop Software Test Engineering”,
CATE, Cambridge, UK, 2011.
[13] S. C. Santos and A. Pinto, “Assessing PBL with Software
Factory and Agile Processes”, CATE, Naples, Italy, 2012.
[14] J. Greenfield, "Software Factories: Assembling Applications with Patterns, Models, Frameworks and Tools", ACM Press, New York, NY, USA, 2003.
[15] K. Schwaber, "Agile Project Management with Scrum", Microsoft Press, 2004.