We distinguish between cognitive competence, captured in Bloom et al.'s cognitive taxonomy, and operational competence, expressed in Simpson's hierarchy. We argue that the skills gap observed by employers can be addressed by designing computing degrees that focus on developing competence rather than on knowledge.
... Observation → Trial → Repetition → Refinement → Consolidation → Mastery. The fundamental implication of this expression of the psychomotor learning model is that repeated practice is necessary to attain the desired goal of developing competency. It was on the basis of this repeated-practice aspect that Bowers et al. proposed Simpson's psychomotor learning model as an alternative to Bloom's cognitive learning model for the development of competency in computing degree programs. ...
... Simpson's and Miller's hierarchies both imply repeated practice to achieve the higher levels. Bowers argues that it is possible to address the skills gap observed by employers by pivoting from "cognitive competence", captured in Bloom's taxonomy, to "operational competence", expressed through Simpson's hierarchy. Assessment of skill development requires some framework that captures the skills needed in a structured manner. ...
Competency-based learning has been a successful pedagogical approach for centuries, but only recently has it gained traction within computing. Competencies, as defined in Computing Curricula 2020, comprise knowledge, skills, and professional dispositions. Building on recent developments in competency and computing education, this working group examined relevant pedagogical theories, investigated various skill frameworks, and reviewed competencies and standard practices in other professional disciplines such as medicine and law. It also investigated the integrative nature of content knowledge, skills, and professional dispositions in defining professional competencies in computing education. In addition, the group explored appropriate pedagogies and competency assessment approaches. It also developed guidelines for evaluating student achievement against relevant professional competency frameworks and explored partnering with employers to offer students genuine professional experience. Finally, possible challenges and opportunities in moving from traditional knowledge-based to competency-based education were examined. This report makes recommendations to inspire educators of future computing professionals and smooth students' transition from academia to employment.
Internet of Things (IoT) provides a thematic umbrella that allows educators to combine various theoretical aspects of computer science with substantial problems in everyday life. As such, building IoT device prototypes has been suggested by many as a means for teaching computer science and software engineering. However, how assessment should be conducted in these exploratory courses is often left vague, and thus, there is a need for applicable assessment methodologies.
This article reports results from three years of action research in teaching prototyping of Internet of Things devices in a practical, problem-based setting. We present an example course outline for arranging learning experiences in IoT prototyping and provide a general assessment framework, along with recommendations for best practices for facilitating personalized learning in similar contexts.
The results highlight that general evaluation criteria can be composed despite the versatile nature and varying complexity of the students' project outcomes. We also underline that students' learning can be evaluated in detail, despite large variations in their prior knowledge of IoT technologies or of new product development when they enter the classroom for the first time.
The purpose of this paper is to qualitatively examine the relationship between problem-based learning, authentic assessment and the role of community in fostering learning in digital contexts. The authors used "Digital Moments" to create a meaningful learning environment and build the online class community. They then collaboratively developed assessment strategies and tools with students following problem-based learning methodologies. Given that the pace of information is rapid and changing, the authors argue that online learning must occur in a context that embraces three concepts: (1) students must be empowered through PBL to choose real-world tasks to demonstrate their knowledge; (2) students are allowed to choose the modality to represent that knowledge and participate in designing the tools for assessing that knowledge; and (3) they do so in a supportive online community built through the sharing of Digital Moments. The paper chronicles the interconnection between problem-based learning, authentic real-world assessment tasks and a supportive online community. This resulted in developing learner autonomy, improving student engagement and motivation, greater use of meaningful self- and peer assessments, and shared development of collective knowledge. Further to this, it builds a foundation from which authentic assessment, student ownership of learning and peer support can occur in an ongoing way as learners make the important shifts in power to owning their learning and becoming problem-based inquirers in future courses. As a result, in order to fully embrace the online learning environment, we cannot limit ourselves to simple text-based measures of student achievement. Stepping into this brave new world requires innovation, creativity and tenacity, and the courage to accept that as the nature of knowledge has evolved in the digital landscape, so must our means of assessing it.
Bloom's taxonomy of educational objectives has been an important source for investigations of curriculum since its development. In the original taxonomy the authors addressed the issues of cognitive and affective objectives in education, and provided a hierarchy of kinds of capability in each of these domains that could be used as evidence of achievement. In addition, the hierarchy of capabilities provides a framework for correlating educational attainment with evidence of qualities that relate to abilities relevant to the performance of professional, or in the case of lower elements of the hierarchy, sub-professional work roles. The authors of the original taxonomy indicated that they believed that there are three domains relevant to educational outcomes: the cognitive, knowledge of and ability to work with information and ideas; the affective, ability to organise, articulate, and live and work by a coherent value system relevant to the capabilities achieved through education; and the psychomotor, ability to perform acts relevant to the field of study. In engineering it is necessary for the student to develop skills in working with the tangible materials of the discipline, because the role of an engineer is to carry out development work on products and systems, to direct other people in the development and manufacture of products and systems, or both. In roles where the engineer must personally perform work related to developmental experimentation, prototyping or contributions to maintenance and construction, it is necessary for the engineer to have appropriately developed psychomotor skills to be able to recognise and handle both test and developmental components and the equipment used to manipulate, work upon, or test those work pieces.
In cases where the engineer's role is to direct the work of others, it is important for the engineer to have an appreciation of the tasks that the engineer calls upon those others to do and to have sufficient experience to understand the potential difficulties and dangers associated with the performance of those tasks. This appreciation will also significantly influence the design activities of the engineer, as the engineer considers the usefulness and usability of the intended product. The paper will present a hierarchical taxonomy of psychomotor skills and discuss these skills specifically from the viewpoint of the needs of engineers.
It is important for both computer science academics and students to clearly comprehend the differences between academic and professional perspectives in terms of assessing a deliverable. It is especially interesting to determine whether the aspects deemed important to evaluate by a computer science expert are the same as those established by academics and students. Such potential discrepancies are indicative of the unexpected challenges students may encounter once they graduate and begin working. In this article, we propose a learning activity in which computer science students made a video about their future profession after hearing an expert in the field who discussed the characteristics and difficulties of his or her work. Academics, professional experts, and students assessed the videos by means of a questionnaire. This article reports a quantitative study of the results of this experience, which was conducted over three academic years. The study involved 63 students, 6 academics, and 4 computing professionals with extensive experience, and 14 videos were evaluated. Professional experts proved to be the most demanding in the assessment, followed by academics. The least demanding group was the students. These differences are more salient when more substantial issues are examined. The experts focused more on aspects of content, whereas the students preferred to concentrate on format. The academics' focus falls between these two extremes. Understanding how experts value knowledge can guide educators in their search for effective learning environments in computing education.
Software engineering (SE) students need not only sufficient technical knowledge and problem-solving ability but also social and interpersonal skills in order to be industry ready. To prepare students for the 'real world', SE educators frequently use 'Authentic Assessment' and 'Project Based Learning (PBL)' approaches in their curricula. However, the level of 'authenticity' should vary within PBL courses offered in different years of a degree program. In this paper, we present and discuss the results of the data collected and analyzed from the first SE course offered to students. The aim of our research is to explore how much authenticity can be achieved in the first SE course. Our study was conducted at the University of Calgary with 64 software development project teams, totaling 229 undergraduate students. The data was collected over three semesters (2016-2018) in order to assess and monitor students' performance. The course design used seven authentic assessments that focused on students' skills while covering a complete software development lifecycle. The results of the data analysis show that students made progress in some areas of problem-solving skills; however, they struggled with their social skills (e.g. people-handling, negotiation, and organizational skills), understanding software quality, and adaptability.
The problem of how best to assess student learning is a fundamental one in education. Changes to computer science curricula seek to emphasise teaching practices that promote deep learning through direct, contextual examination of student performance on tasks that resemble those of practitioners, rather than more traditional methods. This kind of "authentic assessment" is becoming more popular as it appears to incorporate employability skills associated with professional practice into the curriculum in a natural way.
In this paper, we report on an investigation into how computing students themselves understand the terminology of authentic assessment. We give a brief summary of some of the salient points of the theory before using a simple qualitative methodology to analyse responses from a cohort of first year students on their understanding of the term. We produce a learner characterisation of the concept and compare this to those found in educational models of this assessment approach. We comment on the similarities and differences that emerge and draw inferences about its use and the necessary scaffolding that should accompany it in order for it to be successful.
Assessing student learning in the practice setting is one of the most sophisticated and complex forms of evaluation undertaken by registered nurses. The Nursing and Midwifery Council sets standards relating to learning and assessment in practice, focusing on professional values, communication and interpersonal skills, nursing practice, decision making, leadership, management and teamworking. Assessment needs to include evaluation of skill (technical, psychomotor and interpersonal), attitudes and insights, and reasoning. As assessment of student learning is conducted in the practice setting, risks have to be managed, and targets and service standards met. Therefore, it is understandable that mentors may express doubts about their ability to assess student learning rigorously and fairly. It is particularly challenging for mentors to state confidently what represents a demonstration of learning and competence when asked to decide whether a student is fit to practise.
No abstract available. (C) 1990 Association of American Medical Colleges
Donald Clark (2015), Bloom's Taxonomy: The Psychomotor Domain
Afnan Nathir Darwazeh (2017), A New Revision of the [Revised] Bloom's Taxonomy, Distance Learning, 14(3), 13-28
W. R. Dawson (1998), Extensions to Bloom's taxonomy of educational objectives. Sydney, Australia: Putney Publishing
Nigel Shadbolt (2016), Review of Computer Sciences Degree Accreditation and Graduate Employability, Department for Business, Innovation & Skills and Higher Education Funding Council for England, available from le/518575/ind-16-5-shadbolt-review-computer-science-graduateemployability.pdf (accessed 03 June 2019)
E. J. Simpson (1972), Educational objectives in the psychomotor domain, Washington, D.C.
Zahra Shakeri Hossein Abad, Muneera Bano, and Didar Zowghi (2019), How much authenticity can be achieved in software engineering project based courses? In Proceedings of the 41st International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET '19)
Cognology (2019), What is competency based assessment?, https://www.cognology.com.au/learning_center/cbawhatis/ (accessed 10.9.19)
G. D. Herman and R. J. Kenyon (1987), Competency-Based Vocational Education: A Case Study, Shaftesbury: FEU, Blackmore Press; cited in M. Fearon (1998), Assessment and measurement of competence in practice, Nursing Standard
Prachi Juneja (n.d.), Assessment Methodologies to Evaluate Competencies, in Management Study Guide