AALHE 2021
Conference Proceedings
Online, June 7th – 11th
Contents
Technology Helps Assessment; Assessment Helps Technology, by Fiona H. Chrystall, Peter U. Kennedy, and Vincent J. Donatelli
Two Changemakers in Assessment Culture: Summary and Reflections from the 2021 AALHE Conference, by Sheri Popp
Assessing Affective Learning Outcomes through a Meaning-Centered Curriculum, by Misty Song, Vince Nix, and Joe Levy
Help Me Help You: Motivating Campus Enigmas to Become Exemplars, by Kate Oswald Wilkins and Susan Donat
Technology Helps Assessment; Assessment Helps Technology
By Fiona H. Chrystall, Peter U. Kennedy, and Vincent J. Donatelli, Asheville-Buncombe Technical
Community College
Abstract: What do the baking & pastry arts and computer technology fields have in common? At A-B
Technical Community College, they both take assessment of student learning seriously and work hard
to do it efficiently and effectively. Using authentic capstone projects for program assessment, both
programs include field experts in the assessment process, and harness technology in different ways to
gather and analyze the data efficiently. This allows time to focus on what the data shows and
determine what should be done going forward to improve the learning experience for students. It also
facilitates greater connections between industry experts and employers and program faculty, allowing
informed, responsive, and ongoing curriculum development. The strategies employed by the two
disciplines featured here may be applied in a number of different fields. The key is flexibility to allow
each program to find the technological tools that best fit its particular context.
Keywords: Authentic assessment, industry reviewers, capstone projects, technology
Introduction
Capstone courses and projects are a feature of a number of different programs at Asheville-Buncombe
Technical Community College (A-B Tech). Regarded as a high impact practice (Kuh, 2008), capstone
projects have gained popularity particularly at baccalaureate institutions and in smaller, liberal arts
colleges (Jankowski et al., 2018; Padgett & Kilgo, 2012). Most of the academic literature focuses on
describing capstone experiences in four-year degree or graduate programs and covers a variety of
culminating experiences for students.
At A-B Tech, the focus in the Baking and Pastry Arts Department and the Computer Technologies
Department – and in many other career and technical education programs at the college – is on
authentic projects where students undertake a semester-long project in their final semester which
synthesizes all aspects of student learning that have been articulated in the assessment plan for each
program. These authentic projects result in comprehensive demonstration of the creation,
development, and execution of tangible products, as well as demonstrating knowledge and key skills in
the process. Industry experts and instructors provide authentic assessment of students’ culminating
work as a summative assessment. It is an intensive experience for the students and involves a great
deal of guidance and formative assessment and feedback from instructors along the way. For the
programs highlighted here, there can be up to eighteen industry experts and several instructors
involved in the assessment of students’ capstone projects with consideration given to a number of
student learning outcomes. This generates a significant amount of data, which can be overwhelming at a
time when both faculty and students are running on fumes, trying to make it to graduation.
The use of various technological platforms and tools has made the collection of assessment data more
efficient and has significantly reduced the timeframe between student demonstration of learning and
analysis and interpretation of results to identify potential improvements to the student learning
experience. The use of technology is tailored to the needs of each program, yet the end results are
similar – industry experts participate in assessment and provide timely, constructive feedback to
students; assessment results are available quickly, allowing faculty the opportunity to analyze and
discuss the findings while the story behind the numbers is still fresh in their minds; and improvements
can be implemented with little to no time lag.
Capstone Project Format
The Baking and Pastry Arts AAS program and five AAS programs in the Computer Technologies
department use rigorous and comprehensive capstone projects in the final semester of the student
learning experience to assess students’ knowledge, skills, and abilities. Instructors and industry experts
assess the student work together during a culminating presentation in late Spring,
using rubrics and various technologies to facilitate the efficient recording and gathering of assessment
data. Each program has its own preferred method of sharing instructions, information, and student
work with industry reviewers and program faculty. The Baking and Pastry Arts (BPA) program is
accredited by the American Culinary Federation (ACF) and utilizes ACF assessment criteria as a basis for
judging a wide array of baking and pastry products created by students during their capstone project.
The Computer Technologies (CT) programs have developed their own specific assessment rubrics over a
number of years, through ongoing analysis, interpretation, and discussion of student learning
outcomes.
A range of hardware and software, such as the college’s learning management system, the Microsoft
Office suite of products, LinkedIn, iPads, and other external tools, have been employed according to the
needs and context of each capstone project. Some of the Computer Technologies capstone projects
include a greater element of collaborative work among students (Network Management and Systems
Security) and utilize platforms that facilitate student collaboration and communication while other
capstone projects assess student work as individual endeavors (Baking and Pastry Arts, Digital Media,
Software and Web Development, and Information Systems).
The Baking and Pastry Arts program utilizes the rubric and grading guide tool embedded in the college’s
Learning Management System (LMS), Moodle, to assess several aspects of the capstone project which
are assessed by a single person (an industry reviewer who is also an adjunct instructor in the program
or a full-time faculty member); the capstone packet, breads, and cake, chocolate, and sugar showpieces
are assessed in this way. The desserts and pastries, and chocolates and candies - which complete the
requirements of the BPA capstone project - are typically assessed by two judges per student using
customized spreadsheets accessed on iPads during tasting on Capstone Day. This negates the need to
create guest access to the LMS for industry reviewers external to the college and also overcomes the
inability to directly input more than one set of scores and calculate averages within the learning
management system. Verbal feedback is provided to the students by industry reviewers on the actual
Capstone Day and followed up within the next few days with written and numerical feedback uploaded
to the Moodle gradebook. Industry reviewer feedback and scoring are also reviewed by the lead BPA
instructor to check for any scoring that is an extreme outlier or feedback comments that are not
appropriately worded prior to sharing the feedback form with the student. This process has been in
place for a number of years.
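The paragraph above describes recording two judges' scores per student in customized spreadsheets, averaging them, and having the lead instructor check for extreme outliers before releasing feedback. As a hypothetical illustration only, the same bookkeeping could be sketched in a few lines of Python; the criterion names, 0-10 scale, and discrepancy threshold below are assumptions for the example, not the program's actual spreadsheet setup.

```python
# Hypothetical sketch: average two judges' rubric scores per student and flag
# large discrepancies for lead-instructor review. Criteria, scale, and the
# threshold are invented for illustration.
from statistics import mean

CRITERIA = ["flavor", "texture", "presentation", "technique"]  # assumed criteria
DISCREPANCY_THRESHOLD = 3  # assumed: flag criteria where judges differ by > 3 points

def summarize_student(name, judge_a, judge_b):
    """Average the two judges' scores per criterion and flag large gaps."""
    summary = {"student": name, "scores": {}, "flags": []}
    for criterion in CRITERIA:
        a, b = judge_a[criterion], judge_b[criterion]
        summary["scores"][criterion] = mean([a, b])
        if abs(a - b) > DISCREPANCY_THRESHOLD:
            summary["flags"].append(f"{criterion}: judges differ by {abs(a - b)} points")
    summary["overall"] = mean(summary["scores"].values())
    return summary

# Example usage with made-up scores on a 0-10 scale
judge_a = {"flavor": 9, "texture": 8, "presentation": 7, "technique": 9}
judge_b = {"flavor": 8, "texture": 8, "presentation": 3, "technique": 9}
print(summarize_student("Student 01", judge_a, judge_b))
```

In this sketch, any flagged criterion would go to the lead instructor for review before the feedback form is shared with the student, mirroring the outlier check described above.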
The quick move to emergency remote education in Spring 2020 meant that the full-blown realization of
the BPA capstone project could not be carried out. Alas, there was to be no consumption of delicious
baked delights in the name of assessment! However, the creation of themed products and plans to
execute those could still be accomplished and presented electronically in the LMS. The assessment of
this important preparatory work became the focus for determining what the students had learned
throughout the program for the Class of 2020. In Spring 2021, we were able to reinstate the actual
execution of the themed concepts of our BPA students with appropriate protocols in place to keep
everyone safe. Assessment, chocolate, and cake could once more converge in an intense, real-life
experience for all, aided by technology.
The Computer Technologies department has a presentation day for each program to which all faculty in
the department are invited along with industry experts in each particular area of specialization. Prior to
the pandemic, students did an in-person presentation followed by a Q & A session where faculty and
industry reviewers could gain further insight into the creative process, as well as the students’ technical
knowledge and ability to interact in a less structured format than the formal presentation. Digital
means of gathering data for assessment purposes were already established and had been improved
over the years. Each of the five programs in Computer Technologies chose their own tools and
technology for collecting assessment data.
The Information Systems AAS program used spreadsheets to record and gather data from industry
judges, having also provided a set of instructions and folders of materials for review electronically prior
to participating in a virtual question and answer session with the students. The Digital Media
Technology program chose to use Microsoft Office forms to gather expert reviews of student work,
while the Network Management and Systems Security programs utilized a number of technologies,
including building a LinkedIn site for collaboration, a SharePoint site, and using Vimeo for the sharing of
student presentations. When instruction moved entirely online in Spring 2020, the presentations
were recorded on video with the Q & A sessions occurring in a synchronous video conference attended
by industry reviewers, faculty, and all capstone students. This format was refined and continued for
Spring 2021.
Benefits of using Technology for Assessment
Both the Baking and Pastry Arts and Computer Technologies programs have used electronic means of
gathering assessment results for a number of years, so when the pandemic hit in Spring 2020, it was
possible to continue using existing assessment methods with minimum disruption. This allowed faculty
to focus on determining the best options for students to complete and share their capstone project
work for review virtually, rather than in person. While it was not possible for the BPA students to hold
their usual Capstone Day table displays and tastings with industry judges in Spring 2020, it was possible
to increase the focus on the capstone packet to demonstrate the extensive planning process for this
challenging project (see Figure 1 for capstone packet requirements).
Figure 1
Overview of BPA Capstone Packet Requirements
BPA Capstone Packet Elements and Guidelines
1. Idea: Theme must flow through the entire capstone show pieces and dessert
products.
2. Marketing: Business card sample, imaginary menu of the proposed business
operation incorporating the desserts, pastries, and breads you wish to sell (should
fit the theme), costing of the menu you are preparing for tasting, business plan and
budget for the proposed business operation, and a sketch of the layout of the
operation.
3. Portfolio: Must include a biographical paragraph, updated resume, professional
certificates, letters of recommendation, special interests or awards, photos, and
any other pertinent information.
The capstone packet had been identified as an area of relative weakness for BPA students from
assessment results in 2017-18, and curricular improvements put in place in 2018-19 resulted in some
improvement in student performance in this aspect of the capstone project the following year. In Spring
2020, when the capstone packets formed the focus for the capstone project without the ability to
execute the plans contained within them, a further improvement in student performance was gained
with all students meeting the criteria. Having the ability to continue monitoring this aspect of student
performance through the use of technology was a bright spot in an otherwise challenging semester for
BPA students and faculty alike.
The use of digital means to gather assessment data greatly speeds up the timeline from the actual
assessment of student learning to the analysis, interpretation, and use of the results for implementing
improvements. Assessment reports are completed annually during the summer at A-B Tech and BPA
and Computer Technologies are consistently some of the first programs to complete and submit their
reports well before the deadline. The reports detail proposed actions for improvement based on the
assessment results from the academic year that has just concluded.
On several occasions, the results have indicated that a particular skill needs to be practiced more
thoroughly in courses early in the program. Reaching this conclusion by early summer gives
faculty time to put changes in place for the fall for the incoming cohort of students,
ensuring there is no lag between the identification of the problem and the implementation of a
potential solution. This also allows us to assess the efficacy of such implemented actions in improving
student learning within a 2-year timeframe, assuming that at least some of the new students exposed
to the extra practice early in the curriculum will reach the capstone project and demonstrate improved
skills by the end of their second year. An example of this occurred in the Information Systems AAS
degree program in Spring 2020 (see Figure 2 for project overview). Expert reviewers noted that
students had paid insufficient attention to risk analysis and mitigation of risk while completing their
capstone projects. This is an area of increased focus in the IT industry, so our faculty were able to
quickly incorporate this aspect into future capstone projects in a deliberate way and consider where the
teaching of risk management could be further infused into the curriculum to support student learning
of this key skill. Assessment results for the following year showed an improvement in student
performance, despite greater scrutiny of the inclusion of risk management in the student projects.
Figure 2
Overview of Capstone Project in Information Systems AAS program
CTS-289 System Support Project Description/Explanation
You are designing the information system for a small to medium size bicycle shop business. You
will be given most of the data needed to create the products for this business. You will use the
knowledge you learned in previous courses to create this project using Word, Excel, Access, and
PowerPoint. You will create a project that will show the following four required skills: An online
presence; database; documentation; planning/completion. In addition, you need to choose a
minimum of four skills that interest you and match previous classes from the following items:
Network design; security design; operating system choices with rationale; programming skills;
hardware/software decisions; GIS; data analysis/business analytics. You will present this project
to a panel and also have a display at the Capstone Expo.
Required Skills (include all 4): Online presence; Database; Documentation; Planning/completion
Other Skills (minimum of 4): Network design; Security design; Operating system choices with rationale; Programming skills; Hardware/software decisions; GIS; Data analysis/business analytics
A further benefit of using digital methods of assessing student learning is that industry reviewers, who
are often employers in the field, are more easily able to participate in the assessment of capstone
projects. Materials can be shared ahead of time to ensure thoughtful review and to afford reviewers the
opportunity to ask program faculty clarifying questions before live interaction with the
students, if necessary. It also increases the pool of potential expert reviewers as the geographical and
time constraints of travel to and from campus for an in-person event have been removed.
The use of technology platforms, such as the LinkedIn site used by the Network Management and Systems
Security degree programs, has increased employment opportunities for students. Potential employers
were able to see direct evidence of student skills, knowledge, and abilities and students could share
resumes and other information. This resulted in students in the Network Management and Systems
Security degree programs obtaining job interviews and employment, either during or very soon after
their capstone project experience.
Reflections and Audience Responses
Many of the expert reviewers participating in the assessment of student capstone projects are also
members of the Advisory Committee for each program. This allows these individuals a greater insight
into the educational experience of our students and can lead to more informed suggestions for
curricular improvements to better align with current workforce requirements. Following this
presentation at the 2021 AALHE conference, we read the recently published Exemplars of Assessment in Higher
Education (Souza & Rose, 2021) with interest; many aspects of what we describe
here within the A-B Tech Community College context can be found in Chapter 12 of that text.
The Stillman School of Business at Seton Hall University describes how capstone projects and expert
reviewers are used for assessment of student learning at both the undergraduate and graduate level
within the business management programs of that institution. However, the role of industry reviewers
is taken a bit further at Stillman, with industry reviewers serving as coaches and mentors for students as
they work through to the completion of their capstone projects. This provides an additional level of
connection between the workplace and the college curriculum that feeds the workforce pipeline. Our
model of inviting industry reviewers to assess the end point of capstone projects inherently limits the
industry reviewer to directly assessing the end product, not the process. We supplement
that by also having instructors assess the student projects, but adding industry reviewers as additional
assessors of the project development process would allow for even greater insight into the curriculum. It
could also shift the focus from content knowledge and technical prowess alone to better assessing the
development and demonstration of key core skills constantly sought by employers, such as teamwork,
communication, critical thinking, and problem-solving.
Baking and Pastry Arts students complete a work-based learning experience in the summer between
the first and second years of the AAS program. This, coupled with exposure to a cadre of local industry
experts during the Capstone Day in their final semester, often results in students graduating from the
program having already secured positions in the field. While the use of technology cannot replace the
in-person tasting experience required for the BPA capstone, it does provide an additional method of
communicating constructive feedback to students. The BPA Capstone Day is an intense event that the
students take very seriously. The expert reviewers hold an in-person debrief with each student at the
conclusion of their judging on the day. At this point, however, the students are exhausted and
somewhat stressed and may not absorb and remember all that is conveyed to them in that moment.
The backup of legible feedback summaries that they can reflect on after the event and keep for
posterity is a valuable addition to the learning experience for the students and provides excellent data
for the assessment of student learning and curriculum development. The assessment of student
learning in the Baking and Pastry Arts program does not occur solely in this final, intensive capstone
project, however.
Hathcoat (2018) compares Gordon Ramsay's MasterChef competitive cooking show to the assessment of
student learning. He considers the potential merits of generalization (drawing
conclusions about ability from a limited number of observations) versus observation (drawing a
conclusion from a single direct observation of completing a task). Three ways of handling generalization
in performance assessment are suggested: 1) increase the number of observations, 2) restrict the
domain of generalization, and 3) infer what is “possible” instead of what is “typical”. Our BPA program
does all three: 1) students complete productions on Thursdays throughout their program that are
graded components of their classes, 2) they contribute desserts, pastries, and breads to weekly lunch
and dinner menus of different national cuisines, and 3) they create their own themes and recipes for
their capstone which demonstrates what is possible. In addition, the capstone project incorporates
direct observation/tasting of all baking and pastry techniques learned throughout the program. One
BPA student in Spring 2021 summed up the nature of the BPA capstone experience beautifully in a
short video vignette: “My favorite thing is that we got to do one of everything we have covered in our
pastry education.” (Cauble, 2021 BPA graduate).
The capstone project is an obvious place for program assessment to be conducted. Indeed, many
capstones were created for this very purpose. A candid paper by Catherine Berheide in 2007 shares the
perspective of a faculty member reluctant to fully engage in assessment because of the work involved
for little perceived additional insight into the student learning experience. Berheide heralds the use of
the capstone for program assessment purposes as a way to do less work and collect better data.
However, we argue that capstone projects are intensive experiences for both faculty and students
when done thoughtfully and well. They involve a significant amount of planning, guidance, and
formative feedback culminating in comprehensive summative assessment of multiple intended learning
outcomes. The harnessing of technology to do most of the heavy lifting of data collection takes time
and effort to set up and get right but has several benefits for student feedback. Faculty can maximize
input from industry reviewers to complement their own assessments in shorter timeframes and focus
on the most important task: teasing out what the data shows and what should be done to further
enhance the learning experience for our students. The disparate fields of baking and pastry arts and
computer technologies discussed here demonstrate that a variety of tools can be used to make the
assessment process more efficient. While there are many similarities in the approaches taken by each
program, it is important that faculty have the flexibility to utilize the technologies that are most
appropriate for their particular context. The role of the assessment professional is to provide
frameworks and an array of mechanisms for efficient and effective assessment that allows the faculty
to focus on providing the best learning experience possible for our students.
An interesting discussion during the conference presentation revolved around the use of expert
reviewers. A participant shared that the use of expert reviewers had been “prohibited” at their
institution (or perhaps it was at the school or program level – it was not specified) on the assertion that
only faculty could conduct direct assessment of student learning. Confusion over the definition of direct
and indirect assessment was evident at A-B Tech back in 2013 when the Director of Curriculum Quality
Assurance and Assessment position was put in place. A deliberate strategy was adopted to avoid the
use of these terms and replace them with a range of potential assessment measures, listed in
descending order from strong to weak measures of student learning at the program level (Figure 3). Of
course, the order of assessment measures on this descending scale may vary from discipline to
discipline and can be debated but it was generally accepted as a decent rule of thumb across the
college.
You will note that “Recitals, Exhibits, Performance (evaluated by experts)” is fifth on the list. This is not
because industry reviewers are considered unable or unqualified to assess student work; rather, it is a
reflection that most industry reviewers participate in assessment of a final product and have little to no
insight into the process by which the final student artifact was created and developed. This is why, at
our college, we complement assessment by industry reviewers with assessment by faculty also. After
reading the approach taken by the Stillman School of Business in Exemplars of Assessment in Higher
Education (Souza & Rose, 2021), a possible strategy for addressing the limitations of the industry
reviewer purview may be to have local employers or alumni working in the field serve as coaches or
mentors to students throughout their projects, in addition to formally assessing the end products of the
capstone project.
Figure 3
A list of potential measures for assessing student learning at the program level, ordered from the
strongest measures to the weakest (Chrystall, unpublished)
A second factor that potentially decreases the “strength” of the assessment through expert review is
inter-rater reliability, but this is also a consideration for any institution conducting assessment of
student work with faculty groups, and can be mitigated through clear guidelines, definitions, and
norming sessions. At A-B Tech, we believe that the value added by having expert reviewers assess
student work against a professional standard outweighs the potential challenges to this assessment
method. In the world of career and technical education, the faculty generally bring considerable real-
life experience to the curriculum, but full-time faculty also have to consciously find ways to keep
updated with current practices and trends in their particular field, having moved into education from
industry settings. The relationship with advisory committee members and expert reviewers is an
excellent way to stay current with workforce developments and needs and ensure that current trends
are reflected in the curriculum through which we train our students to pursue careers in their chosen
field.
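As a concrete, hypothetical illustration of the inter-rater reliability consideration raised above, the short Python sketch below computes percent agreement and Cohen's kappa for two raters scoring the same set of capstone artifacts. The three-level rubric labels and the ratings themselves are invented for the example and are not drawn from either program's data.

```python
# Hypothetical sketch: checking agreement between two raters who scored the
# same artifacts on a shared rubric. Ratings below are invented.
from collections import Counter

def percent_agreement(rater1, rater2):
    """Proportion of artifacts on which the two raters assigned the same rubric level."""
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def cohens_kappa(rater1, rater2):
    """Observed agreement corrected for the agreement expected by chance alone."""
    n = len(rater1)
    observed = percent_agreement(rater1, rater2)
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum((counts1[c] / n) * (counts2[c] / n) for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Invented ratings of ten capstone artifacts on a three-level rubric
rater1 = ["exceeds", "meets", "meets", "below", "meets", "exceeds", "meets", "below", "meets", "meets"]
rater2 = ["exceeds", "meets", "below", "below", "meets", "meets", "meets", "below", "meets", "exceeds"]
print(round(percent_agreement(rater1, rater2), 2), round(cohens_kappa(rater1, rater2), 2))
```

Low agreement on such a check would signal the need for the clearer guidelines, definitions, and norming sessions mentioned above.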
Summary
Just as there are many ways to assess student learning, there are equally as many technological tools
available to facilitate the inclusion of a range of stakeholders in the assessment process and to do the
heavy lifting of data collection and analysis. The time between the assessment of student work and
discussion of results by faculty can be shortened. Where capstone projects are used for summative
assessment of student learning at the program level, this can be crucial to allow faculty to articulate the
story behind the numbers while it is still fresh in their minds and before nine-month faculty go off
contract for the summer. If the assessment results indicate that changes are needed earlier in the
program curriculum to scaffold student learning towards the intended program outcomes, these can be
implemented with little time lag in the following academic year with preliminary evaluation of
improvement occurring in the capstone in two years for Associate of Applied Science degrees.
Improvements directed at the capstone course or project can be implemented immediately with the
impact measured in the subsequent year. Allowing flexibility for faculty to use the technological
software and hardware relevant to the context of their particular program is key; there is no “one size
fits all” solution. Experience at A-B Tech Community College and the assessment literature tell us that
the process of assessing student learning is most effective when program faculty and assessment
professionals work together at the program or department level to harness tools appropriate to each
context. When every effort is made to avoid trying to put square pegs into round holes, it is much more
likely that the focus will be where it needs to be: striving to provide the best learning experience
possible for our students.
References
Berheide, C. W. (2007, Spring). Doing less work, collecting better data: Using capstone
courses to assess learning. Peer Review, 9(2). https://www.aacu.org/publications-
research/periodicals/doing-less-work-collecting-better-data-using-capstone-courses
Hathcoat, J. D. (2018, Fall). The role of assignments in the Multi-State Collaborative: Lessons
learned from a Master Chef. Peer Review, 20(4).
https://www.aacu.org/peerreview/2018/Fall/Hathcoat
Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters:
Trending toward practices that document authentic student learning. Urbana, IL: University of
Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Kuh, G. D. (2008). High-impact practices: What they are, who has access to them, and why they matter.
Washington, D.C.: Association of American Colleges and Universities.
Padgett, R. D., & Kilgo, C. A. (2012). 2011 National Survey of Senior Capstone Experiences: Institutional-
level data on the culminating experience (Research Reports on College Transitions No. 3).
Columbia, SC: University of South Carolina, National Resource Center for The First-Year
Experience and Students in Transition.
Souza, J. M., & Rose, T. A. (Eds.). (2021). Exemplars of Assessment in Higher Education. Sterling, VA:
Stylus Publishing, LLC.
About the authors
Dr. Fiona H. Chrystall is the Director of Curriculum Quality Assurance and Assessment at Asheville-
Buncombe Technical Community College. She can be reached at fionahchrystall@abtech.edu.
Peter U. Kennedy is an Instructor and Chair in the Computer Technologies Department at Asheville-
Buncombe Technical Community College. He can be reached at peterukennedy@abtech.edu.
Vincent J. Donatelli is the Lead Baking and Pastry Instructor at Asheville-Buncombe Technical
Community College. He can be reached at vincentjdonatelli@abtech.edu.
Two Changemakers in Assessment Culture: Summary and Reflections from
the 2021 AALHE Conference
By Sheri Popp, WEAVE Education
Abstract: The following paper is a descriptive account of the proceedings from a panel discussion at the
Association for the Assessment of Learning in Higher Education 2021 conference. The facilitator leads
professional development courses for assessment professionals and in that capacity is privileged to see
and hear the stories of successful assessment work being done at a wide variety of institutions. The
facilitator’s background working in assessment at a small faith-related institution led her to invite
representatives from two similar institutions to co-present and tell their assessment stories. The
purpose of the session was to showcase their work in leading institutional effectiveness on campus by
building a collaborative culture of improvement-focused assessment work. Participants heard practical
tips for engaging faculty and program leads in building and revising assessment plans at both the
undergraduate and graduate level. Included in this paper are some additional reflections by the author
prompted by other formal sessions and informal conversations in the conference app chats.
Keywords: professional identity, reflection, assessment culture, faith-related, assessment stories
Introduction
In their keynote presentation at the Association for the Assessment of Learning in Higher Education
(AALHE) Annual Meeting in 2021, Jane Marie Souza and Tara Rose shared stories from research for their
recently published Exemplars of Assessment in Higher Education: Diverse Approaches to Addressing
Accreditation Standards. As they talked, they identified gaps in the types of accredited institutions
represented, most notably small, faith-related institutions. The facilitator's background in higher education
began at a very small faith-related school, so the lack of participation and representation by these
institutions is keenly felt. This prompted reflection on the journey of an assessment professional and
what may be preventing the voices of those at smaller institutions from joining those of their peers.
As noted by Clucas Leaderman and Polychronopoulos (2019), understanding the diverse ways we arrive
at and approach our roles as assessment professionals strengthens our work. Additionally,
Polychronopoulos and Clucas Leaderman (2019) call on assessment professionals to reflect on their
pathways to the profession as one means of advancing assessment to the next level of institutional
leadership (p. 2). Following is a brief account of the facilitator’s professional journey.
It seems like in many cases, we fell into our various roles by accident or appointment, but my journey
feels slightly more deliberate, as I come from K-12 education originally, where assessment is part and
parcel of educator training. After 14 years in public education, I had a better developed sense of
assessment writ large, when I moved into higher education teacher training and assessment. You see,
at very small institutions, nobody wears just one hat. We pull double and sometimes triple duty, and
the idea of an ‘office of one’ is often really closer to an ‘office of one half’. I didn’t really fall into or get
‘voluntold’ for an assessment role, but I certainly didn’t really know all that I was responsible for or how
to carry out that work in a higher education setting. So, I went looking for help.
Fortunately, I stumbled upon AALHE and traveled to one of the earliest annual meetings in
Albuquerque, NM, in 2012. I felt like a little fish in a very large pond but was pleasantly surprised at the
support and collegiality I felt. I was also very pleased to learn that despite my newness to the
profession, and the very small size of my institution, my teacher training had not let me down. I knew
most of the right terminology and had already started leading my institution in the right direction to set
us up for success in upcoming reaccreditation with both the Higher Learning Commission (HLC) and the
Association for Biblical Higher Education (ABHE).
Ten years later, I’ve moved into a different role in higher education, but as I said, this most recent
AALHE conference made me think back on the eight years I spent as first an Assessment Director and
then as an Associate Dean for IE. Did I contribute to assessment as a discipline, outside of my
institution? No. Why not? A mix of factors that may sound something like this:
“I’m not really sure I know if what I am doing is right.”
“I don’t have my doctorate (yet).”
“I could maybe share something, but I don’t know where to start.”
“We are a very small school. What we are doing won’t apply to other institutions.”
“We are a faith-based school. What we are doing won’t apply to other institutions.”
“Our context is unique. What we are doing…(you get the idea).”
In those comments, readers may hear echoes of their own experiences as new or rising expert
assessment professionals. The facilitator currently teaches courses that introduce directors, program
leaders, and faculty members to principles of assessment, and often hears similar comments.
Nevertheless, in many cases people are reading the literature, accessing any training they can, and
making a difference at their institutions in big and small ways. Sometimes that work results in
quantifiable change and large data sets that meet the rigors of statistical analysis. But most times the
result is measured in conversations that inspire faculty members to reflect and make brave choices to
change individual courses or programs that improve learning for as few as two or three students a year.
Are these small victories any less valuable because the n isn’t research worthy? As Montenegro and
Jankowski (2020) argue, such work is valuable, most notably to the students who are directly impacted by the
changes. To that end, the facilitator asked assessment leaders from two modestly sized faith-related
institutions to join in a panel discussion at the June 2021 AALHE conference. Following is a brief
summary of the panel discussions, given with permission from the speakers.
Assessment Stories
Central Methodist University
Central Methodist University (CMU) is a small, private college, historically affiliated with the Methodist
church. The residential campus, located in Fayette, Missouri, is a College of Liberal Arts and Sciences
(CLAS), and the extended locations, dual credit, and graduate programs combine to form the College of
Graduate and Extended Studies (CGES). Enrollment at the residential campus is 1,133, with all programs
combined enrolling 6,596. In 2021, the institution adopted a new mission statement: “Central Methodist
University fosters a diverse and caring community, empowering students to become lifelong learners,
committed to academic excellence, prepared to engage in a complex world” (Central Methodist
University, n.d.).
Sandra Wald, Assistant Dean, and Kasey Leech, Director of the Applied Behavior Analysis Program at
CMU, addressed participants to talk about the benefits of one-on-one collaboration in improving
assessment culture. They connected when Wald was looking for a program to help through the process
of refining its assessment plan. Both were surprised to learn that an assessment plan and defined set of
outcomes existed for this program, but none of that was made apparent to Leech when she inherited
the program. Building their relationship, collaborating, and honoring the hard work that had previously
been done in the program were key to transforming Leech’s approach to assessment. Leech is clearly
passionate about her discipline, as so many faculty and program chairs are, and she now views
assessment as one way to ensure graduates who will enter the field of Applied Behavior Analysis are
well-prepared through CMU’s intentionally designed curriculum and assessment plan. For Wald,
understanding these steps as a process that can be replicated with other programs on campus was
particularly valuable as she looks to impact assessment work at the macro level. The story isn’t over for
CMU; as Wald stated, “Assessment is like cleaning your house. As soon as you get started, you realize
all the other things you need to do.”
Truett McConnell University
Truett McConnell University (TMU) is a private Baptist university located in rural Cleveland, Georgia. It
is operated under the auspices of the Georgia Baptist Convention. The total enrollment for TMU is
2,925, with 15 undergraduate programs and seven graduate programs. TMU’s mission “is to equip
students to fulfill the Great Commission by fostering a Christian worldview through a Biblically centered
education in a family friendly environment” (Truett McConnell University, n.d.).
Dr. Tammy Mize, Assessment Coordinator, and Dr. Heather Ayers, Dean of the School of Nursing,
shared TMU’s assessment story. Mize explained her goal of emphasizing the benefits of assessment
work with faculty at TMU, with Ayers confirming that a sense of complacency about assessment and
student achievement had crept in among the faculty in the nursing program. Through the Institutional
Effectiveness (IE) office’s ‘whatever it takes’ approach, again using a lot of one-on-one intervention,
faculty in the program have come to recognize that accreditors are interested in student achievement,
a goal which aligns with the program’s aims, and therefore, accreditors and the accreditation process
are not viewed as punitive. Mize’s goal is to help the faculty at TMU continue moving along a path from
good to better, continuously striving for excellence. Her three-part assessment mantra is keep it
simple, be willing to compromise, and agree on the non-negotiables. Ayers and her nursing colleagues
are the beneficiaries of ongoing attention to faculty development that has seen them move from a
paradigm of ‘doing assessment’ to ‘using assessment’ in order to improve student learning. One small
victory was a recent curriculum sequence change that was initially opposed by some faculty members,
but eventually embraced when assessment data indicated the students benefited from the change.
Mize acknowledged that the work is not finished and is committed to further conversations to cross the
cultural divide that sometimes exists between administration and those in varying disciplines on
campus.
Conclusion
The assessment leaders and program directors at these institutions had a lot of practical advice to
share. Wald and Leech encouraged participants to begin with meaningful conversations to explore the
department’s story and try to understand what works. Having said that, they both highlighted the need
for a willingness to change. Mize and Ayers focused on similar advice, pointing out the importance of
communicating and building a shared understanding of assessment terminology. Mize also encouraged
investing in faculty development for stakeholders as the main way to move the institution forward on
its assessment journey. Their experience and wisdom may serve as inspiration to others. Institutions
large and small with unique missions share many of the same goals for students and view assessment
as a means by which to deliver on promises made to them. If that thinking aligns with the aims of other
assessment professionals and the membership at AALHE, it is the facilitator’s hope that they will be
encouraged to speak up and tell their stories.
References
Central Methodist University. (n.d.). Mission. Retrieved from
https://www.centralmethodist.edu/about/mission.html
Clucas Leaderman, E., & Polychronopoulos, G. B. (2019). Humanizing the assessment process: How the
RARE model informs best practice in educational assessment. Research & Practice in
Assessment, 14(1), 30-40. Retrieved from
http://www.rpajournal.com/dev/wp-content/uploads/2019/07/A2.pdf
Montenegro, E., & Jankowski, N. A. (2020). A new decade for assessment: Embedding equity into
assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University,
National Institute for Learning Outcomes Assessment (NILOA). Retrieved from
https://www.learningoutcomesassessment.org/wp-content/uploads/2020/01/A-New-Decade-
for-Assessment.pdf
Polychronopoulos, G. B., & Clucas Leaderman, E. (2019). Strengths-based assessment practice: Constructing
our professional identities through reflection. University of Illinois and Indiana University,
National Institute for Learning Outcomes Assessment (NILOA).
Souza, J. M., & Rose, T. A. (2021). Exemplars of assessment in higher education: Diverse approaches to
addressing accreditation standards. Stylus Publishing.
Truett McConnell University. (n.d.). Who we are. Retrieved from https://truett.edu/about/who-we-are/
About the author
Dr. Sheri Popp is the Director of Professional Development at Weave Education. She can be reached at
sheri@weaveeducation.com.
Assessing Affective Learning Outcomes through a Meaning-Centered
Curriculum
By Misty Song, Abilene Christian University; Vince Nix, Lamar University; and Joe Levy, National Louis
University
Abstract: This research presentation updates previous research on affective learning domain
assessment. Whilst presenting at AALHE 2020, the researchers discovered the Griffith University
Affective Learning Scale (GUALS) developed by Rogers et al. (2018). Dr. Rogers’ team granted our
researchers permission to utilize their rubric to deepen our affective-learning assessment efforts. This
session will review the GUALS scale and present examples of data analyses. The researchers also
expand the previous session by incorporating additional doctoral coursework data and Student Affairs
Assessment Leaders’ MOOC assessment data. As recent politically charged events have demonstrated,
higher education institutions cannot afford to continue graduating valueless leaders. This research
answers recent calls from Hansen (2019), Hundley et al. (2019), Norris and Weiss (2019), and
Zahl et al. (2019) to, respectively, assess growth mindsets, integrate affective learning outcomes based on
reflection and introspection, pursue transdisciplinary learning and assessment, and measure the
attitudes, skills, and values of professionals.
Keywords: affective learning domain, emotions, attitudes, formative assessment, online learning,
MOOC.
Introduction
Meaning-centered education (Kovbasyuk & Blessinger, 2013) makes the radical assertion that
instructors and students would benefit from tossing out the standardized paradigm, which relies on
cognitive-domain metrics, in favor of one that incorporates learning from all domains. Over the past
21 years, the increasing emphasis on easily measured cognitive benchmarks has grown, arguably, into
an obsession (Nix et al., 2021). We posit that all parties would do well to incorporate affective-domain
learning outcomes, as defined by Krathwohl et al. (1964):
Objectives which emphasize a feeling tone, an emotion, or a degree of acceptance or
rejection. Affective objectives vary from simple attention to selected phenomena to
complex but internally consistent qualities of character and conscience...objectives in the
literature expressed as interests, attitudes, appreciations, values, and emotional sets of
biases. (p. 7)
From the educator’s standpoint, we expect our students to master cognitive knowledge and
professional skills while developing constructive mindsets and positive attitudes, and sustaining certain
interests. For instance, the National League for Nursing (NLN) updated their first core value as “caring,
integrity, diversity, and excellence; advocacy and civility” (NLN, 2017, p. 14). In other words, NLN
requires a qualified nursing professional to not only come equipped with the necessary acquired
knowledge and skills, but also to possess those affective competencies mentioned above; this updated
core value can be considered as newly added affective learning domain learning objectives for nursing
students. The Community College of Aurora (CCA) and the University of Colorado's College of Nursing
(CON) offered an integrated nursing program that listed empathy and maturity as prerequisite qualities
for admission (Integrated Nursing Pathway Program at CCA, n.d.). Despite the long track record of
heavy reliance on cognitive outcomes, it is plain that institutions are seeking both to integrate the
affective domain into their curricula and inculcate their students with values informed by it.
As educators who have served as faculty and student success staff, we have experienced the challenges of
delivering educational programs that aim to reshape students’ value systems or construct their mindsets,
such as we hope to do in social justice trainings. And yet, the fact is that educators often consider how
traditional teaching and learning activities are linked to the cognitive domain, but ignore the affective
domain, perhaps due to the wide acceptance of Bloom’s Taxonomy (Bloom, Madaus, & Hastings, 1981,
as cited in Bolin et al., 2005). Miller (2010) stated that extensive work had been done in the cognitive
domain, but found that educators were still neglecting the affective domain, especially with regard to
professional-values development. But as Spady (1994) observed, the cognitive domain might not do a
good job defining learning outcomes that involve values and other affective factors; the traditional
Bloom’s Taxonomy framework is, simply, an inappropriate construct with which to assess affective-
learning objectives.
Affective Attributes
To properly incorporate the affective learning domain, we need to understand the difference between
affective attributes and affective learning objectives. Affective attributes refer to people’s interests,
feelings, attitudes, emotions, and values (Krathwohl et al., 1964). However, these attributes are not
affective learning outcomes. Affective learning outcomes should represent long-term internalized
values that mediate behavior over extended periods of time; in other words, affective learning should
linger well past any initial learning activity.
We have anecdotally observed students who earned “A”s yet still expressed negative emotions, such as
confusion, anxiety, and anger regarding their learning experiences. Some posed questions such as,
“Why did I need to study this?” or, “How could I apply this content?” even while earning high marks.
Those questions and emotions were the students’ affective attributes, and they communicate that
while these students have achieved the cognitive learning outcomes, their negative affect indicated
that their learning experiences may not have been positive ones. Regardless, affective attributes
cannot serve as evidence for or against achievement of affective learning outcomes; first, one must
understand how these varied components of affect can hinder or contribute to changed mindsets.
Researchers have identified non-cognitive constructs that may determine an individual’s success and well-
being; for example, empathy is considered an essential leadership skill that leads to successful goal
completion (Goleman, 2004). Fredrickson (2000) stressed that cultivating positive emotions such as joy
and contentment not only counteract negative emotions but also “extend an individual’s brain capacity
by building personal resources for coping” (p. 18). Moreover, growth mindsets enable creative and
flexible thinking, intensify resilience, and ultimately improve performance (Fredrickson, 2000; Yeager &
Dweck, 2012). Additionally, Duckworth et al. (2007) discovered that grit accounts for up to four percent
of the variance in successful outcomes. We argue that the benefits of including non-cognitive
constructs in curricula are powerful: affective-domain learning outcomes should be incorporated,
and assessment instruments should be integrated into educational metrics.
Affective Learning Domain Objectives vs. Affective Attributes
So, what is the relationship between the affective learning domain and affective attributes? How can
incorporating the former foster the latter? Pekrun and Linnenbrink-Garcia (2014), followed by Norris
and Weiss (2019), encouraged further research looking at the impact of emotion on learning in the
classroom setting. As early as 1996, Rodríguez et al. suggested that affective learning could
serve as the central causal mediator refining teacher-student relationships. Hanson (2011) found that
affective learning experiences could add value by promoting autonomy and empowerment among
nursing practitioners. Bolkan (2015) believed that positive affective experiences could promote
students’ intrinsic motivation and facilitate cognitive engagement. Johns and Moyer (2018) stressed the
importance of attitudes and beliefs, which support healthy behaviors, and can be considered as one of
the best practices for addressing knowledge and skill development. It is clear to us that maintaining an
awareness of students’ affective reactions in classroom settings is crucial so that educators can
reinforce positive affective experiences for students. Our recent research (Nix et al., 2021)
demonstrated that emotions and attitudes may either catalyze or inhibit affective learning for adults
enrolled in online coursework; that team of researchers has continually refined a mental model (Figure
1) demonstrating the structural relationship of affective attributes to affect and behavior.
Figure 1
A Mental Model of the Relationships Between Values, Emotions, Attitudes, Affect, and Behavior
The model attempts to illustrate how affective attributes are ‘stacked’ to form affect, which serves as a
mediator for behavior. The outer ring represents what is visible to observers; moving inward, we find
affect, which onlookers may be unable to concretely identify. Hidden deeper inside the individual are
attitudes, a relatively short-term class of constructs, but critical to the development of the three basic
categories of affect—constructive, positive, and contrary—described by Arora and Sharma (2018).
Deeper still, we find emotions, fleeting feelings that, repeated in context and over time, play a role in
the development of attitudes. Finally, at the center, we find values, deeply-held convictions which may
be culturally influenced more than any other affective attribute. The diagram is meant to convey depth,
as in a deep well; considerable effort is required to access values, which lie at the very bottom of the
well. Each of these affective attributes are discussed separately, later in this paper.
While it is necessary to incorporate affective domain learning outcomes into curricula, there will be
growing pains as these new metrics are grafted onto the traditional cognitive-metric-focused
educational framework. Zahl et al. (2019) reported that the Center for the Advancement of Pharmacy
Education advocated curricular change, specifically including concrete assessments of affective learning
outcomes (ALOs) measuring the attitudes, skills, and values that are unique to the roles of
professionals. Hansen (2019) claimed that effectively assessing the whole student would have to
examine affective learning outcomes, such as mindsets and social intelligence, and recommended
cross-disciplinary research from fields such as sociology, psychology, anthropology, and behavioral
economics. Adopting the affective learning domain into the traditional cognitively-dominated
instructional design paradigm would require widespread efforts along these lines. For instance, the
Community College of Aurora and the University of Colorado’s College of Nursing might have to
establish and apply comprehensive assessment tools to pre-evaluate nursing applicants’ empathy and
maturity levels as well as to assess academic competencies. It would be a time- and effort-consuming
process, but potential improvements to teaching, training, and developmental-programming
effectiveness could provide excellent returns. Affective assessment that accounts for students’
affective attributes such as motivations, feelings, attitudes and emotions towards learning, would
provide a holistic vantage for evaluating student learning and development. Moreover, the efforts
spent on engaging affective learning create and maintain emotional attachments and communication
channels between institutions and students, which may help institutions retain
students (Bolin et al., 2005).
Conceptual and Theoretical Frameworks for Learning
The authors relied heavily on Kovbasyuk and Blessinger’s (2013) meaning-centered frameworks for
education and learning. Meaning-centered learning depends upon the students’ own viewpoints to
inform content and structure, including the integration of phenomenological designs. Innovative
teaching and creative learning guide the development of the course and learning activities stressing
dialogue and collaboration with students, giving them opportunities to be active decision-makers
guiding their own education.
Evolution of Learning Theories
From the advent of public education in the 19th century until the 1980s, learning theories evolved,
growing from classical conditioning models into paradigms including humanistic elements. Affective
learning was not considered important, and even as late as 1987 Dr. Skinner said in an interview
(Goleman, 1987):
If I had it all to do again, I would still call the mind a black box; I would not use any of the
new techniques for measuring information processing and the like. My point has always
been that psychology should not look at the nervous system or so-called mind - just at
behavior.
Bridging understanding across the complex phenomena of learning leads to what Kincheloe (2008)
describes as “critical constructivism.” This paradigm posits that, because knowledge is not an external
object but rather taken in through cultural and emotional lenses, dialogue is necessary to achieve
mutual understanding. By dismantling the power imbalances that reproduce the status quo, critical
constructivism:
● Encourages greater personal and social consciousness, helping to develop freedom of thought
that recognizes authoritarian tendencies and connects knowledge to power.
● Motivates people to take constructive action, including repair work or de-construction of
undesired structures.
● Theorizes that the connection between power and knowledge maintains the status quo,
anointing certain groups and institutions as the gatekeepers of knowledge.
● Holds that powerful groups and influential people maintain their knowledge construction
hegemony by continually undermining alternative routes to learning. (Nix et al., 2021, p. 4)
And so we ask, “Have the structures and institutions we’ve built for learning fostered knowledge as a
benefit for everyone, or only a select few?” If we wish for education to serve as a public good, yet find
the latter holds true, then deconstruction and reconstruction work must follow. Gredler (2009) opined
that any justice-minded framework ought to consider the intersections of personal, social, and cultural
factors. Therefore, micro- and macro-level examinations are necessary to achieve holistic learning
outcomes.
Kovbasyuk and Blessinger (2013) defined meaning-centered education (MCE) as an “approach that
facilitates the conscious integration of new [and] prior learning across all domains based on personal
meanings about oneself in relation to the world” (p. 20). In the same volume, they defined meaning-
centered learning (MCL) as “a human centered approach that facilitates the holistic integration of all
learning domains... through diverse life contexts, which motivates learners to apply meaning-based
principles” in their own lives (p. 18). MCL fosters self-determined personalities and self-evolution,
through multiple dimensions of meaning-making including phenomenological, philosophical,
psychological, and sociological. This framework fits neatly inside the construct of critical constructivism
(Kincheloe, 2008), providing a foundation which we can use to begin to incorporate ALOs into modern
curricula.
Emotions
Baumeister and Bushman (2007) conceptualized the experience of an emotion as “a subjective state,
often accompanied by a bodily reaction (e.g., increased heart rate) and an evaluative response to some
event” (p. 61). Emotions include reactions and judgments as interactive core elements, and research
recognizes that such behavior stems from attitudes, which are in turn formed from values (Izard, 2010).
Studies have identified the most powerful emotions in terms of the consequences they may have on an
individual’s productivity (Ortony & Turner, 1990) or propensity to learn; a landscape of defined positive
and negative emotions has emerged. Repeated exposure to conditions that elicit the same emotions
has long-term effects on attitude formation and might eventually dictate behavior. Among the
positive emotions, joy, satisfaction, and contentment have the greatest positive impact on behavior,
while anxiety, fear, and confusion have the strongest negative effects. Immordino-Yang and Damasio
(2007) suggested that emotions are attached to learning in the classroom and may dictate information
retrieval; this suggests that the affective domain is integral to learning and academic success.
Attitudes
Two separate groups of researchers (Katz, 1960; Smith, Bruner, & White, 1956) spurred the
development of functional attitude theory. Working independently, they derived lists of functional
attitudes, and their parallel findings ensured that these attitudinal frameworks were solidified as the de
facto paradigm for more than two decades (Snyder & DeBono, 1985). Katz defined his categories, but
he did not offer any methods for conducting empirical research; it fell to later researchers (Debono,
1987; Debono, 2000; Herek, 1986, 1987, 2000; Locander & Spivey, 1978; Petty & Wegener, 1998;
Shavitt, 1990; Shavitt, Swan, Lowrey, & Wanke, 1994) to devise innovative methods for utilizing this
attitude construct in research projects. Our own research was built on Katz’ original framework, with
four categorical levels of attitude as a controlled nuisance variable; attitude was an exploratory factor.
Neurobiology of Values
Affect includes an array of emotions, attitudes, and values. As we have mentioned, noticing (and
assessing for) the presence of those attributes in students is not the same thing as assessing for
affective learning. Neurobiological research suggests that values are not hard-coded, but rather are
categorized and chosen as decisions are made; researchers used magnetic resonance imaging (MRI) to
map what happens in our brains during decision-making (Davis, 1992; Forbes & Grafman, 2010; Miller,
2001). That research shows that values are not as immutable over the long-term as sometimes
anecdotally assumed; values have informed our affect and long-term impressions, but they are not
necessarily the basis for our imminent decisions. At best, abstract representations of values exist in two
prefrontal cortices; those representations seem to be reinterpreted and calculated at the time of
decision-making (Padoa-Schioppa, 2011). Indeed, what happens as the impulses cross from the limbic
system into the prefrontal cortex resembles combat, wherein the constructs that make the strongest
impressions are supported by heritability, and the process is highly susceptible to being manipulated by
serotonin (Clark, Chamberlain, & Sahakian, 2009). MRI gives evidence that interaction occurs between
the regions as people synthesize cognitive information and characterize moral judgements at the same
time (Forbes & Grafman, 2010). The impact of culture on these processes is not fully understood and
requires additional research.
Authentic Formative Assessment
Schneider and Preckel (2017) conducted a systematic review of previous meta-analyses investigating
105 correlates associated with achievement in higher education. Three variables significantly predicted
learner achievement: social interaction, meaningful learning, and assessment. Any assessment system
should be robust and include all elements of the course (Gatignon, Tushman, Smith, & Anderson,
2002). According to Schneider and Preckel (2017), “Teachers with high-achieving students invest time
and effort in designing the microstructure of their courses, establish clear learning goals, and employ
feedback practices” (p. 565). We built our course curriculum with the express intent of incorporating
student feedback and assessing ALOs.
Online Learning Pedagogy
Salmon's (2013) five-stage learning model was integrated into the curriculum of our doctoral-level
strategic planning course. Pertinent to this study were the fourth and fifth stages of her model, which
address how learners might construct and utilize knowledge:
● In the fourth stage, learners become comfortable working in the online environment; the
learning management system is freely utilized for conferencing, collaborative learning exercises,
and team projects, and knowledge is created through these activities.
● In the fifth stage, learners achieve contentment with and have synthesized their newfound
knowledge for goal-setting, discovery, reflection, and confidently presenting information to
others.
Affective Learning
Krathwohl et al. (1964) set forth affective learning outcomes as “characterization by a value or value
sets” (p. 184). The affective taxonomy levels are "ordered according to the principle of internalization...
the process whereby a person's affect... [grows] to a point where the affect is 'internalized' and
consistently guides or controls the person's behavior” (Seels & Glasgow, 1990, p. 28). The affective
learning domain contrasts with the cognitive-focused model and represents vastly different goals for
learners that sit atop the learning taxonomies: mental tasks are the desired outcomes of cognitive
learning, whereas states of mind—affects—are the focus for affective learning. Those affects, which
stem from context and experience, can be powerful tools; for example, constructive affect (as defined
by Arora & Sharma, 2018), may foster a growth mindset and could be useful for organizational change-
agents. When examining the affective learning domain and identifying assessment points, it is
imperative to note that the domain is divided into five progressive levels: 1) Receiving/attending, 2)
Responding, 3) Valuing, 4) Organization, and 5) Characterization by value or value complex (shortened
to one word for the figure). Each level is embedded with assessable milestones that may indicate
advancement through that level. Figure 2 illustrates the taxonomy and respective assumptions for
progressing through each level and across each sub-level as described in Krathwohl, et al. (1964).
Figure 2
The Affective Learning Domain
Historically, the affective domain has been viewed as a potential obstacle to learning and described in
unflattering terms. Consider Edward DeBono’s “Six Thinking Hats” model (DeBono, 1985), where
different colors of hats were used to represent different modes of thinking. Red hats represented
affective thinking and were portrayed as emotional and illogical. Even now, in preparation for annual
reviews, faculty must decide whether the affective responses of students have an undue influence on
course ratings; some might argue that course reviews primarily measure affective attributes rather
than cognitive learning outcomes. If we are to encourage institutions and individuals to value affective-
domain outcomes, we need to change these views. And to do that, we have to devise useful metrics by
which to measure ALOs--which requires that we identify the attributes of those desired ALOs in the first
place.
Assessing Affective Learning Outcomes
Two researchers analyzed open-ended reflective student text responses from Level 2 (Kirkpatrick, 1994)
evaluations. While the hierarchical top level of “characterization” was the desired ALO, any evidence of
affective learning was coded as the students’ reflective pieces were read and reviewed using the Griffith
University Affective Learning Scale (GUALS; Rogers et al., 2018), illustrated in Figure 3.
Figure 3
The Griffith University Affective Learning Scale (GUALS)
Note that the GUALS has a seven-point ordinal scale for rating the five hierarchical levels of affective
learning; this image is used with permission from the author.
Procedures
Simonson, et al. (2015) recommended using Kirkpatrick’s (1994) evaluation framework for assessment
in distance education. Level 1 evaluations measure reaction to the learning event, course materials,
and a trainer’s perceived likeability or effectiveness; the Level 1 evaluation construct is an indirect
measure. However, Level 2 evaluations explore respondents’ deeper reflections and offer direct
evidence of learning. The first two evaluation levels are listed below, preceded by their shorthand
labels, as in Simonson et al.:
1. Reaction—Did they like it?
2. Learning—Did they learn it? (pp. 308-309)
The researchers employed focused comparisons of mixed data from previously completed course
assessments. According to Salkind (2010), this fits into a post-hoc or a posteriori analysis framework.
For this study, ordered response items were used, allowing students to self-rate their relative
agreement with five statements, corresponding to five elements of the course:
● The learning activities were effective.
● Instructions were clear and easy to follow.
● I learned something I had not known before this week.
● The learning activities were engaging.
● I struggled with comprehension for this week’s learning activities.
Quantitative data were collected from the Level 1 evaluations via ordered response items, while
qualitative data were collected from the Level 2 evaluations through students’ reflective formative
assessment prompts. The formative assessments from Level 1 evaluations were included in the course
so that instructors could improve instruction after a summative review of the weekly
ratings. Based on earlier research regarding the need to assess reactions to change (Gatignon et al.,
2002), the instructors implemented weekly formative assessment into the course. Data from these
items were analyzed using the Minitab statistical analysis program. Level 2 evaluation prompts asked
students to examine the most interesting or the most useful constructs from their learning activities.
We included one ALO with the course learning outcomes (CLOs): Characterize organizations through
analyses of strategic plans. Researchers coded textual data using the MAXQDA qualitative data analysis
software. The data were coded for affective learning at two levels: 1) evidence of affective learning and 2)
level of affective learning. The GUALS (Rogers et al., 2018)
was utilized for second-level coding, with permission from the authors.
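As an illustrative sketch only (not the authors' pipeline), weekly Level 1 ratings of this kind could be organized and summarized per item before a statistical package such as Minitab is used for formal analysis; the item labels and values below are hypothetical shorthand for the five statements listed above.

```python
# Illustrative sketch: organizing weekly Level 1 ordered-response ratings.
# Item labels and ratings are hypothetical shorthand for the five course
# statements listed earlier; the study's formal analysis used Minitab.
import pandas as pd

ratings = pd.DataFrame({
    "week":   [1, 1, 1, 2, 2, 2],
    "item":   ["effective", "clear", "engaging", "effective", "clear", "engaging"],
    "rating": [4, 5, 3, 5, 5, 4],  # ordered responses, e.g., 1 = strongly disagree ... 5 = strongly agree
})

# Median rating per item per week gives instructors a quick weekly summary
# they can act on before the next module.
print(ratings.groupby(["week", "item"])["rating"].median())
```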
Nested within the organizational institutionalism framework are several classic sociological theories
that offer interesting and exciting ways to re-invigorate how those works are viewed and incorporated
into contemporary research. Tucked away, nearly hidden in the theories of practice, we find such a
gem: Harold Garfinkel’s ethnomethodology (1967), which holds that reality is only knowable by how
participants restore order after a breach event. Participants will continue restorative work until their
actions and the organizational procedures are publicly accountable. If we want to find a silver lining in
COVID-19, it is that we can consider it the great breach-event. Assessing student reflections as they
were navigating several crises during 2020 allowed course instructors opportunities to peer into a
world not often seen, and responding to students contextually began to deepen dialogue. It soon became
clear: the instructors were learning, too.
Population Characteristics
Two doctoral cohorts, registered for the Strategic Planning and Resource Allocation course over a
period of two years, provided data for this study. Seventy percent (n=84) of the students were women.
Cohort one consisted of 56 students (39 women and 17 men), while cohort two included 64 students
(45 women and 19 men). Eighty percent were working in primary and secondary education (K-12), and
15% worked in tertiary education. Five percent of the students worked in nonprofit and for-profit
organizations outside the education industry, including active-duty military-officer personnel. All
students were tasked to complete Level 1 and 2 evaluations each week. After both 8-week courses,
there were 838 responses to the formative assessment tasks.
Data Analysis and Results
Researchers utilized both fixed-effects and general linear model routines in Minitab statistical software.
For any variable that appeared to have a statistically significant impact, we performed an individual
Kruskal-Wallis test. The effects of attitude on the affective learning outcome are demonstrated in Table
1.
Table 1
Kruskal-Wallis Test: GUALS-Score Versus Attitude

Descriptive Statistics
Attitude            N     Median   Mean Rank   Z-Value
Egoism              222   3        225.0       -13.96
Knowledge           412   5        529.2        12.90
Utilitarian          73   2        242.7        -6.53
Value-expressive    131   5        502.6         4.28
Overall             838            419.5

Test
Null hypothesis: H₀: All medians are equal
Alternative hypothesis: H₁: At least one median is different
Method                   DF   H-Value   P-Value
Not adjusted for ties     3   282.30    0.000
Adjusted for ties         3   290.66    0.000
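For readers who want to reproduce this kind of group comparison outside Minitab, a minimal sketch follows. It is not the authors' code; the data frame below is hypothetical, and scipy is used only to illustrate a Kruskal-Wallis comparison of GUALS scores across attitude groups like the one reported in Table 1.

```python
# Minimal sketch, not the authors' analysis: the study used Minitab.
# Hypothetical data illustrating a Kruskal-Wallis comparison of GUALS
# scores (1-7) across the four attitude codes.
import pandas as pd
from scipy.stats import kruskal

df = pd.DataFrame({
    "attitude": ["Egoism", "Egoism", "Knowledge", "Knowledge", "Knowledge",
                 "Utilitarian", "Utilitarian", "Value-expressive", "Value-expressive"],
    "guals":    [2, 3, 5, 6, 4, 2, 3, 5, 6],
})

# One array of GUALS scores per attitude group; kruskal() compares their
# distributions without assuming normality.
groups = [g["guals"].to_numpy() for _, g in df.groupby("attitude")]
h_stat, p_value = kruskal(*groups)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```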
Attitudes indicative of knowledge and learning appear to have a powerful impact on affective learning;
ego-defensive attitudes served as extreme barriers to achieving the desired ALOs, while contentment
was strongly correlated with success. Attitudinal shifts across the cohorts were salient and statistically
significant, χ2(1, N = 838) = 30.04, p = .000, η2 = .03, though with a small effect size. Researchers
found it interesting that value-expressive and ego-defensive attitude codes increased to the extent
observed. Additional qualitative analysis and qualitizing revealed that those codes were most
associated with events extrinsic to the course itself, primarily weather/climate disasters and/or
significant familial COVID-related disruptions in students’ lives. Perhaps fuzzy-set qualitative
comparative analysis (fsQCA) would be of use for digging deeper into such data sets.
As mentioned earlier, contentment was the emotion most clearly associated with ALO attainment. A
follow-up Kruskal-Wallis test demonstrated extremely significant differences between GUALS score
medians by emotion, χ2(8, N = 838) = 416.79, p = .000, η2 = .49. Earlier, Nix et al. (2021) described an
original code, “anxiety no worry” (ANW). Both the data coded ANW and the originally coded anxiety-
worry (AW) served as significant barriers to affective learning, as evidenced by Table 2. The
researchers may eventually remove or enhance this code in future projects as we explore fuzzy-set
(Rihoux & Ragin, 2009) data analysis.
Table 2
Kruskal-Wallis Test: GUALS-Score Versus Primary Emotion

Descriptive Statistics
Primary emotion     N     Median   Mean Rank   Z-Value
*                    33   1        145.6        -6.63
Anxiety no worry     90   3        344.6        -3.11
Apathy               13   1         55.5        -5.46
Anxiety-worry       167   3        250.4       -10.09
Confusion            36   3        229.2        -4.82
Contentment         359   6        602.2        18.91
Happiness            12   4        387.9        -0.46
Joy                  19   5        445.1         0.47
Satisfaction        109   3        327.0        -4.28
Overall             838            419.5

Test
Null hypothesis: H₀: All medians are equal
Alternative hypothesis: H₁: At least one median is different
Method                   DF   H-Value   P-Value
Not adjusted for ties     8   404.80    0.000
Adjusted for ties         8   416.79    0.000

Note. All negative emotions with fewer than 10 coding instances were combined into the * category for
final analyses.
One of the more surprising findings is that satisfaction
appears to stunt learning in the affective domain. Many more studies ought to be conducted, and much
more representative data need to be collected and analyzed regarding this result. Educational
assessment, as a profession, is built on the assumption that satisfaction is positively related to learning;
indeed, across industries we rely on satisfaction to inform the trainers and facilitators of learning
exercises through Level 1 evaluations (Kirkpatrick, 1994). The implications of a generalized finding
which corroborates a link between satisfaction and stunted learning would be staggering.
Leaving aside that surprising finding, the bottom line here is that we found an emotional impact on
attainment of affective learning outcomes. Instructors and student development professionals are, if
willing, able to feel the authenticity of the student; as Nix et al. (2015) demonstrated, when
instructors, paraprofessionals, and professionals forge authentic connections with students, attainment
of cognitive and affective learning outcomes is facilitated. As evidenced earlier in this report, analysis of
this data type may be challenging, but not insurmountably so. Rihoux and Ragin (2009) demonstrated the
feasibility of calibrating partial membership in categorical and ordinal variables, a useful tool in this line
of research since attitudinal and emotional conditions do not always fit the sets we created to make
them accountable, or even fit neatly into categories at all! However, they either stunt or catalyze
learning. The researchers recoded and condensed the data into three categories: negative emotions,
positive emotions, and confusion. Table 3 provides a closing snapshot for researchers, instructors, and
advisors to consider.
Table 3
Kruskal-Wallis Test for GUALS-Score Versus Emotion

Descriptive Statistics
Recoded primary emotion   N     Median   Mean Rank   Z-Value
Negative emotions         303   3        258.6       -14.48
Confusion                  36   3        229.2        -4.82
Positive emotions         499   5        530.9        16.17
Overall                   838            419.5

Test
Null hypothesis: H₀: All medians are equal
Alternative hypothesis: H₁: At least one median is different
Method                   DF   H-Value   P-Value
Not adjusted for ties     2   261.89    0.000
Adjusted for ties         2   269.65    0.000
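As a hedged sketch of the recoding step just described, the mapping below collapses the primary-emotion codes from Table 2 into the three categories reported in Table 3. The category assignments are inferred from the group sizes in the two tables, not taken from the authors' codebook; the authors performed this step in their own workflow.

```python
# Sketch of the recoding described above; the mapping is inferred from the
# group sizes in Tables 2 and 3, not taken from the authors' codebook.
import pandas as pd

recode_map = {
    "*":                "Negative emotions",  # low-frequency negative codes
    "Anxiety no worry": "Negative emotions",
    "Anxiety-worry":    "Negative emotions",
    "Apathy":           "Negative emotions",
    "Confusion":        "Confusion",
    "Contentment":      "Positive emotions",
    "Happiness":        "Positive emotions",
    "Joy":              "Positive emotions",
    "Satisfaction":     "Positive emotions",
}

# Collapse a column of primary-emotion codes into the three categories.
emotions = pd.Series(["Contentment", "Anxiety-worry", "Confusion", "Joy"])
print(emotions.map(recode_map).value_counts())
```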
Massive Open Online Course (MOOC) Setting
In addition to the strategic-planning course, data from Applying and Leading Assessment in Student
Affairs, a massive open online course (MOOC), were used for additional analyses of affective learning. It
is important to give background context to the MOOC, which was created by a body called the Student
Affairs Assessment Leaders (SAAL). This group organized the course in order to pull together resources
to support folks doing assessment work; these individuals do not typically have access to extensive
academic preparation, training, or professional experience before entering the field (Kuh et al., 2015).
Indeed, it has been rare, historically, for institutions to employ assessment professionals on the
student-affairs side (Roper, 2015); most people responsible for assessments are thrust into
those positions rather than hired for their assessment experience (Levy et al., 2018).
SAAL, hoping to address these chronic deficiencies in training and hiring competent assessors, brought
several volunteers together throughout 2015-2016 to think about the typical assessment cycle. They
investigated commonly encountered issues, assembling a series of seven documents that covered
various needs, information, and resources that could be useful for a would-be assessment professional.
In partnership with Colorado State University, SAAL used these papers as the foundation for a fully
developed eight-module course that first ran in 2017. The course has run once a year since then, with a
significant development in 2020, when a partnership with National Louis University allowed course
completers to earn elective credit in that university’s Ed.D. program in Higher Education Leadership.
This course is totally self-paced. From day one, students can access any of the course’s modules. While
students may move as fast or slow as they like, the instructors teach one module per week over eight
weeks. As far as content is concerned, there are lecture videos plus personal takes from instructors and
students sharing information as to how the weekly topics resonate for them, or how the information
from the course has proven useful. There are assigned readings, as well as further-learning content
available for those who wish to dig deeper into the specifics of any particular assessment topics.
There are eight discussion boards, seven quizzes, and for the first time this year, two written
assignments. To successfully complete the course and earn the badge, participants must earn a 75% or
better on each individual quiz; because the written assignments were a new addition to the course,
there was no threshold score for participants to meet; they simply needed to make a good faith effort
to complete the assignment. Furthermore, students can choose to join groups, based on institutional
type, for smaller group engagements in discussion boards, and can opt to share contact information
and professional interests so participants can connect and follow up during and after the course. When
these courses have over a thousand participants signed up, it makes for a lot of folks engaging and
learning from one another, as a whole and in small groups!
The instructors guiding the class also provide a few live webinars. The first one occurs before the
course begins, to promote it and prep participants for what to expect. The last one (after unit-
embedded live webinars too) bookends the course during the last week, offering students and the
instructors an opportunity to reflect on the experience. The webinars are recorded so that folks who
cannot make it are able to view and post comments as a method of asynchronous engagement. These
opportunities are the only live, synchronous settings to engage and obtain feedback from participants.
Otherwise, instructors benefit from participant feedback in discussion boards, emails, and via the
course evaluation.
While SAAL does its own course analysis, that organization and the class instructors have benefitted
from partnering with the research team of Nix, Song, and Zhang. This partnership has given course
instructors the opportunity to not only learn more about affective learning, but also to see the impact
the open course has on its participants’ achievement of ALOs. While the goal of the course is simple (to
support those engaging in assessment work), this exploration of the extent and type of impact – beyond
what might be expected from a solely cognitive-based paradigm – is powerful and gives affirmation to
the great work of the instructors leading the course.
Summary and Conclusion
This research extended earlier studies (Nix & Song, 2020; Nix et al., 2021) by incorporating MOOC data
into the analyses. As such, a new coding team had the opportunity to investigate the reliability of the
GUALS. Inter-rater reliability was assessed using Minitab; Fleiss’ kappa was 0.93 (95% CI: 0.92–0.96) for
rater agreement. Table 4 provides a snapshot.
Table 4
Inter-Rater Reliability for GUALS-Scores

Between Appraisers Assessment Agreement
# Inspected   # Matched   Percent   95% CI
838           792         94.51     (92.75, 95.95)

Fleiss’ Kappa Statistics
Response   Kappa     SE Kappa    Z         P(vs > 0)
1          0.91526   0.0345444   26.4951   0.0000
2          0.65154   0.0345444   18.8608   0.0000
3          0.90054   0.0345444   26.0691   0.0000
4          1.00000   0.0345444   28.9482   0.0000
5          0.99625   0.0345444   28.8397   0.0000
6          0.99355   0.0345444   28.7616   0.0000
7          1.00000   0.0345444   28.9482   0.0000
Overall    0.93484   0.0147045   63.5747   0.0000
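For context, a minimal sketch of how agreement statistics like those above can be computed with open-source tools is shown below. The study's statistics came from Minitab; the ratings in this example are hypothetical, and statsmodels' fleiss_kappa is used only as a stand-in.

```python
# Minimal sketch with hypothetical ratings; the reported statistics were
# produced in Minitab. Rows are student reflections, columns are coders,
# and values are GUALS scores on the 1-7 ordinal scale.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [5, 5],
    [3, 3],
    [6, 7],
    [2, 2],
    [4, 4],
    [7, 7],
])

# aggregate_raters converts subject-by-rater codes into the subject-by-category
# count table that fleiss_kappa expects.
table, categories = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table, method='fleiss'):.3f}")
```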
We have included in Appendices A and B the results of the MOOC data analyses. Value-expressive
attitudes were the greatest catalyst for affective learning as assessed by the GUALS. Knowledge and
utilitarian attitudes had positive impacts on affective learning as well. Only an Egoism attitude stunted
learning in the affective domain. In terms of emotions, contentment again had the greatest positive
impact on attainment of affective domain learning outcomes, and happiness was also found to increase
levels of affective learning outcomes. However, as mentioned earlier in the report, satisfaction was,
once more, an unexpected barrier to achieving affective learning outcomes within this group of
assessment professionals. In this dataset, satisfaction was even more detrimental to achieving affective
learning outcomes than anxiety-worry! Indeed, the ramifications of such findings could be staggering if
those are replicated in future studies. Appendix C demonstrates that, generally, the median GUALS
score increased significantly as the for-credit course progressed, χ2(6, N = 838) = 31.30, p = .000, η2 = .03, a
trend that we hope to strengthen as our understanding of ALOs progresses.
Reviewing the weekly reflections through an ethnomethodological lens a la Garfinkel (1967), we were
able to ascertain that most of the restorative work was and is being done by women for their families.
While the reflections of men were focused primarily on organizational restorations, women were
generally preoccupied with moving their professional roles online and homeschooling their children.
Traditional notions about sex and gender roles were still salient within this subpopulation, and the
researchers believe that ethnomethodological analyses should take greater precedence since the
whole of the planet is engaged in restorative actions from the COVID-19 great breach event.
Organizational- and business-anthropological lenses may also enlighten researchers as we re-evaluate
designs available for qualitative methodology. Affective learning and affective attributes contribute to a
research area for which fsQCA may enhance researchers’ understandings of students’ realities.
Lastly, as fellow conference participants and other researchers have pointed out, a discussion of effect
size is warranted here. For the Kruskal-Wallis H-test, a nonparametric statistical routine adopted when
comparing more than two groups, the eta-squared (η2) estimate may be computed using the formula
η2H = (H - k + 1) / (n - k), where H is the Kruskal-Wallis value, k is the number of groups, and n is the
number of observations (Tomczak & Tomczak, 2014). The eta-squared estimates are expressed in values
from 0 to 1; multiplying those by 100% indicates the percentage of variance in the dependent variable
explained by that particular independent variable. We computed effect size for all statistically significant
independent variables in the study and listed those in Table 5.
Table 5
Effect Sizes for Statistically Significant Independent Variable Impacts

Independent variable           η² estimate   Percent of variance in GUALS-score explained
Attitude                       0.347         34.7%
Confusion about instructions   0.073          7.3%
Learning module (week)         0.037          3.7%
Primary emotion                0.498         49.8%
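As a short worked example of the formula above, applying it to the primary-emotion test reported earlier (H adjusted for ties = 416.79, nine groups, 838 responses) yields an estimate of about .49, consistent with the value reported in the text; small rounding differences from Table 5 are expected.

```python
# Sketch of the Tomczak & Tomczak (2014) eta-squared estimate for a
# Kruskal-Wallis H-test, applied to values reported earlier in this paper.
def kruskal_eta_squared(h: float, k: int, n: int) -> float:
    """Eta-squared estimate: (H - k + 1) / (n - k), where k is the number
    of groups and n is the total number of observations."""
    return (h - k + 1) / (n - k)

# Primary emotion (Table 2): H adjusted for ties = 416.79, 9 groups, 838 responses.
print(f"eta-squared = {kruskal_eta_squared(416.79, k=9, n=838):.2f}")  # ~0.49
```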
As evidenced, those four factors accounted for over 95% of the variance in levels of affective learning
achieved across the doctoral courses as assessed using the GUALS instrument. Other independent
variables (gender, industry, profession, perceived effective learning activities, perceptions of learning
new information, perceived engaging learning activities, and perceived struggles with comprehension)
combined account for less than five percent of variance in affective-learning outcomes attainment.
While the researchers do not advocate for manipulation of student emotions, as student development
and learning professionals we cannot ignore the impact of negative emotions on affect, or states of
mind. The need to maintain positivity as we approach our own tasks—particularly those that are
interaction-heavy with students—must begin to take center-stage as we delve further into
understanding affect. We can choose to point out the positives of each interaction and attempt to
reduce the probability of negative attitudinal or emotional responses in our students. Complimenting
students’ strengths and focusing on successes may set the stage for positive relationships and
increased learning. There is something to be said for reminding people of purpose and value as they
prepare for and engage in learning experiences and/or when crafting interventions; our research found
satisfaction may not be a key driver or consideration to hold dear; rather, we ought to focus on design
and remind participants of the purpose and value of the experience. This may be uncomfortable, but such is
growth…and life.
References
Arora, S., & Sharma, R. (2018). Positive affect, psychotherapy, and depression. Indian Journal of
Psychiatry, 60(2), 199-204.
Baumeister, R. F., & Bushman, B. J. (2007). Angry emotions and aggressive behaviors. In G. Steffgen &
M. Gollwitzer (Eds.), Emotions and aggressive behavior (pp. 61–75). Hogrefe & Huber
Publishers.
Bolin, A. U., Khramtsova, I., & Saarnio, D. (2005). Using student journals to stimulate authentic learning:
Balancing Bloom’s cognitive and affective domains. Teaching of Psychology, 32(3), 154–159.
http://dx.doi.org/10.1207/s15328023top3203_3
Bolkan, S. (2015). Forum: Affective learning. Students’ affective learning as affective experience:
Significance, reconceptualization, and future directions. Communication Education, 64(4), 502–
505.
Clark, L., Chamberlain, S. R., & Sahakian, B. J. (2009). Neurocognitive mechanisms in depression:
Implications for treatment. Annual Review of Neuroscience, 32, 57-74.
https://doi.org/10.1146/annurev.neuro.31.060407.125618
Davis, M. (1992). The role of the amygdala in fear-potentiated startle: Implications for animal models of
anxiety. Pharmacological Sciences, 13(1), 35-41.
Debono, K. G. (1987). Investigating the social-adjustive and value-expressive functions of attitudes:
Implications for persuasion processes. Journal of Personality and Social Psychology, 52, 279–
287.
Debono, K. G. (2000). Attitude functions and consumer psychology: Understanding perceptions of
product quality. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes (pp.
195–221). Erlbaum.
DeBono, E. (1985). Six thinking hats. Little, Brown, and Company.
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion
for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087-1101.
https://doi.org/10.1037/0022-3514.92.6.1087
Forbes, C. E., & Grafman, J. (2010). The role of the human prefrontal cortex in social cognition and
moral judgment. Annual Review of Neuroscience, 33, 299–324.
https://doi.org/10.1146/annurev-neuro-060909-153230
Fredrickson, B. L. (2000). Cultivating positive emotions to optimize health and well-being. Prevention &
Treatment, 3(1). https://doi.org/10.1037/1522-3736.3.1.31a
Garfinkel, H. (1967). Studies in ethnomethodology. Prentice-Hall.
Gatignon, H., Tushman, M. L., Smith, W., & Anderson, P. (2002). A structural approach to assessing
innovation: Construct development of innovation locus, type, and characteristics. Management
Science, 48, 1103–1122. https://doi.org/10.1287/mnsc.48.9.1103.174
Goleman, D. (1987, August 25). Embattled giant of psychology speaks his mind. New York Times.
https://www.nytimes.com/1987/08/25/science/embattled-giant-of-psychology-speaks-his-
mind.html
Goleman, D. (2004). What makes a leader? Harvard Business Review, 82(1), 82–91.
Gredler, M. E. (2009). Learning and instruction: Theory into practice. Merril Pearson.
Hansen, M. (2019). Using assessment trends in planning, decision-making, and improvement. In S. P.
Hundley, S. Kahn, & T. W. Banta (Eds.), Trends in assessment: Ideas, opportunities, and issues for
higher education (pp. 175-193). Stylus Publishing.
Hanson, J. (2011). Advancing affective attributes and empowering undergraduate students – Lessons
learned from the Bali bombing. Nurse Education in Practice, 11, 411-415.
https://doi.org/10.1016/j.nepr.2011.03.026
Herek, G. M. (1986). The instrumentality of attitudes: Toward a neofunctional theory. Journal of Social
Issues, 42, 99–114.
Herek, G. M. (1987). Can functions be measured? A new perspective on the functional approach to
attitudes. Social Psychology Quarterly, 50, 285–303.
Herek, G. M. (2000). The social construction of attitudes: Functional consensus and divergence in the U.
S. public’s reactions to AIDS. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of
attitudes (pp. 325–364). Erlbaum.
Hundley, S. P., Kahn, S., Barbee, J., & Partners of the Assessment Institute. (2019). Meta-trends in
assessment: Perspectives, analyses, and future directions. In S. P. Hundley, S. Kahn, & T. W.
Banta (Eds.), Trends in assessment: Ideas, opportunities, and issues for higher
education (Chapter 12). Stylus Publishing.
Immordino-Yang, M. H., & Damasio, A. R. (2007). We feel therefore we learn: The relevance of affective
and social neuroscience to education. Mind, Brain, and Education, 1(1), 3–10.
https://doi.org/10.1111/j.1751-228X.2007.00004.x
Integrated Nursing Pathway Program at CCA (n.d.). Community College of Aurora.
https://www.ccaurora.edu/programs-classes/departments/nursing
Izard, C. E. (2010). The many meanings/aspects of emotion: Definitions, functions, activation, and
regulation. Emotion Review, 2(4), 363–370.
Johns, J. A., & Moyer, M. T. (2018). The attitudes, beliefs, and norms framework: A tool for selecting
student-centered, theory-informed affective learning objectives in health education. Journal of
Health Education Teaching, 9(1), 14–26.
Katz, D. (1960). The functional approach to the study of attitudes. Public Opinion Quarterly, 24, 163–
204.
Kincheloe, J. (2008). Critical constructivism primer. Peter Lang.
Kirkpatrick, D. L. (1994). Evaluating training programs. Berrett-Koehler Publishers.
Kovbasyuk, O., & Blessinger, P. (2013). Meaning-centered education: International perspectives and
explorations in higher education. Routledge.
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of educational objectives, Handbook II:
Affective domain. David McKay Company, Inc.
Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Hutchings, P., & Kinzie, J. (2015). Using
evidence of student learning to improve higher education. Jossey-Bass.
Levy, J. D., Hess, R. M., & Thomas, A. S. (2018). Student affairs assessment & accreditation: History,
expectations, and implications. The Journal of Student Affairs Inquiry, 4(1), 4284.
Locander, W. B., & Spivey, W. A. (1978). A functional approach to attitude measurement. Journal of
Marketing Research, 15, 576–587.
Miller, C. (2010). Improving and enhancing performance in the affective domain of nursing students:
Insights from the literature for clinical educators. Contemporary Nurse: A Journal for the
Australian Nursing Profession, 35(1), 2-17.
Miller, E. K. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience,
24, 167-202.
Nix, J. V., Song, L. M., & Lindbeck, R. (2021). Affective learning outcomes assessment as a path to online
dialogic student development. Journal of Organizational Psychology, 21(4), 111-134.
Nix, J. V., & Song, L. M. (2020). Affective assessment: Incorporating emotions into our work for social
justice. Proceedings of the Association for Assessment of Learning in Higher Education (AALHE),
2020 Annual Conference.
Nix, J. V., Lion, R. W., Michalak, M. B., & Christensen, A. (2015). Individualized, purposeful, and
persistent: Successful transitions and retention of students at risk. Journal of Student Affairs
Research & Practice, 52(1), 104-118. https://doi.org/10.1080/19496591.2015.995576.
National League for Nursing. (2017). NLN program outcomes and competencies for graduate academic
nurse educator preparation. http://www.nln.org/docs/default-source/professional-
development-programs/program-outcomes-and-competencies2.pdf?sfvrsn=2
Norris, K., & Weiss, H. A. (2019) Assessing community engagement. In S.P. Hundley, S. Kahn, & T.W.
Banta (Eds.). Trends in assessment: Ideas, opportunities, and issues for higher
education (Chapter 4). Stylus Publishing.
Ortony, A., & Turner, T. J. (1990). What's basic about basic emotions? Psychological Review, 97, 315–
331.
Padoa-Schioppa, C. (2011). Neurobiology of economic choice: A good-based model. Annual Review of
Neuroscience, 34, 333-359.
Pekrun, R., & Linnenbrink-Garcia, L. (Eds.). (2014). International handbook of emotions in education.
Routledge.
Petty, R. E., & Wegener, D. T. (1998). Matching versus mismatching attitude functions: Implications for
scrutiny of persuasive messages. Personality and Social Psychology Bulletin, 24, 227–240.
Rihoux, B. & Ragin, C. C. (2009). Configurational comparative methods: Qualitative comparative
analysis (QCA) and related techniques. SAGE.
Rodríguez, J. I., Plax, T. G., & Kearney, P. (1996). Clarifying the relationship between teacher nonverbal
immediacy and student cognitive learning: Affective learning as the central causal mediator.
Communication Education, 45(4), 293–305. https://doi.org/10.1080/03634529609379059
Rogers, G., Mey, A., Chan, P., Lombard, M., & Miller, F. (2018). Development and validation of the
Griffith University Affective Learning Scale (GUALS): A tool for assessing affective learning in
health professional students’ reflective journals. MedEdPublish,
https://www.mededpublish.org/manuscripts/1361
Roper, L. D. (2015). Student affairs assessment: Observations of the journey, hope for the future. The
Journal of Student Affairs Inquiry, 1(1), 361.
Salkind, N. J. (2010). Encyclopedia of research design (Vols. 1-0). SAGE Publications, Inc.
https://dx.doi.org/10.4135/9781412961288
Salmon, G. (2013). E-tivities: The key to active online learning (2nd ed.). Routledge, Taylor & Francis
Group.
Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A
systematic review of meta-analyses. Psychological Bulletin, 143, 565–600.
https://doi.org/10.1037/bul0000098
Seels, B., & Glasgow, Z. (1990). Exercises in instructional design. Merril.
Shavitt, S. (1990). The role of attitude objects in attitude functions. Journal of Experimental Social
Psychology, 26, 124–148.
Shavitt, S., Swan, S., Lowrey, T. M., & Wanke, M. (1994). The interaction of endorser attractiveness and
involvement in persuasion depends on the goal that guides message processing. Journal of
Consumer Psychology, 3, 137–162.
Simonson, M., Smaldino, S., & Zvacek, S. (2015). Teaching and learning at a distance: Foundations of
distance education, (6th ed.). IAP.
Smith, M. B., Bruner, J. S., & White, R. W. (1956). Opinions and personality. Wiley.
Snyder, M., & DeBono, K. G. (1985). Appeals to image and claims about quality: Understanding the
psychology of advertising. Journal of Personality and Social Psychology, 49, 586–597.
Spady, G. W. (1994). Outcome-based education: Critical issues and answers. The American Association
of School Administrators.
Tomczak, M., & Tomczak, E. (2014). The need to report effect size estimates revisited: An overview of
some recommended measures of effect size. Trends in Sport Sciences, 1(21), 19-25.
Tyng, C. M., Amin, H. U., Saad, M. N. M., & Malik, A. S. (2017). The influences of emotion on learning
and memory. Frontiers in Psychology, 8, 1454. https://doi.org/10.3389/fpsyg.2017.01454
Yeager, D. S., & Dweck, C. S. (2012). Mindsets that promote resilience: When students believe that
personal characteristics can be developed. Educational Psychologist, 47(4), 302–314.
https://doi.org/10.1080/00461520.2012.722805
Zahl, S. B., Jimenez, S., & Huffman, M. (2019). Assessment at the highest degree(s): Trends in graduate
and professional education. In S. P. Hundley, S. Kahn, & T. W. Banta (Eds.), Trends in
assessment: Ideas, opportunities, and issues for higher education (Chapter 7). Stylus Publishing.
Appendix A
Kruskal-Wallis Results for Attitudes Versus GUALS-Score, MOOC Data
Appendix B
Kruskal-Wallis Results for Emotions Versus GUALS-Score, MOOC Data
Appendix C
GUALS-Score by Module/Week Across Two Doctoral Cohorts
GUALS-Score Statistics by Learning Module, Two Cohorts

Statistics
Variable      Module/Week   N     N*   Q1      Median   Q3      IQR     Mode   N for Mode
GUALS_score   1             112   0    1.250   3.000    5.000   3.750   1      28
GUALS_score   2             122   0    3.000   4.000    5.000   2.000   3      28
GUALS_score   3             121   0    3.000   4.000    5.000   2.000   3      38
GUALS_score   4             123   0    3.000   5.000    6.000   3.000   3      33
GUALS_score   5             118   0    3.000   4.500    7.000   4.000   7      33
GUALS_score   6             122   0    3.000   5.000    7.000   4.000   7      32
GUALS_score   7             120   0    3.000   4.000    5.000   2.000   3, 5   31
The authors express their thanks to Muzhen Zhang for her assistance with coding qualitative data from
the MOOC. The authors extend gratitude to Demian Pedone for style and English copy editing.
About the authors
Misty Song is the Manager of Online Learning at Lamar Institute of Technology. She may be reached at
lms15a@acu.edu.
Dr. Vince Nix is an Assistant Professor at Lamar University. He can be reached at jnix2@lamar.edu
Dr. Joe Levy is the Executive Director of Assessment and Accreditation at National Louis University. He
can be reached at jlevy2@nl.edu.
Help Me Help You: Motivating Campus Enigmas to Become Exemplars
By Kate Oswald Wilkins and Susan Donat, Messiah University
Abstract: Engagement and buy-in remain key predictors of a culture of learning assessment on college
and university campuses, but this is rarely accomplished without effective leadership (Ewell &
Ikenberry, 2015). Building a culture of learning assessment depends upon assessment leaders' ability to
garner widespread engagement in assessment activities (Jankowski & Marshall, 2017). Institutional
assessment work requires the ability to work with others. And other people can be complete enigmas,
especially in times of stress (thanks, COVID). At first, we might ask: why don't they care? The reality is
that our colleagues care about student learning, but some require motivation to deconstruct the
barriers they have about assessment. We identify strategies for empowering and equipping
assessment "enigmas" to help them grow into exemplars.
Introduction
Kinzie and Jankowski (2015) argue that meaningful, consequential assessment occurs when there is
authentic, widespread engagement in assessment activities on a campus. It is not enough to have
compliance with assessment expectations when it means that in reality, a limited number of individuals
submit assessment reports or post results that are neither seen nor used. However, achieving more
than mere compliance can feel nearly impossible in an era in which educators and administrators alike
are overwhelmed by a multitude of disruptive changes: rapid adoption of technologies, budget cuts,
falling enrollments, restructuring, and more. One could argue, however, that times in which higher
education institutions are struggling to survive are also the times in which they stand to benefit most
from authentic assessment efforts, ultimately demonstrating the value of college education to external
audiences such as prospective students, parents, and accreditors (Bassis, 2015).
Equipping and empowering diverse individuals and groups to do assessment work is one essential way
to encourage a culture of assessment to flourish. The NILOA Transparency Framework points to the
centrality of well-used assessment resources in equipping educators and administrators to engage in
authentic assessment. But because one size does not fit all, we need to "give faculty and staff clear
expectations and guidance on what they are to do, but at the same time offer flexibility and options"
(Suskie, 2018, p. 132). Assessment leaders need to become skilled at identifying the reasons for
resistance on their campus and adapt in order to reach and teach. This paper presents a framework for
assessment professionals to target assessment efforts based on personas faculty may adopt in
discussions about improving student learning.
We present our framework to help assessment professionals:
• Identify common personas that might hinder/enhance individuals' willingness to engage in
campus assessment activities.
• Describe strategies for equipping and empowering diverse individuals for engagement in
authentic assessment.
• Identify specific ways to diversify approaches and mediums for reaching and teaching your
campus educators and administrators.
Where Have We Gone Wrong?
What prevents key members of your institution from engaging in assessment efforts? Ikenberry and Kuh
(2015) posit that some barriers arise from a lack of communication: “Often not consulted in advance or
viewed as partners, faculty members may see efforts to gauge student learning as threatening,
unneeded, useless, intrusive, or irrelevant” (p. 21). The way faculty perceive assessment efforts
should guide our strategies in leading and supporting faculty and administrators. In our tenure as
assessment professionals, we noticed common responses from faculty. Some responses came from
faculty who were happy to dig into assessment work, some from faculty who were less than
enthusiastic, and some were borderline hostile. We describe these common assessment
personas that present challenges for accomplishing assessment work and offer observations about
behaviors that might help you spot these personas on your campus.
Common Assessment Personas & How to Spot Them
We developed these personas based on observing how our colleagues responded to assessment tasks.
We chose the term “persona” because it means a role a person adopts (Goffman, 1959). It doesn’t
essentialize or stereotype the person, but rather it presents the behavior as a chosen response within a
particular situation. Individuals may present multiple personas simultaneously, and individuals may not
fit our types exactly because of course humans are not archetypes--they are complex. Finally, while
we’ve all had our frustrations with colleagues because of the ways these personas impede our work, we
want to emphasize that our focus is not on labeling. Rather, our focus is in seeking to understand and
respect our colleagues toward the goal of finding ways to equip and empower them for the sake of
improving learning assessment on our campuses. As an organizing tool, we categorize these personas in
accordance with difficulty level: easy to support/encourage in their assessment tasks; moderately
challenging; and difficult to support/encourage.
Turning your Enigmas into Exemplars
To work with the personas and move them from assessment enigmas to exemplars, you must listen to
and observe your enigmatic colleague with the goal of identifying that persona’s barriers to assessment
work. This next section presents a description of each persona, principles from organizational theories,
learning theories, and assessment literature that have helped us understand how to approach each
persona, and equipping strategies to use while interacting with each persona.
Easy Personas
The personas that tend to be the easiest to support and encourage in accomplishing assessment tasks,
and the most receptive to discussions about assessment data are the learner, the altruist, and Mr./Ms.
Independent.
The learner/innovator: This persona expresses genuine curiosity about how assessment processes and
data can help improve the student experience and their program’s curriculum. You can identify this
persona by looking for the people who volunteer for pilot projects and the colleague who emails or
catches you at the beginning or end of meetings to ask questions about assessment because they
genuinely want to learn more. Adult learners tend to be autonomous, self-directed, and problem-centered
(Knowles, 1973); therefore, a learner/innovator attends trainings and provides insight into
additional applications of core concepts. The learner also tends to embrace innovation, and you can
lean on these people to support the culture change needed to build a culture of learning (Rogers,
1962).
If we connect assessment to learning, we can connect to educators’ excitement as learners (Suskie,
2018). Therefore, when dealing with a learner persona, frame assessment in ways that spark curiosity,
such as modeling the type of open-ended questions, based on their assessment data, that inspire a
deeper discussion about our students’ experiences. Offer collaborative faculty development sessions,
interactive group/individual trainings, lists of resources to share, and opportunities to connect with
other campus innovators or risk-takers who enjoy trying new pedagogies or technologies. Set the
learner persona up for success by ensuring that 1) your institution’s
assessment policies and directions are clear, and 2) those policies and directions are easily and widely
available through multiple channels: websites, portals, email, or regularly updated manuals. Consider
providing screenshots and short, topic-specific or troubleshooting videos.
The second persona that’s easy to support is the altruist. The altruist is willing to do tasks if those tasks
clearly contribute to the wellbeing of others, to their program or department, or to the well-being of
the institution. You can spot the altruist because they are the ones who ask questions about the
university’s assessment expectations. They want to understand what the school or the university or
accreditor expects regarding assessment. The altruist either volunteers or willingly completes
assessment work for their program or department. Another indicator is that an altruist tends to
authentically connect with and talk about the university’s values or mission. They see the connection of
how assessment can provide evidence of accomplishing the mission.
Altruists are more likely to have a connection to the organization or team’s culture (Pacanowsky &
O'Donnell-Trujillo, 1982). Equip an altruist for assessment tasks by connecting assessment to the
institution’s mission and values. Cultivate a shared vision for assessment, an essential element in
creating a learning organization. With this, altruists can feed the vision and inspire others (Senge, 1990).
Encourage/model how improvements in learning support students’ ability to discuss what they know as
a result of completing the program and improve placement after graduation; emphasize ways
assessment helps improve programs and bolster recruitment. Encourage altruists to place assessment
work on their performance appraisals under “institutional service.”
The third persona that is easy to support is Mr./Ms. Independent. This person needs to complete tasks
themselves; they avoid dependence on others. You can spot this person because they can sometimes
trigger whiplash. They ignore your emails and may blow off training sessions. You despair that they
don’t care or are going to blow off the work, but then you’re surprised to discover they found and read
the manual or your directions, jumped in with both feet, and accomplished the tasks on their own!
Bear in mind that adult learners tend to want to think of themselves as autonomous and self-directed
(Knowles, 1973). Therefore, allow people to use the methods they feel comfortable with; offer
flexibility (Massa & Kasimatis, 2017). Equip your Mr./Ms. Independents for assessment work by
providing on-demand resources that are clear and accessible. Do your best to allow autonomy as long
as they are meeting the expectations.
Moderately Challenging Personas
The personas that present moderate challenges for partnering are the perfectionist, the achiever, the
responsible colleague, the change-averse colleague, and the overworked colleague. These personas require more
targeted communication than the easy personas and can require more time to support as they grow
into campus exemplars.
The perfectionist needs to do things correctly and avoid subpar performance, so their assessment focus
tends toward generating perfect results or showing their program is already great. Perfectionists tend
to respond to suggestions by telling you they already tried that, or in a way that seems like they need to
prove they already accomplished what you're talking about. The perfectionist takes
directions literally and never leaves tasks undone. Adult learning theory tells us that orientation toward
learning tends to be life-centered; adults want to learn in order to increase their competence and
accomplishments (Knowles, 1973). These individuals need a safe, affirming environment in which to be
given permission to be imperfect and consider how they might grow (Kasworm & Bowles, 2012).
Consider supporting a perfectionist by ensuring they have the resources they need to do the job "right."
Cite institutional policy, provide detailed manuals with screenshots and step-by-step videos, show them
ways to share the results of doing things the right way, and tell them they are an example for others.
Model ways less-than-perfect results can provide a springboard for improving and celebrating future
program success.
The second persona in the moderate category is the achiever. The achiever wants to invest only in work
that makes them look good or helps them advance professionally, and doesn't want to invest in other
activities. You can spot achievers because they may answer your emails only when their supervisor is
copied, when not responding might make them look bad, or when it's a request to feature them in the
next assessment newsletter. As you engage with an achiever, keep these principles in mind. Adult learners
need to tap into personal incentive (Knowles, 1973), and helping this colleague achieve “personal
mastery” will nurture their happiness and commitment to their work (Senge, 1990).
To support an achiever, consider offering incentives such as reminders to put assessment tasks on their
performance appraisals, badges, certificates, and campus highlights or recognition. Set achievers up for
success: establish institutional awards and help the achiever apply for them successfully. Give
shout-outs acknowledging their participation during meetings when their chair or dean is also present.
The third persona that presents moderate challenges to assessment work is the responsible colleague.
This type of colleague does their duty because it’s required, but not because they like it or see value in
it. This individual contacts you right before a deadline (or right after the deadline), apologizes for not
knowing what is going on, and then asks you to help them get it done. Alternatively, they may not ask
any questions, but they submit only the bare minimum to check off their assessment requirements.
In working with a responsible colleague persona, remember that adults tend to be ready to learn when
the learning will help them cope with a situation or perform a task (Knowles, 1973). In assessment,
duty-oriented individuals can be prone to compliance orientation and therefore take longer to move
toward genuine buy-in (Ikenberry & Kuh, 2015). To equip your responsible colleague personas, provide
on-demand resources that are clear and accessible. Encourage and model consequential use of data to
help the responsible colleague move toward authentic assessment. When possible, offer one-on-one
assistance to continue coaching them toward an authentic learning culture.
The fourth persona in the moderately challenging category is the change-averse persona. The change-
averse colleague reacts negatively to new processes, structures, and expectations. Once you start
working with them, you realize their negative reaction isn't about you, and it isn't about the work. It is a
reaction to change itself. You can spot this person by their confused emails asking you to clarify
directions or explain what has changed, or because they can't find something due to a change. Another
"tell" for this persona is that they make a point of mentioning that the university keeps "changing the
rules" or changing the software.
When working with change-averse personas, keep in mind that initiative fatigue is real. Colleagues can
become jaded when the organization takes on too many short-lived change initiatives; Senge et al.
(1999) call them "flavor of the month" programs (p. 6). Assessment is particularly prone to causing
initiative fatigue, so assessment professionals need to sell the merits of the initiative, hold large-scale
events, conduct short-cycle assessments, calculate the return on investment, and connect the dots
among initiatives over time (Kuh & Hutchings, 2015). Another underlying concern could be a perceived
lack of help or support to do what the institution expects. "If the help available to people is inadequate,
the effectiveness of the change initiative suffers and learning capabilities fail to develop" (Senge et al.,
1999, p. 104). We also need to make it easy to seek help (p. 107).
Equipping strategies to try with change-averse personas include communicating the ways in which
changes are part of the same vision and goal. Present the vision as "insider knowledge" of the larger
institutional plan. Ask those in positions of power to affirm the longer-term goal in their presence.
Connect assessment efforts to strategic planning efforts. Provide clear directions and make the path
smooth (Heath & Heath, 2012) by removing known obstacles. When possible, meet one-on-one to
better understand their needs and concerns, and to provide targeted support.
The final persona in this level is the overworked colleague. This person either overreacts or responds
poorly to assessment due to perceived bandwidth issues. They feel stretched too thin. We noticed a
large uptick in this persona in the past year. You spot this persona by phrases such as "I just don't know
when I can get to this." This persona may burst into tears during a routine meeting because you told
them one too many things they need to do. The principles to consider include that adult learners
need to know what to do (Knowles, 1973) but within their zone of proximal development (Vygotsky,
1978).
Equipping strategies to try with overworked colleagues include working with them to eliminate
busywork (Senge et al., 1999). Find out "all the places learning occurs" (Kinzie & Jankowski, 2015, p. 83)
and help them document it, rather than starting too many new initiatives. Meet with them one-on-one
to walk through assessment processes, and take some of the legwork off their plate or facilitate
delegating it to their administrative assistant. Work with them (and their supervisor) to determine how
to "stop doing something else" in order to prioritize assessment (Suskie, 2018, p. 140). Connect
assessment work to their scholarship, helping them accomplish two goals at once. Finally, make sure
they understand expectations, as "any task is less daunting if we know exactly what to do, and
assessment is no exception" (Suskie, 2018, p. 132).
Challenging Personas
The personas that tend to be the most challenging to work with and the least receptive to assessment
tasks are the low-priority and philosophical disagreement personas. Both of these personas require a
shift in their attitude toward assessment or their understanding of assessment's value. These types of
shifts are not impossible, but they require significant care and resourcing.
The low-priority persona displays little motivation to engage in assessment. You can identify these
people, as they are the ones who ignore your emails, are no-shows at trainings or appointments, and
who don't provide required information. They provide no explanation for their lack of responsiveness. It
could be the person who nods and smiles in meetings but then does nothing to follow through. In
working with this persona, remember that adult learners need to know why (Knowles, 1973). A
"commitment gap" will occur if the organization fails to help individuals "connect personally to a
change initiative" (Senge et al., 1999, p. 160).
As you work with low-priority persons, “locate assessment in the commitments that faculty hold”
(Kinzie & Jankowski, 2015, p. 104). Other equipping strategies to try include finding a way to connect
with your colleague, even if it has nothing to do with assessment. Tap into their passion for student
success and find a connection to assessment. Ask the individual’s supervisor to support your efforts.
Ensure university governance structures include assessment. Each of these can help build relationships
that inspire authenticity and leverage existing structures to support the common focus on learning
culture.
A second persona that presents real challenges for collaboration is the philosophical disagreement
persona. The way they talk and think about learning assessment seems to conflict with what they are
being told to do in assessment. This persona reacts negatively to assessment terminology. You may find
they hate the word "data." In trainings and meetings, they ask "why" questions rather than process
questions, trying to rehash the debate about assessment.
When working with someone who holds a different philosophical view, keep in mind that adult learners
need to know why (Knowles, 1973). In order for individuals to commit to something, they need to see it
as relevant (Senge et al., 1999). Individuals "need to know how they fit in, how they can contribute, and how
they will benefit" (Senge et al., 1999, p. 160). Because faculty speak different "disciplinary discourses"
about learning, assessment professionals need to translate (Becher, 1981, 1994). Make sure you define
key assessment terms so that you can learn to speak the same language (Suskie, 2018). Likewise,
assessment professionals need to examine their own "catch phrases" that create barriers with
colleagues (Jankowski, 2017). While it's not necessary to convince people, "it is important to help them
see that the story you are telling is 'on their side,' and therefore worth listening to…that their point of
view is treated fairly, and that they are not cast as an outsider" (Senge et al., 1999, p. 332).
Strategies that can help when you're working with a colleague with a philosophical difference include
learning your colleague's disciplinary dialect and helping them understand the goals their discipline
shares with assessment. Emphasize outcomes of assessment that align with disciplinary views of
learning. Define key assessment terms in ways that make sense to someone in their discipline, and stop
using turn-off catch phrases (e.g., "engage students," "close the loop").
How to Engage and Equip Someone Presenting These Attitudes and Issues
Most people aren’t jerks intentionally. As you try to identify personas and strategies that best support
personas, sometimes it’s tough because you may not yet know your colleague well enough to
determine their motivations. In circumstances where you don’t know the person well, it may help to
think about the issue from the perspective of identifying a deficit.
The Lippitt-Knoster model (Figure 1) for managing complex change helps with this. If someone is
displaying confusion, it may be because they don't understand the vision; if you clarify the vision, they
may be successful. If they display anxiety, it may be because they need support in developing their
assessment skills. Transform the deficit into a strength, and you may help sway them into becoming a
friend of assessment.
Figure 1.
Lippitt-Knoster model of managing complex change (Lippitt, 1987; adapted by Knoster, 1991).
So, consider what is missing from your coworker’s experience as it pertains to assessment work.
Addressing the deficit could help turn them from enigma to champion. Our “easy” and “medium”
categories tend to need a better understanding of the vision, or skills, or resources such as time and
support. The “hard” categories tend to need intrinsic incentives, or an investment of time focused on
building consensus. These are the people who are most likely to resist assessment or sabotage a
department or program’s assessment work.
Through the lens of transformational learning theory, adults go through ten phases when they need to
learn and change (Mezirow, 1991), as quoted below:
• A disorienting dilemma
• A self-examination with feelings of guilt or shame
• A critical assessment of epistemic, sociocultural, or psychic assumptions
• Recognition that one’s discontent and the process of transformation are shared and that others
have negotiated a similar change
• Exploration of options for new roles, relationships, and actions
• Planning a course of action
• Acquisition of knowledge and skills for implementing one’s plan
• Provisional trying of new roles
• Building of competence and self-confidence in new roles and relationships
• A reintegration into one’s life on the basis of conditions dictated by one’s new perspective
If the motivation to learn as adults often comes out of disorientation, disappointment, or
disequilibrium, it is no wonder many adults don't navigate these experiences well. We like competence,
control, and predictability in our work expectations. Learning assessment might trigger negative
reactions for a variety of reasons: it might disrupt a colleague's disciplinary perspectives on learning and
how it should be evaluated, it might expose areas of low competence, and it might present challenges
related to navigating technology or data that colleagues feel ill-equipped to handle.
As assessment professionals work to equip our campuses for assessment, it is important to recognize
the vulnerable position our colleagues inhabit, and how essential it is for us to create a safe,
trusting, and respectful learning environment (Kasworm & Bowles, 2012). Learning occurs through
dialogical processes – we need to listen and be flexible with others if they are going to be open to
transformation (Shapiro, Wasserman, & Gallegos, 2012). Perhaps most importantly, the goal of
transformational learning is emancipation: we want learners to be empowered to design effective
assessments, interpret student performance data, and make evidence-based improvements.
Resources
Bassis, M. (2015, July). A primer on the transformation of higher education in America. National Institute
for Learning Outcomes Assessment. Available at
https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/BassisPrimer.pdf
Becher, T. (1981). Towards a definition of disciplinary cultures. Studies in Higher Education, 6(2), 109-
122.
Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19(2), 151.
doi:10.1080/03075079412331382007
Ewell, P., & Ikenberry, S. (2015). “Leadership in making assessment matter.” In Kuh, G., Ikenberry, S.,
Jankowski, N., Cain, T., Ewell, P., Hutchings, P., and Kinzie, J. Using evidence of student learning
to improve higher education (pp. 117-145). San Francisco, CA: Jossey-Bass.
Goffman, Erving (1959). The Presentation of Self in Everyday Life. Garden City, N.Y.: Doubleday.
Ikenberry, S. & Kuh, G. (2015). “From compliance to ownership: Why and how colleges and universities
assess student learning.” In Kuh, G., Ikenberry, S., Jankowski, N., Cain, T., Ewell, P., Hutchings, P.,
and Kinzie, J. Using evidence of student learning to improve higher education (pp. 1-26). San
Francisco, CA: John Wiley & Sons, Inc.
Jankowski, N. (2017). “Pardon me, your catch phrase is showing”: The importance of the language we
use. Assessment Update 29(2), pp. 9-13.
Jankowski, N., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning
systems paradigm. Sterling, VA: Stylus Publishing.
Kasworm, C. & Bowles, T. (2012). “Fostering transformative learning in higher education settings.” In
Taylor, E., & Cranton, P. The handbook of transformative learning: Theory, research, and practice
(pp. 388-407). San Francisco, CA: Jossey-Bass.
Kegan, R., and Lahey, L. (2009). Immunity to change: How to overcome it and unlock potential in
yourself and your organization. Boston, MA: Harvard Business Press.
Kinzie, J. & Jankowski, N. (2015). “Making assessment consequential: Organizing to yield results.” In
Kuh, G., Ikenberry, S., Jankowski, N., Cain, T., Ewell, P., Hutchings, P., and Kinzie, J. Using
evidence of student learning to improve higher education (pp. 73-94). San Francisco, CA: Jossey-
Bass.
Knowles, M., Holton, E., & Swanson, R. (2005, 1973). The adult learner: The definitive classic in adult
education and human resource development (6th edition). New York: Elsevier.
Kuh, G. and Hutchings, P. (2015). “Assessment and initiative fatigue: Keeping the focus on learning.” In
Kuh, G., Ikenberry, S., Jankowski, N., Cain, T., Ewell, P., Hutchings, P., and Kinzie, J. Using
evidence of student learning to improve higher education (pp. 146-159). San Francisco, CA: John
Wiley & Sons, Inc.
Lippitt, M. (1987). The managing complex change model. Copyright 1987 by Dr. Mary Lippitt, Founder
and President of Enterprise Management, Ltd. Adapted by Knoster, T. (1991), presentation to
the Association for the Severely Handicapped Conference, Washington, D.C.
Massa, L., and Kasimatis, M. (2017). Meaningful and manageable program assessment: A how-to guide
for higher education faculty. Sterling, VA: Stylus.
Mezirow, J. (1991). Transformative Dimensions of Adult Learning. San Francisco, CA: Jossey-Bass.
Pacanowsky, M. & O'Donnell-Trujillo, N. (1982). Organizational communication as cultural performance.
Communication Monographs 50(2), pp. 126-147.
Rogers, E. (1962). Diffusion of innovations. New York: The Free Press.
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York:
Doubleday.
Senge, P., Roberts, C., Ross, R., Roth, G., and Smith, B. (1999). The dance of change: The challenges to
sustaining momentum in learning organizations. New York: Doubleday.
Shapiro, S., Wasserman, I., and Gallegos, P. (2012). “Group work and dialogue: Spaces and processes for
transformative learning in relationships.” In Taylor, E., & Cranton, P. The handbook of
transformative learning: Theory, research, and practice (pp. 355-372). San Francisco, CA: Jossey-
Bass.
Suskie, L. (2018). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.
Taylor, E., & Cranton, P. (2012). The handbook of transformative learning: Theory, research, and
practice. San Francisco, CA: Jossey-Bass.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge,
Mass.: Harvard University Press.
About the Authors
Dr. Kate Oswald Wilkins is the Assistant Dean of General Education & Common Learning, Director of
Assessment, and Professor of Communication at Messiah University. She can be reached at
koswaldwilkins@messiah.edu.
Dr. Susan Donat is the Assistant Director of Curriculum and Assessment at Messiah University. She can
be reached at sdonat@messiah.edu.