Article

The Official (and Unofficial) Rules for Norming Rubrics Successfully

Authors: Claire Holmes and Megan Oakleaf

... They usually have defined dimensions or characteristics of performance that can be measured (criteria). Evidence suggests that using analytic rubrics can be an effective way to examine learning outcomes related to information literacy [3,4]. ...
... Rubric norming refers to the process in which workshops are conducted, with appropriate calibration activities so as to achieve a desired level of consensus about student performance criteria and standards of judgment; in other words, so that evaluation judgments are equally applied and fit the proposed student group [5]. While faculty and librarians can gather direct evidence of student learning with rubrics, they must take appropriate steps to ensure that rubrics are applied consistently and reliably across raters [4][5][6]. ...
... The works of Hoffman and LaBonte [12], Holmes and Oakleaf [4], and Gola, Ke, Creelman, and Vaillancourt [13] speak to the importance of interdepartmental collaboration in assessment projects. We sought to develop a partnership between assessment personnel, faculty members, and librarians at our university. ...
Article
Full-text available
Objective: The study evaluated whether a modified version of the information literacy Valid Assessment of Learning in Undergraduate Education (VALUE) rubric would be useful for assessing the information literacy skills of graduate health sciences students. Methods: Through facilitated calibration workshops, an interdepartmental six-person team of librarians and faculty engaged in guided discussion about the meaning of the rubric criteria. They applied the rubric to score student work for a peer-review essay assignment in the "Information Literacy for Evidence-Based Practice" course. To determine inter-rater reliability, the raters participated in a follow-up exercise in which they independently applied the rubric to ten samples of work from a research project in the doctor of physical therapy program: the patient case report assignment. Results: For the peer-review essay, a high level of consistency in scoring was achieved for the second workshop, with statistically significant intra-class correlation coefficients above 0.8 for three criteria: "Determine the extent of evidence needed," "Use evidence effectively to accomplish a specific purpose," and "Access the needed evidence." Participants concurred that the essay prompt and rubric criteria adequately discriminated the quality of student work for the peer-review essay assignment. When raters independently scored the patient case report assignment, inter-rater agreement was low and statistically insignificant for all rubric criteria (kappa = −0.16, p > 0.05 to kappa = 0.12, p > 0.05). Conclusions: While the peer-review essay assignment lent itself well to rubric calibration, scorers had a difficult time with the patient case report. Lack of familiarity among some raters with the specifics of the patient case report assignment and subject matter might have accounted for low inter-rater reliability. When norming, it is important to hold conversations about search strategies and expectations of performance. Overall, the authors found the rubric to be appropriate for assessing information literacy skills of graduate health sciences students.
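The reliability statistics reported in this abstract (intra-class correlation coefficients for the calibration workshops and kappa for the independent scoring exercise) can be computed from any artifacts-by-raters score matrix. The sketch below is only an illustration of those calculations, not the study's analysis code: the sample scores are invented, the choice of the two-way random-effects ICC(2,1) form is an assumption, and Cohen's kappa is computed pairwise with scikit-learn.

```python
# Illustrative sketch (not the study's code): inter-rater reliability for rubric scores.
# Assumes a small scores matrix (rows = student artifacts, columns = raters) and
# the Shrout & Fleiss two-way random-effects, single-rater ICC(2,1).
import numpy as np
from sklearn.metrics import cohen_kappa_score  # requires scikit-learn

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = scores.shape                      # n targets (artifacts), k raters
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-artifact means
    col_means = scores.mean(axis=0)          # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # mean square, targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # mean square, raters
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 1-4 rubric scores for one criterion: 6 artifacts x 3 raters.
scores = np.array([
    [3, 3, 4],
    [2, 2, 2],
    [4, 4, 4],
    [1, 2, 1],
    [3, 2, 3],
    [4, 3, 4],
])

print(f"ICC(2,1): {icc_2_1(scores):.2f}")

# Cohen's kappa compares two raters at a time; weighted kappa credits near-misses.
print("kappa (raters 1 vs 2):",
      round(cohen_kappa_score(scores[:, 0], scores[:, 1], weights="quadratic"), 2))
```

Weighted kappa (here quadratic) gives partial credit for adjacent rubric levels, which is often more informative than exact-match agreement on a short rubric scale.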
... Rubric norming is a process through which faculty members are trained as raters to discuss and come to agree on their standards of judgment, thus heightening the likelihood of increased reliability in scoring. Appropriate steps for calibrating rubrics have been described by Allen (9), Suskie (12), Maki (7), and more recently Holmes and Oakleaf (13). To date, there is little research on the use of the Written Communication VALUE Rubric for graduate health sciences. ...
... Embedding analytic rubrics to assess common competencies was relatively new to this institution and represented a refinement in a developing assessment process enacted earlier in the institution's academic plan. This norming process entailed first calibrating the rubric for use by multiple raters, and then formally determining inter-rater reliability through an independent scoring exercise (9,12,13). This research represented an interdepartmental collaboration between the Office of Assessment and Institutional Research and a small number of teaching faculty in the Master's of Occupational Therapy and Doctor of Physical Therapy programs. ...
Article
Full-text available
Background: There is growing interest in the use of rubrics to assess written work. This study aimed to determine whether or not the norming of a written communication rubric improved scoring consistency among clinical faculty in a critical thinking course. The benefits of a formalized norming process are described. Methods: Faculty-raters were trained to apply the rubric to a signature assignment while participating in calibration workshops. For each rubric criterion, faculty examined whether or not heightened congruence in scoring resulted from the training. Inter-rater reliability was determined after raters independently scored de-identified essays. Results: Pre-workshop intra-class correlations (ICCs) were acceptable (i.e., >0.7) for three of five rubric criteria. Post-workshop ICCs for only two criteria were acceptable: disciplinary conventions, and sources and evidence. Rater attrition and lag-time between calibration and post-workshop activities likely contributed to reduced consistency. Discussion: The rubric was useful for discriminating writing proficiency. Norming led to revision of the signature assignment, the rubric design, and a need for writing workshops. These changes will result in better student preparation for composing evidence-informed essays. Less-rigid approaches are worthy of future exploration.
... However, examination of the scoring patterns of the two investigators revealed prominent differences in their assessment of student mastery of IL competencies (Table 1), with Investigator 1 awarding students higher median scores than Investigator 2 in five of the six frames. ICC values indicated "poor" or "fair" agreement [9] for all six frames. ...
... Holmes and Oakleaf describe steps that can be taken to norm rubrics to help ensure the validity and reliability of evaluating student skills across time and raters [9]. These steps include nominating a facilitator to lead the norming process, scoring the ...
Article
Full-text available
Objective: The authors developed a rubric for assessing undergraduate nursing research papers for information literacy skills critical to their development as researchers and health professionals. Methods: We developed a rubric mapping six American Nurses Association professional standards onto six related concepts of the Association of College & Research Libraries (ACRL) Framework for Information Literacy for Higher Education. We used this rubric to evaluate fifty student research papers and assess inter-rater reliability. Results: Students tended to score highest on the “Information Has Value” dimension and lowest on the “Scholarship as Conversation” dimension. However, we found a discrepancy between the grading patterns of the two investigators, with inter-rater reliability being “fair” or “poor” for all six rubric dimensions. Conclusions: The development of a rubric that dually assesses information literacy skills and maps relevant disciplinary competencies holds potential. This study offers a template for a rubric inspired by the ACRL Framework and outside professional standards. However, the overall low inter-rater reliability demands further calibration of the rubric. Following additional norming, this rubric can be used to help students identify the key information literacy competencies that they need in order to succeed as college students and future nurses. These skills include developing an authoritative voice, determining the scope of their information needs, and understanding the ramifications of their information choices.
... Rubric norming is a process through which faculty and staff are trained as raters to discuss and come to agreement on their standards of judgment, thus heightening the likelihood of increased reliability in scoring. Appropriate steps for calibrating rubrics have been described by Allen (3), Suskie (6), Maki (1), and more recently Holmes and Oakleaf (7). To date, there is little research on the use of the Written Communication VALUE Rubric for graduate health sciences. ...
... Our project entailed applying techniques for training faculty to calibrate use of the rubric across multiple raters (3,6,7). The research represents an interdepartmental collaboration between the Office of Assessment and Institutional Research and our teaching faculty in the Master's of Occupational Therapy and Doctor of Physical Therapy programs. IRB approval was secured to conduct the research and disseminate the results beyond the institution. ...
... Steps in calibration (after Allen, 2004; Maki, 2004; Holmes & Oakleaf, 2013): circulate a small sample of de-identified work; raters independently score the sample; facilitate discussion of the meaning of criteria and evidence (e.g., "Why did you give this one a '2'?"). ...
Conference Paper
Full-text available
Embedding analytic rubrics into coursework can be an effective method of assessing skill development. Utilizing a norming framework, we found that the Information Literacy VALUE rubric could be used to discriminate graduate level work. In this interactive session, participants will learn how to modify and calibrate rubrics, establish validity and reliability, and build interdepartmental partnerships.
... Devising rubrics and assessing with them is also a time-consuming effort, often requiring practice and adjustment (Oakleaf 2009b). Issues of inter-rater and intra-rater reliability are often associated with the use of rubrics, and special attention is needed to norm the rubric and to test for these issues (Holmes and Oakleaf, 2013). ...
Article
Full-text available
This paper presents the findings of a study carried out by librarians at Champlain College who developed a two-pronged authentic assessment approach to measure the information literacy (IL) levels and determine the information-seeking habits of students conducting research for academic purposes. Librarians devised and developed an IL rubric and a citation analysis checklist for the assessment of first-year annotated bibliography assignment papers. This paper illustrates the merits of rubric-based and citation analysis assessment measures that use authentic student coursework as a highly effective method of assessing student outcomes and information-seeking habits in academic research. Findings from this study also suggest that authentic assessment is an extremely useful tool for instruction librarians to identify areas of IL that require further instructional support. This study is of importance to librarians wishing to adopt rubric-based and citation analysis authentic methods for student outcomes assessment.
... As the assessment of the skills is for programme assessment purposes, the students are assessed as groups and not individually. According to Holmes and Oakleaf, norming is crucial to the efficacy of a rubric; without such a process, deploying a rubric may be a waste of time or its effectiveness may be severely limited [15]. Because of this, use of the CPSA rubric to assess student work involves a small team of assessors who participate in a norming process and continually work towards rater consensus. ...
Article
Full-text available
In the fields of engineering and computing, the Accreditation Board for Engineering and Technology (ABET) places much emphasis on professional skills, such as the ability to engage in lifelong learning and to function successfully on a multi-disciplinary team. The recently developed engineering professional skills assessment (EPSA) simultaneously measures ABET's non-technical skills for programme and course level assessment. The EPSA is a discussion-based performance task designed to elicit students' knowledge and application of professional skills. A research project is underway to adapt the method to the field of computing and develop the computing professional skills assessment (CPSA). The CPSA consists essentially of a scenario, a student discussion of the scenario, and a rubric to grade the discussion. This article describes the work completed during the first year of the project and the results of the first complete iteration. The results demonstrate that the CPSA can successfully measure the professional skills.
... According to Holmes and Oakleaf (2013), norming is crucial to the efficacy of a rubric; without such a process, deployment of a rubric may be a waste of time or its effectiveness may be severely limited. Because of this, use of the CPS rubric to assess student work involves a small team of assessors that participate in a norming process and continually work towards rater consensus. ...
Conference Paper
Full-text available
The Engineering Professional Skills Assessment (EPSA) is the only direct method and measurement tool in the literature to teach and simultaneously measure the ABET (Accreditation Board for Engineering and Technology) non-technical skills for both course and program level assessment. The American Society for Engineering Education award-winning EPSA is a discussion-based performance task designed to elicit students' knowledge and application of professional skills, from the understanding of professional and ethical responsibility to the impacts of technical solutions on global, economic, and societal contexts. A partnership with Zayed University in the United Arab Emirates (UAE) was formed in 2014 to adapt the EPSA to computing and the UAE context. The two-year project, funded by the Zayed University Research Incentive Fund, has developed a series of current and relevant scenarios to engage students and faculty in computing-related issues specific to the UAE and the Gulf Region. The final deliverable of the project will be the Computing Professional Skills Assessment, or CPSA, which will be made freely available to the computing and IT communities worldwide. This paper describes the first year of the project, the development of one scenario, the first complete iteration of the CPS Rubric, and preliminary results.
... Prior to assessing the discussion groups, faculty participate in a calibration (sometimes referred to as norming) process, using the consensus-estimate approach whereby evaluators must come to consensus as to what constitutes evidence of a given descriptor and score within one point difference on the scale (Stemler, 2004). Calibration is crucial to the accuracy and efficacy of the implementation of a rubric (Holmes & Oakleaf, 2013). A rating session begins with a review of the CPSA Rubric to refresh raters' memories. Raters are given a printed copy of the discussion transcript for each group. ...
Article
Full-text available
Aim/Purpose: Assessing non-technical skills is very difficult, and current approaches typically assess the skills separately. There is a need for better quality assessment of these skills at undergraduate and postgraduate levels. Background: A method has been developed for the computing discipline that assesses all six non-technical skills prescribed by ABET (Accreditation Board for Engineering and Technology). It has been shown to be a valid and reliable method for undergraduate students. Methodology: The method is based upon performance-based assessment, in which a team of students discusses and analyzes an ill-defined authentic issue over a 12-day period on a discussion board. Contribution: This is the first published method to assess all six skills simultaneously in computing, and here it has been trialed with postgraduate students. Findings: The results show that the method, though originally designed for undergraduates, can successfully be used with postgraduate students. Additionally, the postgraduate students found it to be very beneficial to their learning. Recommendations for Practitioners: This method can successfully assess non-technical skills at tertiary level in the computing discipline, and it can be adapted to other disciplines. Though designed for assessment, it has been found to be an ideal method for teaching the skills at both undergraduate and postgraduate levels. Recommendation for Researchers: Compared with other assessment approaches, this method has many advantages: it is a direct method of measurement, it is rigorous, and it assesses all skills simultaneously. Impact on Society: Proficiency in non-technical skills is critical for the development of knowledge-based economies. This method is a tool to assist in developing these skills. Future Research: Researchers can examine how the method benefits students in their context and whether there are differences between their context and the UAE context presented here. Researchers can also work on developing a rubric solely for postgraduate use, i.e., one that captures the range of levels among postgraduates.
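The consensus-estimate approach described in the excerpt above treats raters as calibrated when their scores for the same artifact fall within one point of each other on the rubric scale. A minimal sketch of that check follows; the criteria names, scores, and the configurable spread threshold are hypothetical and are not taken from the CPSA studies.

```python
# Hypothetical sketch of the consensus-estimate check described above:
# raters are considered calibrated on a criterion when their scores for a
# given artifact fall within one point of each other on the rubric scale.
from typing import Dict, List

def flag_for_discussion(scores: Dict[str, List[int]], max_spread: int = 1) -> List[str]:
    """Return the rubric criteria whose rater scores exceed the allowed spread."""
    needs_discussion = []
    for criterion, ratings in scores.items():
        spread = max(ratings) - min(ratings)
        if spread > max_spread:
            needs_discussion.append(f"{criterion} (scores {ratings}, spread {spread})")
    return needs_discussion

# Illustrative scores from three raters for one discussion transcript (1-5 scale).
artifact_scores = {
    "Teamwork": [4, 4, 3],
    "Ethical reasoning": [2, 4, 3],   # spread of 2: revisit descriptors together
    "Lifelong learning": [3, 3, 3],
    "Communication": [5, 4, 4],
}

for item in flag_for_discussion(artifact_scores):
    print("Discuss:", item)
```

Criteria flagged this way become the agenda for the next norming conversation, which is the purpose of the facilitated calibration sessions described above.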
... Claire Holmes and Megan Oakleaf's article, "The Official (and Unofficial) Rules for Norming Rubrics Successfully," was very useful to the task force that developed the rubric, as well as to the subsequent promotion policy committees that evaluated the dossiers in the cycles following approval of the policy [6]. A good understanding of the "official" and "unofficial" rules helped to decrease tension when there was disagreement about scoring. Even though every attempt had been made to make the policy and the rubric as objective as possible and to create a committee with as broad a perspective as possible, there are still some activities where shared definitions are critical. ...
... Rubric-based methods of evaluation have been advocated for and adopted by numerous LIS researchers as an approach to assessing student learning, particularly when accompanied by a rigorous norming process (Fagerheim and Shrode 2009; Oakleaf 2009; Knight 2006). Though requiring significant time and practice to properly norm and validate, rubrics can facilitate the evaluation of a wide range of IL instruction scenarios, from single sessions to semester-long credit courses (Holmes and Oakleaf 2013). One major venture devoted to exploring and encouraging the use of rubrics for IL, titled Rubric Assessment of Information Literacy Skills (RAILS), is a grant-funded research project providing access to a number of rubrics developed by academic libraries in the United States (RAILS 2014). ...
Article
Full-text available
Seeking to introduce first-year students to library resources and services in an engaging way, an orientation titled The Amazing Library Race (ALR) was developed and implemented at a university library. Informed by the pedagogy of problem-based learning, the ALR asks students to complete challenges regarding different departments and services. This study assesses this initiative’s success using observational and artifact-based data, addressing the challenging prospect of evaluating the impact of library orientation sessions. Two rubrics were developed to measure student involvement and student learning comprehension. More than 14 hours of in-class observations were used to track engagement, and 64 artifacts of student learning were collected and coded to evaluate learning comprehension. After coding, interrater reliability was assessed using the intraclass correlation coefficient to establish the validity of the ratings. This paper will outline these methodologies, present the results of the data analysis, and discuss the possibilities and difficulties of measuring student engagement in information literacy instruction centred upon active learning.
... Since then, Holmes and Oakleaf have published a guide to rubric norming, the process through which those who will be responsible for applying the rubric are trained to use it consistently [92]. When properly developed and used, rubrics have very broad applicability and can provide rich, reliable feedback to students and data for instructors. Like any instrument, rubrics have their own set of strengths and weaknesses. ...
Article
There is a well-established need for academic libraries to demonstrate their impact on student learning, particularly through the application of measurable outcomes in information literacy instruction (ILI). Recent literature is replete with articles both outlining the importance of well-designed assessment as an integral part of ILI and providing examples of the methods being used at particular institutions. This review synthesizes the theoretical and practical literature on ILI assessment in an effort to answer three questions: What do we know about assessment methods and what general recommendations exist? What assessment methods are academic librarians actually using? How does professional practice compare to existing recommendations?.
... Many of the printed AGDA Award compendiums simply state that the judging process is based on the 'Olympic model' without giving specific details on what judges hold in high esteem when scoring the work (AGDA, 2012). A clearly communicated rubric, or 'a coherent set of criteria', can ensure a clear path to success and mastery (Brookhart, 2013; Kaplan & Owings, 2013; Holmes & Oakleaf, 2013). ...
Conference Paper
Full-text available
This article measures and evaluates the visibility of women in Australian graphic design, through their presence and experiences in the AGDA (Australian Graphic Design Association) Awards. Positioning gender equity as a critical value in the graphic design industry, it also establishes the AGDA Awards as an integral way for designers to gain this visibility as authors of their work. This paper hypothesises that women have low visibility in comparison to men, and that actions can be taken to remedy this gendered anonymity. Through collating the gender of every winner and juror in the AGDA Awards, this research demonstrates that levels of gender equity in the industry can be evaluated objectively. Similarly, it shows that identifying issues impacting the visibility of women on award platforms, felt by women in established design careers, can provide insights that lead to improving gender equity in the industry. Building on methodologies inspired by Marie Neurath’s contribution to the ‘Isotype Transformer’ process, this research analyzes, selects, orders and makes visible the AGDA Award data set. The findings that surface during this process conclusively show that women are – on average and consistently – only 25 per cent of winners and judges in the AGDA Awards. However, through an evaluation of these shortfalls, alongside interviews with women deemed significant contributors to Australian graphic design by their peers, the findings show how equitable visibility can be achieved through a series of measured and purposeful initiatives.
... " Following official rules #2, #3, and #4, each of us used the rubric to independently grade two Annotated Bibliographies. We thoroughly discussed how and why we assigned the labels to each category for those two papers and then made minor alterations and tweaks to the wording of the rubric so that we were interpreting the rubric in the same way (Holmes & Oakleaf, 2013). We then finished reviewing all papers independently and reconvened later to assess the scores for each paper. ...
Article
Ensuring quality library instruction in an online-exclusive First Year Writing (FYW) course is important and challenging. Assessing what the students learned and how is equally important. The authors collaborate and co-teach the information literacy portion of an online-exclusive second semester FYW course at the University of Tennessee at Chattanooga. To teach information literacy skills, the authors developed tutorial videos, worksheets, and a Librarian AMA (Ask Me Anything) discussion forum for the students. The authors completed formative and summative assessments to measure the efficacy of the activities. This article explores their assessment, findings, and recommendations.
Article
Full-text available
Purpose – The purpose of this paper is to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010). Design/methodology/approach – Percentages of agreement and intraclass correlation were used to describe interrater reliability. For the determination of construct validity, factor analysis and reliability analysis were used. Criterion validity was calculated with Pearson correlations. Findings – In the described case, the Scoring Rubric for Information Literacy appears to be a reliable and valid instrument for the assessment of information literate performance. Originality/value – Reliability and validity are prerequisites to recommend a rubric for application. The results confirm that this Scoring Rubric for Information Literacy can be used in courses in higher education, not only for assessment purposes but also to foster learning.
Article
Full-text available
The authors conducted a performance-based assessment of information literacy to determine if students in a first-year experience course were finding relevant sources, using evidence from sources effectively, and attributing sources correctly. A modified AAC&U VALUE rubric was applied to 154 student research papers collected in fall 2015 and fall 2016. Study results indicate that students in the sample were able to find relevant and appropriate sources for their research papers; however, they were not using evidence to effectively support an argument or attributing sources correctly. The authors discuss changes to the library instruction curriculum informed by the assessment results.
Conference Paper
Full-text available
Learning Outcomes:
• Describe how to involve faculty in initiating systematic assessment efforts
• Relay benefits of implementing analytic rubrics
• Discuss faculty experience in assessing student learning outcomes and adapting to new technology
Article
The increasing popularity of rubrics to assess student learning outcomes in the information literacy classroom is evident within library and information science literature. However, there is a lack of research detailing scientific evaluation of these assessment instruments to determine their reliability and validity. The goal of this study was to use two common measurement methods to determine the content validity and internal consistency reliability of a citation rubric developed by the researcher. Results showed the rubric needed modification in order to improve reliability and validity. Changes were made and the updated rubric will be used in the classroom in a future semester.
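This abstract does not name the specific statistics used, so the following is only a plausible illustration: Cronbach's alpha is a common choice for estimating the internal consistency reliability of a multi-criterion rubric, and the scores below are invented for the example.

```python
# Illustrative only: Cronbach's alpha as one common internal-consistency measure
# for a multi-criterion rubric. The scores below are invented; the cited study
# does not publish its data or name the exact statistic it used.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = students, columns = rubric criteria ('items')."""
    k = item_scores.shape[1]                         # number of criteria
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each criterion
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical citation-rubric scores: 8 students x 4 criteria, each scored 0-3.
scores = np.array([
    [3, 2, 3, 3],
    [1, 1, 2, 1],
    [2, 2, 2, 3],
    [3, 3, 3, 2],
    [0, 1, 1, 1],
    [2, 3, 2, 2],
    [1, 2, 1, 1],
    [3, 3, 3, 3],
])

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```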
Article
This article recounts our experience developing an embedded librarian model that evolved into a fully integrated learning community, pairing online composition with an online information literacy credit-bearing course. Our assessment of student success measures indicates that the positive trends we found under the embedded librarian program have continued to improve under the formal learning community model. We discuss the results of our qualitative and quantitative measures of the program's impact on student success and share our recommendations for further developments.
Article
Rubric assessment of information literacy is an important tool for librarians seeking to show evidence of student learning. The authors, who collaborated on the Rubric Assessment of Information Literacy Skills (RAILS) research project, draw from their shared experience to present practical recommendations for implementing rubric assessment in a variety of institutional contexts. These recommendations focus on four areas: (1) building successful collaborative relationships, (2) developing assignments, (3) creating and using rubrics, and (4) using assessment results to improve instruction and assessment practices. Recommendations are discussed in detail and include institutional examples of emerging practices that can be adapted for local use.
Article
Full-text available
Information literacy (IL) skills are essential for adult learners in higher education, especially those unfamiliar with information systems. Citing a lack of literature assessing such skills in adult learners, this article examines the IL abilities of adult learners in an IL course. Using a rubric and annotated bibliographies from study participants, the authors rank the IL abilities of adult students. Similar to studies assessing IL skills in traditional undergraduates, the authors found that adult students struggled to articulate their evaluations of sources. The authors make recommendations for improving IL instruction for adults and suggest future research.
Article
This paper details the design and implementation of an initial baseline assessment of information literacy skills at the University of Baltimore in Maryland. To provide practical advice and experience for a novice audience, the authors discuss how they approached the design and implementation of the study through the use of a rubric-based authentic assessment, employing a pretest and posttest delivered through a course management system. They also present lessons learned through the process of assessment focused on norming, test design and delivery, and the importance of institutional support and flexibility.
Article
Purpose – There is considerable agreement around the foundation skills required by employers that will enable graduates to integrate and devise promising solutions for the challenges faced by knowledge and globalized societies. These are life skills (communication skills, teamwork and leadership skills, language skills in reading and writing, and information literacy), transferable skills (such as problem-solving, including critical thinking, creativity and quantitative reasoning) and technology skills (the ability to search for knowledge and build upon it). Foundation skills, however, are recognized to be difficult both to teach and to assess. This paper aims to describe a performance assessment method to assess and measure these skills in a uniquely concurrent way – the General Education Foundation Skills Assessment (GEFSA). Design/methodology/approach – The GEFSA framework comprises a scenario/case describing an unresolved contemporary issue, which engages student groups in online discussions, and a task-specific analytic rubric to concurrently assess the extent to which students have attained the targeted foundation skills. The method was applied in three semesters during 2016 and 2017. The participants were non-native English-speaking students in a General Education program at a university in the UAE. Findings – Results obtained from the rubric for each foundation skill were analyzed and interpreted to ensure robustness of method and tool usability and reliability, provide insight into, and commentary on, the respective skill attainment levels, and assist in establishing realistic target ranges for General Education student skill attainment. The results showed that the method is valid and provides valuable data for curriculum development. Originality/value – This is the first method in the published literature that directly assesses the foundation skills for General Education students simultaneously, thus providing educators with valuable data on the skill level of the students. Additionally, repeated use of the method is a valuable way of teaching skills.
Article
This chapter focuses on the role of institutional research (IR) in coordinating a pilot of the Degree Qualifications Profile (DQP) for assessment in a new master's degree program and provides guidance on frameworks that can be used, important technical considerations, and ways IR can be involved to advance the use of rubrics as a primary program assessment tool.
Thesis
Full-text available
Graphic designers are generally invisible as the authors of their own work. A deliberate effort to self-promote must be made in order for them to be seen and acknowledged. The collaborative nature of design, associations with clients, and the involvement of production teams further hinder an individual graphic designer’s visible authorship. However, gender also has a major influence on the invisibility of women in the history of this industry. Historically, the most celebrated practising graphic designers in Australia have been men, as evidenced by their overwhelming presence in books and on award platforms. My research has explored and addressed the key factors that cause this gendered inequity, including the representation and understanding of the name ‘graphic design’, the biases in historical narratives, and the disparate understandings of ‘success’ and ‘significant contributions’. Applied research, in the form of four multi-model communication design projects, has been conducted to explore and address these issues. These are the Postcard Project (project one), the Slushie Installation (project two), the Anonymity Exhibition (project three), and the #afFEMatjon Website (project four). Using the theoretical lenses of feminism and building on existing literature, I have validated my findings through the use of surveys, interviews, and the collation of data sets. Each of these major projects and accompanying methodologies quantifies the visibility of women in Australian graphic design. In addition, this project advocates for women’s visibility on award platforms, in historical narratives, and in classrooms. The project collects, analyses, and validates the individual experiences of women in the graphic design industry. Comparisons are made regarding these findings in relation to academic and professional contexts, such as publishing, advertising, and within studios. New knowledge and insights are embodied in the creation of the designed outcomes. These include two distinct frameworks aimed at improving processes of power: the Framework for Gender Equitable Award Platforms and the Framework for Gender Equitable Histories. In addition, the Autonomous Comfort Zone Survey is a tool that produced new primary research regarding the experience of individual Australian women graphic designers. These outcomes, plus the aforementioned four major projects, have been disseminated through many traditional and non-traditional channels. Each of these projects has been measured, using alt-metrics, to determine the exposure, reach, and impact of the visibility they have created for women in Australian graphic design. This data has been comparatively mapped to demonstrate the large number of people exposed to the findings. It has also been qualitatively analysed to reveal the positive change that these outcomes have begun to make both within and beyond Australian graphic design.
Book
Full-text available
From the Preface: The main theme of this year’s ECIL is ‘Information literacy in the inclusive society’. Social, economic and political inclusion, participation, democracy and, stemming from these imperatives, empowerment: information literacy is deeply relevant here. The Alexandria Proclamation recognised this back in 2005, with its view that information literacy “empowers people in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals”. This view remains pertinent where a large proportion of the world’s population remains sidelined, disenfranchised or excluded, and also where populism and demagoguery undermine rational discourse. Information literacy fosters inclusivity by equipping citizens to use information as a means of affirming their stake in society, challenging mis-information and developing critical attitudes to prevailing norms, particularly where these stand in the way of greater well-being and emancipation.
Conference Paper
Full-text available
Since there is a lack of IL skills instruction in the education system of Iran, especially in primary schools, this research developed IL skills lesson plans integrated into the 6th grade Iranian primary science curriculum. The study was conducted using a Delphi method included in the instructional design. Using a snowball sampling method, a sample of twelve 6th grade teachers from public primary schools in Ahwaz was chosen for the expert panel. The Delphi process stopped after consensus and stability of results were achieved in the third round. In sum, the developed and confirmed unit plan for the 11th and 12th units of the Iranian 6th grade science curriculum, integrated with the Big6 model, and the five lessons in context can be used as an initial framework for developing IL skills lesson plans in other curricula and subjects in order to upgrade and improve the IL level of Iranian primary school students.
Article
Full-text available
Purpose – The aim of this paper is to present the Information Literacy Instruction Assessment Cycle (ILIAC), to describe the seven stages of the ILIAC, and to offer an extended example that demonstrates how the ILIAC increases librarian instructional abilities and improves student information literacy skills. Design/methodology/approach – Employing survey design methodology, the researcher and participants use a rubric to code artifacts of student learning into pre-set rubric categories. These categories are assigned point values and statistically analyzed to evaluate students and examine interrater reliability and validity. Findings – By engaging in the ILIAC, librarians gain important data about the information behavior of students and a greater understanding of student strengths and weaknesses. The ILIAC encourages librarians to articulate learning outcomes clearly, analyze them meaningfully, celebrate learning achievements, and diagnose problem areas. In short, the ILIAC results in improved student learning and increased librarian instructional skills. In this study, the ILIAC improves students' ability to evaluate web sites for authority. Research limitations/implications – The research focuses on librarians, instructors, and students at one institution. As a result, specific findings are not necessarily generalizable to those at other universities. Practical implications – Academic librarians throughout higher education struggle to demonstrate the impact of information literacy instruction on student learning and development. The ILIAC provides a much needed conceptual framework to guide information literacy assessment efforts. Originality/value – The paper applies the assessment cycle and "assessment for learning" theory to information literacy instruction. The ILIAC provides a model for future information literacy assessment projects. It also enables librarians to demonstrate, document, and increase the impact of information literacy instruction on student learning and development.
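The coding step described in the ILIAC abstract above (rating each artifact into a pre-set rubric category that is then assigned a point value and analyzed) can be illustrated with a short script. Everything in the sketch is hypothetical: the category labels, point values, and benchmark are not drawn from the ILIAC article.

```python
# Hypothetical sketch of the coding-and-scoring step described above: each artifact
# is coded into a pre-set rubric category, categories map to points, and simple
# summaries highlight strengths and problem areas. Labels and values are invented.
from statistics import mean

POINTS = {"beginning": 1, "developing": 2, "proficient": 3, "exemplary": 4}
BENCHMARK = 3  # hypothetical "proficient or better" target

# Codes assigned to ten artifacts for one outcome (e.g., evaluating web authority).
codes = ["developing", "proficient", "beginning", "proficient", "exemplary",
         "developing", "proficient", "developing", "proficient", "beginning"]

scores = [POINTS[c] for c in codes]
print("Mean score:", round(mean(scores), 2))
print("At or above benchmark:",
      f"{sum(s >= BENCHMARK for s in scores)}/{len(scores)} artifacts")
```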
Gilchrist, D., & Oakleaf, M. (2012). An essential partner: The librarian's role in student learning assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/LibraryLO_000.pdf
Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus Publishing, LLC.
Oakleaf, M. (2006). Assessing information literacy skills: A rubric approach (Doctoral dissertation). Retrieved from http://meganoakleaf.info/oakleafdissertation.pdf
Oakleaf, M. (2009a). The information literacy instruction assessment cycle: A guide for increasing student learning and improving librarian instructional skills. Journal of Documentation, 65(4), 539–560. http://dx.doi.org/10.1108/00220410910970249