Similar publications

Article
Full-text available
Many problems have been exposed in the development of self-provided power plants, and how to reform them has become an important question. The construction of a comprehensive energy system covering self-provided power plants has emerged as a new reference approach, one that can guide the development of self-provided power plants in the direction of...

Citations

... The wide adoption of CAA, usually in conjunction with the MC format, has been attributed, among other reasons, to its cost-effectiveness [1, 30-33]. Online and blended courses are probable scenarios of future education concerning the various teaching modalities because of their flexibility, ease of access, and cost-effectiveness [34-36]. ...
Chapter
Full-text available
Objective computer-assisted examinations (CAA) are considered preferable to constructed response (CR) ones because marking is done automatically, without the intervention of the examiner. This publication compares the attitudes and perceptions of a sample of engineering students towards a specific objective examination format designed to assess the students' proficiency in solving electronics problems. Data were collected using a 15-item questionnaire which included a free-text question. Overall, the students expressed a preference for the objective-type examination format. The students who self-reported facing learning difficulties (LD) were equally divided between the two examination formats. Their examination format preference was determined by the details of their learning difficulties, indicating that neither assessment format effectively solves the assessment question for these students. For the rest of the respondents, examination format preference was accompanied by opposing views regarding answering by guessing, having the opportunity to express their views, selecting instead of constructing an answer, having the opportunity to demonstrate their knowledge, and having control of the exam answers.
Keywords: Multiple choice, Constructed response, Learning disabilities, Power relations, Managerialism
... ed programs (EUA 2011, p.5) while professors are trying to accommodate professionalism (Mintzberg 1979) with the consequences of reduced funding. For example, the adoption of Computer Assisted Assessment has often been attributed, among other reasons, to its cost-effectiveness (Mandel et al. 2011; Loewenberger & Bull 2003; Bull & McKenna 2004; Topol et al. 2010). Online and blended courses have gained acceptance among researchers and university tutors, not only because of their flexibility, pedagogy and ease of access but also for their cost-effectiveness (Abdul Rahman et al. 2020; Vivitsou 2019; Lieser et al. 2018), while students' preference for traditional classroom teaching does not appear ...
Conference Paper
Full-text available
The transition from face-to-face to remote teaching during the COVID-19 health crisis has been viewed by privately owned companies, prestigious universities, international organizations, and politicians as an opportunity to promote the digital paradigm in education. A carefully crafted rhetoric bundles the reduced funding of education, the maturity of digital technologies, and the experience of remote teaching during the COVID-19 restrictions to promote the idea of rewiring and rethinking education as a synonym for change.
... The rapid growth of automated essay scoring can be observed on a significant scale, probably because such systems have the potential to produce scores more quickly and reliably. Be that as it may, it is considerably costlier (Topol, Olson & Roeber, 2010). On the other hand, Zhang (2013) points out that the noticeable shortcomings of human essay scoring can be eliminated by using the automated essay scoring systems available online. ...
... While it may be possible to implement paper-and-pencil assessment of scientific argument analysis, A3's application of computer-based technology will also potentiate more technically sound and feasible assessment of this construct (Bennett, 2002; Quellmalz & Haertel, 2004; Topol, Olson, & Roeber, 2010). For example, digital technology can support standardization of administration conditions, and scoring and analysis methods, to reduce error (Pellegrino et al., 2014; Zickar, Overton, Taylor & Harms, 1999). ...
... Implementation via computer-based assessment infrastructure and the resultant automation will also afford time-related efficiencies in test administration, scoring, and data analysis, and correspondingly reduce personnel costs (Baker & Mayer, 1999; Scalise & Gifford, 2006; Topol et al., 2010). Additionally, as a full-blown, classroom-integrated formative assessment system comprising instructional resources, A3 will eliminate the need for teachers to locate and select resources for addressing student deficiencies. ...
Research Proposal
Full-text available
Purpose: In response to problems with secondary students' argumentation skills and the vector of science education reforms, the proposed project will develop the Argument Analysis Assessment (A3), an instructionally integrated, computer-based system designed to elicit evidence of grade 6 students' skills in analyzing the elements of evidence-based scientific arguments. The web-based A3 will present text-based arguments concerning Next Generation Science Standards life science topics (e.g., natural selection) to students, who will then directly interact with the texts to identify their elements (i.e., claims, evidence, and warrants). A3 will support instructional decision-making to improve science teaching and learning.
Project Activities: The project will employ an iterative development cycle during which A3 feasibility, reliability, generalizability, and validity evidence will be collected. Based on an A3 prototype and initial item sets, we will collect A3 feasibility evidence, and preliminary reliability, validity, and item parameter evidence in Years 1 and 2. This information will then be used to refine A3. Early in Year 3, we will conduct a formal field test to gather additional reliability (and generalizability), validity, and item parameter evidence. During the end of Year 3 and the beginning of Year 4, we will pilot more items, gather additional psychometric evidence, assemble test forms, and ready the system for operational implementation.
Products: Products of the proposed work include the fully developed and validated A3 (including 3-5 horizontally equated test forms and instructional resources). Peer-reviewed publications and conference presentations will also be produced.
Setting and Sample: The research will be conducted in three (urban and suburban) Illinois school districts. The sample will include 1300 racially/ethnically, socioeconomically, and linguistically diverse grade 6 students and 30 teachers in eight middle schools.
Assessment: We will design the A3 to elicit evidence of students' overall argument analysis skills, as well as specific aspects of argument analysis, namely students' ability to identify particular argument elements and the errors they make when doing so. The A3 system will comprise six components: 1) student interface; 2) teacher interface; 3) item and form bank; 4) database; 5) scoring and analysis module; and 6) instructional resource bank.
Research Design and Methods: A3 system feasibility will be investigated through collection of evidence pertaining to student and teacher A3 usability and beliefs, attitudes, and perceptions about A3. Validity will be investigated in relation to test content, test-taker response processes, internal structure, relationships with external measures, and the consequences of testing. Internal consistency and stability reliability, generalizability, instructional sensitivity, and fairness for key student subgroups will also be investigated.
Key Measures: Key measures will include student responses to the developed A3 assessment system; researcher-developed instruments (e.g., domain knowledge test, usability checklists); content validity reviews; alternative argumentation measures; and the Commitment to Logic, Evidence, and Reasoning scale.
Data Analytic Strategy: Internal structure will be investigated via multi-level and bifactor multivariate item response theory analyses, and multi-level confirmatory factor analyses. Multi-level structural equation modeling will be used to examine relationships among A3 scores and other variables, and A3 instructional sensitivity. Cronbach's alpha, the Separation index, and conditional standard error of measurement will be used to assess reliability, and generalizability will be examined through multi-level variance components analysis.
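For reference, here is a minimal sketch of Cronbach's alpha, one of the reliability statistics named in the data analytic strategy above; the item-score matrix is a made-up placeholder rather than project data, and the helper name cronbach_alpha is our own.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a matrix with examinees in rows and items in columns."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: five students answering four dichotomously scored items.
item_scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(item_scores):.2f}")
```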
... Given these arising problems, Topol, Olson and Roeber [15] suggested that the development and implementation of performance assessments should be part of a larger assessment system with costs quite similar to those of traditional or conventional tests. Such a move could be achieved through strategic use of technology, teacher scoring, and economies of scale gained by countries working in a consortium. ...
Article
Full-text available
Abstract – The study described how performance-based assessment was used in selected higher education classrooms in Cebu City, Philippines. Purposive sampling was used to select six students from one sectarian and one non-sectarian institution of higher learning. A qualitative content analysis was used to analyze the key informants' verbatim accounts, gestures and other factors. Motivation to learn, self-regulation and willingness to work in groups were the emerging categories/themes identified in the study.
Keywords – performance-based assessment, purposive sampling, qualitative content analysis, verbatim accounts
... Furthermore, human-computer agreement can illustrate how AES engines can produce scores more reliably, quickly, and at a lower cost than human raters (e.g., Hearst, 2000; Topol, Olson, & Roeber, 2011). That is, reliability estimates have been used to refute comments about the lack of synchronicity between the ways humans versus computers evaluate a text (Ericsson & Haswell, 2006). ...
... Over the past decade considerable progress has been made in the development and application of automated text analysis techniques for scoring of written and spoken text, with much of the work being focused on essays (Shermis & Burstein, 2013; Shermis, Burstein, Higgins, & Zechner, 2010). Current systems can produce scores more quickly and reliably and at a lower cost than trained human raters (Topol, Olson, & Roeber, 2010). A growing number of studies have demonstrated close agreement between human- and machine-generated scores (Shermis & Burstein, 2013). ...
Article
Full-text available
In this study, we explored the potential for machine scoring of short written responses to the Classroom-Video-Analysis (CVA) assessment, which is designed to measure teachers' usable mathematics teaching knowledge. We created naive Bayes classifiers for CVA scales assessing three different topic areas and compared computer-generated scores to those assigned by trained raters. Using cross-validation techniques, average correlations between rater- and computer-generated total scores exceeded .85 for each assessment, providing some evidence for convergent validity of machine scores. These correlations remained moderate to large when we controlled for length of response. Machine scores exhibited internal consistency, which we view as a measure of reliability. Finally, correlations between machine scores and another measure of teacher knowledge were close in size to those observed for human scores, providing further evidence for the validity of machine scores. Findings from this study suggest that machine learning techniques hold promise for automating scoring of the CVA.
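As a rough illustration of the approach described above, the sketch below trains a naive Bayes classifier on short written responses and compares cross-validated machine scores with human-rater scores; the responses, scores, and scikit-learn pipeline are illustrative assumptions, not the CVA study's actual data or code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder short written responses with human-rater scores (0-2).
responses = [
    "Use a number line to show why three fourths is greater than two thirds.",
    "Ask the student to justify the comparison before stating the rule.",
    "The student mixed up the numerator and the denominator.",
    "The response notices the error but does not explain it.",
    "The response restates the problem without analyzing the student's work.",
    "No attention to the mathematics in the student's work.",
]
rater_scores = np.array([2, 2, 1, 1, 0, 0])

# Bag-of-words (tf-idf) features feeding a multinomial naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())

# Cross-validation: each response is scored by a model that never saw it.
machine_scores = cross_val_predict(model, responses, rater_scores, cv=2)

# Rater-machine agreement, analogous to the correlations reported above.
r, _ = pearsonr(rater_scores, machine_scores)
print("machine scores:", machine_scores)
print(f"rater-machine correlation: {r:.2f}")
```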
... Both of the consortia proposals identified opportunities, and in some instances even the necessity, for human raters, along with the benefits that could be realized if these were teachers from the participating states. The idea is attractive both for its fiscal ramifications, which presume the use of teacher professional days (Darling-Hammond, 2010; McTighe & Wiggins, 2011; Topol, Olson, & Roeber, 2010), and even more so for the anticipated positive impact on teaching and learning. Pending the release of more detailed information on training and follow-up to teachers' participation in scoring, the degree to which that impact might accrue within the context of the new assessment systems is uncertain. ...
... The greater availability of online testing platforms makes automated essay scoring (AES) systems increasingly practical to implement and feasible to incorporate into these platforms. These automated essay scoring systems often produce scores more reliably and quickly and at a lower cost than human scoring (see Hearst, 2000; Topol, Olson, & Roeber, 2011; Williamson et al., 2010). As these systems are implemented, it becomes increasingly important to develop methods to ensure that the AES is scoring effectively. ...
... " 4 See Lockheed, 2008, p. 10, on LICs. Topol et al. (2010) provide a recent review of the US efforts the costs of more complex assessments in the US, where it is claimed, in part, that improved technology can reduce costs of increased R&D. But since LICs are, for the time being, hampered by technological constraints, the increased costs of R&D will likley end up as further bottom line expenditures. ...
Article
Full-text available
Publisher: Teachers College, Columbia University, International and Transcultural Studies, PO Box 211, 525 West 120th Street, New York, NY 10027.