Article

Using WebClass for Reduced Redundancy Testing


Abstract

This article examines and reviews two types of reduced redundancy tests, namely cloze tests and C-tests, which involve completing a text from which certain units (whole words or their parts) have been removed. Assessment instruments of this kind are typically used to measure overall language proficiency, for example for the purpose of making placement decisions. The paper also discusses the development of these two measures of reduced redundancy with the help of the WebClass testing system.
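
To make the two formats concrete, the following minimal sketch (Python; the function names and the sample sentence are illustrative only and are not taken from the WebClass system) applies a fixed-ratio cloze deletion and a simplified C-test deletion to the same sentence.

def cloze_deletion(text, n=7):
    """Fixed-ratio cloze: replace every n-th word with a blank."""
    words = text.split()
    return " ".join("_____" if (i + 1) % n == 0 else w
                    for i, w in enumerate(words))

def c_test_deletion(text):
    """Simplified rule of two: delete the second half of every second word.

    A real C-test starts from the second word of the second sentence and
    stops after a fixed number of items per text; this sketch mutilates
    a single sentence only."""
    out = []
    for i, w in enumerate(text.split()):
        if i % 2 == 1 and len(w) > 1:        # every second word
            keep = (len(w) + 1) // 2         # keep the first half
            out.append(w[:keep] + "_" * (len(w) - keep))
        else:
            out.append(w)
    return " ".join(out)

sample = "Reduced redundancy tests ask learners to restore deleted parts of a text."
print(cloze_deletion(sample))
print(c_test_deletion(sample))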

References
Article
Full-text available
Foreign language departments with the goal of advanced literacy require optimizing student learning, especially at the initial stages of the program. Current practices for admission and placement mainly rely on students’ grades from previous studies, which may be the main reason why intra-group language proficiency often varies dramatically. One essential step for creating an environment that enables students to progress according to their skill level is the development of assessment procedures for admission and placement. Such assessment must prominently include proficiency in the target language. This article promotes the incorporation of an automated C-test into gateway and placement procedures as an instrument that ranks candidates according to general language proficiency. It starts with a review of the literature on aspects of validity of the C-Test construct and contains an outline of the functional design of such an automated C-Test. The article highlights the economic benefits of an automated C-Test platform and the central role of proficiency-based student placement for the success of programs aiming to develop advanced literacy in a foreign language. The findings indicate that developing and using the outlined C-Test platform has the potential to increase student achievement in advanced foreign language instruction significantly.
Article
Full-text available
The C-Test as a tool for assessing language competence has been in existence for nearly 40 years, having been designed by Professors Klein-Braley and Raatz for implementation in German and English. Much research has been conducted over the ensuing years, particularly with regard to reliability and construct validity, for which it is reported to perform reliably and in multiple languages. The author engaged in C-Test research in 1995 focusing on concurrent, predictive and face validity. Through this research, the author developed an appreciation for the C-Test assessment process, particularly with the multiple cognitive and linguistic test-taking strategies required. When digital technologies became accessible, versatile and societally integrated, the author believed the C-Test would function well in this environment. This conviction prompted a series of investigations into the development and assessment of a digital C-Test design to be utilised in multiple linguistic settings. This paper describes the protracted design process, concluding with the publication of mobile apps.
Article
Full-text available
Although research on the cloze test has offered differing evidence regarding what language abilities it measures, there is a general consensus among researchers that not all the deletions in a given cloze passage measure exactly the same abilities. An important issue for test developers, therefore, is the extent to which it is possible to design cloze tests that measure specific abilities. Two cloze tests were prepared from the same text. In one, different types of deletions were made according to the range of context required for closure, while in the other a fixed-ratio deletion procedure was followed. These tests were administered to 910 university and pre-university students, including both native and non-native speakers of English, with approximately half assigned at random to take the fixed-ratio test and the other half taking the rationally deleted test. While both tests were equally reliable and had equal criterion validity, the fixed-ratio test was significantly more difficult. Analyses of responses to different types of deletions suggest that the difficulty of cloze items is a function of the range of syntactic and discourse context required for closure. The study also provides practical and empirically supported criteria for making rational deletions and suggests that cloze tests can be designed to measure a range of abilities.
Article
Full-text available
Although there is considerable evidence supporting the predictive validity of cloze tests, recent research into the construct validity of cloze tests has produced differing results. Chihara et al. (1977) concluded that cloze tests are sensitive to discourse constraints across sentences, while Alderson (1979) concluded that cloze tests measure only lower-order skills. Anderson (1980) has concluded that cloze tests measure sensitivity to both cohesive relationships and sentence-level syntax. Factor analytic studies (Weaver and Kingston 1963, Ohnmacht et al. 1970) have identified several factors in cloze and other language tests and suggest that cloze deletions should be based on the linguistic and coherence structures of language. In the present study, the trait structure of a cloze test was examined using confirmatory factor analysis. A cloze passage with rationally selected deletions of syntactic and cohesive items was constructed and given to two groups of non-native English speaking students entering the University of Illinois. A trait structure with three specific traits and one general trait provided the best explanation of the data. The results suggest that a modified cloze passage, using rational deletions, is capable of measuring both syntactic and discourse level relationships in a text, and that this advantage may outweigh considerations of reduced redundancy which underlie random deletion procedures.
Article
Full-text available
Four categories of multiple-choice (MC) cloze items were examined in relation to the TOEFL. The object was to assess the factor structure of the TOEFL and the potential of distinguishing MC cloze items aimed at reading comprehension (defined in terms of textual constraints ranging across clauses) as contrasted with knowledge of grammar (short-range surface syntax and morphology) or vocabulary. Since it is impossible in principle to distinguish such skills absolutely at any given point in a text, a compromise was to identify items whose difficulty seemed to be based primarily on one level of processing and secondarily on another. The pivotal category was reading comprehension. In all, 50 MC cloze items over three texts were used in four subsets: ones for which reading comprehension seemed to be the primary source of difficulty, and (1) grammar secondary or (2) vocabulary secondary (nine and 14 items respectively); and ones for which either (3) grammar or (4) vocabulary was the main source of difficulty and reading comprehension secondary (15 and 12 items). Results were analysed separately for each of nine language groups, with a total of 11,290 subjects in all. Factor analysis of the TOEFL suggested two factors related to (a) the Listening Comprehension section, and (b) the nonlistening subsections. The data did not clearly reveal the expected differential relations between the MC cloze categories and subsections of the TOEFL, though tendencies were apparent and analyses on the whole revealed substantial reliability and validity for the MC cloze items.
Chapter
Full-text available
While it is easy, with the help of such examples, to understand the term and get a feeling for the concept ‘hypercharacterization’, a precise definition is not so easy. The concept has, in fact, never been formally defined. Most of the time it has been taken for granted, and often it has been explicitly equated with neighbouring concepts. The concepts against which it must be delimited include pleonasm, tautology, redundancy, reinforcement and hypercorrection. Some of these are well established in certain scientific disciplines, others are no clearer than hypercharacterisation itself. I will therefore 1. start by defining pleonasm and delimiting it against neighbouring concepts;
Article
Full-text available
The purpose of this study was to explore whether or not the C-test, as it is claimed, serves as a valid operationalization of the reduced redundancy principle. In so doing, an attempt was made to investigate the frequency and type of micro- and macro-level cues that EFL learners employ to restore the mutilations in the C-test. A C-test comprising five texts was administered concurrently with the TOEFL to 32 engineering students taking an English for Science and Technology course. Retrospective verbal protocols of the test takers were then collected. Analysis of the protocols indicated that there exist four major types of cues with varying frequencies: (1) automatic processing; (2) lexical adjacency; (3) sentential cues; and (4) top-down cues. This finding shows that, with a certain degree of latitude, C-testing is a reliable and valid procedure that mirrors the reduced redundancy principle.
Article
We examined the relationship between C-test and criterion-test scores to better understand the C-test construct. We meta-analyzed Pearson’s r coefficients and computed summary effects for subgroups, where criterion construct was the subgrouping variable. We summarized the evidence from 239 C-test studies, published in English. Any correlational study about foreign-language education was eligible. If a C-test was at least one passage in length, its information was recorded. Studies that did not include statistical information needed for meta-analysis were excluded, but other information was used to summarize aspects of the C-test domain. Summary effects indicate that C-test scores correlate most strongly with general language proficiency scores. However, there were too few studies in some criterion subgroups for analysis. Confidence intervals for subgroup summary effects overlapped across meta-analyses. Additional work is needed to understand the C-test construct. We recommend creating a C-test meta-analysis bank and adding information from study reports not published in English to create a cumulative C-test domain (Cummings, 2014). Our study involves the most comprehensive study-retrieval process to date; there was little evidence of publication bias in results.
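
As background on the pooling step mentioned above, Pearson correlations are commonly combined after Fisher's z transformation, weighting each study by n - 3; the sketch below (Python, with invented numbers, showing the standard fixed-effect computation rather than the authors' exact weighting scheme) illustrates the idea.

import math

def pool_correlations(studies):
    """Pool Pearson's r values via Fisher's z (fixed-effect model).

    studies: list of (r, n) pairs. Each r is transformed with atanh,
    weighted by n - 3 (the inverse of the variance of z), averaged,
    and transformed back with tanh."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)

# Hypothetical C-test / criterion-test correlations as (r, sample size)
example = [(0.72, 120), (0.65, 85), (0.80, 240)]
print(round(pool_correlations(example), 3))
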
Book
This book is about developing language tests with the aid of web-based technology. The technology is represented by WebClass (webclass.co), a learning management system (LMS) that I have been developing and using in blended environments for the last several years. The WebClass platform started off as a simple online system for administering language tests consisting mostly of multiple-choice and gap-filling items. At present, it includes two main modules, Materials and Tests, which can be used to author, manage, and deliver learning materials and assessments. Most importantly perhaps, the testing module can be utilized for the entire process of test development, which includes test and item analysis.
Article
Achievement of advanced literacy as a goal of foreign language (FL) study within the available amount of time requires that FL departments construct a well-articulated program and optimize student learning at each stage of the curriculum. One essential element of such optimization is the development of assessment procedures to place students into courses that enable successful fostering of their abilities. Ideally, such assessment practices should incorporate aspects of textual literacy, including a well-motivated link between meaning-oriented textual semantics and the required lexicogrammatical features. This article reports on the revision and validity evaluation of a C-test as one component of the placement test in the Georgetown University German program and as an instrument that includes accounting for textual literacy. We begin with the reasons for the test revision and report on the development and evaluation of the new C-test texts, which enabled better alignment with the curriculum and demonstrate a fitting range of reliable distinctions among examinees of broadly differing abilities. The article concludes by highlighting the central role of contextually relevant assessment practices for the success of a program that aims to develop advanced literacy in a FL and the lessons learned throughout the evaluation process.
Article
Cloze tests are valid, reliable second language proficiency tests. This paper discusses the construction, administration, scoring and interpretation of cloze tests of overall language proficiency. Other uses of the cloze in ESL are mentioned. Finally an explanation of the cognitive processes in doing cloze tasks is offered.
Article
The cloze test has received considerable attention in recent years from testers and teachers of English as a foreign language, and is becoming more widely used in language tests, both in the classroom and in standardized tests. However, most of the research has been carried out with native speakers of English and the results do not produce clear-cut evidence that the cloze test is a valid test of reading comprehension. The article reports on a series of experiments carried out on the cloze procedure where the variables of text difficulty, scoring procedure and deletion frequency were systematically varied and that variation examined for its effect on the relationship of the cloze test to measures of proficiency in English as a Foreign Language. Previous assumptions about what the cloze procedure tests are questioned and it is suggested that cloze tests are not suitable tests of higher-order language skills, but can provide a measure of lower-order core proficiency. Testers and teachers should not assume that the procedure will produce automatically valid tests of proficiency in English as a Foreign Language.
Article
Studies the factor validity of cloze tests as measures of comprehension ability by analyzing the principal components of the correlations among nine cloze tests and seven multiple-choice comprehension tests, each designed to measure a different comprehension skill. The tests were administered to 150 students enrolled in grades four, five, and six. Only one factor exhibited an eigenvalue greater than one, and that factor accounted for 77 per cent of the variation in the correlation matrix. The loadings of all tests on this factor approached the maximum correlations possible for those tests. These data were interpreted as providing little grounds for claiming that cloze tests measure anything other than what has commonly been labeled reading comprehension skills.
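
The "eigenvalue greater than one" retention criterion used in the study is easy to reproduce from an inter-test correlation matrix, as in the sketch below (Python/NumPy; the simulated matrix is a placeholder, not the study's data).

import numpy as np

def first_component_share(corr):
    """Return eigenvalues of a correlation matrix (descending), the
    eigenvalues retained under the greater-than-one criterion, and the
    share of variance carried by the first principal component."""
    eigvals = np.linalg.eigvalsh(corr)[::-1]
    retained = eigvals[eigvals > 1.0]
    return eigvals, retained, eigvals[0] / eigvals.sum()

# Placeholder data: 150 examinees, 16 tests sharing one common factor,
# standing in for the nine cloze and seven multiple-choice tests.
rng = np.random.default_rng(0)
scores = rng.normal(size=(150, 16)) + rng.normal(size=(150, 1))
corr = np.corrcoef(scores, rowvar=False)
eigvals, retained, share = first_component_share(corr)
print(len(retained), round(float(share), 2))
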
Article
The application of the rule of two for constructing C-tests produces two sorts of test items. Many items show acceptable facility and discrimination values, but a sizeable number of them are either extremely easy or extremely difficult to fill in. To investigate whether this defect can be avoided, a C-test with 5 texts and 126 items was constructed and tried with 146 Iranian English majors. On the basis of an item analysis, a tailored C-test with 100 items was developed and tried with 60 other subjects. The results of the study showed that no gains were made with the classical item analysis.
Article
Several recent studies have suggested C-testing to be a highly valid and reliable measure of general language proficiency avoiding the problems with cloze testing. This study investigates the feasibility of the procedure with native and non-native speakers of English. Results of 20 C-tests constructed with different ratio and/or deletion start are analysed and discussed. The findings of the study refute the claims on C-testing. The implications of the findings are also discussed.
Article
What C-tests actually measure has been an issue of debate for many years. In the present research, the authors examined the hypothesis that C-tests measure general language proficiency. A total of 843 participants from four independent samples took a German C-test along with the TestDaF (Test of German as a Foreign Language). Rasch measurement modelling and confirmatory factor analysis provided clear evidence that the C-test in question was a highly reliable, unidimensional instrument, which measured the same general dimension as the four TestDaF sections: reading, listening, writing and speaking. Moreover, the authors showed that language proficiency was divisible into more specific constructs and that examinee proficiency level differentially influenced C-test performance. The findings have implications for the multicomponentiality and fluidity of the C-test measurement construct.
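
For reference, the dichotomous Rasch model underlying such analyses expresses the probability that examinee j solves item i in terms of person ability and item difficulty (standard formulation; whole C-test texts are often treated as polytomous super-items, which this simple form does not show):

P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}
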
Article
This study investigates the characteristics of natural cloze tests. Natural cloze tests are defined here as cloze procedures developed without intercession based on the test developer's knowledge and intuitions about passage difficulty, suitable topics, etc. (i.e., the criteria which are often used to select a cloze passage appropriate for a particular group of students). Fifty reading passages were randomly selected from an American public library. Each passage was made into a 30-item cloze test (every twelfth word deletion). The subjects were 2298 EFL students from 18 colleges and universities in Japan. Each student completed one of the 30-item cloze tests. The 50 cloze tests were randomly administered across all of the subjects so that any variations in statistical characteristics could be assumed to be due to other than sampling differences. The students also took a 10-item cloze test that was common to all students. The 50 cloze tests were compared in terms of descriptive, reliability and validity testing characteristics. The results indicate that natural cloze tests are not necessarily well-centred, reliable and valid. A typical natural cloze is described, but considerable variations were also found in the characteristics of these cloze tests (with many of them having skewed distributions and/or poor reliability). The implications for cloze test construction and use are discussed.
Article
Reactions to tests can have effects on test scores, motivation and relationships, and these reactions can conceivably be affected by various aspects of tests and the testing situation. In the present study, questionnaires were used to elicit reactions on various dimensions from Italian and Spanish teenagers to test items used as part of placement procedures. The data indicated that a C-test was the most negatively rated on most dimensions by all groups, and that this reaction seemed to relate most strongly to the perceived difficulty of the test. Reactions of lower-scoring students were more negative than those of higher-scoring, although the differences were not so great on the more evaluative dimensions. No gender differences were identified, and there were few differences between reactions of Italian and Spanish students, or the most and least nervous groups. Data on the effects of a longer time limit and first language instructions was inconclusive.
Article
Cloze tests and C-Tests are both tests of reduced redundancy based on the theory of general language proficiency. This paper presents the theory and shows first why cloze tests are unsatisfactory operationalizations of the theory and the ways in which C-Tests are technically superior. It then reports the various investigations which have been performed in the construct validation of C-Tests and discusses their relevance to the original theory. Four hypotheses are set up relating to linearity, parallelism, prediction of difficulty and processing strategies. The results obtained with the C-Tests support these hypotheses.
Article
Considerable evidence suggests that cloze techniques can create tests which measure aspects of students' second language competence. However, it remains unclear how variations in the cloze procedure affect measurement. This study compared results obtained from cloze passages constructed from the same text using four different procedures: fixed-ratio, rational, (rational) multiple choice, and C-test. The four procedures produced tests similar in reliabilities but distinct in levels of difficulty and patterns of correlations with other tests. These results are discussed in view of theoretically-based expectations for convergent and discriminant relationships of the four cloze tests with other tests.
Article
So-called authentic language tests may not have recognizable separate items. Alternatively such tests may involve items which are dependent on each other, in a fixed order, and which cannot be replaced by alternatives. In neither case can classical test analysis be used at the item level. The use of the Rasch Model is also inappropriate. However, where total test score is the result of adding scores on a number of different, independent but equivalent parts, it is possible to estimate test reliability and to carry out item analysis on the basis of parts. In order to examine the homogeneity of test parts, the unidimensionality of the total score and the question of scale level, the CLA Model may be used. An example is provided of the use of the CLA Model in relation to a German C-Test.
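
A common way to estimate reliability "on the basis of parts", as described here, is to treat each text score as a single item and compute Cronbach's alpha across the parts. The sketch below (Python, with invented scores) illustrates that general approach; it is not an implementation of the CLA Model itself.

import numpy as np

def alpha_from_parts(part_scores):
    """Cronbach's alpha over test parts.

    part_scores: array of shape (examinees, parts), e.g. one column per
    C-test text. alpha = k/(k-1) * (1 - sum(part variances)/total variance)."""
    scores = np.asarray(part_scores, dtype=float)
    k = scores.shape[1]
    part_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - part_var / total_var)

# Hypothetical scores of eight examinees on four C-test texts (max 25 each)
parts = [[20, 18, 22, 19], [12, 10, 14, 11], [23, 25, 24, 22],
         [ 8,  9,  7, 10], [17, 16, 18, 15], [21, 20, 23, 22],
         [14, 13, 12, 15], [19, 18, 20, 17]]
print(round(alpha_from_parts(parts), 2))
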
Article
In language testing, the concept of reduced redundancy has been a fruitful approach for the development of major test procedures. The way in which examinees perform under conditions of 'noise' is believed to provide evidence for the level of their current status in overall or general language proficiency. This article reports an investigation comparing the empirical performance of C-Tests with other representatives of the 'family' of reduced redundancy tests - classical cloze, cloze-elide, multiple-choice cloze. The criterion for empirical validity is DELTA, the Duisburg English Language Test for Advanced Students. Overall, the C-Test emerges as the most economical and reliable procedure; it has the highest empirical validity and is shown to be the best representative of the general factor in the battery.
Article
The use of cloze tests is beset with problems. The C-Test represents an attempt to develop a measure of general language competence which will avoid these problems. A large number of studies, involving children and adults learning a variety of languages, point to its being a reliable and valid measure of overall language ability. Construct validation has now begun. The principal usefulness of the C-Test is seen to be in selection and placement procedures. It is essential that the test should not be used to make significant decisions without prior statistical evaluation.
Article
This article reports on the results of a research programme carried out to validate the C-test amongst Hungarian EFL learners. One hundred and two university English majors were administered four different language tests (including an oral interview) to form a General Language Proficiency measure against which the C-test was evaluated. Various analyses were made, partly to replicate the results of the earlier studies to see to what extent these could be generalized, and partly to shed light on controversial issues. The same C-test was then administered to four groups of secondary school pupils (N=53) to examine whether the findings amongst university students were also true in another proficiency range. The results of the programme confirmed that the C-test is a reliable and valid instrument, and detailed information was obtained about issues such as text difficulty and text appropriateness, the role of content and structure words, and the use of different scoring methods.
Article
C-tests have been suggested to be the best in the family of tests of reduced redundancy. They are claimed to be theoretically and empirically valid and reliable measures of language ability. A C-test contains four to six texts and a total of 100 items. It is constructed according to the rule of two, which involves deleting the second half of every other word beginning from the second word of the second sentence. This study investigates five versions of a C-test and a standard cloze test with 340 Iranians majoring in English. The C-tests were constructed with three different deletion starts and two different ratios. The results show that there is nothing magical about the rule of two. Other deletion rates and deletion starts yield more or less similar results. The paper concludes with suggestions for improving the C-test.
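
The deletion starts and ratios manipulated in this study can be expressed as parameters of the construction procedure, as in the sketch below (Python, illustrative only; it generalizes the simple rule-of-two sketch given after the abstract above).

def c_test_variant(words, ratio=2, start=1):
    """Mutilate every `ratio`-th word from 0-based index `start` onward,
    deleting the second half of each affected word. ratio=2, start=1
    approximates the classical rule of two; other values give the
    alternative deletion rates and starts examined in the study."""
    out = []
    for i, w in enumerate(words):
        if i >= start and (i - start) % ratio == 0 and len(w) > 1:
            keep = (len(w) + 1) // 2
            out.append(w[:keep] + "_" * (len(w) - keep))
        else:
            out.append(w)
    return out

print(" ".join(c_test_variant("Deletion rate and deletion start can both be varied".split())))
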
Article
This study investigates the possibility that the reliability and validity of a cloze procedure can be improved by applying traditional item analysis and selection techniques. Students in a single level (n = 89) at the Guangzhou English Language Centre (People's Republic of China) were chosen for this study because previous experience and research had indicated that cloze tests generally produce low reliability and validity coefficients in samples wherein the range of ESL proficiencies is limited. This turned out to be the case when a 399-word every seventh word deletion cloze passage with 50 items was administered to this group. The study was designed such that 250 of the potential items in this passage could be piloted and item analyzed. The results of the item analysis were used to select the 'best' items on the basis of item facility and discrimination indices. The resulting 50 item 'tailored cloze' was then readministered to the same group and the results for the original version of the cloze test were compared to those for the tailored version. These results indicate statistically significant and meaningful improvements in test quality due to the revision. The dispersion of scores, reliability and validity were all substantially improved by the item analysis and selection processes. The article concludes with discussion of the implications of these findings.
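
The item facility and discrimination indices used to select the 'best' items can be computed as in the sketch below (Python; an illustration of the standard indices, not the Centre's actual analysis): facility is the proportion of examinees answering an item correctly, and discrimination is taken here as the point-biserial correlation between the item and the total score.

import numpy as np

def item_analysis(responses):
    """responses: 0/1 matrix of shape (examinees, items).

    Returns item facility (proportion correct) and discrimination
    (uncorrected point-biserial correlation with the total score)."""
    r = np.asarray(responses, dtype=float)
    facility = r.mean(axis=0)
    total = r.sum(axis=1)
    discrimination = np.array([np.corrcoef(r[:, i], total)[0, 1]
                               for i in range(r.shape[1])])
    return facility, discrimination

# Items would then be kept if, say, 0.3 <= facility <= 0.7 and
# discrimination >= 0.25 (typical values, not the study's exact cut-offs).
responses = np.array([[1, 1, 0, 1],
                      [1, 0, 0, 1],
                      [1, 1, 1, 1],
                      [0, 0, 0, 1],
                      [1, 1, 0, 0]])
facility, discrimination = item_analysis(responses)
print(facility.round(2), discrimination.round(2))
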
Article
"Cloze Procedure" involves no formula or "element counting," but consists of sampling all potential readability influences. Although similar to sentence-completion tests, the cloze method demands deletion of random words from a passage. After administration to a group the correctly identified omissions are tallied. Experimental results show: (1) the cloze method consistently ranked three selected passages in the same way as the Flesch and Dale-Chall formulas; (2) the method was reliable; (3) the cloze method seemed to handle specialized passages more adequately than other methods; (4) the same rankings of readability were obtained when words were deleted at random or every nth word; (5) the cloze procedure could be used for comparing reading abilities of different individuals. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The variety of fill-in-the-blank test known as the “cloze” procedure is discussed as a device for teaching and testing ESL proficiency. Research with native speakers and the sparse literature available concerning studies with non-native speakers are explored briefly. An experiment is conducted to attempt to partially determine the discriminative power of a cloze test (scored by the exact-word method) and its validity as a device for measuring ESL skills. Students in beginning, intermediate, and advanced ESL along with two control groups of native (ENL) speakers (freshmen and graduates, respectively) are tested. Differentiation of levels of proficiency among the ESL groups seems adequate, but ENL freshmen are not significantly distinct from advanced ESL students though they are significantly inferior to ENL graduate students. The cloze test correlates best with the dictation (.82) on the UCLA ESLPE 2C, and next best with the reading section (.80): multiple correlation with all sections is .88. It is concluded that the cloze method is a very promising device for measuring ESL proficiency.
Article
The possibility of utilizing the cloze procedure as a measure of ESL (English as a Second Language) proficiency has recently aroused considerable interest. Studies by Darnell (1968), Bowen (1969), Kaplan and Jones (1970), Oller and Conrad (1971), and Oller and Inal (1971) have demonstrated that the cloze method has merit, but several important questions are yet unanswered. Among them are the matters of scoring and level-of-difficulty, and their respective contributions to the effectiveness of cloze tests as measures of ESL (English as a Second Language) proficiency. A still further and possibly more important question concerns the nature of the cloze task and the skills involved in performing it. Previous research has shown repeatedly that the best and most convenient method for scoring when native speakers are tested is simply to count the number of exact words restored to the context (Taylor, 1953, Rankin, 1957, Ruddell, 1963, Bormuth, 1965). Although native speakers tend to get higher mean scores when acceptable substitutes are counted as correct, the increase in total test variance is so slight that the extra effort involved is scarcely worthwhile. It is considerably simpler to ask for each fill-in, "Does it match the original word?", than it is to ask, "Is this response contextually acceptable?" Moreover, scorers are likely to be less reliable in the latter case. In spite of all this, researchers who have experimented with the cloze method as a measure of second-language proficiency have often preferred scoring systems that give credit for contextually acceptable responses. Some have even gone so far as to give partial credit for responses which, though clearly incorrect, indicate some measure of comprehension. Darnell (1968) scored responses on given items on the basis of native speaker responses for those same items. Bowen (1969) weighted responses according to their degree of correctness, subjectively determined. Oller and Inal (1971) counted any contextually acceptable response as correct. Since it has been clearly established that allowing contextually acceptable responses in addition to exact-word fill-ins makes little difference with native speakers, why should we expect things to be different when non-natives are tested? There are several reasons. One is that the exact-word scoring criterion may create a cloze test that is simply too difficult for non-natives even though it may not be for natives. Also, there is something intuitively unsettling about requiring a non-native speaker to guess the exact word in order to receive full credit for an answer. Suppose, for example, that an item reads, "the ____ went down to the stream." If the exact word is, say, "child," is it reasonable to class "horse," "dog," "animal," etc., along with clearly incorrect fill-ins like "of," "and," "table," etc.? The task of guessing the exact word is not necessarily a language skill in the ordinary sense of the term.
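
The two scoring methods contrasted here are simple to state in code: exact-word scoring credits only the deleted word itself, while acceptable-word scoring credits any response in a key of contextually acceptable alternatives. A minimal sketch (Python, my own illustration using the example blank from the abstract):

def score_exact(responses, key):
    """key: list of the original deleted words, one per blank."""
    return sum(r.strip().lower() == k.lower()
               for r, k in zip(responses, key))

def score_acceptable(responses, acceptable):
    """acceptable: list of sets of contextually acceptable answers
    (including the exact word), one set per blank."""
    return sum(r.strip().lower() in {a.lower() for a in alts}
               for r, alts in zip(responses, acceptable))

# Blank: "the ____ went down to the stream."
key = ["child"]
acceptable = [{"child", "horse", "dog", "animal"}]
print(score_exact(["dog"], key), score_acceptable(["dog"], acceptable))  # 0 1
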
Beard, J. G. (1967). Comprehensibility of high school textbooks: Association with content area. Journal of Reading, 11(3), 229-234.
Bisping, M., & Raatz, U. (2002). Sind computerisierte und Papier&Bleistift-Versionen des C-Tests äquivalent? [Are computerized and paper-and-pencil versions of the C-test equivalent?] In R. Grotjahn (Ed.), Der C-Test: Theoretische Grundlagen und praktische Anwendungen (Vol. 4, pp. 131-155). Bochum: AKS-Verlag.
Boonsathorn, S., & Kaoropthai, Ch. (2016). QSAT: The web-based mC-test as an alternative English proficiency test. TESOL International Journal, 11(2), 91-107.
Cash, M. M., & Schumm, J. S. (2006). Making sense of knowledge: Comprehending expository text. In J. S. Schumm (Ed.), Reading Assessment and Instruction for All Learners (pp. 262-296). New York: The Guilford Press.
Coleman, J. A. (1996). A comparative survey of the proficiency and progress of language learners in British universities. In R. Grotjahn (Ed.), Der C-Test: Theoretische Grundlagen und praktische Anwendungen (Vol. 3, pp. 367-399). Bochum: Brockmeyer.
Culhane, J. W. (1970). CLOZE procedures and comprehension. The Reading Teacher, 23(5), 410-464.