Dan Horowitz was well-known for his research into essay examination prompts, and was greatly respected for the intellectual clarity of his work and for his humanistic grounding of that work in the central questions facing practitioners in ESL classrooms in colleges and universities. In this paper I review some of the work that has been done on prompt effects in ESL writing at the college or college preparatory level, focusing on just one small aspect in an attempt to move our work in this area toward a better general understanding. While I do not make explicit reference to Dan's work in the text, the collegial dialogue we maintained is an important underpinning of the paper.

There has been a great deal of research into the question of whether topic types, topics, the linguistic structure of questions, and an array of what we may group together as "prompt effects" have a significant effect on the measured writing ability of native writers of English. While far less work has been done on those same issues in relation to the writing of nonnative English users, it seems likely that the effects of prompts, if such exist, will only be exacerbated when we look at nonnative writers rather than native writers. An overview of the field (Hamp-Lyons, in press) suggests that in first language writing assessment the trend is to treat topics, and even topic types, as equivalent. In the major ESL/EFL writing assessment programs the same trend emerges (Hamp-Lyons, op. cit.). For example, the TOEFL Program uses two topic types at different administrations of its Test of Written English (TWE), although analyses of trial prompts of the two topic types (Carlson, Bridgeman, Camp, & Waanders, 1985) showed that they behaved rather differently (they correlated at around .70). Reid (1989) looked more closely than Carlson et al.
at the four writing prompts in the TWE study, and found that the students' texts varied significantly from topic type to topic type, even when the differences did not result in differing score patterns. The variation was most marked for strong writers. Perhaps weaker writers have less language flexibility, while those with higher scores seem able to adapt to different topics.

In looking at the issue of whether there is a significant effect on student essay test writing from the prompts used, then, it would seem that the answer depends on one's research orientation and the questions one asks as much as it does on "hard" numbers. I can illustrate this from two studies carried out in the University of Michigan Testing and Certification Division. Both studies looked at prompts on the MELAB (Michigan English Language Assessment Battery, a test for nonnative users of English filling the same function as the TOEFL, but including a composition as a basic component rather than as an occasional, optional extra). In the first, Spaan (1989) describes an experimental study of two prompts from the MELAB, chosen because they appeared on the surface to be dramatically different. Spaan provides a linguistic analysis to show the linguistic, cognitive, and schematic differences of the prompts, and interprets her score data as suggesting that even these prompts in fact yield scores which are significantly related. In the second study, Hamp-Lyons and Prochnow (1990) looked post hoc at all sets of MELAB prompts used in the period 1986-89, including those Spaan had used. They found that expert judges and student writers felt able to recognize easy and difficult topics, and that their judgments of the relative ease/difficulty of prompts were generally confirmed by score levels on prompts.
While they confirm Spaan's assessment of her two topics as radically different in difficulty when considered a-contextually, by looking also at a general language proficiency measure they are able to suggest some reasons why the essay scores were less different than predicted. It seems that nonnative writers taking the MELAB writing test component assess which prompt is easier, and that students with weaker language proficiency choose the easier prompt while students with stronger language proficiency choose the harder prompt. Hypothesizing too that reader accommodation plays its part in pushing disparate prompts toward parity of treatment, they suggest that both weaker and stronger writers regress toward the mean in their writing score, relative to their scores on other language components. These two research studies