Accessibility is one of the most important challenges at the intersection of linguistic and psycholinguistic studies of text and discourse processing. Linguists have shown how linguistic indicators of referential coherence follow a systematic pattern: Longer linguistic forms (like full lexical NPs) tend to be used when referents are relatively inaccessible, whereas shorter forms (pronouns and zero anaphora) are used when referents are highly accessible. This linguistic theory fits nicely with a dynamic view of text and discourse processing: As a reader proceeds through a text, the activation of concepts as part of the reader's representation fluctuates constantly. Hypotheses concerning activation patterns can be tested with on-line research methods such as reading time or eye-movement recording. The articles in this special issue show how accessibility phenomena need to be studied from both a linguistic and a psycholinguistic angle, and in the latter case from the perspective of interpretation as well as production.
We report 3 experiments that examined younger and older adults' reliance on "good-enough" interpretations for garden-path sentences (e.g., "While Anna dressed the baby played in the crib") as indicated by their responding "Yes" to questions probing the initial, syntactically unlicensed interpretation (e.g., "Did Anna dress the baby?"). The manipulation of several factors expected to influence the probability of generating or maintaining the unlicensed interpretation resulted in 2 major age differences: Older adults were generally more likely to endorse the incorrect interpretation for sentences containing optionally transitive verbs (e.g., hunted, paid), and they showed decreased availability of the correct interpretation of subordinate clauses containing reflexive absolute transitive verbs (e.g., dress, bathe). These age differences may in part be linked to older adults' increased reliance on heuristic-like good-enough processing to compensate for age-related deficits in working memory capacity. The results support previous studies suggesting that syntactic reanalysis may not be an all-or-nothing process and might not be completed unless questions probing unresolved aspects of the sentence structure challenge the resultant interpretation.
This paper considers points in turn construction where conversation researchers have shown that talk routinely continues beyond possible turn completion, but where we find bodily-visual behavior doing such turn extension work. The bodily-visual behaviors we examine share many features with verbal turn extensions, but we argue that embodied movements have distinct properties that make them well-suited for specific kinds of social action, including stance display and by-play in sequences framed as subsidiary to a simultaneous and related verbal exchange. Our study is in line with a research agenda taking seriously the point made by Goodwin (2000a, b, 2003), Hayashi (2003, 2005), Iwasaki (2009), and others that scholars seeking to account for practices in language and social interaction do themselves a disservice if they privilege the verbal dimension; rather, as suggested in Stivers/Sidnell (2005), each semiotic system/modality, while coordinated with others, has its own organization. With the current exploration of bodily-visual turn extensions, we hope to contribute to a growing understanding of how these different modes of organization are managed concurrently and in concert by interactants in carrying out their everyday social actions.
The goal of this study was to examine predictions derived from the Lexical Quality Hypothesis (Perfetti & Hart, 2002; Perfetti, 2007) regarding relations among word-decoding, working-memory capacity, and the ability to integrate new concepts into a developing discourse representation. Hierarchical Linear Modeling was used to quantify the effects of two text properties (length and number of new concepts) on reading times of focal and spillover sentences, with variance in those effects estimated as a function of individual difference factors (decoding, vocabulary, print exposure, and working-memory capacity). The analysis revealed complex, cross-level interactions that complement the Lexical Quality Hypothesis.
Plural phrases are open to many interpretations in English, where cumulative interpretations of noun and verb phrases are possible without any disambiguating morphology. A sentence like Every week, the high school kids went to the movies or the ballgame might involve quantifying over multiple occurrences of a single scenario, in which subsets of the kids do different things, or it might involve quantifying over distinct scenarios, in which all of the kids do one thing or all of them do the other. In the present work and related earlier work (Harris et al., 2013), we pursue the No Extra Times principle that favors interpretations where a phrase is construed as describing a single event taking place during a given time period. In two written interpretation studies, we found that participants more often interpret indeterminate sentences with disjunctive predicates by partitioning the set of individuals rather than partitioning the predicate to denote distinct scenarios or times. We conclude by offering some speculations about why partitioning the eventuality denoted by the verb phrase into multiple times is more costly than partitioning the entities denoted by its subject noun phrase into multiple sets.
Four experiments investigate the influence of topic status and givenness on how speakers and writers structure sentences. The results of these experiments show that when a referent is previously given, it is more likely to be produced early in both sentences and word lists, confirming prior work showing that givenness increases the accessibility of given referents. When a referent is previously given and assigned topic status, it is even more likely to be produced early in a sentence, but not in a word list. Thus, there appears to be an early mention advantage for topics that is present in both written and spoken modalities, but is specific to sentence production. These results suggest that information-structure constructs like topic exert an influence that is not based only on increased accessibility, but also reflects mapping to syntactic structure during sentence production.
Three experiments tested the psychological validity of the constituent units and sequencing rules of the Mandler and Johnson story grammar. If people's knowledge about stories reflects such a grammar, then it should have noticeable effects on their processing. The first experiment tested the effects of constituent structure on comprehension and recall by measuring reading and recall times of sentences within and across constituent boundaries. First sentences of constituents were slower to read and recall than second sentences. Various tests and a second experiment showed that these effects were due to story structure rather than to lexical overlap or semantic relatedness per se. The third experiment tested the sequencing rules of the grammar by systematically moving constituents away from their normal positions, while at the same time providing them with surface markers to indicate the intended sequence of events. In all cases movements slowed reading time both at the place where the expected constituent was missing and at the place where it actually occurred. Movements also resulted in more recall errors. The data support the position that people have incorporated knowledge about the canonical structure of stories, which they use during processing.
Asserts that the term "inference" has had a negative effect on the study of how information is elaborated and reduced in text processing (TP). The negative effect is said to arise from the suggestion that inference in TP is a unitary phenomenon and from an overemphasis on the conscious. The author discusses T. Guthke's (1991) view of inferencing in text comprehension. Guthke's view distinguishes between conscious, controlled inference processes and automatic, unconscious inferences. A distinction is also made between inferences that elaborate incoming information with existing knowledge and those that generate new information. The author suggests that information reduction processes in text comprehension be viewed within the same framework as information accretion. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Rereading can improve the accuracy of people's predictions of future test performance for text material. This research investigated this rereading effect by evaluating 2 predictions from the levels-of-disruption hypothesis: (a) The rereading effect will occur when the criterion test measures comprehension of the text, and (b) the rereading effect will not occur when a 1-week delay occurs between initial reading and rereading. Participants (N = 113) were assigned to 1 of 3 groups: single reading, immediate rereading, or rereading after a 1-week delay. Outcomes were consistent with the 2 predictions stated earlier. This article discusses the status of the levels-of-disruption hypothesis and alternative hypotheses based on the cognitive effort required to process texts. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Examined successful and unsuccessful instances of communication between speakers who do not share a common language. Ss included 29 limited-English proficient children in nursery and elementary school, enrolled with native English-speaking peers in regular classrooms. The model used here to evaluate the process of multilingual communication involves 3 hierarchically interrelated levels: background knowledge, situational knowledge and skills, and linguistic knowledge. Ss employed a predominantly top-down strategy to achieve comprehension; whenever expectations at higher levels were shared, verbal forms were often correctly decoded, even within limited parameters of language proficiency. Thus, within certain well-defined recurrent situations, a shared linguistic code is neither necessary nor sufficient for successful communication. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Examined how native and non-native speakers adjust their referring expressions to each other in conversation. 20 Asian language speakers learning English were tested before and after conversations with native English speakers in which they repeatedly matched pictures of common objects (Exp 1). Lexical entrainment was just as common in native/non-native pairs as in native/native pairs. People alternated director/matcher roles in the matching task; natives uttered more words than non-natives in the same roles. In Exp 2, 31 natives rated the pre- and post-test expressions for naturalness; non-natives' post-test expressions were more natural than their pre-test expressions. In Exp 3, 20 natives rated expressions from the transcribed conversations. Native expressions took longer to rate and were judged less natural-sounding when they were addressed to non-natives than to other natives. These results are consistent with H. H. Clark and D. Wilkes-Gibbs's (see record 1987-07185-001) principle of Least Collaborative Effort. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Reviews the practical work of R. F. Bales (1951, 1970), J. Dore (1977), and W. Labov and D. Fanshel (1977) on the application of speech act theory to the development of coding schemes for research on natural conversation. The theoretical frameworks of J. L. Austin (1962), J. R. Searle (1969, 1975), and Z. Vendler (1972) are also discussed. A speech act classification scheme designed by the present authors to investigate correspondences between speech acts and adjectival dimensions descriptive of interpersonal behavior is then presented. Analyses of 17 excerpts from a documentary on the naturally occurring communication of a middle-class family using this classification scheme revealed 10 clusters or categories of speech acts that are likely to go together in short episodes of interaction. Findings indicate some strong relationships between kinds of speech acts uttered in a conversation and perceptions of the associated interactors. (20 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Current paradigms study language comprehension as if archival memory were its primary function. Subjects only receive linguistic material and are later tested on memory for its contents. In contrast, the two target articles in this issue, Glenberg and Robertson (in press) and Roth (in press), examine comprehension as if preparing for situated action were its primary function. Besides receiving linguistic materials as input, subjects study objects, actions, and interactions between agents. Rather than simply being tested on memory for linguistic materials, subjects also produce actions and enter into group interactions. Although these researchers focus their attention on specific genres---the comprehension of verbal instructions and the comprehension of scientific theories---their methods and findings have wider implications. In particular, the primary function of comprehension is not to archive information but is instead to prepare agents for situated action. Arguments from the evolution of cognition and language are brought to bear on this thesis, and perceptual simulation is proposed as a mechanism well-suited for supporting situated comprehension. Finally, it is conjectured that studying comprehension in the context of situated action is likely to produce significant scientific progress.
Three teachers were videotaped as they taught the same lesson (finding the main idea of a given text) to groups of high- and low-ranked hearing-impaired high school students. Tapes were transcribed for sign, speech, and contextual features. While teachers indicated the task was within all the students' abilities, teaching strategies differed between high and low groups in terms of the nature of topically related elicitation sets and the number of deviations from the lesson framework to pursue details for clarification with the low-ranked groups. This diminished the possibility of their attaining the stated goal of the lesson, thereby perpetuating their low performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Examined whether speakers choose spatial perspectives that minimize effort for themselves, for their partners, or for both when describing locations. Three possible models are proposed for how descriptions in a particular perspective are more difficult when speaker and addressee view a scene from different offsets. In a communication task, 27 college students described locations on a complex display for 27 other students who shared their vantage point or were offset by 90° or 180°. Both partners either took the perspective of the person who did not know the location or used descriptions that helped them avoid choosing one or the other person's perspective. Speakers who shared their addressee's vantage point gave different descriptions than 180°- and 90°-offset speakers, who did not differ from each other reliably. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Notes that in previous research, there have been inconsistent findings regarding interpretation of ironic insults. In this study the authors examined the possibility that the perception of ironic insults depends on whether participants were asked to judge speaker intent (e.g., mocking) or social impression (e.g., politeness). 60 Ss participated. Three paper and pencil tasks were completed: a ratings task, a distraction task, and a free recall memory task. Results show that ironic insults were perceived to be more mocking, but also more polite, than direct insults. In contrast, ironic compliments were perceived to be more mocking and less polite than direct compliments. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Examined whether causal antecedent and causal consequence inferences are generated on-line during comprehension and also determined the time course of their activation. Ss were 160 undergraduates. Inference category, the rate of word presentation in a rapid serial visual presentation (RSVP) format, and the delay between the last word in a sentence and the test word (i.e., the stimulus onset asynchrony [SOA] interval) were manipulated. Lexical decision latencies were collected on test strings (i.e., nonwords, inference words, or unrelated words) which were presented after each sentence in the passages. There was a threshold of 400 msec after stimulus presentation (RSVP and SOA) before causal antecedents were generated on-line, whereas causal consequences were not generated on-line. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
In a previous paper (Wolfe, Schreiner, Rehder, Laham, Foltz, Kintsch & Landauer, this issue) we have shown how Latent Semantic Analysis (LSA) can be used to assess student knowledge - how essays can be graded by LSA and how LSA can match students with appropriate instructional texts. We did this by comparing an essay written by a student with one or more target instructional texts in terms of the cosine between the vector representation of the student's essay and the instructional text in question. This simple method was effective for the purpose, but questions remain about how LSA achieves its results and how they might be improved. Here we address four such questions: (a) what role the use of technical vocabulary per se plays, (b) how long student essays should be, (c) whether the cosine is the optimal measure of semantic relatedness, and (d) how to deal with the directionality of knowledge in the high-dimensional space.
In this study we apply the procedures and assumptions of ethnomethodological conversation analysis to analyze a segment of interaction in a Problem-Based Learning (PBL) meeting. In the segment, one member of the group presents a theory pertaining to the case under study. Before it is accepted or rejected, the same speaker presents a second theory to which other group members react with objections and disaffiliative laughter. The presenter consequently rejects the second theory and uses this rejection as a basis for returning to and implicitly accepting the first. Theory presentation and assessment are an integral part of the PBL group process of moving discursively from case history and symptoms to diagnosis and treatment. We observe that the presentation of a theory makes relevant a variety of sequential activities through which participants in instructional activities of this sort come to accept or discard the theory. Implications for teaching and tutorial practice are presented.
Four experiments investigated how the reading of semantic associates of a predictive inference raised its activation level. Two kinds of semantic associations were distinguished: those that could causally relate text events to the inference in a representation of the described situation and those that could not. Lexical decision data indicated that in both cases the predictive inferences were activated to the same high level (Exp 1), even when no other causal support for the inference could be found in the text (Exp 3). However, the results of a judgment task performed at various delays (immediately after the reading of the predictive sentences, at a 3-sentence delay, and after the reading of all the texts) suggest that readers evaluate the plausibility of the predictive inferences on the basis of their causal support from text events. These findings suggest that the initial activation of predictive inferences mainly results from the associative constraints governing the elaborative processing of the meaning of text words. The causal constraints of the predictive inferences intervene later to eventually reinforce the integration of the inference into readers' representation. The compatibility of these results with the comprehension models of W. Kintsch (1988, 1998) is discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Very young children are capable of recalling past events. What young children remember and how consistent their recall is from time to time were investigated by comparing the memories of 24 children aged 30–35 months for events experienced during 2 separate interviews with their mother, 2 interviews with a stranger, or 1 interview with their mother and 1 with a stranger. Two major findings emerged: (1) Children recalled more accurate information when conversing with the stranger than with their mother and (2) although there was more consistency in children's recall when conversing with the same adult across the 2 interviews than when conversing with a different adult, recall was inconsistent overall. Children overwhelmingly remembered different, but still accurate, information on the 2nd interview. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
A large body of research has shown that verbal phrases such as “move the pen” are better remembered when they are physically enacted than when the same phrases are studied under standard verbal learning instructions (e.g., Engelkamp & Krumnacker, 1980). More recently, a non-literal enactment effect was discovered in which verbal material that was not literally congruent with the accompanying movement was nevertheless better remembered if the speaker had been moving during the utterance. The early demonstrations of this phenomenon involved actors' performances on stage, but the effects were later replicated with non-actors in a lab. A possible explanation for the non-literal effect is that the words and the performed actions are connected at a goal level. In the preliminary study, self-reports of professional actors revealed that all on-stage movements are carefully designed to explain or constrain how the accompanying verbal material constitutes an attempt to reach a goal. In the main study, it was found that this non-verbal information is sufficiently explicit so that non-actors, unacquainted with the situation or the dialogue, can accurately determine the intended goal-directed meaning. The connections between the non-literal enactment effect and theories of embodied cognition are discussed, along with the relevance of non-literal enactment to studies on gestures and pragmatics.
Suggests that quantitative analyses are useful procedures through which to isolate constraints from different levels of discourse and through which to separate the ways in which structure, meaning, and social action differently influence the production of discourse. The value of quantitative analyses of discourse options is demonstrated by (1) focusing on 2 discourse options for the representation of cause and effect and (2) operationalizing semantic constraints (temporal reference) and pragmatic constraints (discourse topic) and examining the relative effects of each on causal reversibility. Results of a quantitative analysis show that no single level of constraint was able to account for causal reversibility. However, the conclusion that causal reversibility is constrained by factors that crosscut different levels of discourse organization supports views of discourse as an interlocking system of structure, meaning, and action. (47 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Three experiments investigated how 206 adult readers represent causal relations among events in a narrative. Models of text comprehension were tested. In each experiment Ss read brief narratives and received a speeded-recognition test of their memories for story events. Each story could be represented by a linear chain or by a network. On each trial in the recognition procedure Ss read a priming sentence that reminded them of either a story (general prime) or a specific event in a story (specific prime). Across the experiments, positive responses were faster when the target followed a specific prime that was causally related than when it followed a specific but unrelated prime or a general prime. Importantly, this was the case when the specific prime and target were adjacent and when they were nonadjacent in the surface structure of the story. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Analyzed children's performance on communicative referential tasks and developmental changes in the type and quality of messages. 28 twins and 28 singleton children were compared at different age levels (younger group mean age 5.5 yrs; older group mean age 10.4 yrs). Four interactive situations were considered: twin–twin, single born–single born, twin–single born, and single born–twin. Although a multivariate analysis of variance (MANOVA) did not reveal poorer performances in twins in the number and type of information elements, path analyses indicated different interactive styles and strategies between twins and singletons. The twin pairs tended to intervene in the interaction to support and complete the co-twin's performance; singletons seemed to care more about the quality of information and tended to engage in informative exchanges. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
This study analyzed 28 hour-long tutoring sessions that were carried out keyboard-to-keyboard with tutor and student in different rooms. The tutors were professors of physiology at Rush Medical College. The students were first-year medical students. We classified student initiatives and tutor responses in human tutoring sessions with the goal of making our intelligent tutoring system capable of handling mixed-initiative dialogue. Student initiatives were classified along four dimensions: communicative goal, surface form, focus of attention, and degree of certainty (does the student hedge or not?). Student goals include: request for confirmation, request for information, challenge, refusal to answer, and conversational repair. Tutor responses were classified along three dimensions: communicative goal, surface form, and delivery mode. The tutor goals included: causal explanation, acknowledgment, conversational repair, instruction in the rules of the game, teaching the problem-solving alg...
Focuses on cognitive quantitative models in discourse processing. Topics discussed in this chapter include the following: an ideal cognitive model of text and discourse processing; modern discourse modeling; approaches to modeling discourse and text; discourse models; modeling aspects of discourse, text, and cognitive processes; and applications of models. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Explored, in 3 experiments with 90 undergraduate native-English speakers, the proposal that methods of marking new information serve a topic-promotion function during the processing of spoken discourse. Two devices commonly discussed as important for determining information structure (i.e., intonational emphasis and sentence position) were manipulated factorially. Ss made coherence judgments for active sentence pairs in which the topic of Sentence 2 was congruent with either the subject or object of Sentence 1. A consistent judgment time advantage was found for object-relevant continuations but effects of emphasis were restricted to the object position in Exp I and were not obtained in Exps II and III. The object advantage was shown to depend on the intersentence delay, thus implicating a new-information-last principle as an important means of maintaining local cohesion and facilitating the listener's task of integrating spoken discourse. (38 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Latent Semantic Analysis is used as a technique for measuring the coherence of texts. By comparing the vectors for two adjoining segments of text in a high-dimensional semantic space, the method provides a characterization of the degree of semantic relatedness between the segments. We illustrate the approach for predicting coherence through re-analyzing sets of texts from two studies that manipulated the coherence of texts and assessed readers' comprehension. The results indicate that the method is able to predict the effect of text coherence on comprehension and is more effective than simple term-term overlap measures. In this manner, LSA can be applied as an automated method that produces coherence predictions similar to propositional modeling. We describe additional studies investigating the application of LSA to analyzing discourse structure and examine the potential of LSA as a psychological model of coherence effects in text comprehension.
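The coherence measure described in this abstract (the cosine between the vectors of adjoining text segments) can be sketched in a few lines. This is a toy illustration only: in actual LSA, segment vectors are derived from a singular value decomposition of a large term-by-document matrix, whereas the vectors below are hypothetical stand-ins for such semantic-space representations.

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors; 0.0 for a zero vector."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def coherence(segment_vectors):
    """Mean cosine between each pair of adjoining segments:
    higher values indicate a more semantically coherent text."""
    sims = [cosine(u, v)
            for u, v in zip(segment_vectors, segment_vectors[1:])]
    return sum(sims) / len(sims)

# Hypothetical 3-dimensional "semantic" vectors for four text segments.
segments = [(1.0, 0.2, 0.0),
            (0.9, 0.3, 0.1),
            (0.1, 0.8, 0.4),
            (0.0, 0.9, 0.5)]
print(round(coherence(segments), 3))
```

Because the cosine is computed in a reduced semantic space rather than over raw word counts, two adjoining segments can score as related even when they share no words, which is why the method outperforms simple term-term overlap measures.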
Investigated the effects of breakdowns in referential (RC) and factual coherence (FC) on text comprehension, using 56 university students. Breakdowns in RC produced by distant antecedent information hindered reading times of texts but not text memory; those produced by absent antecedent information hindered both. Breakdowns in FC hindered reading times of texts and hindered text memory when the comprehension goal of readers was an integrative one, requiring readers to update old knowledge with new information, but not when the comprehension goal was to accurately recall the texts. Recall results for the more integrative task suggested that new factually inconsistent information is particularly salient and memorable to Ss, whereas memory for "old" factually inconsistent knowledge is hindered. Results are discussed in terms of possible constructive and reconstructive processes contributing to hindered memory for factually inconsistent information. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Examined interactions among text, task, and reader factors in 2 experiments with a total of 64 undergraduates that looked at the role of paragraphing, a surface text feature, on the identification of and memory for main ideas as compared to elaborative information in expository passages. In the coincident paragraphing condition, main ideas of the passage were paragraph initial. In the conflicting condition, elaborations of the main ideas were paragraph initial. Although paragraphing identified these elaboration sentences as main ideas, the content information conflicted with that designation. The paragraphing manipulation had a greater effect on the differentiation of main ideas and elaborations when passage content was less familiar. The major difference between the recall and the summary task was that the likelihood of including elaborations was greater in the recall task. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
In order to generate cohesive discourse, many of the relations holding between text segments need to be signalled to the reader by means of cue words, or discourse markers. Programs usually do this in a simplistic way, e.g., by using one marker per relation. In reality, however, language offers a very wide range of markers from which informed choices should be made. In order to account for the variety and to identify the parameters governing the choices, detailed linguistic analyses are necessary. We worked with one area of discourse relations, the Concession family, identified its underlying pragmatics and semantics, and undertook extensive corpus studies to examine the range of markers used in both English and German. On the basis of an initial classification of these markers, we propose a generation model for producing bilingual text that can incorporate marker choice into its overall decision framework.
In three experiments, we investigate the likelihood that predictive inferences are drawn when there is more than one consequence of the predictive context. Whereas a previous set of studies (Klin, Guzman, and Levine, 1999) showed no facilitation of a naming probe (e.g., break) 500 ms after the predictive context (e.g., Steven threw the delicate vase), in Experiment 1 there was evidence of an inference 1500 ms after the predictive context. However, in Experiment 2, there was no evidence of an inference when the 1500-ms inter-stimulus interval contained additional text. To reduce task demands, the probe task was eliminated in Experiment 3. Readers slowed down on a line that contradicted the targeted inference, suggesting that they drew a predictive inference. We conclude that predictive inferences are more prevalent than has been assumed previously, but they may be minimally encoded when conditions are not optimal.
O'Brien, Rizzella, Albrecht, and Halleran (1998) demonstrated that when a protagonist is introduced with information that is inconsistent with an action described in a subsequent target sentence, reading times on that sentence were disrupted. This occurred even when the inconsistent information was followed by consistent information that outdated the inconsistent information. Three experiments are reported that examine factors that may have contributed to the reactivation of outdated information. In Experiment 1, the order of introduction of the consistent and inconsistent information was reversed so that the initial character information was consistent with the target sentence. Despite introducing the protagonist with information consistent with the target sentence, reading continued to be disrupted. In Experiments 2 and 3, the amount of consistent information was increased. The additional consistent information eliminated any comprehension difficulty on the target sentence caused by the inconsistent information; however, Experiment 3 confirmed that the inconsistent information continued to be reactivated. Results are discussed in terms of memory-based contributions to the updating process.
A knowledge digraph defines a set of semantic (or syntactic) associative relationships among propositions in a text (e.g., Graesser and Clark's, 1985, conceptual graph structures and the causal network analysis of Trabasso and van den Broek, 1985). This paper introduces the Knowledge Digraph Contribution (KDC) data analysis methodology for quantitatively measuring the degree to which a given knowledge digraph can account for the occurrence of specific sequences of propositions in recall, summarization, talk-aloud, and question-answering protocol data. KDC data analysis provides statistical tests for selecting the knowledge digraph that "best fits" a given data set. KDC data analysis also allows one to test hypotheses about the relative contributions of each member in a set of knowledge digraphs. The validity of specific knowledge digraph representational assumptions may be evaluated by comparing human protocol data with protocol data generated by sampling from the KDC distribution.
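The "best-fitting digraph" comparison at the heart of KDC can be conveyed with a deliberately simplified score: the fraction of adjacent proposition pairs in a protocol that are edges of a candidate digraph. The digraphs, proposition labels, and scoring rule below are invented for illustration; the actual KDC methodology fits a statistical model and supplies formal tests rather than a raw proportion.

```python
# Toy illustration of comparing candidate knowledge digraphs against a
# recall protocol. A digraph is a set of directed (from, to) edges over
# proposition labels; the score is the fraction of adjacent pairs in the
# protocol that the digraph links. All data here are made up.

def digraph_fit(edges, protocol):
    """Fraction of adjacent proposition pairs in `protocol` that are edges."""
    pairs = list(zip(protocol, protocol[1:]))
    return sum((a, b) in edges for a, b in pairs) / len(pairs)

# Two hypothetical digraphs over propositions P1..P4.
causal = {("P1", "P2"), ("P2", "P3"), ("P3", "P4")}
referential = {("P1", "P3"), ("P2", "P4")}

recall_order = ["P1", "P2", "P3", "P4"]  # observed recall protocol
scores = {"causal": digraph_fit(causal, recall_order),
          "referential": digraph_fit(referential, recall_order)}
best = max(scores, key=scores.get)  # digraph that best accounts for the data
```

Under this toy scoring, the causal digraph accounts for every transition in the protocol and would be selected as best-fitting.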
This study examined a corpus of 10 widely used pre-algebra and algebra textbooks, with the goal of investigating whether they exhibited a symbol precedence view of mathematical development as is found among high school teachers. The textbook analysis focused on the sequence in which problem-solving activities were presented to students. As predicted, textbooks showed the symbol precedence view, presenting symbolic problems prior to verbal problems.
Conducted a modified replication of a study by R. C. Anderson and J. W. Pichert (see record 1979-02802-001). 90 Ss (aged 15–55 yrs) read 3 stories, taking a particular perspective for each, recalled each story from that perspective and, either immediately or after 1 wk, recalled the stories again from a new perspective. Consistent with Anderson and Pichert's findings, Ss in the immediate condition showed a shift in recall as a function of retrieval perspective. However, in contrast to Anderson and Pichert, further results demonstrated that even though the retrieval framework operated selectively in making certain information more accessible for output, it was ultimately constrained by the accessibility of information as determined by the encoding framework.
Three experiments with 104 undergraduates compared the predictions of the nonconnectionist biofunctional (NBF) and parallel distributed processing (PDP) schema theories in the comprehension of surprise-ending (SE) stories. In each experiment, Ss read either the SE or a non-SE version of a story. The thematic influences derived from the 2 interpretations of the SE story elicited opposite ratings, but the interpretations generated essentially identical levels of idea unit recall and importance. As predicted by NBF, the 2 schemas involved in the comprehension of the SE story were rated as mutually incompatible because they shared the same categorical knowledge.
Data from 4 experiments with 342 undergraduates support the hypothesized tripartite distinction of T. A. van Dijk and W. Kintsch (1983) between surface representation (SR), propositional textbase (PT), and situation model (SM) in memory for text or discourse. Ss were able to differentiate between sentences they had seen and meaning-preserving paraphrases of those sentences, confirming the existence of SR. Performance improved if the distractor (DI) also introduced a new meaning, which is evidence for PT. Discrimination performance was best when the new meaning introduced by the DI was inconsistent with the situation described by the text, arguing for the existence of SM. Data also replicated the findings of F. Schmalhofer and D. Glavanov (see record 1987-17734-001) and of D. Dellarosa (unpublished manuscript), who attempted to experimentally separate the 3 levels of representation.
Describes a theoretical and methodological framework for organizing empirical investigations of inferential processes in reading. The theoretical component provides an account of the various types of inferences that have been investigated. The methodological component, based on J. J. Jenkins' (1979) tetrahedral model of psychological experimentation, captures the impact of methodological variations regarding Ss, orienting tasks, materials, and criterial tasks across studies on the meaning of the results. The framework is applied in a discussion of central issues in current research on inference generation in reading, including minimalist vs maximalist reading and immediate vs delayed inferences.
Presents an analysis of children's interpretations of a complex episode of social interaction, which is used to illustrate 3 features of human plans (e.g., human plans are social) that distinguish them from robot plans and form a basis for a theory of the development of social action. In a study conducted by the 1st author (1981), 12 elementary school and college Ss were shown a skit in which one character deceived another. Younger Ss considered the interaction to be cooperative, whereas older Ss understood that the deceiver was manipulating the victim's cooperative interpretation. A model of interacting human plans is incorporated in a notation system that is used for displaying the structure of the alternative interpretations and their mutual embeddings. Implications of the model of human plans for the development of social action and cognition are discussed.
This study examines the hypothesis that the ability of a reader to learn from text depends on the match between the background knowledge of the reader and the difficulty of the text information. Latent Semantic Analysis (LSA), a statistical technique that represents the content of a document as a vector in high dimensional semantic space based on a large text corpus, is used to predict how much readers will learn from texts based on the estimated conceptual match between their topic knowledge and the text information. Participants completed tests to assess their knowledge of the human heart and circulatory system, then read one of four texts that ranged in difficulty from elementary to medical school level, then completed the tests again. Results show a non-monotonic relationship in which learning was greatest for texts that were neither too easy nor too difficult. LSA proved as effective at predicting learning from these texts as traditional knowledge assessment measures. For these texts, optimal assignment of text on the basis of either pre-reading measure would have increased the amount learned significantly.
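As a toy illustration of the matching mechanism described above, the sketch below builds a term-document matrix for two invented texts plus a reader-knowledge "document", reduces it with SVD, and compares documents by cosine similarity in the latent space. The sentences, whitespace tokenization, and two-dimensional space are simplifications for illustration, not the corpus-trained LSA space or materials used in the study.

```python
# Minimal LSA-style sketch: documents become columns of a term-frequency
# matrix, SVD projects them into a low-dimensional latent space, and a
# reader's prior knowledge is matched to texts by cosine similarity.
import numpy as np

def term_doc_matrix(docs, vocab):
    """Raw term-frequency matrix: rows = terms, columns = documents."""
    return np.array([[d.split().count(t) for d in docs] for t in vocab], float)

def lsa_vectors(matrix, k=2):
    """Project document columns into a k-dimensional latent space via SVD."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (np.diag(s[:k]) @ vt[:k]).T  # one k-dim row per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

texts = [
    "heart pumps blood through arteries and veins",           # easier text
    "cardiac ventricles generate systolic arterial pressure",  # harder text
]
reader = "blood flows from the heart through arteries"  # reader's prior knowledge
vocab = sorted(set(" ".join(texts + [reader]).split()))

vecs = lsa_vectors(term_doc_matrix(texts + [reader], vocab), k=2)
sims = [cosine(vecs[-1], vecs[i]) for i in range(len(texts))]
# The text whose latent vector lies closest to the reader-knowledge
# vector is the predicted best conceptual match.
```

In this toy corpus the reader's knowledge shares vocabulary only with the first text, so its latent vector lies far closer to that text than to the second.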
Tested J. Searle's (1965, 1969) suggestion that certain conditions must hold true for a promise to be successfully made. Intuitions regarding these pragmatic conditions were examined in 4 experiments by looking at how 120 undergraduates made and understood promises. The results show that the conditions of speaker's obligation to perform and addressee's desire for performance were extremely important to maintain if a promise was to be made or understood. It appears that people can make promises about actions that would be performed in the normal course of events. It is argued that promises do not by themselves obligate a speaker, but are used to reaffirm previously existing, and often unstated, obligations.
Discusses a new approach to the discovery of units of planning in discourse. Converging evidence from prosody, pausing, structural and semantic parallelism, and stylistic analysis is used to argue for a series of hypotheses about units that appear to organize the construction of discourse. At the lowest level, idea units converge on lines, units that are often, but not always, clausal, and which contain 1 piece of new or foreground information. At the highest level, the text is organized around sections that are like the acts of a play. In between, and crucially mediating between the 2 levels, are stanzas, clusters of lines that are narrowly constrained in structure and topic. It is with stanzas that discourse takes its most definitive step beyond syntax.
This paper presents a framework for expressing how choices are made in systemic grammars. Formalizing the description of choice processes enriches descriptions of the syntax and semantics of languages, and it contributes to constructive models of language use. There are applications in education and computation. The framework represents the grammar as a combination of systemic syntactic description and explicit choice processes, called “choice experts.” Choice experts communicate across the boundary of the grammar to its environment, exploring an external intention to communicate. The environment's answers lead to choices and thereby to creation of sentences and other units, tending to satisfy the intention to communicate. The experts’ communicative framework includes an extension to the systemic notion of a function, in the direction of a more explicit semantics. Choice expert processes are presented in two notations, one informal and the other formal. The informal notation yields a grammar‐guided conversation in English between the grammar and its environment, while the formal notation yields complete accounts of what the grammar produces given a particular circumstance and intent.
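The expert-and-environment exchange described above can be sketched as a loop in which each system's question is answered by the environment and the answer selects a grammatical feature. The systems, questions, and features below are invented for illustration and are far simpler than a real systemic grammar's network of interdependent choices.

```python
# Toy "choice expert" loop: each grammatical system poses a question to
# the environment; the answer determines which feature is chosen. The
# systems and features here are made-up examples, not a real grammar.

def run_choice_experts(systems, environment):
    """Traverse the systems, letting the environment answer each expert."""
    chosen = {}
    for name, (question, options) in systems.items():
        answer = environment(question)   # the expert queries the environment
        chosen[name] = options[answer]   # the answer selects a feature
    return chosen

systems = {
    "MOOD":     ("is the speaker demanding information?",
                 {True: "interrogative", False: "declarative"}),
    "POLARITY": ("is the proposition negated?",
                 {True: "negative", False: "positive"}),
}

# The "intention to communicate" is modeled as answers the environment
# gives when asked; intent.get plays the role of the environment.
intent = {"is the speaker demanding information?": False,
          "is the proposition negated?": False}
features = run_choice_experts(systems, intent.get)
```

The resulting feature bundle (here, a declarative positive clause) is what would drive realization of a sentence in a fuller generator.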
A study examined the ways in which Head Start preschool children's vocabulary developed when they and their mothers engaged in joint reading. Subjects, 19 dyads, were observed as they interacted around expository texts presented in both familiar (newspaper toy advertisements) and traditional (trade books) formats. Subjects were observed in their homes for 10 readings each, during which the dyads read a series of presented texts. The children's ability to identify words from the texts read and their comprehension of a standardized receptive vocabulary list were measured. Mothers talked more than children in all contexts; furthermore, different forms of talk were observed around the different text formats. Correlational and sequential analyses indicated that children's word recall was best predicted by responsive maternal strategies, such as encouraging children to talk about the text, and children's modeling of maternal strategies.
In 3 experiments, we explored the accessibility of concepts of varying centrality as defined by the underlying events described in script-based passages. The accessibility of central concepts, as defined by event-relatedness, was compared to that of central concepts defined on the basis of the number of mentions in the text or based on their relation to the title of the text. Experiments 1A and 1B demonstrated that central words defined by event-relatedness were more accessible than peripheral words. In Experiment 2, event-relatedness and the number of mentions were pitted against each other in defining centrality. The results showed that central concepts defined by event-relatedness (mentioned fewer times) were accessed more readily than peripheral concepts (with a higher number of mentions). Experiment 3 further indicated that the number of mentions did not affect the accessibility of the event-related central concepts. This research demonstrates the appropriateness and effectiveness of defining centrality on the basis of underlying events.