Article

Literary “higher dimensions” quantified: a stylometric study of nine stories


Abstract

The study focuses on the quantitative assessment of nine stories considered important contributions to supernatural prose and to early and modern science-fiction prose. Besides the two treatments of the imaginary Flatland, penned by E. A. Abbott and C. H. Hinton, the corpus includes writings by H. G. Wells, A. Blackwood, M. Leinster, G. Waldeyer, R. A. Heinlein, L. Padgett, and A. C. Clarke. The texts are examined by means of four analyses (moving-average type-token ratio, average token length, Busemann's coefficient, and collocation associativity), with the results tested for statistical significance; the textual comparisons then provide a springboard for sketches of literary-critical interpretation. The analyzed corpus reveals the distinctive and colorful take each writer brings to the shared subject. By the nature of that subject, the texts are expected to share higher dimensions and time warps, a thread implying a meeting point in terms of vocabulary richness, plot development, and possibly narrative structure. Yet, in most cases, the findings suggest basic as well as nuanced differences, hinting at clear stylistic physiognomies of the individual authors. The outcome bears not only on the assessment of the weight of the individual samples, but also on the interface between a common (sub)genre and personal style.
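As a hedged illustration of how the first two of these measures can be computed, the sketch below implements a moving-average type-token ratio (MATTR) and average token length; the window size, the naive whitespace tokenizer, and the input file name are illustrative assumptions, not the study's actual settings.

```python
# Minimal sketch of two of the four measures: moving-average type-token
# ratio (MATTR) and average token length. Window size and tokenization
# are illustrative assumptions.
def mattr(tokens, window=500):
    """Average the type-token ratio over all windows of fixed length."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

def avg_token_length(tokens):
    """Mean number of characters per token."""
    return sum(len(t) for t in tokens) / len(tokens)

text = open("story.txt", encoding="utf-8").read().lower()  # hypothetical file
tokens = text.split()  # naive whitespace tokenization for illustration
print(mattr(tokens), avg_token_length(tokens))
```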

Article
Full-text available
The article focuses on analysing activity in selected sonnets of nineteenth-century Czech and Russian literature (100 poems from each). Busemann's coefficient (Q) is computed for the samples, and the individual authors are tested for statistically significant differences by means of the nonparametric Mann–Whitney–Wilcoxon test. Another product of the research is a scatter plot in which the counts of significant MWW test values for the poets are compared against their average Q's; these figures are clustered with the k-means method, and interpretations are formulated on the basis of the groupings. Both microanalyses penetrating an individual author's production and literary-movement investigations are provided, so as to make the research of use for literary criticism, too. Finally, two ways of comparing the Czech and Russian data are sketched, with the outcomes commented upon.
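A minimal sketch of these two steps, assuming the common quantitative-linguistic definition of Busemann's coefficient as the share of verbs among verbs plus adjectives, Q = V / (V + A); the per-poem part-of-speech counts are invented for illustration.

```python
# Busemann's coefficient per poem, then a Mann-Whitney-Wilcoxon test
# between two authors. Q = V / (V + A) is an assumed definition;
# POS counts are taken as given here.
from scipy.stats import mannwhitneyu

def busemann_q(n_verbs, n_adjectives):
    return n_verbs / (n_verbs + n_adjectives)

# Hypothetical per-poem (verb, adjective) counts for two poets.
poet_a = [busemann_q(v, a) for v, a in [(31, 12), (28, 15), (40, 9)]]
poet_b = [busemann_q(v, a) for v, a in [(22, 21), (25, 18), (19, 24)]]

stat, p = mannwhitneyu(poet_a, poet_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```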
Article
Full-text available
With the ever-growing amounts of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper, we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction.
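The paper's replicability framework is more elaborate than a single correction procedure, but as a hedged contrast with the uncorrected practice it criticizes, the sketch below applies the standard Holm-Bonferroni procedure to p-values from comparisons across datasets; Holm's method is a stand-in here, not the authors' proposal.

```python
# Holm-Bonferroni: sort p-values ascending and compare each against a
# progressively less strict threshold; stop at the first failure.
def holm_correction(p_values, alpha=0.05):
    """Return a reject/accept decision for each p-value under Holm's method."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Four hypothetical per-dataset p-values: only the first survives.
print(holm_correction([0.001, 0.02, 0.04, 0.30]))
```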
Article
Full-text available
Authorship attribution, the science of inferring characteristics of the author from the characteristics of documents written by that author, is a problem with a long history and a wide range of application. Recent work in “non-traditional” authorship attribution demonstrates the practicality of automatically analyzing documents based on authorial style, but the state of the art is confusing. Analyses are difficult to apply, little is known about type or rate of errors, and few “best practices” are available. In part because of this confusion, the field has perhaps had less uptake and general acceptance than is its due. This review surveys the history and present state of the discipline, presenting some comparative results when available. It shows, first, that the discipline is quite successful, even in difficult cases involving small documents in unfamiliar and less studied languages; it further analyzes the types of analysis and features used and tries to determine characteristics of well-performing systems, finally formulating these in a set of recommendations for best practices.
Article
Full-text available
The aim of the article is to evaluate and address the limits of an existing approach to the analysis of the thematic concentration of text. To overcome these limits, the article proposes and applies both a modification of the measurement of thematic concentration – known as secondary thematic concentration and proportional thematic concentration – and methods for their statistical testing. The results show that the modification, as well as the application of the proposed tests, enhances the possibilities for analysing the thematic characteristics of text. The article uses 20 Slovak texts of the same genre written by one author.
Article
Full-text available
Language and literary studies have studied style for centuries, and even since the advent of ›stylistics‹ as a discipline at the beginning of the twentieth century, definitions of ›style‹ have varied heavily across time, space and fields. Today, with increasingly large collections of literary texts being made available in digital form, computational approaches to literary style are proliferating. New methods from disciplines such as corpus linguistics and computer science are being adopted and adapted in interrelated fields such as computational stylistics and corpus stylistics, and are facilitating new approaches to literary style. The relation between definitions of style in established linguistic or literary stylistics, and definitions of style in computational or corpus stylistics has not, however, been systematically assessed. This contribution aims to respond to the need to redefine style in the light of this new situation and to establish a clearer perception of both the overlap and the boundaries between ›mainstream‹ and ›computational‹ and/or ›empirical‹ literary stylistics. While stylistic studies of non-literary texts are currently flourishing, our contribution deliberately centers on those approaches relevant to ›literary stylistics‹. It concludes by proposing an operational definition of style that we hope can act as a common ground for diverse approaches to literary style, fostering transdisciplinary research. The focus of this contribution is on literary style in linguistics and literary studies (rather than in art history, musicology or fashion), on textual aspects of style (rather than production- or reception-oriented theories of style), and on a descriptive perspective (rather than a prescriptive or didactic one). Even within these limits, however, it appears necessary to build on a broad understanding of the various perspectives on style that have been adopted at different times and in different traditions. For this reason, the contribution first traces the development of the notion of style in three different traditions, those of German, Dutch and French language and literary studies. Despite the numerous links between each other, and between each of them to the British and American traditions, these three traditions each have their proper dynamics, especially with regard to the convergence and/or confrontation between mainstream and computational stylistics. For reasons of space and coherence, the contribution is limited to theoretical developments occurring since 1945. The contribution begins by briefly outlining the range of definitions of style that can be encountered across traditions today: style as revealing a higher-order aesthetic value, as the holistic ›gestalt‹ of single texts, as an expression of the individuality of an author, as an artifact presupposing choice among alternatives, as a deviation from a norm or reference, or as any formal property of a text. The contribution then traces the development of definitions of style in each of the three traditions mentioned, with the aim of giving a concise account of how, in each tradition, definitions of style have evolved over time, with special regard to the way such definitions relate to empirical, quantitative or otherwise computational approaches to style in literary texts. 
It will become apparent how, in each of the three traditions, foundational texts continue to influence current discussions on literary style, but also how stylistics has continuously reacted to broader developments in cultural and literary theory, and how empirical, quantitative or computational approaches have long existed, usually in parallel to or at the margins of mainstream stylistics. The review will also reflect the lines of discussion around style as a property of literary texts – or of any textual entity in general. The perspective on three stylistic traditions is accompanied by a more systematic perspective. The rationale is to work towards a common ground for literary scholars and linguists when talking about (literary) style, across traditions of stylistics, with respect for established definitions of style, but also in light of the digital paradigm. Here, we first show to what extent, at similar or different moments in time, the three traditions have developed comparable positions on style, and which definitions out of the range of possible definitions have been proposed or promoted by which authors in each of the three traditions. On the basis of this synthesis, we then conclude by proposing an operational definition of style that is an attempt to provide a common ground for both mainstream and computational literary stylistics. This definition is discussed in some detail in order to explain not only what is meant by each term in the definition, but also how it relates to computational analyses of style – and how this definition aims to avoid some of the pitfalls that can be perceived in earlier definitions of style. Our definition, we hope, will be put to use by a new generation of computational, quantitative, and empirical studies of style in literary texts.
Article
Full-text available
This paper explores ways in which research into collocation should be improved. After a discussion of the parameters underlying the notion of collocation, the paper has three main parts. First, I argue that corpus linguistics would benefit from taking more seriously the understudied fact that collocations are not necessarily symmetric, as most association measures imply. Also, I introduce an association measure from the associative learning literature that can identify asymmetric collocations and show that it can also distinguish collocations with high and low association strengths well. Second, I summarize some advantages of this measure and brainstorm about ways in which it can help re-examine previous studies as well as support further applications. Finally, I adopt a broader perspective and discuss a variety of ways in which all association measures – directional or not – in corpus linguistics should be improved in order for us to obtain better and more reliable results.
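The abstract does not name the directional measure; in Gries's related work it is ΔP (delta P), and the sketch below assumes that reading. It shows how direction matters when scoring a word pair from the usual 2x2 contingency counts.

```python
# Directional association for a pair (w1, w2), with:
#   a = w1 & w2, b = w1 & not-w2, c = not-w1 & w2, d = neither.
# Delta P asks how much seeing w1 raises the probability of w2,
# which need not equal how much seeing w2 raises the probability of w1.
def delta_p(a, b, c, d):
    return a / (a + b) - c / (c + d)

# Hypothetical counts: the two directions give different scores.
a, b, c, d = 80, 20, 400, 99500
print(delta_p(a, b, c, d))   # P(w2|w1) - P(w2|not-w1)
print(delta_p(a, c, b, d))   # P(w1|w2) - P(w1|not-w2), transposed table
```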
Article
Full-text available
This article deals with one of the oldest and most traditional fields in quantitative linguistics, the concept of vocabulary richness. Although there are several methods for measuring vocabulary richness, all of them are influenced by text size. The authors therefore propose a new way of measuring vocabulary richness without any dependence on text length. In the second part of the article, the new method is used for a genre analysis of texts written by the Czech writer Karel Čapek. Furthermore, differences between authors and between languages are studied with this method.
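The article's own measure is not reproduced here, but the text-size dependence it is designed to remove is easy to demonstrate: the sketch below shows the classic type-token ratio shrinking over ever-longer prefixes of a single text. The file name is a placeholder assumption.

```python
# Why text size matters: raw TTR falls as a text grows, so TTR
# comparisons across texts of different lengths are unreliable.
def ttr(tokens):
    return len(set(tokens)) / len(tokens)

tokens = open("capek.txt", encoding="utf-8").read().lower().split()  # hypothetical file
for n in (1000, 5000, 20000, len(tokens)):
    prefix = tokens[:n]
    print(f"first {len(prefix):>6} tokens: TTR = {ttr(prefix):.3f}")
```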
Article
Full-text available
'Corpus Design Criteria' begins (Section 1) by defining the object to be created, a corpus, and its constituents, the texts themselves, noting briefly the pragmatic constraints on the sorts of documents, spoken as well as written, that will actually be available. It then (Section 2) reviews the practical stages in establishing a corpus, from the selection of sources through to mark-up, the assignment of annotations to the assembled texts. This is followed by a consideration of copyright problems (Section 3). Section 4 points out the major difficulties in defining the population of texts that the corpus will sample, contrasting the sets of texts received versus those produced by a target group, and internal (linguistic) versus external (social) means of defining such groups. The next three sections look at the sets of markers which can be useful at different levels. Section 5 begins at the highest level, considering the different types of corpus there may be. Section 6 is intermediate, considering how to distinguish the different types of text occurring within a corpus; it is the most fully explicit of the three, listing twenty-nine significant attributes assignable to a text. Then, for the intra-text level, Section 7 reviews considerations governing mark-up, distinguishing markers useful for written and spoken texts. Sections 8 and 9 turn away from the corpus design itself to focus on its social context and function, both of the design process and of the corpus once implemented: to what extent are there now accepted standards relevant to the criteria reviewed in the preceding sections, and what are the major classes of potential users and uses for corpora, both now and in the future?
Article
Full-text available
Collostruction strength, i.e. the degree of attraction that a word Cj exhibits to a construction Ck, has been argued to be exploited in processes of on-line comprehension, for example, to parse ambiguous structures. There are, however, many ways to express this quantity, and a large body of candidate measures can be found in the computational and corpus linguistic literature. The present study provides a comprehensive empirical evaluation of 47 competing (variants of) measures of association in order to assess their usefulness for models of sentence comprehension. To that end, the degree of adequacy of a given measure is evaluated against its performance in a task of predicting human behavior in an eye-tracking experiment that investigated the reading of a local syntactic complementation ambiguity (Kennison 2001). The analysis shows that individual measures in fact arrive at different estimations of degrees of attraction between verbs and the relevant complementation patterns, and hence differ in their power to predict human reading behavior. On the basis of the obtained results, it is suggested that minimum sensitivity (Pedersen and Bruce 1996) is best suited as an expression of collostruction strength.
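Minimum sensitivity itself is simple to state: the smaller of the two conditional probabilities linking word and construction. A sketch, with contingency counts invented for illustration:

```python
# Minimum sensitivity (Pedersen and Bruce 1996) for a word w and a
# construction c, from contingency counts:
#   a = w & c, b = w & not-c, c_ = not-w & c
def minimum_sensitivity(a, b, c_):
    return min(a / (a + b), a / (a + c_))

print(minimum_sensitivity(40, 60, 10))  # min(P(c|w)=0.4, P(w|c)=0.8) = 0.4
```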
Article
Full-text available
We are in the midst of a technological revolution whereby, for the first time, researchers can link daily word use to a broad array of real-world behaviors. This article reviews several computerized text analysis methods and describes how Linguistic Inquiry and Word Count (LIWC) was created and validated. LIWC is a transparent text analysis program that counts words in psychologically meaningful categories. Empirical results using LIWC demonstrate its ability to detect meaning in a wide variety of experimental settings, including to show attentional focus, emotionality, social relationships, thinking styles, and individual differences.
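LIWC is proprietary software with carefully validated dictionaries, so the sketch below only illustrates the underlying mechanism of dictionary-based category counting; the categories and word lists are invented for illustration and are not LIWC's.

```python
# Dictionary-based counting: the share of tokens that fall into each
# psychologically motivated category. Categories here are invented.
from collections import Counter

CATEGORIES = {
    "positive_emotion": {"happy", "love", "nice", "good"},
    "cognitive": {"think", "know", "because", "reason"},
}

def category_rates(tokens):
    counts = Counter()
    for tok in tokens:
        for cat, words in CATEGORIES.items():
            if tok in words:
                counts[cat] += 1
    return {cat: counts[cat] / len(tokens) for cat in CATEGORIES}

print(category_rates("i think i know why i am happy".split()))
```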
Book
Statistics in Corpus Linguistics, by Vaclav Brezina (Cambridge University Press, Applied Linguistics).
Book
Book synopsis: The Emergence of the Fourth Dimension describes the development and proliferation of the idea of higher dimensional space in the late nineteenth and early twentieth centuries. An idea from mathematics that was appropriated by occultist thought, it emerged in the fin de siècle as a staple of genre fiction and influenced a number of important Modernist writers and artists. Providing a context for thinking of space in dimensional terms, the volume describes an active interplay between self-fashioning disciplines and a key moment in the popularisation of science. It offers new research into spiritualism and the Theosophical Society and studies a series of curious hybrid texts. Examining works by Joseph Conrad, Ford Madox Ford, H.G. Wells, Henry James, H. P. Lovecraft, and others, the volume explores how new theories of the possibilities of time and space influenced fiction writers of the period, and how literature shaped, and was in turn shaped by, the reconfiguration of imaginative space occasioned by the n-dimensional turn. A timely study of the interplay between philosophy, literature, culture, and mathematics, it offers a rich resource for readers interested in nineteenth-century literature, Modernist studies, science fiction, and gothic scholarship.
Article
The aim of this article is to discuss reliability issues of a few visual techniques used in stylometry, and to introduce a new method that enhances the explanatory power of visualization with a procedure of validation inspired by advanced statistical methods. A promising way of extending cluster analysis dendrograms with a self-validating procedure involves producing numerous particular 'snapshots', or dendrograms produced using different input parameters, and combining them all into the form of a consensus tree. Significantly better results, however, can be obtained using a new visualization technique, which combines the idea of nearest neighborhood derived from cluster analysis, the idea of hammering out a clustering consensus from bootstrap consensus trees, with the idea of mapping textual similarities onto a form of a network. Additionally, network analysis seems to be a good solution for large data sets.
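A hedged sketch of the network idea: Burrows's Delta distances computed from z-scored most-frequent-word frequencies, with each text linked to its k nearest neighbors as edges. The article's bootstrap consensus step is omitted for brevity, and the input matrix is random placeholder data.

```python
# Nearest-neighbor network over stylometric distances.
import numpy as np

def delta_matrix(freqs):
    """freqs: texts x words matrix of relative word frequencies."""
    z = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)
    n = len(freqs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.abs(z[i] - z[j]).mean()  # Burrows's Delta
    return d

def knn_edges(d, labels, k=2):
    edges = set()
    for i in range(len(d)):
        neighbors = np.argsort(d[i])[1:k + 1]  # skip self at position 0
        for j in neighbors:
            edges.add(tuple(sorted((labels[i], labels[int(j)]))))
    return edges

rng = np.random.default_rng(0)
freqs = rng.random((5, 100))  # placeholder: 5 texts x 100 MFW frequencies
print(knn_edges(delta_matrix(freqs), ["t1", "t2", "t3", "t4", "t5"]))
```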
Book
This book shows that over forty years of psychological laboratory-based research support the claims of the Lexical Priming Theory. It examines how Lexical Priming applies to the use of spoken English as the book provides evidence that Lexical Priming is found in everyday spoken conversations.
Article
The idea that text in a particular field of discourse is organized into lexical patterns, which can be visualized as networks of words that collocate with each other, was originally proposed by Phillips (1983). This idea has important theoretical implications for our understanding of the relationship between the lexis and the text and (ultimately) between the text and the discourse community/the mind of the speaker. Although the approaches to date have offered different possibilities for constructing collocation networks, we argue that they have not yet successfully operationalized some of the desired features of such networks. In this study, we revisit the concept of collocation networks and introduce GraphColl, a new tool developed by the authors that builds collocation networks from user-defined corpora. In a case study using data from McEnery’s (2006a) study of the Society for the Reformation of Manners Corpus (SRMC), we demonstrate that collocation networks provide important insights into meaning relationships in language.
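A rough sketch of the GraphColl idea under simplifying assumptions: collocates of a seed word are found within a fixed window, and each collocate is then expanded in turn. Raw co-occurrence counts stand in for the range of association measures the tool actually offers, and the corpus file name is hypothetical.

```python
# Build a collocation network by expanding outward from a seed word.
from collections import Counter

def collocates(tokens, node, window=5, min_count=3):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for t in tokens[lo:hi] if t != node)
    return {w: c for w, c in counts.items() if c >= min_count}

def collocation_network(tokens, seed, depth=2):
    edges, frontier = set(), {seed}
    for _ in range(depth):
        next_frontier = set()
        for node in frontier:
            for coll in collocates(tokens, node):
                if (node, coll) not in edges and (coll, node) not in edges:
                    edges.add((node, coll))
                    next_frontier.add(coll)
        frontier = next_frontier
    return edges

tokens = open("corpus.txt", encoding="utf-8").read().lower().split()  # hypothetical file
print(sorted(collocation_network(tokens, "reformation"))[:10])
```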
Article
This paper investigates the relative effectiveness and accuracy of multivariate analysis, specifically cluster analysis, of the frequencies of very frequent words and the frequencies of very frequent word sequences in distinguishing texts by different authors and grouping texts by a single author. Cluster analyses based on frequent words are fairly accurate for groups of texts by known authors, whether the texts are long sections of modern British and US novels or shorter sections of contemporary literary critical texts, but they are only rarely completely accurate. When frequent word sequences are used instead of frequent words or in addition to them, however, the accuracy of the analyses often improves, sometimes dramatically, especially when personal pronouns are eliminated. Analyses based on frequent sequences even provide completely correct results in some cases where analyses based on frequent words fail. They also produce superior results for small groups of problematic novels and critical texts extracted from the larger corpora. Such successes suggest that analyses based on frequent word sequences constitute improved tools for authorship and stylistic studies.
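A minimal sketch of the design of such an analysis, with illustrative cutoffs: feature vectors built from the most frequent words plus the most frequent word bigrams, then clustered with Ward linkage. The paper's elimination of personal pronouns is omitted for brevity, and the file names are hypothetical.

```python
# Cluster texts on frequent-word and frequent-bigram frequencies.
from collections import Counter
from scipy.cluster.hierarchy import linkage, fcluster
import numpy as np

def features(texts, n_words=100, n_bigrams=100):
    toks = [t.lower().split() for t in texts]
    words = [w for w, _ in Counter(w for t in toks for w in t).most_common(n_words)]
    bigrams = [b for b, _ in Counter(
        tuple(t[i:i + 2]) for t in toks for i in range(len(t) - 1)
    ).most_common(n_bigrams)]
    rows = []
    for t in toks:
        wc = Counter(t)
        bc = Counter(tuple(t[i:i + 2]) for i in range(len(t) - 1))
        rows.append([wc[w] / len(t) for w in words] +
                    [bc[b] / max(len(t) - 1, 1) for b in bigrams])
    return np.array(rows)

texts = [open(f, encoding="utf-8").read() for f in
         ("novel_a1.txt", "novel_a2.txt", "novel_b1.txt")]  # hypothetical files
Z = linkage(features(texts), method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))  # cluster membership per text
```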
Book
Corpus Annotation gives an up-to-date picture of this fascinating new area of research, and will provide essential reading for newcomers to the field as well as those already involved in corpus annotation. Early chapters introduce the different levels and techniques of corpus annotation. Later chapters deal with software developments, applications, and the development of standards for the evaluation of corpus annotation. While the book takes detailed account of research world-wide, its focus is particularly on the work of the UCREL (University Centre for Computer Corpus Research on Language) team at Lancaster University, which has been at the forefront of developments in the field of corpus annotation since its beginnings in the 1970s.
Article
Words which frequently co-occur in language ('collocations') are often thought to be independently stored in speakers' minds. This idea is tested here through experiments investigating the extent to which corpus-identified collocations exhibit mental 'priming' in a group of native speakers. Collocational priming is found to exist. However, in an experiment which aimed to exclude higher-order mental processes, and focus instead on the 'automatic' processes which are thought to best reflect the organisation of the mental lexicon, priming is restricted to collocations which are also psychological associates. While the former finding suggests that collocations found in a large corpus are likely to have psychological reality, the latter suggests that we may need to elaborate our models of how they are represented.
Article
This paper aims to provide a theory to help explain the similarities and differ-ences between corpus and elicited data in the area of frequent adjective-noun collocations. The study begins with an overview of existing data and theories from word frequency estimation studies and word association studies. This is followed by a critical analysis of three explanations for elicited-corpus data differences (Sinclair 1991; Bybee and Hopper 2001; and Wray 2002). I then report on an experiment designed to compare British National Corpus (BNC) data and English language teacher intuitions about the most frequent collocates of some very common adjectives in the English language. It is argued that the data provide support for the theory that a key factor affecting the 'quality' of lexical intuitions may be the employment of an availability heuristic in judgments of frequency. It is argued that in an elicitation task some collocates of words (particularly those typically occurring together with the stimulus word in a larger language chain) may be more hidden from memory searches than other collocates which tend to occur with the stimulus word as a 'bare' dyad in typical usage.
Article
This paper argues that corpus stylistics can contribute methodologies and concepts to support the investigation of character information in fiction. Focusing on Charles Dickens, the paper looks at lexicogrammatical patterns as well as places in the literary text. It suggests that clusters, i.e. repeated sequences of words, and suspensions, i.e. interruptions of characters' speech by the narrator, can serve as textual cues in the process of characterization. These concepts are illustrated with examples for the characters Bucket and Tulkinghorn in Bleak House. The analysis of the examples leads to an outline of challenges for corpus stylistics that result from the need to interpret features on the textual surface in relation to the effects they might have on the processing of the text by readers.
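Clusters in this sense are just repeated word sequences; a minimal sketch extracting repeated three-word clusters of the kind the paper relates to characterization, with the input file as a placeholder assumption:

```python
# Repeated n-word sequences ("clusters") above a frequency threshold.
from collections import Counter

def clusters(tokens, n=3, min_freq=5):
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return [(g, c) for g, c in grams.most_common() if c >= min_freq]

tokens = open("bleak_house.txt", encoding="utf-8").read().lower().split()  # hypothetical file
for gram, count in clusters(tokens)[:10]:
    print(count, " ".join(gram))
```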
Article
[First-page preview residue: in place of an abstract, this entry reproduces the endnotes of a critical article on E. A. Abbott's Flatland (1884). The notes cover the book's publication and reception history, Abbott's theological and philological writings, and C. H. Hinton's essay "What is the Fourth Dimension?" (1880) and his later An Episode of Flatland, which elaborated on the geometric fancies in Abbott's book.]
Article
John McHardy Sinclair has made major contributions to applied linguistics in three related areas: language in education, discourse analysis, and corpus-assisted lexicography. This article discusses the far-reaching implications of this third area for language description. The corpus-assisted search methodology provides empirical evidence for an original and innovative model of phraseological units of meaning. This, in turn, provides new findings about the relation between word-forms, lemmas, grammar, and phraseology. The article gives examples of these points, places Sinclair's work briefly within a tradition of empirical text analysis, and identifies questions which are currently unanswered but where productive lines of investigation are not difficult to see: (1) linguistic-descriptive (can we provide a comprehensive description of extended phrasal units for a given language?) and explanatory (what explains the high degree of syntagmatic organization in language in use?), and (2) socio-psychological (how can the description of phrasal units of meaning contribute to a theory of social action and to a theory of the ways in which we construe the social world?).
Article
In 1901 T. C. Mendenhall, in one of the earliest studies of its kind, concluded that the frequency distributions of words of different lengths in the works of Shakespeare were so consistently different from those of Bacon that it was very unlikely that works attributed to the former could have been written by the latter. It is here shown that in the writings of Sir Philip Sidney, a contemporary of both, the differences in word-length distribution between his prose and his verse are very close indeed to those found between Bacon's prose and Shakespeare's verse. Thus the differences that Mendenhall found can be more simply explained by a difference of literary presentation. Mendenhall was misled in his conclusions by classifying Shakespeare's plays as 'prose'.
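Mendenhall's "characteristic curve" is simply the frequency distribution of word lengths; a sketch comparing two texts on that distribution, with file names as placeholder assumptions. (As the abstract notes, such differences can reflect prose versus verse rather than authorship.)

```python
# Relative frequency of word lengths 1..max_len, long words pooled.
from collections import Counter

def length_distribution(tokens, max_len=12):
    counts = Counter(min(len(t), max_len) for t in tokens)
    total = sum(counts.values())
    return [counts[i] / total for i in range(1, max_len + 1)]

prose = open("prose.txt", encoding="utf-8").read().split()   # hypothetical files
verse = open("verse.txt", encoding="utf-8").read().split()
for i, (p, v) in enumerate(zip(length_distribution(prose),
                               length_distribution(verse)), start=1):
    print(f"len {i:>2}: prose {p:.3f}  verse {v:.3f}")
```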
Article
This paper traces the historical development of the use of statistical methods in the analysis of literary style. Commencing with stylometry’s early origins, the paper looks at both successful and unsuccessful applications, and at the internal struggles as statisticians search for a proven methodology. The growing power of the computer and the ready availability of machine-readable texts are transforming modern stylometry, which has now attracted the attention of the media. Stylometry’s interaction with more traditional literary scholarship is also discussed.
Article
We present an extensive empirical evaluation of collocation extraction methods based on lexical association measures and their combination. The experiments are performed on three sets of collocation candidates extracted from the Prague Dependency Treebank with manual morphosyntactic annotation and from the Czech National Corpus with automatically assigned lemmas and part-of-speech tags. The collocation candidates were manually labeled as collocational or non-collocational. The evaluation is based on measuring the quality of ranking the candidates according to their chance to form collocations. Performance of the methods is compared by precision-recall curves and mean average precision scores. The work is focused on two-word (bigram) collocations only. We experiment with bigrams extracted from sentence dependency structure as well as from surface word order. Further, we study the effect of corpus size on the performance of the individual methods and their combination.
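Average precision, the core of the evaluation metric named above, can be computed from scratch for one ranked candidate list; the labels below are an invented example of manual collocational/non-collocational judgments.

```python
# Average precision for a ranked list: mean of precision values at the
# ranks where a true collocation appears.
def average_precision(ranked_labels):
    hits, total = 0, 0.0
    for i, is_collocation in enumerate(ranked_labels, start=1):
        if is_collocation:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

# Hypothetical ranking produced by some association measure:
print(average_precision([True, True, False, True, False, False]))  # ~0.917
```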
Article
Natural languages are full of collocations, recurrent combinations of words that co-occur more often than expected by chance and that correspond to arbitrary word usages. Recent work in lexicography indicates that collocations are pervasive in English; apparently, they are common in all types of writing, including both technical and nontechnical genres. Several approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual data. These techniques automatically produce large numbers of collocations along with statistical figures intended to reflect the relevance of the associations. However, none of these techniques provides functional information along with the collocation. Also, the results produced often contained improper word associations reflecting some spurious aspect of the training corpus that did not stand for true collocations. In this paper, we describe a set of techniques based on statistical methods for retrieving and identifying collocations from large textual corpora. These techniques produce a wide range of collocations and are based on some original filtering methods that allow the production of richer and higher-precision output. These techniques have been implemented and resulted in a lexicographic tool, Xtract. The techniques are described and some results are presented on a 10-million-word corpus of stock market news reports. A lexicographic evaluation of Xtract as a collocation retrieval tool has been made, and the estimated precision of Xtract is 80%.