Figure 4. Case system network.

Source publication
Thesis
The lexical and grammatical tradition within biblical studies leaves the interpretive guidelines for exegesis unformalized. Polysemy provides no direction in addressing this issue, but serves only to blur the distinction between the invariant meaning of linguistic signs and the contexts and co-texts that specify and constrain those invariant meanings...

Citations

Article
This paper explores linguistic monosemy and the methodological priorities it suggests. These priorities include a bottom-up modeling of lexical semantics, a corpus-driven discovery procedure, and a sign-based approach to linguistic description. Put simply, monosemy is a methodology for describing the semantic potential of linguistic signs. This methodology is driven by the process of abstraction based on verifiable data, and so it incorporates empirical checks and balances into the tasks of linguistics, especially (though not exclusively) lexical semantics. This paper contrasts lowest-common-denominator and greatest-common-factor methodologies within biblical studies, with three examples: (a) Porter and Pitts's analysis of the semantics of the genitive within the Greek case system in regard to the πίστις Χριστοῦ debate; (b) the disagreement between Ronald Peters and Dan Wallace regarding the Greek article; and (c) the Porter-Fanning debate on the nature of verbal aspect in Greek. Analysis of the Greek of the New Testament stands to benefit from incorporating the insights of monosemy and the methodological correctives it steers toward.
Article
This paper argues that the underdeveloped notion of semantic similarity in Louw and Nida's lexicon can be improved by taking account of distributional information. Their use of componential analysis relies on a set of metalinguistic terms, or components, that are ultimately arbitrary. Furthermore, both the polysemy within their semantic domains and the organization of those domains problematize their categories. By contrast, distributional data provide an empirical measurement of semantic similarity, and lexicogrammatical categorization provides a non-intuition-driven principle of classification. Distributional data are gathered by word embedding, and lexicogrammatical categorization is based largely on a derived metric of abstraction. This argument is tested by considering probable semantic field relationships for a number of Greek lexemes. Ultimately, this approach provides directions to address some of the critical weaknesses in semantic domain or semantic field theory as applied to the study of Hellenistic Greek, by introducing empirical means of approximating lexical fields.
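
As an illustration of the kind of distributional measurement this abstract describes, and not the authors' own implementation, the sketch below ranks Greek lexemes by cosine similarity between word-embedding vectors. The lexeme choices and vector values are hypothetical placeholders; in practice the embeddings would be trained on a Hellenistic Greek corpus.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors: a standard
    distributional proxy for semantic similarity."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for illustration only; real vectors would come
# from a model (e.g. word2vec) trained on a Hellenistic Greek corpus.
embeddings = {
    "ἀγάπη":   np.array([0.82, 0.11, 0.45, 0.03]),
    "φιλία":   np.array([0.78, 0.15, 0.40, 0.09]),
    "πόλεμος": np.array([0.05, 0.91, 0.12, 0.60]),
}

# Rank candidate lexemes by similarity to a target lexeme, approximating
# a probable semantic-field relationship from distributional data.
target = "ἀγάπη"
for lexeme, vec in embeddings.items():
    if lexeme != target:
        print(lexeme, round(cosine_similarity(embeddings[target], vec), 3))
```

Under these assumed vectors, φιλία scores far closer to ἀγάπη than πόλεμος does, which is the sort of empirical, corpus-derived grouping the paper proposes in place of intuition-driven semantic domains.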