March 2023
·
5 Reads
March 2023
·
5 Reads
December 2022
·
89 Reads
·
6 Citations
Journal of Cultural Analytics
This article puts operationalization as a research practice, and its theoretical consequences, into focus. Since the sciences as well as the humanities use concepts to describe their realms of investigation, digital humanities projects are usually faced with the challenge of ‘bridging the gap’ from theoretical concepts (whose meaning(s) depend on a certain theory and which are used to describe expectations, hypotheses and results) to results derived from data. The process of developing methods to bridge this gap is called ‘operationalization’, and it is a common task in any kind of quantitative, formal, or digital analysis. Furthermore, operationalization choices have long-lasting consequences, as they (obviously) influence the results that can be achieved and, in turn, the possibilities for interpreting these results in terms of the original research question. However, even though this process is so important and so common, its theoretical consequences are rarely reflected upon. Because the concepts that are operationalized cannot be operationalized in isolation, operationalizing is not only an engineering or implementation challenge; it touches the theoretical core of the research questions we work on and the fields we work in. In this article, we first clarify the need to operationalize using selected, representative examples, situate the process within typical DH workflows, and highlight the consequences that operationalization decisions have. We then argue that operationalization plays such a crucial role for the digital humanities that any kind of theory needs to take off from operationalization practices. Based on these assumptions, we develop a first scheme of the constraints and necessities of such a theory and reflect on its epistemic consequences.
December 2021
·
13 Reads
·
7 Citations
Journal of Literary Theory
The present article discusses and reflects on possible ways of operationalizing the terminology of traditional literary studies for use in computational literary studies. By »operationalization«, we mean the development of a method for tracing a (theoretical) term back to text-surface phenomena; this is done explicitly and in a rule-based manner, involving a series of substeps. This procedure is presented in detail using as a concrete example Norbert Altenhofer’s »model interpretation« (Modellinterpretation) of Heinrich von Kleist’s The Earthquake in Chile. In the process, we develop a multi-stage operation – reflected upon throughout in terms of its epistemological implications – that is based on a rational-hermeneutic reconstruction of Altenhofer’s interpretation, which focuses on »mysteriousness« (Rätselhaftigkeit), a concept from everyday language. As we go on to demonstrate, attempts to operationalize this term run into numerous difficulties, because Altenhofer’s use of it is underspecified in a number of ways. Thus, for instance, and contrary to Altenhofer’s suggestion, Kleist’s sentences containing »relativizing or perspectivizing phrases such as ›it seemed‹ or ›it was as if‹« (Altenhofer 2007, 45), when analyzed linguistically, by no means suggest a questioning or challenging of the events narrated, since the unreal quality of those German sentences relates only to the comparison in the subordinate clause, not to the respective main clause. Another indicator central to Altenhofer’s ascription of »mysteriousness« is his concept of a »complete facticity« (lückenlose Faktizität) which »does not seem to leave anything ›open‹« (Altenhofer 2007, 45). Again, what exactly qualifies facticity as »complete« is left open, since Kleist’s novella does indeed select certain phenomena and actions within the narrated world for portrayal (and not others).
The degree of factuality in Kleist’s text may be higher than in other texts, but it is by no means »complete«. In the context of Altenhofer’s interpretation, »complete facticity« may be taken to mean a narrative mode in which terrible events are reported in conspicuously sober and at times drastic language. Following the critical reconstruction of Altenhofer’s use of terminology, the central terms and their relationships to one another are first explicated (in natural language), which already necessitates intensive conceptual work. We do so by implementing a hierarchical understanding of the terms discussed: the definition of one term uses other terms which also need to be defined and operationalized. In accordance with the requirements of computational text analysis, this hierarchy of terms should end in »directly measurable« terms – i.e., terms that can be clearly identified on the surface of the text. This, however, raises the question of whether (and, if so, on the basis of which theoretical assumptions) the terminology of literary studies may be traced back in this way to text-surface phenomena. Following the pragmatic as well as theoretical discussion of this complex of questions, we indicate ways in which such definitions may be converted into manual or automatic recognition. In the case of manual recognition, the paradigm of annotation – as established and methodologically reflected in (computational) linguistics – is useful, and a well-controlled annotation process helps to further clarify the terms in question. The primary goal, however, is to establish a recognition rule by which individuals may intersubjectively and reliably identify instances of the term in question in a given text.
While applying this method to literary studies raises new challenges – such as the validity and reliability of the annotations – these challenges are currently being researched intensively in the field of computational literary studies, which has produced a large and growing body of research to draw on. In terms of computer-aided recognition, we examine, by way of example, two distinct approaches: 1) Operationalization guided by precedent definitions and annotation rules benefits from the fact that each of its steps is transparent, may be validated and interpreted, and that existing tools from computational linguistics can be integrated into the process. In the scenario used here, these would be tools for recognizing and assigning character speech, for coreference resolution, and for the assessment of events; all of these, in turn, may be based on machine learning, prescribed rules, or dictionaries. 2) In recent years, so-called end-to-end systems have become popular which, with the help of neural networks, »infer« target terms directly from a numerical representation of the data. These systems achieve superior results in many areas, but their lack of transparency raises new questions, especially with regard to the interpretation of results. Finally, we discuss options for quality assurance and draw a first conclusion. Since numerous decisions have to be made in the course of operationalization, and these, in practice, are often justified pragmatically, the question quickly arises as to how »good« a given operationalization actually is.
And since the tools borrowed from computational linguistics (especially so-called inter-annotator agreement) can be transferred to computational literary studies only in part, and objective standards for the quality of a given implementation are difficult to find, it ultimately falls to the community of researchers and scholars to decide, based on their research standards, which operationalizations they accept. At the same time, operationalization is the central link between computer science and literary studies, as well as a necessary component of a large part of the research done in computational literary studies. The advantage of a conscious, deliberate and reflective operationalization practice lies not only in the fact that it can be used to achieve reliable quantitative results (or that a certain lack of reliability at least is a known factor); it also lies in its facilitation of interdisciplinary cooperation: in the course of operationalization, concrete sets of data are discussed, as are the methods for analysing them, which together minimizes the risk of misunderstandings, »false friends«, and unproductive exchange more generally.
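The inter-annotator agreement mentioned above is typically quantified with a chance-corrected coefficient such as Cohen's kappa. The following is a minimal sketch, not the authors' implementation; the labels and annotator data are invented for illustration (e.g. tagging sentences as »mysterious« or not).

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement: share of items with identical labels
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # expected chance agreement, from each annotator's label distribution
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in freq_a)
    return (p_o - p_e) / (1 - p_e)

ann1 = ["myst", "myst", "plain", "plain", "myst", "plain"]
ann2 = ["myst", "plain", "plain", "plain", "myst", "plain"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.667
```

A kappa of 1.0 means perfect agreement and 0.0 means agreement no better than chance; as the abstract notes, what counts as an acceptable value for literary categories is exactly the kind of question the community must settle.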
July 2020
·
580 Reads
·
6 Citations
The Center for Reflected Text Analytics (CRETA) develops interdisciplinary mixed methods for text analytics in the digital humanities. This volume collects text analyses from specialty fields including literary studies, linguistics, the social sciences, and philosophy. It thus offers an overview of the methodology of the reflected algorithmic analysis of literary and non-literary texts.
July 2020
·
20 Reads
July 2020
·
86 Reads
·
10 Citations
This chapter discusses the central CRETA workflow. Starting from a research question in the humanities and/or social sciences, we define work packages and partial questions. On the basis of these questions, the central terms are operationalized via annotation and automation, such as machine learning. Applying the resulting labeling rules to the corpus data mostly yields quantitative results, which are to be interpreted holistically. In addition to the details of the operationalization, further assumptions such as domain knowledge enter into this overall view; their consequences are critically reflected upon and taken into account in the interpretation.
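The step from labeling rule to quantitative result can be illustrated with a deliberately simple sketch. The rule, phrase list, and corpus below are invented for illustration and are not CRETA's actual rules: a sentence is labeled if it contains a perspectivizing phrase, and applying the rule to a corpus yields a share that would then be interpreted in context.

```python
# Hypothetical labeling rule: mark sentences containing perspectivizing
# phrases (the phrase list is an invented example, not a CRETA resource).
PERSPECTIVIZING = ("it seemed", "as if", "apparently")

def label_sentence(sentence):
    """True if the sentence matches the (toy) labeling rule."""
    s = sentence.lower()
    return any(phrase in s for phrase in PERSPECTIVIZING)

corpus = [
    "It seemed as though the earth itself gave way.",
    "The house collapsed.",
    "Apparently no one survived.",
]
# The quantitative result: share of sentences matching the rule.
share = sum(label_sentence(s) for s in corpus) / len(corpus)
print(round(share, 2))  # → 0.67
```

Real projects would of course use validated annotation guidelines or trained classifiers in place of a substring list; the point is only the shape of the workflow: rule in, counts out, interpretation afterwards.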
July 2020
·
70 Reads
·
3 Citations
This contribution asks to what extent selected texts by Theodor W. Adorno actually realize the constellative use of terms that he propagated, by comparing the links between concepts in Adorno’s texts with those in selected texts by Rudolf Carnap. To carry out this comparison, a model of conceptuality is developed that makes it possible to operationalize term references and their relationships to one another on the basis of linguistic criteria, and that transfers these relations into networks, which are finally compared by means of network analysis. The network-analytic comparisons show that neither author fully lives up to his ideal of concept linkage, but that Adorno implements his own much more consistently than Carnap does.
July 2020
·
76 Reads
·
7 Citations
This contribution presents the results of our interdisciplinary engagement with entities. We developed interdisciplinary guidelines for a reliable and semantically valid annotation of entity references, and a methodological workflow for their semi-automatic identification within large amounts of textual data. From the perspective of four different disciplines within the Humanities and Social Sciences, we discuss challenges related to the application of a generic workflow to heterogeneous text corpora, and present possible solutions. We conclude that the interdisciplinary collaboration enhances the overall methodological stringency and fosters conceptual reflections within each participating discipline.
July 2020
·
39 Reads
This chapter presents various activities related to internal and external communication, including the dissemination of ideas developed in CRETA. Specifically, we present the ‘hackatorial’ (the workshop “Learning machine learning”), a ‘workshop on operationalization’ as a core task for the digital humanities, and the ‘CRETA coaching’. For each activity, we summarize our results and experiences in a conclusion.
January 2020
·
124 Reads
... This proposal also underscores the limitations of current digital humanities, which struggles to operationalise the multi-layeredness and poly-temporality of cultural phenomena. Computational models used in the humanities remain constrained by conceptual frameworks borrowed from non-computational fields such as history and political science – profoundly limiting our ability to harness the explanatory potential of their compositional architectures. The challenge moving forward is to develop humanistic approaches that could better account for the hierarchical and temporal complexities of narrative experience as an embodied phenomenon. ...
December 2022
Journal of Cultural Analytics
... Other scholars have recently tried to implement operationalisation in literary theory (Alvarado 2019; Horstmann & Kleymann 2019; Gius 2019; Reiter et al. 2020; Weitin 2021, 55–57). Their analyses represent attempts to demonstrate that the quantitative approaches of the Digital Humanities allow operationalisation of concepts in literary theory – that is, translation from a theoretical level to an empirical one. ...
December 2021
Journal of Literary Theory
... The decision in favour of manual annotation of the May68 Corpus was based on the fact that this is a specialized corpus for which established automatic labelling of named entities would not predictably yield adequate and satisfactory results, as well as on experience from related research on named entities in various national literary corpora (cf. Stanković et al., 2019; Vala et al., 2015; Ketschik, 2020; Papay and Padó, 2020). As Won et al. (2018) have noted, using historical texts as an example, no single automatic tagging tool is optimal for the automatic tagging of place names; instead, a clever combination of multiple approaches is required. ...
July 2020
... We will go on to describe this research process in detail. A comparable process was previously described by Pichler et al. [20]. Schöch et al. [2], in their survey on methodology in CLS, also present a description of the typical research steps required for CLS, before going on to approach the field from the perspective of multiple literary research problems (e.g. ...
July 2020
... Methodology and tools are understood to play an enabling role. First and foremost, however, the group that advances Kangde relies on the methodology of the history of concepts in its global extension (Betti and van den Berg 2016; Pichler et al. 2020; Pozzo 2021). What is more, the group takes advantage of achievements that have proven to be particularly effective for the advancement of the history of philosophy from a global perspective, such as the English-French Vocabulaire de Philosophie (INIST 2018, a CLARIN lexicon), the Lessico Intellettuale Europeo (Gregory et al. 1967–2022), and the Key Concepts in Chinese Thought and Culture (Wang Lin and Han Zhen 2015–2021). ...
July 2020