Figure 1: Influence of context window size on F1-score.
Source publication
This paper presents a supervised method for a novel task, namely, detecting elements of narration in passages of dialogue in prose fiction. The method achieves an F1-score of 80.8%, exceeding the best baseline by almost 33 percentage points. The purpose of the method is to enable a more fine-grained analysis of fictional dialogue than has previousl...
Contexts in source publication
Context 1
... investigate the influence of the context window size, the model was tested with context windows of 0–9 tokens. Figure 1 shows the results, which indicate that a context window of 0 or 1 tokens performs poorly in comparison to larger windows. The model's performance stabilizes once the context window reaches 4 tokens, showing only minor fluctuations thereafter. ...
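As a rough illustration of how such an ablation can be run, the sketch below sweeps context window sizes from 0 to 9 tokens and records the F1-score for each. The feature builder, classifier, and data containers are assumptions introduced for illustration only; this excerpt does not describe the authors' actual implementation.

```python
# Minimal sketch of a context-window ablation like the one in Figure 1.
# `build_features`, `train_model`, and the data containers are hypothetical;
# the excerpt does not specify the paper's features or classifier.
from sklearn.metrics import f1_score

def sweep_context_windows(train_data, test_data, build_features, train_model,
                          max_window=9):
    """Train and evaluate one narration detector per context window size."""
    scores = {}
    for window in range(max_window + 1):
        X_train = [build_features(ex, window) for ex in train_data.examples]
        X_test = [build_features(ex, window) for ex in test_data.examples]
        model = train_model(X_train, train_data.labels)
        predicted = model.predict(X_test)
        scores[window] = f1_score(test_data.labels, predicted)
    return scores
```

Plotting the returned scores against window size would trace the shape reported in Figure 1: low F1 for windows of 0–1 tokens and a plateau from 4 tokens onward.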
Context 2
... of the main strengths of the model is that it can detect narration in lines based solely on the tokens in the line, with no other contextual information available. In Figure 1 we examined the influence of context window size on performance and showed that using more than four context tokens has only minor effects. This indicates that while there may be long-range dependencies, the most important features are captured within a context window of four tokens. ...
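A small, hypothetical sketch of what such a token-level context window might look like is given below; it assumes the passage is a flat token sequence and that a line occupies the half-open token range [start, end), neither of which is stated in the excerpt.

```python
# Illustrative only: collect a line's own tokens plus up to `window` tokens
# of left and right context. The flat-token-sequence layout is an assumption.
def context_window_tokens(passage_tokens, start, end, window=4):
    """Return the line's tokens together with its surrounding context tokens."""
    left = passage_tokens[max(0, start - window):start]
    right = passage_tokens[end:end + window]
    return {
        "line": passage_tokens[start:end],   # available even with window=0
        "left_context": left,
        "right_context": right,
    }
```

With `window=4`, tokens farther than four positions from the line are ignored, matching the observation that larger windows change performance only marginally.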
Citations
Computational recognition of narratives, if successful, would find innumerable applications with large digitized datasets. Systematic identification of narratives in the text flow could contribute significantly to such pivotal questions as where, when, and how narratives are employed. This paper discusses an approach to extracting narratives from two datasets: Finnish parliamentary records (1980–2021) and oral history interviews with former Finnish MPs (1988–2018). Our study was based on an iterative approach, proceeding from the original expert readings to a rule-based, computational approach that was elaborated with the help of annotated samples and an annotation scheme. Annotated samples and computationally found extracts were compared, and a good correspondence was found. In this paper, we present and compare the results from the annotation and the rule-based approach, and discuss examples of correctly and incorrectly identified narrative sections. We consider that all attempts at recognizing and extracting narratives are definition-dependent and feed back into narrative theory.