Thesis (PDF available)

The Reconstruction of Virtual Cuneiform Fragments in an Online Environment


Abstract

Reducing the time that experts spend on the process of cuneiform fragment reconstruction means that more time can be spent on the translation and interpretation of the information that the fragments contain. Modern computers and ancillary technologies such as 3D printing have the power to simplify the process of cuneiform reconstruction, and to open up the field to non-experts through the use of virtual fragments and new reconstruction methods. For computers to be effective in this context, it is important to understand both the current state of available technology and the behaviours and strategies of individuals attempting to reconstruct cuneiform fragments. This thesis presents the results of experiments to determine the behaviours and actions of participants reconstructing cuneiform tablets in the real and virtual worlds, and then assesses tools developed specifically to facilitate the virtual reconstruction process. The thesis also explores the contemporary and historical state of relevant technologies. The experiments reveal several notable behaviours and strategies that participants use when reconstructing cuneiform fragments. They include an analysis of the ratio between rotation and movement that shows a significant difference between the actions of successful and unsuccessful participants, and an unexpected behaviour in which the majority of participants chose to work with the largest fragments first. It was also observed that the areas of the virtual workspace used by successful participants were different from the areas used by unsuccessful participants. The work further contributes to the field of reconstruction through the development of purpose-built tools that have been experimentally shown to dramatically increase the number of potential joins that an individual is able to make over a given period of time.
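The rotation-to-movement ratio at the heart of that analysis can be sketched in a few lines. The event kinds, field names, and units below are assumptions made for illustration; the abstract does not describe the thesis's actual logging format.

```python
# Hypothetical sketch: a rotation-to-movement ratio computed from a
# participant's interaction log. Event kinds and units are assumed.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str         # "rotate" or "translate" (assumed event types)
    magnitude: float  # degrees turned or distance moved (assumed units)

def rotation_movement_ratio(log: list[Action]) -> float:
    """Total rotation divided by total translation over a session."""
    rotation = sum(a.magnitude for a in log if a.kind == "rotate")
    movement = sum(a.magnitude for a in log if a.kind == "translate")
    return rotation / movement if movement else float("inf")

# A session dominated by rotation yields a high ratio.
session = [Action("rotate", 90.0), Action("translate", 12.5),
           Action("rotate", 45.0)]
print(rotation_movement_ratio(session))  # 10.8
```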
... [4] This sort of information could be used in a process that proposes compatible fragments for a machine-assisted resolution of broken texts or objects (e.g. Koller & Levoy 2006; Lewis 2015; Toler-Franklin et al. 2010; Reggiani 2017: 152-54; Brusuelas 2016). ...
Chapter
What are the implications of digital representation for intellectual property and ownership of cultural heritage? Are aspirations to preservation and accessibility in the digital space reconcilable with cultural sensitivities, colonized history, and cultural appropriation? This volume brings together different perspectives from academics and practitioners of Cultural Heritage to address current debates in the digitization and wider computational study of cultural artifacts. Starting from the tension between the materiality of cultural heritage objects and the intangible character of digital models, we explore larger issues in intellectual property, collection management, pedagogical practice, inclusion and accessibility, and the role of digital methods in decolonization and restitution debates. The contributions include perspectives from a wide range of disciplines, addressing these questions within the study of the material culture of Africa, Europe, Asia, Oceania, and the Americas.
... In the first setting all manually reconstructed parts of the tablets were removed (i.e., the parts that are ...)
[15] The papers do not give any numerical evaluation.
[16] Lewis (2015) is a doctoral dissertation based on several papers discussed here.
[17] Cammarosano also reports a higher success rate of 80%. ...
Thesis
Full-text available
This thesis explores the use of Natural Language Processing (NLP) on the Akkadian language documented from 2400 BCE to 100 CE. The methods and tools proposed in this thesis aim to fill the gaps left in previous research in Computational Assyriology, contributing to the transformation of transliterated cuneiform tablets into richly annotated text corpora, as well as to the quantitative lexicographic analysis of cuneiform texts.

Three contributions of this thesis address the task of transforming Akkadian from its basic Latinized representation, transliteration, into linguistically annotated text corpora. These include (I) neural network-based automatic phonological transcription of transliterated cuneiform text, which is essential for normalizing the diverse spelling variations encountered in the Akkadian writing system; (II) finite-state-based automatic morphological analysis of Akkadian, which allows deconstructing word forms into morphological labels, lemmata and part-of-speech tags to improve the usability of Akkadian corpora for quantitative analysis; and (III) the creation of a morphological gold standard, and a standardized Universal Dependencies approved morphological label set for Akkadian morphology, as the byproduct of an Akkadian treebank.

Three further contributions address the previously unexplored quantitative analysis of Akkadian lexical semantics using word association measures and word embeddings, in order to better understand the language on its own terms. One of these contributions is (IV) an algorithmic method for reducing the distortion caused by fully or partially duplicated sequences in Akkadian texts. This algorithm solves over-representation issues encountered in pointwise mutual information (PMI)-based collocation analysis and, according to preliminary results, also in PMI-based word embeddings. Two contributions (V and VI) are quantitative case studies that demonstrate the use of PMI and word embeddings in Akkadian lexicography, and compare the results with previous qualitative philological research. The last contribution (VII) is a hybrid approach, where PMI is applied to social network analysis of the Neo-Assyrian pantheon in order to reinforce the statistical relevance between the actors. These "semantic" social networks are used to study the position of the Assyrian main god, Aššur, within the pantheon.

In addition to these contributions, this thesis presents the first survey of Computational Assyriology, covering six decades of research on automatic artifact reconstruction, optical character recognition, linguistic annotation, and quantitative analysis of cuneiform texts.
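As a concrete anchor for contribution (IV) and the PMI-based case studies, the sketch below applies the standard pointwise mutual information definition, PMI(x, y) = log2(p(x, y) / (p(x) p(y))), with simple sentence-level co-occurrence counting. The counting scheme and the toy tokens are assumptions; the thesis's windowing, smoothing, and duplicate-reduction algorithm are not reproduced here.

```python
# Minimal PMI collocation sketch using sentence-level co-occurrence.
# Illustrates the standard PMI definition only, not the thesis's method.
import math
from collections import Counter
from itertools import combinations

def pmi_scores(sentences):
    """PMI(x, y) = log2(p(x, y) / (p(x) * p(y))), estimated per sentence."""
    word_counts, pair_counts = Counter(), Counter()
    n = len(sentences)
    for sent in sentences:
        vocab = set(sent)                       # count each word once per sentence
        word_counts.update(vocab)
        pair_counts.update(combinations(sorted(vocab), 2))
    return {
        (x, y): math.log2((c / n) / ((word_counts[x] / n) * (word_counts[y] / n)))
        for (x, y), c in pair_counts.items()
    }

# Toy example with invented transliteration-like tokens:
corpus = [["sharru", "rabu"], ["sharru", "rabu", "dannu"], ["ilu", "dannu"]]
print(pmi_scores(corpus)[("rabu", "sharru")])  # log2(1.5) ≈ 0.585
```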
Conference Paper
Full-text available
The process of reassembling fragmented wall paintings is currently prohibitively time consuming, limiting the amount of material that can be examined and reconstructed. Computer-assisted technologies hold the promise of helping humans in this task, making it possible to digitize detailed shape, color, and surface relief information for each fragment. The data can be used for documentation, visualization (both on- and off-site), virtual restoration, and to automatically propose matches between fragments. Our focus in this paper is on improving the workflow, tools, and visualizations, as they are used by archaeologists and conservators to scan fragments and find matches. In particular, we evaluate the system’s performance and user experience in ongoing acquisition and matching work at a Roman excavation in Tongeren, Belgium. Compared to prior systems, we can acquire fragments approximately 10 times faster, and support a wider range of fragment sizes (from 1 cm to 20 cm in diameter).
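The match-proposal step can be illustrated with a deliberately simplified 2D sketch that scores how well two fragment edges mate by comparing curvature profiles. The real system works with 3D scans of shape, color, and surface relief; this flat formulation and its functions are assumptions for illustration only.

```python
# Toy 2D edge-matching sketch; the actual system matches 3D scan data.
import numpy as np

def curvature_profile(points: np.ndarray) -> np.ndarray:
    """Turning angles along a sampled 2D boundary (ignores angle wraparound)."""
    v = np.diff(points, axis=0)
    return np.diff(np.arctan2(v[:, 1], v[:, 0]))

def edge_match_score(edge_a: np.ndarray, edge_b: np.ndarray) -> float:
    """A mating edge, walked in the opposite direction, traces the same
    curve, so a score near zero marks a candidate join."""
    ca = curvature_profile(edge_a)
    cb = curvature_profile(edge_b[::-1])
    n = min(len(ca), len(cb))
    return float(np.mean(np.abs(ca[:n] - cb[:n])))

# Sanity check: an edge mates perfectly with its own reversal.
edge = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [3.0, 0.5]])
print(edge_match_score(edge, edge[::-1]))  # 0.0
```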
Chapter
Personal and reflective essays that describe how particular works—whether papers, books, or demos, from classics to forgotten gems—have influenced each writer's approach to HCI. Over almost three decades, the field of human-computer interaction (HCI) has produced a rich and varied literature. Although the focus of attention today is naturally on new work, older contributions that played a role in shaping the trajectory and character of the field have much to tell us. The contributors to HCI Remixed were asked to reflect on a single work at least ten years old that influenced their approach to HCI. The result is this collection of fifty-one short, engaging, and idiosyncratic essays, reflections on a range of works in a variety of forms that chart the emergence of a new field. An article, a demo, a book: any of these can solve a problem, demonstrate the usefulness of a new method, or prompt a shift in perspective. HCI Remixed offers us glimpses of how this comes about. The contributors consider such HCI classics as Sutherland's Sketchpad, Engelbart's demo of NLS, and Fitts on Fitts' Law—and such forgotten gems as Pulfer's NRC Music Machine, and Galloway and Rabinowitz's Hole in Space. Others reflect on works somewhere in between classic and forgotten—Kidd's "The Marks Are on the Knowledge Worker," King Beach's "Becoming a Bartender," and others. Some contributors turn to works in neighboring disciplines—Henry Dreyfuss's book on industrial design, for example—and some range farther afield, to Lovelock's Gaia hypothesis and Jane Jacobs's The Death and Life of Great American Cities. Taken together, the essays offer an accessible, lively, and engaging introduction to HCI research that reflects the diversity of the field's beginnings.
Chapter
It is essential that we develop effective systems for the management and preservation of digital heritage data. This chapter outlines the key issues surrounding access, sharing and curation, and describes current efforts to establish research infrastructures in a number of countries. It aims to provide a detailed overview of the issues involved in the creation, ingest, preservation and dissemination of 3D datasets in particular. The chapter incorporates specific examples from past and present Archaeology Data Service (ADS) projects and highlights the recent work undertaken by the ADS and partners to specify standards and workflows in order to aid the preservation and reuse of 3D datasets.
Conference Paper
The sensitivity review of government records is essential before they can be released to the official government archives, to prevent sensitive information (such as personal information, or information that is prejudicial to international relations) from being released. Because records are typically reviewed and released after a period of decades, sensitivity review practices are still based on paper records. The transition to digital records brings new challenges: the increased volume of digital records, for example, makes current practices impractical. In this paper, we describe our current work towards developing a sensitivity review classifier that can identify and prioritise potentially sensitive digital records for review. Using a test collection built from government records with real sensitivities identified by government assessors, we show that considering the entities present in each record can markedly improve upon a text classification baseline.
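The core claim, that entity evidence improves on a plain text-classification baseline, can be sketched with off-the-shelf components. The entity list, feature design, and toy labels below are invented for illustration; they do not reproduce the authors' classifier or their government test collection.

```python
# Hedged sketch: augmenting a TF-IDF baseline with entity-count features.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

class EntityCounts(BaseEstimator, TransformerMixin):
    """Counts known entity strings per document (a stand-in for real NER)."""
    def __init__(self, entities):
        self.entities = entities
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[doc.lower().count(e) for e in self.entities]
                         for doc in X])

entities = ["ambassador", "embassy", "minister"]  # illustrative list only
model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer()),
        ("entities", EntityCounts(entities)),
    ])),
    ("clf", LogisticRegression()),
])

docs = ["The ambassador met the minister.", "Routine stationery order."]
labels = [1, 0]  # 1 = potentially sensitive (toy labels)
model.fit(docs, labels)
print(model.predict(["Embassy briefing for the minister."]))
```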
Article
Purpose – This paper aims to focus on a highly significant yet under-recognised concern: the huge growth in the volume of digital archival information and the implications of this shift for information professionals.
Design/methodology/approach – Though data loss and format obsolescence are often considered to be the major threats to digital records, the problem of scale remains under-acknowledged. This paper discusses this issue, and the challenges it brings, using a case study of a set of Second World War service records.
Findings – TNA's research has shown that it is possible to digitise large volumes of records to replace paper originals using rigorous procedures. Consequent benefits included being able to link across large data sets so that further records could be released.
Practical implications – The authors will discuss whether the technical capability, plus space and cost savings, will result in increased pressure to retain records, and what this means in creating a feedback loop of volume.
Social implications – The work also has implications in terms of new definitions of the "original" archival record. There has been much debate on challenges to the definition of the archival record in the shift from paper to born-digital. The authors will discuss where this leaves the digitised "original" record.
Originality/value – Large volumes of digitised and born-digital records are starting to arrive in records and archive stores, and the implications for retention are far wider than simply digital preservation. By sharing novel research into the practical implications of large-scale data retention, this paper showcases potential issues and some approaches to their management.