Figure 1 - uploaded by Amirabbas Davari
Source publication
High-resolution imaging has delivered new prospects for detecting the material composition and structure of cultural treasures. Despite the variety of analytical techniques available, a significant diagnostic gap remained for works on paper. Old master drawings were mostly composed in a multi-step manner with va...
Contexts in source publication
Context 1
... & Prat 1994), approximately 75% are multi- material drawings. In the final drawing, multiple layers of different substrates overlap, which means the first layers applied by the artist are often unidentifiable and undetected visually by the unaided eye (Figure 1). The different substrates, however, illustrate the various steps involved in the genesis of the artwork. ...
Context 2
... involved in the genesis of the artwork. Over the centuries, we find numerous techniques of laying down a preliminary drawing in dry media (mostly chalks or graphite) followed by a subsequent wet layer (pen or brush with ink). Exceptions naturally exist and, in these instances, the artist returned to a dry medium after laying down an ink wash. Fig. 1 is a clear example: The artist marked the bleeding wounds of the warriors with red chalk at a later time. (*For iconography and work process analysis ...
Context 3
... 2016b pp. 51-59, 79-81, 110, fig. 1). For this reason, the preliminary dry drawing could be considered to be mostly finished by the time the artist changed to the wet technique. Thus, the first layer of the drawing represents the starting point of the creative process and a crucial point of ...
Citations
... Various image processing techniques have been used to facilitate the identification of individual features and attributes of artefacts by highlighting differences and similarities between images in the capture sequence. The approach taken depends on the user, artefact and institution, and includes: Contrast Stretching [41], [88], [94], [116], [140], Spectral Curves [37], [49], [81], [91], [98], [114], [135], [141], Principal Component Analysis [21], [38], [74], [110], [126], Independent Component Analysis [34], [89], [139], [142], Linear Discriminant Analysis [55], [143]-[146], Spectral Mixture Analysis [30], [89], [98], [104], [139], Mosaicing [35], [45], [77], [91], [100], Clustering [22], [37], [47], [104], [147], [148], Spectral Angle Mapper [55], [149]-[153] and Colour Image Processing [22], [34], [37], [52], [82], [100] including: Pseudocolour Rendering, False Colour Image Creation, and True Colour Image Creation. It would be good practice to record any processing techniques used in data analysis, including a brief description of the method, as the terminology often varies between papers. ...
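To make one of the listed techniques concrete, here is a minimal Spectral Angle Mapper sketch in NumPy. The array shapes, the function name, and the reference spectrum are illustrative assumptions, not details taken from the cited works:

```python
import numpy as np

def spectral_angle_map(cube, reference, eps=1e-12):
    """Spectral Angle Mapper: angle (radians) between each pixel spectrum in
    `cube` (H, W, B) and a reference spectrum (B,). Smaller angles indicate
    more similar materials, largely independent of overall brightness."""
    H, W, B = cube.shape
    x = cube.reshape(-1, B).astype(np.float64)
    r = np.asarray(reference, dtype=np.float64)
    cos = (x @ r) / (np.linalg.norm(x, axis=1) * np.linalg.norm(r) + eps)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return angles.reshape(H, W)
```

Thresholding the resulting angle map against a reference spectrum of a known pigment is the usual way such a map is turned into a material mask.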
... Pixels with similar properties can be grouped together using a clustering algorithm such as K-means clustering [44], [47], [104], [147], [148], fuzzy C-means [37], [44], [147], Gaussian mixture models [148], and the Linde-Buzo-Gray clustering algorithm [22], [170]. Since clustering methods are unsupervised techniques, they do not require labels and instead cluster the data based on some distance measure. ...
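A minimal sketch of the unsupervised setting described above, assuming the hyperspectral cube is stored as a NumPy array of shape (H, W, bands) and using scikit-learn's K-means and Gaussian mixture implementations; the function name and parameters are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def cluster_pixels(cube, n_clusters=4, method="kmeans", seed=0):
    """Group pixels of a hyperspectral cube (H, W, B) by spectral similarity.
    No labels are needed; clusters are formed from distances in band space."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)                     # one row per pixel spectrum
    if method == "kmeans":
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(X)
    else:                                       # Gaussian mixture model
        labels = GaussianMixture(n_components=n_clusters,
                                 random_state=seed).fit_predict(X)
    return labels.reshape(H, W)                 # per-pixel cluster map
```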
Although multispectral imaging (MSI) of cultural heritage, such as manuscripts, documents and artwork, is becoming more popular, a variety of approaches are taken and methods are often inconsistently documented. Furthermore, no overview of the process of MSI capture and analysis with current technology has previously been published. This research was undertaken to determine current best practice in the deployment of MSI, highlighting areas that need further research, whilst providing recommendations regarding approach and documentation. An Action Research methodology was used to characterise the current pipeline, including: literature review; unstructured interviews and discussion of results with practitioners; and reflective practice whilst undertaking MSI analysis. The pipeline and recommendations from this research will improve project management by increasing clarity of published outcomes, the reusability of data, and encouraging a more open discussion of process and application within the MSI community. The importance of thorough documentation is emphasised, which will encourage sharing of best practice and results, improving community deployment of the technique. The findings encourage efficient use and reporting of MSI, aiding access to historical analysis. We hope this research will be useful to digitisation professionals, curators and conservators, allowing them to compare and contrast current practices.
... The results of this extensive study constitute a valuable foundation for understanding Redon's materials and techniques. However, since this study was completed, noninvasive methods of chemical and elemental analysis have advanced substantially, most notably macroscale imaging technologies [3][4][5][6][7][8]. The current study was thus developed to expand upon the earlier foundational work by employing these new technologies, specifically macro X-ray fluorescence (MA-XRF) scanning and reflectance imaging spectroscopy (RIS), as well as site-specific Raman microspectroscopy with principal component analysis (PCA) and fiber optic reflectance spectroscopy (FORS), to study the two selected drawings by Redon. ...
The artist Odilon Redon (1840–1916) was a French symbolist known both for the dark, surreal prints and drawings he created in the first half of his career and for the colorful pastel works that characterized the second half. This study examines two drawings by Redon in the J. Paul Getty Museum collection, Apparition (ca. 1880–1890) and Head within an Aureole (ca. 1894–1895), executed during the period in which he was transitioning between these two modes. In order to better understand the materials the artist chose and the methods by which he applied them, two noninvasive, macroscopic characterization techniques were employed: macro X-ray fluorescence (MA-XRF) scanning and reflectance imaging spectroscopy (RIS). These techniques allowed the materials present to be distinguished and the relationships between their applications to be visualized. Coupled with fiber optic reflectance spectroscopy (FORS) and Raman microspectroscopy with principal component analysis (PCA), these results give new insight into the materials and methods used by Redon. Six distinct black drawing materials and a yellow pastel were identified in Apparition, underscoring the complexity of Redon's noir drawings. As he began using color pastel more frequently, he appears to have simplified his black palette; in Head within an Aureole the artist used only two black drawing materials and three color pastels (two pink and one blue). This research provides a framework for future noninvasive technical analysis of works by Redon in other collections, as well as of mixed-media drawings more generally.
... In a previous study on this application, we followed an unsupervised approach using k-means and GMM clustering algorithms [12], which performed poorly, especially for diluted red chalk. In this work, we assume that it is feasible to obtain a limited number of pixels labeled by a specialist, e.g., an art historian. ...
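The cited work builds its own classifier on top of such expert labels; purely to illustrate the weakly supervised setting it describes, here is a sketch that trains an off-the-shelf SVM on a handful of labeled pixels. The function name, array shapes, and the choice of SVM are assumptions, not the authors' method:

```python
import numpy as np
from sklearn.svm import SVC

def classify_with_few_labels(cube, labelled_coords, labels, seed=0):
    """Train on a few expert-labelled pixels (e.g. 'red chalk', 'ink',
    'paper') and predict a class for every remaining pixel.
    `labelled_coords` is a list of (row, col) pairs, `labels` the classes."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    idx = np.ravel_multi_index(np.asarray(labelled_coords).T, (H, W))
    clf = SVC(kernel="rbf", C=10.0, gamma="scale", random_state=seed)
    clf.fit(X[idx], labels)
    return clf.predict(X).reshape(H, W)         # per-pixel class map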
Old master drawings were mostly created step by step in several layers using different materials. To art historians and restorers, examination of these layers brings various insights into the artistic work process and helps to answer questions about the object, its attribution and its authenticity. However, these layers typically overlap and are oftentimes difficult to differentiate with the unaided eye. For example, a common layer combination is red chalk under ink. In this work, we propose an image processing pipeline that operates on hyperspectral images to separate such layers. Using this pipeline, we show that hyperspectral images enable better layer separation than RGB images, and that spectral focus stacking aids the layer separation. In particular, we propose to use two descriptors in hyperspectral historical document analysis, namely hyper-hue and extended multi-attribute profile (EMAP). Our comparative results with other features underline the efficacy of the three proposed improvements.
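As a rough illustration of the hyper-hue idea mentioned in the abstract, namely removing the achromatic (intensity) direction from each spectrum and keeping only its hue-like direction, here is a minimal NumPy sketch. The published descriptor is defined via a chain of rotations, so this is an approximation of the concept rather than the authors' exact formulation:

```python
import numpy as np

def hyper_hue(cube, eps=1e-8):
    """Approximate hyper-hue: project each spectrum onto the hyperplane
    orthogonal to the grey (achromatic) axis and normalise, generalising
    hue from 3 channels to B bands. `cube` has shape (H, W, B)."""
    H, W, B = cube.shape
    x = cube.reshape(-1, B).astype(np.float64)
    grey = np.ones(B) / np.sqrt(B)               # unit achromatic axis
    chroma = x - (x @ grey)[:, None] * grey      # remove intensity component
    norm = np.linalg.norm(chroma, axis=1, keepdims=True)
    hh = chroma / np.maximum(norm, eps)          # unit hue-like direction
    return hh.reshape(H, W, B)
```

Because the intensity component is discarded, pixels of the same pigment at different dilutions map to similar hyper-hue vectors, which is what makes the feature attractive for separating overlapping drawing layers.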
This thesis studies the use of hyperspectral images in two applications, namely remote sensing and art history. The common challenge in both applications is the limited availability of labeled data, caused by the tedious, time-consuming, and expensive manual labeling process performed by experts in each field. At the same time, hyperspectral images and their feature vectors are typically very high dimensional. The combination of these two factors challenges supervised machine learning algorithms. To tackle this problem, this work proposes to either adapt the limited data to the classifier, or adapt the classifier to the limited training data.

Any discrete dataset can be regarded as samples from an unknown, inaccessible distribution; access to this underlying distribution would allow an unlimited number of data points to be drawn. Motivated by this idea, this work uses Gaussian mixture models (GMMs) to estimate the underlying distribution of each class in the dataset. Given the limited available data, the GMMs are constrained to diagonal covariance matrices in order to limit the number of parameters. On both phantom data and real hyperspectral images, it is shown that adding only a few synthetic training samples significantly improves an untuned classifier's performance. Furthermore, untuned classifiers reinforced with the synthesized training samples outperform tuned classifiers trained on the original training set, which suggests that synthetic samples can replace the expensive parameter-tuning process.

In a complementary approach, this work proposes to adapt the classifier to the limited data. Traditional classifiers with high capacity often overfit on extremely small training sets. Bayesian learning has a regularization property built into its formulation, which motivates the use of Bayesian neural networks to remedy the overfitting of standard (frequentist) convolutional neural networks (CNNs). The experimental results demonstrate that, for the same convolutional architecture, the Bayesian variant outperforms the frequentist version, and ensemble learning over networks sampled from the Bayesian model improves classification performance further. Moreover, the evolution of the training and validation losses shows that the Bayesian CNN is considerably more robust against overfitting under extremely limited training data and generalizes better in this situation.

For the second application, i.e., layer separation in old master drawings, this work studies the effectiveness of hyperspectral images, introduces the use of extended multi-attribute profiles (EMAPs) and hyper-hue features, and compares them against other state-of-the-art features on synthesized and real data. The results show that EMAPs and hyper-hue are more informative and representative feature spaces; mapping the hyperspectral images into these spaces results in more accurate segmentation of color pigment layers.
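A minimal sketch of the GMM-based augmentation idea described in the abstract, fitting a diagonal-covariance mixture per class with scikit-learn and sampling synthetic spectra from it; the function name, hyperparameters, and sample counts are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def augment_with_gmm(X_train, y_train, n_synthetic=50, n_components=1, seed=0):
    """Fit a diagonal-covariance GMM per class to the few real training
    spectra, then draw synthetic samples to enlarge the training set."""
    X_aug, y_aug = [X_train], [y_train]
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]               # real spectra of class c
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",   # few parameters
                              random_state=seed).fit(Xc)
        Xs, _ = gmm.sample(n_synthetic)          # synthetic spectra
        X_aug.append(Xs)
        y_aug.append(np.full(len(Xs), c))
    return np.vstack(X_aug), np.concatenate(y_aug)
```

The enlarged training set can then be fed to any off-the-shelf classifier, which is the sense in which the synthetic samples substitute for classifier tuning in the thesis.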
In this work, we conducted a survey of different registration algorithms and investigated their suitability for hyperspectral historical image registration. After evaluating the candidates, we chose an intensity-based registration algorithm with a curved (non-rigid) transformation model. For the transformation model we selected cubic B-splines, since they should be capable of coping with all non-rigid deformations in our hyperspectral images. Among a number of similarity measures, we found that residual complexity and localized mutual information are well suited to the task at hand. In our evaluation, both measures showed acceptable performance in handling all the difficulties that occur in our application, e.g., limited capture range, non-stationary and spatially varying intensity distortions, and multi-modality.
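As a hedged illustration of the setup described above, here is a sketch of intensity-based, cubic B-spline registration of two spectral-band images using SimpleITK. Residual complexity and localized mutual information are not built into SimpleITK, so Mattes mutual information stands in for the similarity measures named in the abstract, and all parameters are illustrative:

```python
import SimpleITK as sitk

def register_bspline(fixed_path, moving_path, mesh_size=(8, 8)):
    """Non-rigid B-spline registration of two 2D band images; returns the
    moving image resampled into the fixed image's space."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    # Cubic B-spline transform over a coarse control-point mesh
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                             numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)
    final_tx = reg.Execute(fixed, moving)

    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
```

Registering each band of the hyperspectral stack against a chosen reference band in this way is one common way to compensate for the band-to-band misalignments the survey discusses.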