Austrian Academy of Sciences (OeAW)
Recent publications
Ceramic assemblages from the Bukhara Oasis show an interesting mix of vessel forms and decorative styles, particularly in pre-Islamic times. Some elements of the tableware bear resemblance to material from adjacent regions to the south-west and south-east, including Margiana and Bactria, while others, especially the storage vessels, appear to be inspired by north-western or eastern areas such as Khoresm and Chach. Surface coatings and slip paint proved highly significant in identifying these distinct cultural links. Drawing upon samples from the recent excavations of the MAFOUB project, this paper traces the various cultural strains by analyzing the use of slip and slip paint as an exemplary feature in the ceramic assemblages of the Bukhara Oasis from antiquity to the early Islamic period. Individual decorative patterns appear to be restricted to certain parts of the vessel repertoire. While coated vessels firmly belong to the tableware inventory, cursorily applied slip paint with drizzling effects is mostly observed on containers and storage jars. Some slip-painted decorations illustrate the considerable variation this technique enjoyed within the Bukhara Oasis. Our diachronic study highlights these diversities and follows the implied cultural alliances to shed light on Bukhara's position and role within its Central Asian neighborhood.
Let $\mathbb{F}_q$ be the finite field of $q$ elements, where $q=p^r$ is a power of the prime $p$, and let $(\beta_1,\beta_2,\dots,\beta_r)$ be an ordered basis of $\mathbb{F}_q$ over $\mathbb{F}_p$. For
$$\xi=\sum_{i=1}^r x_i\beta_i,\quad x_i\in\mathbb{F}_p,$$
we define the Thue–Morse or sum-of-digits function $T(\xi)$ on $\mathbb{F}_q$ by
$$T(\xi)=\sum_{i=1}^{r}x_i.$$
For a given pattern length $s$ with $1\le s\le q$, a vector $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_s)\in\mathbb{F}_q^s$ with distinct coordinates $\alpha_{j_1}\ne\alpha_{j_2}$, $1\le j_1<j_2\le s$, a polynomial $f(X)\in\mathbb{F}_q[X]$ of degree $d$, and a vector $\mathbf{c}=(c_1,\ldots,c_s)\in\mathbb{F}_p^s$, we put
$$\mathcal{T}(\mathbf{c},\boldsymbol{\alpha},f)=\{\xi\in\mathbb{F}_q : T(f(\xi+\alpha_i))=c_i,\ i=1,\ldots,s\}.$$
In this paper we show that, under some natural conditions, the size of $\mathcal{T}(\mathbf{c},\boldsymbol{\alpha},f)$ is asymptotically the same for all $\mathbf{c}$ and $\boldsymbol{\alpha}$ in both cases, $p\rightarrow\infty$ and $r\rightarrow\infty$, respectively. More precisely, we have
$$\bigl|\,|\mathcal{T}(\mathbf{c},\boldsymbol{\alpha},f)| - p^{r-s}\,\bigr| \le (d-1)q^{1/2}$$
under certain conditions on $d$, $q$ and $s$. For monomials of large degree we improve this bound, and we also find conditions on $d$, $q$ and $s$ under which the bound fails.
In particular, if $1\le d<p$ we have a dichotomy: the bound is valid if $s\le d$, while for $s\ge d+1$ there are vectors $\mathbf{c}$ and $\boldsymbol{\alpha}$ with $\mathcal{T}(\mathbf{c},\boldsymbol{\alpha},f)=\emptyset$, so that the bound fails for sufficiently large $r$. The case $s=1$ was studied before by Dartyge and Sárközy.
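The count can be checked by brute force on a tiny field. The following sketch is our own illustration (not the paper's computation): we take $q = 9 = 3^2$ with the assumed basis $(1,t)$, $t^2 = -1$, the monomial $f(X)=X^2$ (so $d=2$), and pattern length $s=1$ with $\alpha_1=0$:

```python
# Brute-force illustration of |T(c, alpha, f)| over F_9 = F_3(t), t^2 = 2.
# Assumptions (ours, for illustration): f(X) = X^2, s = 1, alpha_1 = 0.
p, r = 3, 2  # q = p^r = 9

def mul(a, b):
    # multiply a0 + a1*t and b0 + b1*t, using t^2 = 2 over F_3
    return ((a[0] * b[0] + 2 * a[1] * b[1]) % p,
            (a[0] * b[1] + a[1] * b[0]) % p)

def T(xi):
    # sum-of-digits function: sum of the coordinates in the basis (1, t)
    return (xi[0] + xi[1]) % p

field = [(x0, x1) for x0 in range(p) for x1 in range(p)]

# count the xi with T(f(xi)) = c, for each c in F_p
counts = {c: sum(1 for xi in field if T(mul(xi, xi)) == c) for c in range(p)}

# The abstract's bound: | |T| - p^(r-s) | <= (d-1) * q^(1/2) = 1 * 3 = 3
assert all(abs(n - p ** (r - 1)) <= 3 for n in counts.values())
```

Here every fiber size stays within the claimed distance of $p^{r-s}=3$, even though the individual counts differ.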
There are only aleph-zero rational numbers, while there are 2 to the power aleph-zero real numbers. Hence the probability that a randomly chosen real number would be rational is 0. Yet proving rigorously that any specific, natural, real constant is irrational is usually very hard, witness that there are still no proofs of the irrationality of the Euler–Mascheroni constant, the Catalan constant, or $\zeta(5)$. Inspired by Frits Beukers' elegant rendition of Apéry's seminal proofs of the irrationality of $\zeta(2)$ and $\zeta(3)$, and heavily using algorithmic proof theory, we systematically searched for other similar integrals that lead to irrationality proofs.
We found quite a few candidates for such proofs, including the square root of $\pi$ times $\Gamma(7/3)/\Gamma(-1/6)$, and $\Gamma(19/6)/\Gamma(8/3)$ divided by the square root of $\pi$.
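For readers unfamiliar with the method, the elementary identity underlying Beukers' approach (a standard fact, not specific to this paper) is obtained by expanding the geometric series and integrating term by term:

$$\int_0^1\!\!\int_0^1 \frac{dx\,dy}{1-xy}
  \;=\; \sum_{n=0}^{\infty}\int_0^1\!\!\int_0^1 (xy)^n\,dx\,dy
  \;=\; \sum_{n=1}^{\infty}\frac{1}{n^2} \;=\; \zeta(2).$$

Apéry-style proofs perturb this integrand by suitable polynomial factors so that the resulting integrals are small rational combinations of $1$ and $\zeta(2)$ (or, with a triple integral, $\zeta(3)$), forcing irrationality.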
Quantum finite automata (QFA) are basic computational devices that make binary decisions using quantum operations. They are known to be exponentially memory efficient compared to their classical counterparts. Here, we demonstrate an experimental implementation of multi-qubit QFAs using the orbital angular momentum (OAM) of single photons. We implement different high-dimensional QFAs encoded on a single photon, where multiple qubits operate in parallel without the need for complicated multi-partite operations. Using two to eight OAM quantum states to implement up to four parallel qubits, we show that a high-dimensional QFA is able to detect the prime numbers 5 and 11 while outperforming classical finite automata in terms of the required memory. Our work benefits from the ease of encoding, manipulating, and deciphering multi-qubit states encoded in the OAM degree of freedom of single photons, demonstrating the advantages structured photons provide for complex quantum information tasks.
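The memory advantage of QFAs over classical automata for divisibility languages can be simulated classically. The sketch below is our illustration in the spirit of the standard Ambainis–Freivalds MOD_p construction, not the paper's exact photonic implementation: each parallel, unentangled qubit is rotated by a fixed angle per input symbol, and the word is accepted if all qubits accept.

```python
# Classical simulation sketch of a multi-qubit QFA for divisibility by a prime.
# Construction follows the well-known MOD_p automaton (our illustration, not
# the paper's photonic OAM implementation).
import math

def accept_probability(n, prime, ks):
    # Qubit j is rotated by angle 2*pi*k_j/prime per input symbol. After n
    # symbols its acceptance probability is cos^2(2*pi*k_j*n/prime); the word
    # is accepted iff every parallel qubit accepts.
    prob = 1.0
    for k in ks:
        prob *= math.cos(2 * math.pi * k * n / prime) ** 2
    return prob

# Detecting the prime 5 with two parallel qubits (rotation multipliers 1, 2):
assert abs(accept_probability(10, 5, [1, 2]) - 1.0) < 1e-12  # 5 | 10: accept
assert accept_probability(7, 5, [1, 2]) < 0.5                # 5 does not divide 7
```

Adding more qubits with different rotation multipliers drives the acceptance probability for non-multiples toward zero, while a classical automaton for the same language needs `prime` states.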
In this paper we propose a method to generate suitably refined finite element meshes using neural networks. As a model problem we consider a linear elasticity problem on a planar domain (possibly with holes) having a polygonal boundary. We impose boundary conditions by fixing the position of one part of the boundary and applying a force on another part. The resulting displacement and distribution of stresses depend on the geometry of the domain and on the boundary conditions. When applying a standard Galerkin discretization using quadrilateral finite elements, one usually has to perform adaptive refinement to properly resolve maxima of the stress distribution. Such an adaptive scheme requires a local error estimator and a corresponding local refinement strategy, and its overall costs are high. We propose to reduce the costs of obtaining a suitable discretization by training a neural network whose evaluation replaces this adaptive refinement procedure. We set up a single network for a large class of possible domains and boundary conditions rather than for a single domain of interest. The computational domain and boundary conditions are interpreted as images, which are suitable inputs for convolutional neural networks. In our approach we use the U-net architecture, and we devise and compare training strategies by dividing the possible inputs into categories based on their overall geometric complexity. One advantage of the proposed approach is the interpretation of input and output as images, which do not depend on the underlying discretization scheme. Another is its generalizability and geometric flexibility: the network can be applied to previously unseen geometries, even with different topology and level of detail. Thus, training can easily be extended to other classes of geometries.
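The image encoding the abstract describes can be sketched in a few lines. This is our own minimal illustration of the idea (a multi-channel raster of domain mask, fixed boundary, and applied force), not the paper's actual preprocessing pipeline; the channel layout and resolution are assumptions:

```python
# Minimal sketch: encode a planar domain with a hole and its boundary
# conditions as a 3-channel image, the input format a convolutional
# network (e.g. a U-net) expects. Layout and resolution are illustrative.
import numpy as np

H = W = 32
domain = np.zeros((H, W), dtype=np.float32)
domain[4:28, 4:28] = 1.0      # rectangular domain mask (1 = material)
domain[12:20, 12:20] = 0.0    # a hole in the middle

fixed = np.zeros_like(domain)  # Dirichlet channel: clamped boundary part
fixed[4:28, 4] = 1.0           # left edge is fixed

force = np.zeros_like(domain)  # traction channel: applied surface force
force[4:28, 27] = 1.0          # unit traction on the right edge

image = np.stack([domain, fixed, force])  # shape (3, H, W): network input
assert image.shape == (3, 32, 32)
```

The network's output can use the same raster, e.g. a per-pixel target element size, which is what makes the approach independent of any particular discretization scheme.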
Metallogeny is the science of ore and mineral deposit formation in geological space and time. Metallogeny is interdisciplinary by nature, comprising elements of natural science disciplines ranging from planetology to solid-state physics and chemistry to volcanology. It is an experimental forefront of research and bold thinking, based on an ever-growing foundation of solid knowledge. Metallogeny is therefore not a closed system of knowledge but a fast-growing assemblage of structured and unstructured information in perpetual flux. This paper intends to review its current state and trends; the latter may introduce speculation and fuzziness. Metallogeny has existed for over 100 years as a branch of Earth science. From the discovery of plate tectonics (ca. 1950) to the end of the last century, metallogeny passed through a worldwide phase of formally published 'metallogenetic' maps. In recent decades, a rapidly growing number of scientists, digitization and splendid new tools have fundamentally boosted research. More innovations may be expected from the growing use of an evolving systematic 'Geodata Science' for metallogenic research by an increasingly global human talent pool. Future requirements for metallic and mineral raw materials, especially the critical elements and compounds needed for the nascent carbon-free economy, already drive activities on stock markets and in the resource industry. State geological surveys, academia and private companies embrace the challenges; the new age requires intensified metallogenic backing. In this paper, principles of metallogeny are recalled concerning concepts and terms, a metallogenic classification of ore and mineral deposits is proposed, and the intimate relations of metallogenesis with geodynamics are sketched (ancient lid tectonics and modern plate tectonics).
Metallogenic models assemble a great diversity of data that allow an ever better understanding of ore formation, foremost by illuminating the geological source-to-trap migration of ore metals, the petrogenetic and geodynamic–tectonic setting, the spatial architecture of ore deposits, and the nature and precise timing of the processes involved. Applied metallogeny allows companies to choose strategy and tactics for exploration investment and for planning the work. Based on comprehensive metallogenic knowledge, mineral system analysis (MSA) selects those elements of complex metallogenic models that are detectable and can guide exploration, in order to support applications such as mineral prospectivity mapping, mineral potential evaluation and the targeting of detailed investigations. MSA founded on metallogenic models can be applied across whole continents, at the scale of regional greenfield search, or in brownfields at district to camp scale. By delivering the fundamental keys for MSA, supported by unceasing innovative research, the stream of new metallogenic insights is essential for improving endowment estimates and for successful exploration.
The notion of topology in physical systems is associated with the existence of a nonlocal ordering that is insensitive to a large class of perturbations. This brings robustness to the behaviour of the system and can serve as a ground for developing new fault-tolerant applications. We discuss how to design and study a large variety of topology-related phenomena for phonon-like collective modes in arrays of ultracold polarized dipolar particles. These modes are coherently propagating vibrational excitations, corresponding to oscillations of particles around their equilibrium positions, which exist in the regime where long-range interactions dominate over single-particle motion. We demonstrate that such systems offer a distinct and versatile tool to investigate a wide range of topological effects in a single experimental setup with a chosen underlying crystal structure by simply controlling the anisotropy of the interactions via the orientation of the external polarizing field. Our results show that arrays of dipolar particles provide a promising unifying platform to investigate topological phenomena with phononic modes.
In recent years, evidence has been provided that individuals with dyslexia show alterations in the anatomy and function of the auditory cortex. Dyslexia is considered a learning disability that affects the development of music and language capacity. We set out to test adolescents and young adults with dyslexia and controls (N = 52) for neurophysiological differences by investigating the auditory evoked P1–N1–P2 complex. In addition, we assessed their ability in Mandarin, their singing ability, their musical talent, and their individual differences in elementary auditory skills. A discriminant analysis of magnetoencephalography (MEG) data revealed that individuals with dyslexia showed prolonged latencies in P1, N1, and P2 responses. A correlational analysis between MEG and behavioral variables revealed that Mandarin syllable-tone recognition, singing ability and musical aptitude (AMMA) correlated with P1, N1, and P2 latencies, respectively, while Mandarin pronunciation was associated only with N1 latency. The main findings of this study indicate that the earlier the P1, N1, and P2 latencies, the better the singing, the musical aptitude, and the ability to link Mandarin syllable tones to their corresponding syllables. We suggest that this study provides additional evidence that dyslexia can be understood as an auditory and sensory processing deficit.
Cities face an evident demographic change, making ambient assisted living (AAL) technologies an interesting choice to support older adults in ageing in place autonomously. Yet supportive technologies are not as widely spread as one would expect. Hence, we investigate the surroundings of older adults living in Vienna and analyse their "socio-relational setup", considering their social integration and psychophysical state compared to others (health, fitness, activeness, contentedness). Method: Our data included 245 older adults (age: M = 74, SD = 6654) living in their own homes (2018–2020, with different degrees of support needs). We calculated univariate and multivariate models regressing the socio-relational setup on the change of routines, technology attitude, mobility-aid use, internet use, subjective age, openness to moving to an institutional care facility in the future, and other confounding variables. Results: We found a strong correlation between all categories (health, fitness, activeness, contentedness) of older adults comparing themselves to their peers. Among others, these are significantly related to openness to institutional care, which implies that participants who felt fitter and more active than their peers were less clear in visualising their future: unpleasant circumstances of ageing are suppressed if current life circumstances are perceived as good. This is an example of cognitive dissonance.
We study Gaussian random functions on the complex plane whose stochastics are invariant under the Weyl–Heisenberg group (twisted stationarity). The theory is modeled on translation invariant Gaussian entire functions, but allows for non-analytic examples, in which case winding numbers can be either positive or negative. We calculate the first intensity of zero sets of such functions, both when considered as points on the plane, or as charges according to their phase winding. In the latter case, charges are shown to be in a certain average equilibrium independently of the particular covariance structure (universal screening). We investigate the corresponding fluctuations, and show that in many cases they are suppressed at large scales (hyperuniformity). This means that universal screening is empirically observable at large scales. We also derive an asymptotic expression for the charge variance. As a main application, we obtain statistics for the zero sets of the short-time Fourier transform of complex white noise with general windows, and also prove the following uncertainty principle: the expected number of zeros per unit area is minimized, among all window functions, exactly by generalized Gaussians. Further applications include poly-entire functions such as covariant derivatives of Gaussian entire functions.
Recent technological advances have broadened the application of palaeoradiology for non-destructive investigation of ancient remains. X-ray microtomography (microCT) in particular is increasingly used as an alternative to histological bone sections for interpreting pathological alterations, trauma, microstructure, and, more recently, bioerosion with direct or ancillary use of histological indices. However, no systematic attempt has been made to confirm the reliability of microCT for histotaphonomic analysis of archaeological bone. The objective of this study is therefore to compare thin sections of human femora rated with the Oxford Histological Index to microCT sections using the newly developed Virtual Histological Index in order to provide an accessible methodology for the evaluation of bioerosion in archaeological bone. We provide detailed descriptions of virtual sections and assess the efficacy of the method on cranial and postcranial elements, cremated long bones, and faunal samples. The traditional histological and virtual methods showed a strong correlation, providing the first systematic data substantiating lab-based microCT as a suitable alternative tool for reconstructing post-mortem history in the archaeological record, and for the reliable, non-destructive screening of samples for further analyses.
Entanglement and quantum communication are paradigmatic resources in quantum information science, leading to correlations between systems that have no classical analogue. Correlations due to entanglement when communication is absent have long been studied in Bell scenarios. Correlations due to quantum communication when entanglement is absent have been studied extensively in prepare-and-measure scenarios in the last decade. Here, we set out to understand and investigate correlations in scenarios that involve both entanglement and communication, focusing on entanglement-assisted prepare-and-measure scenarios. In a recent companion paper [arXiv:2103.10748], we investigated correlations based on unrestricted entanglement. Here, our focus is on scenarios with restricted entanglement. We establish several elementary relations between standard classical and quantum communication and their entanglement-assisted counterparts. In particular, while it was already known that bits or qubits assisted by two-qubit entanglement between the sender and receiver constitute a stronger resource than bare bits or qubits, we show that higher-dimensional entanglement further enhances the power of bits or qubits. We also provide a characterisation of generalised dense coding protocols, a natural subset of entanglement-assisted quantum communication protocols, finding that they can be understood as standard quantum communication protocols in real-valued Hilbert space. Though such dense coding protocols can convey up to two bits of information, we provide evidence, perhaps counter-intuitively, that resources with a small information capacity, such as a bare qutrit, can sometimes produce stronger correlations. Along the way we leave several conjectures and conclude with a list of interesting open problems.
Although it is an integral part of global change, most of the research addressing the effects of climate change on forests has overlooked the role of environmental pollution. Similarly, most studies investigating the effects of air pollutants on forests have generally neglected the impacts of climate change. We review the current knowledge on combined air pollution and climate change effects on global forest ecosystems and identify several key research priorities as a roadmap for the future. Specifically, we recommend: 1) establishment of a much denser array of monitoring sites, particularly in the Southern Hemisphere; 2) further integration of ground and satellite monitoring; 3) generation of flux-based standards and critical levels taking into account the sensitivity of dominant forest tree species; 4) long-term monitoring of N, S and P cycles together with base-cation deposition at the global scale; 5) intensification of experimental studies addressing the combined effects of different abiotic factors on forests, assuring a better representation of taxonomic and functional diversity across the ~73,000 tree species on Earth; 6) more experimental focus on phenomics and genomics; 7) improved knowledge of the key processes regulating the dynamics of radionuclides in forest systems; and 8) development of models integrating air pollution and climate change data from long-term monitoring programs.
Octave equivalence describes the perception that notes separated by a doubling in frequency sound similar. While the octave is used cross-culturally as a basis of pitch perception, experimental demonstration of the phenomenon has proved to be difficult. In past work, members of our group developed a three-range generalization paradigm that reliably demonstrated octave equivalence. In this study we replicate and expand on this previous work trying to answer three questions that help us understand the origins and potential cross-cultural significance of octave equivalence: (1) whether training with three ranges is strictly necessary or whether an easier-to-learn two-range task would be sufficient, (2) whether the task could demonstrate octave equivalence beyond neighbouring octaves, and (3) whether language skills and musical education impact the use of octave equivalence in this task. We conducted a large-sample study using variations of the original paradigm to answer these questions. Results found here suggest that the three-range discrimination task is indeed vital to demonstrating octave equivalence. In a two-range task, pitch height appears to be dominant over octave equivalence. Octave equivalence has an effect only when pitch height alone is not sufficient. Results also suggest that effects of octave equivalence are strongest between neighbouring octaves, and that tonal language and musical training have a positive effect on learning of discriminations but not on perception of octave equivalence during testing. We discuss these results considering their relevance to future research and to ongoing debates about the basis of octave equivalence perception.
We present an adaptive refinement algorithm for T-splines on unstructured 2D meshes. While for structured 2D meshes, one can refine elements alternatingly in horizontal and vertical direction, such an approach cannot be generalized directly to unstructured meshes, where no two unique global mesh directions can be assigned. To resolve this issue, we introduce the concept of direction indices, i.e., integers associated to each edge, which are inspired by theory on higher-dimensional structured T-splines. Together with refinement levels of edges, these indices essentially drive the refinement scheme. We combine these ideas with an edge subdivision routine that allows for I-nodes, yielding a very flexible refinement scheme that nicely distributes the T-nodes, preserving global linear independence, analysis-suitability (local linear independence) except in the vicinity of extraordinary nodes, sparsity of the system matrix, and shape regularity of the mesh elements. Further, we show that the refinement procedure has linear complexity in the sense of guaranteed upper bounds on a) the distance between marked and additionally refined elements, and on b) the ratio of the numbers of generated and marked mesh elements.
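The structured-mesh case mentioned at the start of the abstract can be sketched in a few lines. In this toy version (our illustration, not the paper's unstructured algorithm), the refinement level's parity plays the role that direction indices play on unstructured meshes, alternating the bisection direction:

```python
# Toy sketch of alternating bisection on a structured 2D mesh: even levels
# split in x, odd levels in y (level parity standing in for the direction
# indices used on unstructured meshes; this is our illustration).

def refine(elem):
    # elem = (x0, y0, x1, y1, level)
    x0, y0, x1, y1, lvl = elem
    if lvl % 2 == 0:
        xm = (x0 + x1) / 2
        return [(x0, y0, xm, y1, lvl + 1), (xm, y0, x1, y1, lvl + 1)]
    ym = (y0 + y1) / 2
    return [(x0, y0, x1, ym, lvl + 1), (x0, ym, x1, y1, lvl + 1)]

# Two uniform refinement sweeps of the unit square yield four square cells,
# preserving shape regularity.
mesh = [(0.0, 0.0, 1.0, 1.0, 0)]
for _ in range(2):
    mesh = [child for elem in mesh for child in refine(elem)]
assert len(mesh) == 4
assert all((x1 - x0) == 0.5 and (y1 - y0) == 0.5 for x0, y0, x1, y1, _ in mesh)
```

On an unstructured mesh no two global directions exist, which is exactly why the paper attaches a direction index to each edge instead of relying on a parity rule like the one above.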
The Jamtal Environmental Education Centre is a joint effort of the local communities of Galtür and Ischgl and the Alpinarium museum, dedicated to high mountain livelihoods and landscapes. For this study, we compiled available scientific evidence and personal views in the two communities on the co-evolution of human health and the biodiversity of local ecosystems. Main sources are historical records and maps, chronosequencing in the glacier forefields, and an analysis of contemporary land cover and glacier changes. In both communities, a large part of the area has remained unused since the start of the records in 1857. While the glacier area has shrunk by 70% since then, the forest area has increased as a result of changing land use and climate. Chronosequencing reveals that the glacier forefields are refugia for cold-adapted species under pressure from climate warming. Although land cover has changed, no type of land use recorded in the historical data has disappeared completely. While health services and infrastructure are thought to be sufficient, interviewees saw the largest potential for improvement in today’s lifestyle. Traditional practices involving usage of herbs or food culture, for example related to Gentiana punctata, are still alive and important for the communities.
Sierra Nevada, comprising 2348 vascular flora taxa (including 95 endemic taxa), is considered one of the most important plant hotspots within the Mediterranean region. Sierra Nevada presents 362 taxa inhabiting the alpine area (ca. 242 km2), among them 75 endemic species (62 endemic plus 13 sub-endemic), constituting ca. 79% of the endemism of the entire area. This high mountain has preserved many species, allowing the current presence of many arctic-alpine species, including twelve cold-adapted species with their southernmost limit here. There are 23 nano-hotspots, most of them occurring at the highest altitudes, in the coldest parts. Altogether, they host 30% of the Baetic endemic flora in just 0.07% of the area. Plant communities are also original, composed of a mixture of alpine and Mediterranean species. Climate change is strongly impacting alpine biota, leading to an adaptation to the new conditions. When this adaptation capacity is overcome, species are forced to migrate to avoid extinction. Some responses are already noticeable in alpine areas, such as phenological changes, altitudinal movements, increasing competition and hybridization, and changes in plant assemblages. Direct impacts related to human activities such as livestock grazing, use of fire to manage alpine pasturelands, mountain agriculture, outdoor activities, and infrastructure construction have effects additive to those of climate change, and together they can exacerbate negative changes. Monitoring, evaluating, and understanding the effect of global change in the Mediterranean mountains is a top priority.
We offer guidelines to orient the conservation agenda at Sierra Nevada: to (i) establish an early-warning indicator system, (ii) preserve plant species and habitats, (iii) preserve threatened plant species ex situ, (iv) promote adaptive management measures, (v) evaluate outdoor recreation activities, and (vi) control and regulate activities.
703 members
Viktor Johannes Bruckman
  • Commission for Interdisciplinary Ecological Studies
Michael Nentwich
  • Institute of Technology Assessment
René Fries
  • Institute of Technology Assessment
Helge Torgersen
  • Institute of Technology Assessment
Michael Ornetzeder
  • Institute of Technology Assessment
Dr. Ignaz Seipel-Platz 2, 1010, Vienna, Austria
Head of institution
President Prof. Dr. Anton Zeilinger
+43 (1) 515 81 - 0