Article

Fractals and Chaos in Geology and Geophysics


Abstract

The definition of a fractal distribution is that the number of objects N with a characteristic size greater than r scales with the relation N ~ r^(−D). The frequency-size distributions for islands, earthquakes, fragments, ore deposits, and oil fields often satisfy this relation. Fractals were originally introduced by Mandelbrot to relate the length of a coastline to the length of the measuring stick. This application illustrates a fundamental aspect of fractal distributions: scale invariance. The requirement of an object to define a scale in photographs of many geological features is one indication of the wide applicability of scale invariance to geological problems; scale invariance can also lead to fractal clustering. Geophysical spectra can likewise be related to fractals; these are self-affine fractals rather than self-similar fractals. Examples include the earth's topography and geoid.
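As a concrete illustration of the relation N ~ r^(−D), the fractal dimension D can be read off as the slope of a log-log plot of cumulative counts against size. Below is a minimal Python sketch using synthetic power-law data in place of measured island or fragment sizes; all names and parameter values are illustrative assumptions, not data from the book.

```python
import numpy as np

# Synthetic sizes drawn from a power law with D = 1.5 (hypothetical data;
# in practice r would be measured island areas, fragment sizes, etc.).
rng = np.random.default_rng(42)
D_true = 1.5
r = rng.pareto(D_true, size=5000) + 1.0  # classical Pareto: P(R > r) = r**(-D)

# Cumulative count N(>r) evaluated at logarithmically spaced sizes.
r_bins = np.logspace(0, np.log10(r.max()), 20)
N = np.array([(r > rb).sum() for rb in r_bins])

# Fit log N = -D log r + c over bins that still contain data.
mask = N > 0
slope, intercept = np.polyfit(np.log10(r_bins[mask]), np.log10(N[mask]), 1)
print(f"estimated D = {-slope:.2f}")  # should come out close to 1.5
```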

... Furthermore, faults and fractures represent the geometrical anisotropies that provide planes of weakness within the rock masses and are thus most likely to be reactivated seismically if suitably oriented with respect to the local stress field. Faults and earthquakes can thus be regarded as the long- and short-time-scale phenomena of brittle tectonics, and there should exist an inherent relation between the size distributions of faults and earthquakes (Scholz, 1998, 2019; Turcotte, 1997). However, the relationship between these size distributions in natural datasets has so far received little attention, mainly because acquiring reliable quantitative information on fault networks is challenging given limited outcrop conditions and a small dynamic range. ...
... We assume an underlying power law distribution, which is most commonly used to model fault size distributions (e.g., Bonnet et al., 2001; Bour et al., 2002; Odling, 1997; Scholz et al., 1993; Torabi and Berg, 2011; Yielding et al., 1996; Davy et al., 2010; Scholz, 2019). In nature, all power laws must have upper and lower limits (Bonnet et al., 2001; Torabi and Berg, 2011; Turcotte, 1989, 1997). For faults, the upper limit is likely related to the thickness of the crust or the stratigraphic layering, while the lower limit is constrained by a physical length scale (e.g., grain size). ...
... The power law distributions derived in this study cover fault length scales of three orders of magnitude between about 10^0 and 10^3 m, and it is likely that the power law holds also for smaller and larger faults. In nature, however, all power laws must have upper and lower physical limits (Bonnet et al., 2001; Torabi and Berg, 2011; Turcotte, 1989, 1997). The detected lower scaling limit of 10^0 m likely does not reflect the actual physical length scale (e.g., caused by grain size effects), and we thus speculate that our analysis does not incorporate the lower limit of the power law scaling. ...
Preprint
Full-text available
Pre-existing geological discontinuities such as faults represent structural and mechanical discontinuities in rocks which influence earthquake processes. As earthquakes occur in the subsurface, seismogenic reactivation of pre-existing fault networks is difficult to investigate in natural settings. However, it is well-known that there exists a physical link between faults and earthquakes, since an earthquake’s magnitude is related to the ruptured fault area and therefore fault length. Furthermore, faults and earthquakes exhibit similar statistical properties, as their size distributions follow power laws. In this study, we exploit the relation between the size distributions of faults and earthquakes to decipher the seismic deformation processes within the exhumation-related orogen-internal setting of the Southwestern Swiss Alps, which due to its well-monitored seismic activity and the excellent outcrop conditions provides an ideal study site. Characterizing the size distribution of exhumed fault networks from different tectonic units based on multi-scale drone-based mapping, we find that power law exponents of 3D fault networks generally range between 3 and 3.6. Comparing these values with the depth-dependent exponents of estimated earthquake rupture lengths, we observe significantly larger values of 5 to 8 for earthquake ruptures at shallow depths (< 3 km below sea level (BSL)). At intermediate crustal depths (~3 to 9 km BSL), the power law exponents of faults and earthquakes appear to be similar. These findings imply depth-dependent differences in the seismogenic reactivation of pre-existing faults in the study region: while partial rupturing is the prevailing deformation mechanism at shallow depths, faults are more likely to rupture along their entire length at intermediate crustal depths. Therefore, the present-day near surface differential stresses are likely insufficient to rupture entire pre-existing faults seismogenically. Our findings have direct implications for seismic hazard considerations, as earthquakes that rupture along entire faults appear to become less likely with decreasing depth.
... Another reason for the research interest in earthquakes is the hypothesis that earthquakes are specific members of the class of critical phenomena [2][3][4]. This hypothesis is motivated by the fact that the frequency of earthquakes has a power-law relationship to quantities characteristic of the size and energy of an earthquake [5]. ...
... where B_0 = 2b/3, and it is analogous to the parameter b from the Gutenberg-Richter law [5,8,9]. In all empirical catalogs of earthquakes, a noticeable decline can be observed in the value of b for earthquakes of small magnitudes. ...
Article
Full-text available
We discuss a model of seismic activity that is based on the concept of energy in a cluster of sources of seismic activity. We show that specific cases of the studied model lead to the Gutenberg–Richter relationship and the Omori law. These laws are valid for earthquakes that happen in a single cluster of sources of seismic activity. Further, we discuss the distribution of earthquakes for several clusters containing sources of seismic activity. This distribution contains, as a specific case, a version of the negative binomial distribution. We show that at least a part of the roll-off effect connected to the parameter b of the Gutenberg–Richter law occurs because one records earthquakes that happen in more than one cluster of sources of seismic activity.
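Since both this abstract and the preceding excerpt revolve around the Gutenberg–Richter b-value, a short sketch of the standard Aki/Utsu maximum-likelihood estimate may be useful. The synthetic catalog and parameter values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes above the
    completeness magnitude m_c, with the standard half-bin correction dm/2."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with b = 1: magnitudes above m_c are
# exponentially distributed with scale log10(e)/b.
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=10000)
print(f"b = {b_value_mle(mags, m_c=2.0, dm=0.0):.2f}")  # ~1.0
```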
... Fracture phenomena are quite common in nature and play a fundamental role in many situations of interest for science and technological applications [1,2]. Despite many advances in materials science and applied mechanics along the past decades, the full description of such problems remains a great challenge to physicists and engineers [3]. ...
... It should be emphasized that, as soon as the first secondary crack appears, the stress amplitude Δσ(x; {a_k, x_k}_n) is no longer given by the analogue of the simple form in Eq. (2). Due to the lack of an analytical solution for the stress field of multiple thin cracks, even in the simplest case where the cracks are arranged along the same line, we resort to an independent-crack approximation, to be detailed below, whenever it is necessary to deal with secondary cracks, except in the case γ = 0, which we now present in detail. ...
Preprint
We investigate a model for fatigue crack growth in which damage accumulation is assumed to follow a power law of the local stress amplitude, a form which can be generically justified on the grounds of the approximately self-similar aspect of microcrack distributions. Our aim is to determine the relation between model ingredients and the Paris exponent governing subcritical crack-growth dynamics at the macroscopic scale, starting from a single small notch propagating along a fixed line. By a series of analytical and numerical calculations, we show that, in the absence of disorder, there is a critical damage-accumulation exponent γ, namely γ_c = 2, separating two distinct regimes of behavior for the Paris exponent m. For γ > γ_c, the Paris exponent is shown to assume the value m = γ, a result which proves robust against the separate introduction of various modifying ingredients. Explicitly, we deal here with (i) the requirement of a minimum stress for damage to occur; (ii) the presence of disorder in local damage thresholds; (iii) the possibility of crack healing. On the other hand, in the regime γ < γ_c the Paris exponent is seen to be sensitive to the different ingredients added to the model, with rapid healing or a high minimum stress for damage leading to m = 2 for all γ < γ_c, in contrast with the linear dependence m = 6 − 2γ observed for very long characteristic healing times in the absence of a minimum stress for damage. Upon the introduction of disorder on the local fatigue thresholds, which leads to the possible appearance of multiple cracks along the propagation line, the Paris exponent tends to m ≈ 4 for γ ≲ 2, while retaining the behavior m = γ for γ ≳ 4.
... Nikora et al. (1998) reported that the heights of the surface roughness of a gravel bed are nearly Gaussian with an isotropic second-order structure function D(ℓ) proportional to ℓ^(2H) for sufficiently small spatial separation ℓ, where H is the Hurst exponent, which those authors determined to be 0.79 for a water-worked gravel bed. This behaviour indicates that, at sufficiently small spatial scales, the surface roughness has the properties of a self-affine fractal surface (Turcotte 1997; Meakin 1998). Other examples of surface roughness showing self-affinity are the surface of the planet Mars (Orosei et al. 2003) and fracture surfaces (Ponson et al. 2006). ...
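The structure-function estimate of the Hurst exponent mentioned here is straightforward to compute. A minimal sketch for a 1-D profile follows, tested on ordinary Brownian motion (for which H = 0.5 is known); the lag grid and series length are arbitrary choices for illustration.

```python
import numpy as np

def hurst_structure_function(z, max_lag=100):
    """Estimate H from the second-order structure function
    D(l) = <[z(x+l) - z(x)]^2> ~ l^(2H) of a 1-D profile z."""
    lags = np.unique(np.logspace(0, np.log10(max_lag), 15).astype(int))
    D = np.array([np.mean((z[lag:] - z[:-lag]) ** 2) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(D), 1)
    return slope / 2.0

# Ordinary Brownian motion has H = 0.5; a gravel-bed elevation profile with
# H = 0.79 (Nikora et al., 1998) would be analysed in exactly the same way.
rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(100000))
print(f"H = {hurst_structure_function(z):.2f}")  # ~0.5
```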
Article
Full-text available
The influence of irregular three-dimensional rough surfaces on the displacement of the logarithmic velocity profile relative to that of a smooth wall in turbulent flow, known as the roughness function, is studied using direct numerical simulations. Five different surface power spectral density (PSD) shapes were considered, and for each, several rough Gaussian surfaces were generated by varying the root mean square (k_rms) of the surface heights. It is shown that the roughness function (ΔU^+) depends on both the PSD and k_rms. For a given k_rms, ΔU^+ increases as the wavenumbers of the PSD expand to large values, but at a rate that decreases with the magnitude of the wavenumbers. Although ΔU^+ generally does not scale with either k_rms or the effective slope ES when these variables are considered singularly, for PSDs with low wavenumbers, ΔU^+ tends to scale with ES, whereas as wavenumbers increase, ΔU^+ tends to scale with k_rms. An equivalent Nikuradse sand roughness of about eight times k_rms is found, which is similar to that observed in previous studies for a regular three-dimensional roughness. Finally, it is shown that k_rms and the effective slope are sufficient to describe the roughness function in the transitional rough regime.
... In addition, an increase in the dynamic loading rate also enhances the energy dissipation of the rock and alters the distribution characteristics of its fragments (Feng et al. 2021). To quantitatively analyze the distribution characteristics of fragments, a fractal dimension calculation method was introduced to characterize rock fragmentation (Turcotte 1997; Adler and Thovert 1999; Domokos et al. 2020), which characterizes the fragmentation complexity by the fractal dimension of the fracture surface of the cracks (Mardoukhi et al. 2017). Researchers have studied the fractal characteristics of granite fragmentation under static and dynamic loads and found that granite debris has fractal characteristics (Wang et al. 2009; Li et al. 2010a, b). ...
Article
Full-text available
To investigate the crushing behavior and energy dissipation patterns of rocks subjected to impact loads, impact experiments were performed on limestone samples with various aspect ratios using a split Hopkinson pressure bar testing apparatus. The effects of aspect ratio and impact velocity on the failure modes and energy consumption during the fragmentation process of the specimen were examined. High-speed photography techniques and scanning electron microscopy technology were utilized to investigate the morphological features of the fracture surfaces in rock fragments. The results indicated that at a constant aspect ratio, an increase in the impact velocity resulted in a decrease in the equivalent particle size and an increase in the fractal dimension. At the same impact speed, the equivalent particle size increased with an increase in the aspect ratio, whereas the fractal dimension decreased as the aspect ratio increased. The failure modes of the limestone specimens gradually shifted with higher aspect ratios, transitioning from axial splitting to compressive shear as the dominant failure mechanism. The fractal dimension of the fragments increased with a higher energy dissipation density. As the aspect ratio of the specimen increases, the length-to-thickness ratio of the fragments increases linearly. The proposed prediction function model for the variation of the equivalent particle size of fragments with dissipated energy agrees well with the experimental results. Numerical simulations employing the finite element method confirmed the dynamic crushing characteristics of limestone. These simulations illustrated the variations in crack propagation paths in limestone specimens with various aspect ratios under impact conditions.
... Fractal analysis is a useful tool for studying self-similar patterns and structures in systems like natural landscapes, biological forms, and financial markets [1][2][3]. Unlike traditional methods based on simple shapes, fractal analysis captures the complex shapes and patterns that appear at different scales, providing a deeper understanding of these systems [4][5][6][7]; it is widely used in fields like environmental science, geology, and data analysis, where understanding detailed patterns is important [8][9][10][11]. ...
Article
Full-text available
This paper presents an original and comprehensive comparative analysis of eight fractal analysis methods, including Box Counting, Compass, Detrended Fluctuation Analysis, Dynamical Fractal Approach, Hurst, Mass, Modified Mass, and Persistence. These methods are applied to artificially generated fractal data, such as Weierstrass–Mandelbrot functions and fractal Brownian motion, as well as natural datasets related to environmental and geophysical domains. The objectives of this research are to evaluate the methods’ capabilities in capturing fractal properties, their computational efficiency, and their sensitivity to data fluctuations. Main findings indicate that the Dynamical Fractal Approach consistently demonstrated the highest accuracy across different datasets, particularly for artificial data. Conversely, methods like Mass and Modified Mass showed limitations in complex fractal structures. For natural datasets, including meteorological and geological data, the fractal dimensions varied significantly across methods, reflecting their differing sensitivities to structural complexities. Computational efficiency analysis revealed that methods with linear or logarithmic complexity, such as Persistence and Compass, are most suited for larger datasets, while methods like DFA and Dynamical Fractal Approaches required higher computational resources. This comparison provides researchers with a guide for selecting appropriate fractal analysis techniques based on dataset characteristics and computational limitations.
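Of the methods compared above, box counting is the most widely used and is easy to sketch: count the boxes of size eps occupied by the set and fit the slope of log N(eps) against log eps. The test data below (points on a line, so D ≈ 1) and the box sizes are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Box-counting estimate of the fractal dimension of a 2-D point set:
    count occupied boxes N(eps) at each box size eps and fit
    log N(eps) = -D log eps + c."""
    counts = []
    for eps in eps_list:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(eps_list), np.log(counts), 1)
    return -slope

# Points on a straight line should give D ~ 1; a coastline trace or fault
# map would be fed in the same way as an (N, 2) array of coordinates.
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, 0.5 * t])
eps = np.array([0.1, 0.05, 0.02, 0.01, 0.005])
print(f"D = {box_counting_dimension(line, eps):.2f}")  # ~1.0
```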
... Individual earthquake sequences are, therefore, strongly dependent on fault geometry as well as pre-earthquake strength and stress distribution. Even small differences in fault geometry or strength distribution accumulate, so that the respective catalogs would quickly diverge, attesting to the chaotic nature of the underlying processes (e.g., Turcotte, 1997; Ben-Zion, 2008). ...
Article
The nonplanar geometry of faults influences their seismotectonic behavior, affecting the initiation, propagation, and termination of individual earthquakes as well as the stress–slip relationship and probability of multisegment rupture. Consequently, computer simulations that aim to resolve the earthquake rupture process and make predictions about a fault’s future behavior should incorporate nonplanar fault geometries. Although surface traces of faults can be mapped with high accuracy, a key challenge is to define a fault’s detailed subsurface geometry due to a general lack of data. This raises the question of which geometry to use. Does it matter which subsurface geometry is utilized in earthquake rupture simulations, as long as at least the fault trace is considered? How different is the simulated fault behavior for faults that share the same surface trace but different subsurface geometries? Using the physics-based earthquake-cycle simulator MCQsim, we generate seismic catalogs for 100 × 20 km strike-slip faults, assuming variations in fault surface trace, subsurface geometry, and strength distribution. We investigate how the long-term fault behavior—in the form of magnitude–frequency distribution, earthquake interevent time, and maximum earthquake size—is affected by fault geometry and strength distribution. We find that the simulated behavior of strike-slip faults with identical fault traces is interchangeable—even if their subsurface fault geometries differ. Implementing the fault trace constrains possible fault geometries to a level that makes the long-term behavior indistinguishable—at least for strike-slip faults with “known” strength distribution and length-to-width aspect ratios that are equal or larger than what we used here. The fault trace can provide a satisfactory representation of subsurface geometry for assessing long-term fault behavior.
... A complex network structure consists of many interacting elements, where the elements are known as nodes or vertices and the interactions between nodes are known as links or edges [1,27]. Complex network theory has been extensively used to model real-world complex systems in geology [21], ecosystems [22], mathematical biology [23], and neuroscience [24], as well as the physics of neutrinos [25] and superconductors [26]. ...
Preprint
Full-text available
Synchronization is an emergent phenomenon in coupled dynamical networks. The Master Stability Function (MSF) is a highly elegant and powerful tool for characterizing the stability of synchronization states. However, a significant challenge lies in determining the MSF for complex dynamical networks driven by nonlinear interaction mechanisms. These mechanisms introduce additional complexity through the intricate connectivity of interacting elements within the network and the intrinsic dynamics, which are governed by nonlinear processes with diverse parameters and higher dimensionality of systems. Over the past 25 years, extensive research has focused on determining the MSF for pairwise coupled identical systems with diffusive coupling. Our literature survey highlights two significant advancements in recent years: the consideration of multilayer networks instead of single-layer networks and the extension of MSF analysis to incorporate higher-order interactions alongside pairwise interactions. In this review article, we revisit the analysis of the MSF for diffusively pairwise coupled dynamical systems and extend this framework to more general coupling schemes. Furthermore, we systematically derive the MSF for multilayer dynamical networks and single-layer coupled systems by incorporating higher-order interactions alongside pairwise interactions. The primary focus of our review is on the analytical derivation and numerical computation of the MSF for complex dynamical networks. Finally, we demonstrate the application of the MSF in data science, emphasizing its relevance and potential in this rapidly evolving field.
... Researchers aim to predict the occurrence of earthquakes by identifying precursory signals linked to preparatory processes. In some cases these are observed as spatio-temporal variations of both seismic and aseismic parameters (e.g., Smalley et al., 1987; Turcotte, 1997; Brenguier et al., 2008; Gulia and Wiemer, 2019) and are associated with the localization of deformation (e.g., Bürgmann, 2014; Kato and Ben-Zion, 2020). Geodetic measurements have shown that aseismic deformation occurring near mainshock hypocenters correlated with increased seismic activity in several earthquake sequences (e.g., Kato et al., 2012; Obara and Kato, 2016). ...
Article
Full-text available
The b-value of the magnitude distribution of natural earthquakes appears to be closely influenced by the faulting style. We investigate this in the laboratory for the first time by analyzing the moment tensor solutions of acoustic emissions detected during a triaxial compression test on Berea sandstone. We observe systematic patterns showing that faulting style influences the b-value and differential stress. Similar trends are observed in a complementary physics-based numerical model that captures mechanical energy dissipation. Both the differential stress and dissipation are found to be inversely correlated with the b-value. The results indicate that, at late stages of the test, the dissipation increases and is linked to a change in AE faulting style and a drop in b-value. The patterns observed in the laboratory Frohlich diagrams could be explained by the integrated earthquake model: damaged rock regions form as microcracks coalesce, leading to strain localization and runaway deformation. The modeling results also align with the micromechanics responsible for dissipation at various stages of the experiment and agree with the moment tensor solutions and petrographic investigations. The integration of physics-based models that can capture dissipative processes of the earthquake cycle could assist researchers in constraining seismic hazard in both natural and anthropogenic settings.
... Likewise, the gradient, termed the «b-value», is characteristic of different regions and geotectonic settings (e.g., [Turcotte, 1997; Utsu, 2002a; Spada et al., 2013; Godano et al., 2014]). It usually varies around 1.0 with a range of ±0.2 in seismically active regions [Clauser, 2014]. ...
Article
Full-text available
Earthquake likelihoods have long occupied humankind, and the estimation of potential magnitudes is crucial for a multitude of aspects of safety. In this work, we present a probabilistic analysis of extreme magnitudes in 16 regions across the globe characterized by different seismicity to invert the traditional question of «what probability is associated with certain magnitudes». We combine the Gutenberg-Richter Law and Rank-Ordering-Statistics in a methodological approach to estimate what magnitude ranges can be almost certainly (i.e., with 95%) expected, and what magnitudes become reasonably unlikely beyond those ranges. This approach allows for estimating probabilities for maximal magnitudes per region and comparing against them the maximal magnitudes (mr) that appeared in reality. The method explores the maximal magnitudes that could occur or be exceeded with a probability of 95%, whether the respective mr are equal to or greater than these 95% predictions, and how probable it is that these mr could be reproduced or exceeded. From these statistical considerations, we suspect a deficit of great magnitudes in the Alps and a surplus across the Atlantic Ocean.
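The "95% magnitude" idea in this abstract can be sketched under a simple assumption the paper itself refines with rank-ordering statistics: if events follow a Gutenberg-Richter rate and occur as a Poisson process, the magnitude almost certainly reached within T years solves 1 − exp(−T·10^(a−bm)) = 0.95. The parameter values below are hypothetical.

```python
import numpy as np

def magnitude_with_exceedance_prob(a, b, T, p=0.95):
    """Magnitude m such that, under a Gutenberg-Richter rate
    lambda(>=m) = 10**(a - b*m) events/year and Poisson occurrence,
    at least one event of magnitude >= m occurs within T years
    with probability p."""
    lam = -np.log(1.0 - p) / T          # required annual rate
    return (a - np.log10(lam)) / b

# Hypothetical regional parameters (a = 4.5, b = 1.0): the magnitude
# almost certainly (95%) reached at least once in 100 years.
print(f"m95 = {magnitude_with_exceedance_prob(4.5, 1.0, 100):.1f}")  # ~6.0
```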
... The exponent of the power-law fit depends on several factors, including the geological context, the strength of the material, and possibly the size of boulders. Power-law behavior of cumulative size-frequency distributions of terrestrial fragmented objects has also been observed (Turcotte 1997; see also Table 1 in Pajola et al. 2015), with the listed examples having exponents ranging from -1.89 to -3.54. ...
Preprint
Asteroid (162173) Ryugu is the target object of Hayabusa2, an asteroid exploration and sample return mission led by Japan Aerospace Exploration Agency (JAXA). Ground-based observations indicate that Ryugu is a C-type near-Earth asteroid with a diameter of less than 1 km, but the knowledge of its detailed properties is still very limited. This paper summarizes our best understanding of the physical and dynamical properties of Ryugu based on remote sensing and theoretical modeling. This information is used to construct a design reference model of the asteroid that is used for formulation of mission operations plans in advance of asteroid arrival. Particular attention is given to the surface properties of Ryugu that are relevant to sample acquisition. This reference model helps readers to appropriately interpret the data that will be directly obtained by Hayabusa2 and promotes scientific studies not only for Ryugu itself and other small bodies but also for the Solar System evolution that small bodies shed light on.
... The OFC model is an abstract representation of the 'slider block' model of seismicity (Burridge and Knopoff, 1967), with the equations of motion of the spring blocks being replaced by a CA consisting of a grid of cells on which the associated stress distribution evolves according to a simple set of rules (e.g. Turcotte, 1997). The 2D OFC model is normally assumed to represent an abstract section of a fault plane subject to a constant applied stress; the earthquake events arise from local 'stick-slip' behaviour, which might remain localised (small events) or trigger an 'avalanche' of events involving neighbouring cells (leading to larger events). ...
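The stick-slip rules summarized here are simple enough to sketch directly. Below is a minimal, hedged rendition of an Olami-Feder-Christensen-style automaton under the assumptions stated in the comments (open boundaries, uniform drive, transfer fraction alpha = 0.2); the grid size, seed, and parameter values are illustrative, and real studies use many variants of these update rules.

```python
import numpy as np

def ofc_step(stress, alpha=0.2, threshold=1.0):
    """One loading step of a minimal Olami-Feder-Christensen automaton:
    drive all cells uniformly to the verge of failure, then relax the
    resulting avalanche by zeroing each failing cell and transferring a
    fraction alpha of its stress to each of its four neighbours
    (open boundaries). Returns the avalanche size."""
    stress += threshold - stress.max()      # uniform drive to failure
    size = 0
    while True:
        failing = np.argwhere(stress >= threshold)
        if len(failing) == 0:
            return size
        size += len(failing)
        for i, j in failing:
            s = stress[i, j]
            stress[i, j] = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha * s

rng = np.random.default_rng(3)
grid = rng.uniform(0.0, 1.0, (32, 32))
sizes = [ofc_step(grid) for _ in range(2000)]  # avalanche-size statistics
```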
Preprint
The cellular automaton model of Piegari, Di Maio, Scandone and Milano, J. Volc. Geoth. Res., 202, 22-28 (2011) is extended to include magma-induced stress (i.e. a local magma-related augmentation of the stress field). This constitutes a nonlinear coupling between the magma and stress fields considered by this model, which affects the statistical distributions of eruptions obtained. The extended model retains a power law relation between eruption size and frequency for most events, as expected from the self-organised criticality inspiring this model; but the power law now applies for a reduced range of size, and there are new peaks of relatively more frequent eruptions of intermediate and large size. The cumulative frequency of repose time between events remains well modelled by a stretched exponential function of repose time (approaching a pure exponential distribution only for the longest repose times), but the time scales of this behaviour are slightly longer, reflecting the increased preference for larger events. The eruptions are relatively more likely to have high volatile (water) content, so would generally be more explosive. The new model also naturally favours a central `axial' transport conduit, as found in many volcano systems, but which otherwise must be artificially imposed within such models.
... Hierarchical load-sharing (HLS) criteria are also of great interest [7][8][9]. Recently, these models have received much attention in the geophysical literature [10], because one would reasonably expect the emergence of universal scaling laws of the type observed in seismology [11,12]. In this field, the bundle is a representation of a fault, and the individual elements or fibers represent asperities on the fault plane. ...
Preprint
A probabilistic method for solving time-dependent load-transfer models of fracture is developed. It is applicable to any rule of load redistribution, i.e., local, hierarchical, etc. In the new method, the fluctuations are generated during the breaking process (annealed randomness), while in the usual method the random lifetimes are fixed at the beginning (quenched disorder). Both approaches are equivalent.
... Traditional geophysical approaches suggest that this phenomenon is also challenging for the science of complex systems. It involves high-level complexity in the accumulation of stress at faults inside a volcano, due to quite a few factors including propagating/inflating/magma-filled dikes, groundwater transport through porous media, and the nontrivial geometry of the paths of magma migration (Roman and Cashman 2006; Turcotte 1997; Zobin 2012). Thus, volcanic seismicity results from the interplay of diverse complex dynamics on complex architecture at various scales. ...
Preprint
A comparative study is performed on volcanic seismicities at the Icelandic volcano Eyjafjallajökull and Mt. Etna in Sicily from the viewpoint of complex systems science, and the discovery of remarkable similarities between them is reported. In these seismicities as point processes, the jump probability distributions of earthquakes (i.e., distributions of the distance between the hypocenters of two successive events) are found to obey the exponential law, whereas the waiting-time distributions (i.e., distributions of the inter-occurrence time of two successive events) follow the power law. A careful analysis is made of the finite size effects on the waiting-time distributions, and the previously reported results for Mt. Etna (Abe and Suzuki 2015) are reinterpreted accordingly. It is shown that the growth of the seismic region in time is subdiffusive at both volcanoes. The aging phenomenon is commonly observed in the "event-time-averaged" mean-squared displacements of the hypocenters. A comment is also made on (non-)Markovianity of the processes.
... In such situations one uses averaged measurements. For instance, the famed Gutenberg-Richter law [12,38,3], which gives an exponential approximation to the size distribution of earthquakes (via their magnitudes), is valid only after appropriate averaging over a wide spatio-temporal domain. This explains the importance of the question: How do the distributions of Eqs. (10), (12) change after temporal averaging? ...
Preprint
The dynamics of a 2D site percolation model on a square lattice is studied using the hierarchical approach introduced by Gabrielov et al., Phys. Rev. E, 60, 5293-5300, 1999. The key elements of the approach are the tree representation of clusters and their coalescence, and the Horton-Strahler scheme for cluster ranking. Accordingly, the evolution of percolation model is considered as a hierarchical inverse cascade of cluster aggregation. A three-exponent time-dependent scaling for the cluster rank distribution is derived using the Tokunaga branching constraint and classical results on percolation in terms of cluster masses. Deviations from the pure scaling are described. An empirical constraint on the dynamics of a rank population is reported based on numerical simulations.
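The Horton-Strahler ranking scheme invoked above is a compact recursive rule, sketched below for a generic rooted tree. The toy tree at the end is a hypothetical example; applying the rule to percolation-cluster trees or river networks works the same way.

```python
def strahler_order(children, node):
    """Horton-Strahler order of a rooted tree given as a dict
    node -> list of child nodes. Leaves have order 1; an internal node
    gets order k+1 if two or more of its children attain the maximal
    child order k, and order k otherwise."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler_order(children, c) for c in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# A small binary tree: two order-1 leaves merging give order 2 at the root.
tree = {"root": ["a", "b"], "a": [], "b": []}
print(strahler_order(tree, "root"))  # 2
```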
... The dimension f(α) for high α characterizes the pattern of extremes of Y, in this case, the Rn concentration. For rigorous definitions and a more detailed discussion, see, e.g., [51,52] or the textbook [53]. ...
Article
Full-text available
Consumer-grade economical radon monitors are becoming increasingly popular in private and institutional use, in the contexts of both Citizen Science and traditional research. Although originally designed for screening indoor radon levels in view of radon regulation and decisions about mitigation or remediation—motivated by the health hazard posed by high radon concentrations—researchers are increasingly exploring their potential in some environmental studies. For a long time, radon has been used as a tracer for investigating atmospheric transport processes. This paper focuses on RadonEye, currently the most sensitive among low-cost monitors available on the market, and specifically, its potential use for monitoring very low radon concentrations. It has two objectives: firstly, discussing issues of the statistics of low count rates, and secondly, analyzing radon concentration time series acquired with RadonEyes outdoors and in low-radon indoor spaces. Regarding the first objective, among other things, the inference from the reported radon concentration to the expected true concentration is discussed. The second objective includes the application of autoregressive methods and fractal statistics to time series analysis. The overall result is that radon dynamics can be well captured using this “low-tech” approach. Statistical results are plausible; however, few results are available in the literature for comparison, particularly concerning fractal methods. The paper may therefore be seen as an incentive for further research in this direction.
... Human observation typically categorizes the seismic events based on their relative magnitudes and positions in the space-time sequence, identifying sequences and the so-called swarms. Earthquake sequences are associated with a mainshock, which is followed by smaller events nearby (Omori 1894; Gutenberg and Richter 1941; Kagan 1994; Turcotte 1997; Scholz 2002; Bak et al. 2002). Occasionally, the mainshock is preceded by small precursory events (Brodsky and Lay 2014; Mignan 2014; Seif et al. 2019; Lippiello 2021, 2023). ...
Article
Full-text available
The identification of seismic clusters is essential for many applications of statistical analysis and seismicity forecasting: uncertainties in cluster identification lead to uncertainties in results. However, there are several methods to identify clusters, and their results are not always compatible. We tested different approaches to analyze the clustering: a traditional window-based approach, a complex network-based technique (nearest neighbor—NN), and a novel approach based on fractal analysis. The case study is the increase in seismicity observed in Molise, Southern Italy, from April to November 2018. To analyze the seismicity in detail with the above-mentioned methods, an improved template-matching catalog was created. A stochastic declustering method based on the Epidemic Type Aftershock Sequence (ETAS) model was also applied to add probabilistic information. We explored how the significant discrepancies in these methods’ results affect the result of the NExt STrOng Related Earthquake (NESTORE) algorithm—a method to forecast strong aftershocks during an ongoing cluster—previously successfully applied to the whole Italian territory. We performed a further analysis of the spatio-temporal pattern of seismicity in Molise, using Principal Component Analysis (PCA), the ETAS algorithm, as well as other analyses aimed at detecting possible migration and diffusion signals. We found a relative quiescence of several months between the main events of April and August, the tendency of the events to propagate upwards, and a migration of the seismicity consistent with a fluid-driven mechanism. We hypothesize that these features indicate the presence of fluids, which are also responsible for the long duration of the sequence and the discrepancies in the cluster identification methods’ results. Such results add to the other pieces of evidence of the importance of the fluid presence in controlling the seismicity in the Apennines. Moreover, this study highlights the importance of refined methods to identify clusters and encourages further detailed analyses when different methods supply very different results.
... These networks are strongly reminiscent of fractals [10], patterns that are widely recognized as nature's optimized strategy for filling space and distributing resources [11]. Researchers have long applied power-law scaling relationships to describe the order and universality within branching networks [12][13][14][15]. A prime example in fluvial studies is Hack's law [16], which relates the length of the main (longest) channel of a given river network (L_m) to its drainage area (A_d) using a power function: L_m ∝ A_d^h. ...
Article
Full-text available
Branching networks are key elements in natural landscapes and have attracted sustained research interest across the geosciences and numerous intersecting fields. The prevailing consensus has long held that branching networks are optimized and exhibit fractal properties adhering to power-law scaling relationships. However, tidal networks in coastal wetlands and mudflats exhibit scaling properties that defy conventional power-law descriptions, presenting a longstanding enigma. Here we show that the observed atypical scaling represents a universal deviation from an ideal fractal branching network capable of fully occupying the available space. Using satellite imagery of tidal networks from diverse global locations, we identified an inherent “laziness” in this deviation—where the increased ease of channel formation paradoxically decreases the space-filling efficiency of the network. We developed a theoretical model that reproduces the ideal fractal branching network and the laziness phenomenon. The model suggests that branching networks can emerge under a localized competition principle without adhering to conventionally assumed optimization-driven processes. These results reveal the dual nature of branching networks, where “laziness” complements the well-known optimization process. This property provides more flexible strategies for controlling tidal network morphogenesis, with implications for coastal management, wetland restoration, and studies in fluvial and planetary systems.
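Hack's law, quoted in the excerpt above, is fitted in practice as a straight line in log-log space. A minimal sketch follows; the (length, area) pairs are hypothetical placeholders for values extracted from real fluvial or tidal networks.

```python
import numpy as np

# Hypothetical (length, area) pairs for channels in a network; real data
# would come from extracted tidal or fluvial network geometries.
L_m = np.array([1.2, 2.5, 4.1, 7.8, 14.0, 22.0])     # main-channel length, km
A_d = np.array([0.9, 3.1, 7.4, 24.0, 70.0, 160.0])   # drainage area, km^2

# Hack's law L_m = c * A_d**h: fit h as the slope in log-log space.
h, log_c = np.polyfit(np.log10(A_d), np.log10(L_m), 1)
print(f"Hack exponent h = {h:.2f}")   # ~0.5-0.6 is typical for fluvial networks
```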
... I then took the Fourier transform of the noise array and multiplied the Fourier coefficients C_pq by a power law of the wavenumber modulus k = √(p² + q²) with exponent β/2 = −1 − H′ = −1.2, so that C^filt_pq = C_pq k^(−1.2), following a similar procedure as described in (Turcotte, 1997) with minor variations. Finally, I took the inverse Fourier transform to obtain a random χ distribution with the desired self-affine properties. ...
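The spectral-synthesis procedure quoted above translates almost line for line into code. A minimal sketch, assuming a square grid and the quoted exponent β/2 = −1.2 (grid size and seed are arbitrary illustrative choices):

```python
import numpy as np

def self_affine_field(n=256, beta_half=-1.2, seed=0):
    """Spectral synthesis of a self-affine random field, mirroring the
    quoted procedure: filter white noise in the Fourier domain by
    k**(beta/2) and transform back (beta/2 = -1 - H' = -1.2 here)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    C = np.fft.fft2(noise)
    p = np.fft.fftfreq(n) * n                 # integer wavenumbers
    k = np.sqrt(p[:, None] ** 2 + p[None, :] ** 2)
    k[0, 0] = 1.0                             # avoid division by zero at k = 0
    C_filt = C * k ** beta_half
    return np.real(np.fft.ifft2(C_filt))

chi = self_affine_field()  # random field with the desired self-affinity
```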
Preprint
Full-text available
Self-similarity indicates that large and small earthquakes share the same physics, where all variables scale with rupture length L. Here I show that rupture tip acceleration during the start of dynamic rupture (break-out phase) is also self-similar, scaling with L_c in space and L_c/C_lim in time (where L_c is the breakout patch length and C_lim the limiting rupture velocity in the subsonic regime). Rupture acceleration in the breakout phase is slower for larger initial breakout patches L_c. Because small faults cannot host large breakout patches, a large and slower initial breakout may be indicative of a potentially large final earthquake magnitude. Initial moment rate Ṁ_o also grows slower for larger L_c, therefore it may reflect fault dimensions and carry a probabilistic forecast of magnitude as suggested in some Early Warning studies. This result does not violate causality and is fully compatible with the shared fundamental, self-similar physics across all the magnitude spectrum.
... The application of fractal and multifractal theory to Earth sciences (Goltz, 1997; Turcotte, 1997; Karsten et al., 2005, among others) represents an intriguing avenue for analyzing complex geophysical and atmospheric phenomena, serving as a significant step in the forecasting process. Examples include studying rainfall patterns (Koscielny-Bunde et al., 2006; Lana et al., 2017, 2020, 2023), extreme temperature variations (Burgueño et al., 2014), wind speed characteristics (Sun et al., 2020), hydrological analyses (Movahed and Hermanis, 2008), seismic activity (Ghosh et al., 2012; Telesca and Toth, 2016; Monterrubio-Velasco et al., 2020), and emissions of volcanic energy (Monterrubio-Velasco et al., 2023). ...
Article
Full-text available
The evolution of multifractal structures in various physical processes, such as climatology, seismology, or volcanology, serves as a crucial tool for detecting changes in corresponding phenomena. In this study, we explore the evolution of the multifractal structure of volcanic emissions with varying energy levels (observed at Colima, Mexico, during the years 2013–2015) to identify clear indicators of imminent high-energy emissions nearing 8.0×10^8 J. These indicators manifest through the evolution of six multifractal parameters: the central Hölder exponent (α_0); the maximum and minimum Hölder exponents (α_max, α_min); the multifractal amplitude (W = α_max − α_min); the multifractal asymmetry (γ = [α_max − α_0]/[α_0 − α_min]); and the complexity index (CI), calculated as the sum of the normalized values of α_0, W, and γ. Additionally, the results obtained from adapting the Gutenberg–Richter seismic law to volcanic energy emissions, along with the corresponding skewness and standard deviation of the volcanic emission data, further support the findings obtained through multifractal analysis. These results, derived from multifractal structure analysis, adaptation of the Gutenberg–Richter law to volcanic emissions, and basic statistical parameters, hold significant relevance in anticipating potential volcanic episodes of high energy. Such anticipation can be further quantified using an appropriate forecasting algorithm.
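The complexity index defined in this abstract is simple arithmetic once the multifractal parameters are in hand. A minimal sketch follows; the parameter values are invented placeholders, and min-max normalization is an assumption on my part, since the abstract does not specify the normalization used.

```python
import numpy as np

# Hypothetical multifractal parameters for successive time windows:
alpha0 = np.array([0.61, 0.64, 0.72, 0.85])   # central Holder exponent
W      = np.array([0.30, 0.35, 0.52, 0.80])   # width alpha_max - alpha_min
gamma  = np.array([0.90, 1.10, 1.60, 2.40])   # multifractal asymmetry

def normalize(x):
    """Min-max normalization to [0, 1] (assumed; not specified in the paper)."""
    return (x - x.min()) / (x.max() - x.min())

# Complexity index as the sum of the normalized parameters, per the abstract.
CI = normalize(alpha0) + normalize(W) + normalize(gamma)
print(CI)  # a rising CI would flag increasingly complex pre-eruptive behaviour
```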
Article
Rivers exhibit self‐similarity, or spectral scaling, across a wide range of spatial scales, from clusters of individual sediment grains to periodic features such as ripples, dunes, and meanders, extending to entire river valleys and networks. Previous studies have identified relationships between reaches characterized by specific wavelet scales and distinct morphological units or valley controls. Drawing on available high‐resolution lidar‐based bathymetries of 35 channel reaches, this study investigates linkages between spectral analysis measures and established channel typologies and morphological attributes across diverse river settings. We use spectral analysis to demonstrate how sub‐reach‐scale topographic variability patterns vary by flow stage and channel type. Uniform channels had the mildest spectral slopes for bed elevation variability, while confined channels had the mildest spectral slopes for width variability. In contrast, braided channels had the steepest spectral slopes for both bed and width variability. Coherence analysis revealed that the harmonic components of bed and width are largely in‐phase (i.e., when the bed is high, the channel is wide) at bankfull and flood stage, but some out‐of‐phase relationships were found at baseflow within the low‐frequency range. Finally, the longitudinal bed elevation series exhibited steeper spectral slopes with increasing mean wetted width across channel types and flow stages. Our findings on spectral slope and coherence of bed and width undulations may help improve understanding and representation of the nested structure of a river's terrain and variability at different scales from sub‐reach to watershed.
Article
Full-text available
Determining accurate magnetization directions is essential for interpreting magnetic anomalies and inferring the subseafloor crustal magnetization of submarine volcanoes. Furthermore, magnetization directions can be used to determine the polarity of the Earth's magnetic field at the time the seamount was formed, which in turn can be correlated with the geomagnetic polarity time scale to provide independent means of dating submarine volcanic edifices. Here I show a new method to determine seamount magnetization directions from observed magnetic anomalies, based on their fundamental properties expressed by Helbig's infinite integrals, and I propose practical strategies to reduce effects associated with limited‐size surveys. The method provides more reliable results than conventional methods based on semi‐norm minimization, as demonstrated by the example of Ita Mai Tai Seamount on the Magellan Seamount Trail. The systematic application of this method to the Rano Rahi Seamount Field, in proximity of the East Pacific Rise (EPR) 17°–19°S shows a pattern of alternating crustal magnetization polarities, consistent with few available radiometric ages and with the geomagnetic polarity time scale for the last 3.5 Ma. The corresponding correlation provides an independent tool for dating seamounts in this region, yielding an average constructional volume rate in the range ∼0.5 × 10⁻³–1.3 × 10⁻³ km³/yr for each volcano, which implies a significant contribution of the total magma supply rate is produced off‐axis.
Article
Fractal geometry quantitatively analyzes the irregular distribution of geological features, highlighting the dynamic aspects of tectonics, seismic heterogeneity, and geological maturity. This study analyzed the active fault data along the Kuhbanan fault zone in southeastern Iran by applying the box-counting method and observing the changes in Coulomb stress, and tried to find the potential triggering parts. The entire region was divided into 16 subzones with the box-counting method, and then the fractal dimension (D) in each zone was calculated. The analysis of the fractal dimension for active faults and earthquake epicenters, along with the seismicity parameter (b) and their ratio in the Kuhbanan region, indicates an imbalance between seismic fractals and faults. This finding suggests that the area may have the potential for future earthquakes or hidden faults. In conjunction with the b-value and changes in Coulomb stress, D-value analysis reveals intense tectonic activity and stress accumulation, particularly within the Ravar, Zarand, and Kianshahr sections. It may be considered a potential location for future earthquakes. The changes in Coulomb stress resulting from the 2005 Dahuieh earthquake have also placed this region within the stress accumulation zone, potentially triggering the mentioned areas. This integrative approach, backed by historical earthquake data, highlights the impact of fault geometry and stress dynamics, offering an enhanced framework for earthquake forecasting and seismic risk mitigation applicable to other tectonically active areas within the Iranian plateau.
Chapter
In this chapter all the techniques presented in the previous chapters as theoretical concepts are illustrated with examples of concrete dynamical systems and circuits. For certain models, circuits developed by means of the classical electronic engineering approach and analog modelling are shown to demonstrate the advantages and disadvantages of both concepts. In addition to the dynamical system description, the behaviour exhibited by the considered model is explained to provide an overview of fundamental deterministic and stochastic phenomena constituting a basis for nonlinear dynamics and the theory of stochastic processes.
Article
Full-text available
Seismo-electromagnetic (SEM) signatures recorded in geomagnetic data prior to an earthquake have the potential to reveal pre-earthquake processes in focal zones. The present study analyses the vertical component of geomagnetic field data from March 2019 to April 2020 using fractal and multifractal approaches to identify the EM signatures in Campbell Bay (CBY), a seismically active region of Andaman and Nicobar. The significant enhancements in the monofractal dimension and the spectrum width components of multifractal analysis arise due to the superpositioned high- and low-frequency SEM field emitted by the pre-earthquake processes. It is observed that the higher-frequency components associated with microfracturing dominate the signatures of earthquakes occurring around the West Andaman Fault (WAF) and Andaman Trench (AT), while the lower frequencies, which result from slower electrokinetic mechanisms, have some correlation with the earthquakes around the Seulimeum strand (SS). Thus, the monofractal, spectrum width, and Hölder exponent parameters reveal a different nature of pre-earthquake processes that can be identified, on average, 10, 12, and 20 d prior to the moderate earthquakes, which holds promise for short-term earthquake prediction.
Article
Accurate analysis of the multifractal characteristics of geochemical element distribution is crucial for identifying geochemical anomalies and meaningful element associations. However, the most commonly used multifractal method, i.e., the method of moments, may generate different multifractal spectra for a single element distribution due to variations in the range of moment orders. This is because multifractals and their control mechanisms are not well defined. Fractal topography provides a basis for defining multifractals and clarifies the physical meaning of the singularity index. Therefore, a multifractal analysis method based on fractal topography is proposed to generate a unified multifractal spectrum and give new insight into the singularity analysis of element distribution. The similarities and distinctions between the two methods were evaluated using the de Wijs model. The distributions of the two multifractal spectra are shown to be fundamentally consistent. The novel method, nevertheless, utilizes fewer statistics and presents a simplified criterion for element enrichment or depletion. To demonstrate its application, the Cu geochemical distribution in the Zhongdian area, China, was used as a case study. Based on the comparison results of the two approaches, the proposed novel approach proves beneficial for accurately characterizing the heterogeneity of geochemical element distribution while maintaining a consistent range of the singularity index. The singularity index distribution map at a fine scale provides a comprehensively detailed zonation of geochemical anomalies and, at different scales, it can effectively reveal and interpret the variation of element distribution.
Article
Full-text available
In this paper, we review the theoretical studies of the electromagnetic and other non-seismic phenomena accompanying earthquakes. This field of geophysical research is at the interception of several sciences: electrodynamics, solid-state physics, fracture mechanics, seismology, acoustic-gravity waves, magnetohydrodynamics, ionospheric plasma, etc. In order to make physics of these phenomena as transparent as possible, we use a simplified way of deriving some theoretical results and restrict our analysis to order-of-magnitude estimates. The main emphasis is on those theoretical models which give not only a qualitative, but also a quantitative, description of the observed phenomena. After some introductory material, the review is begun with an analysis of the causes of local changes in the rock conductivity occasionally observed before earthquake occurrence. The mechanisms of electrical conductivity in dry and wet rocks, including the electrokinetic effect, are discussed here. In the next section, the theories explaining the generation of low-frequency electromagnetic perturbations resulting from the rock fracture are covered. Two possible mechanisms of the coseismic electromagnetic response to the propagation of seismic waves are studied theoretically. Hereafter, we deal with atmospheric phenomena, which can be related to seismic events. Here we discuss models describing the effect of pre-seismic changes in radon activity on atmospheric conductivity and examine hypotheses explaining abnormal changes in the atmospheric electric field and in infrared radiation from the Earth, which are occasionally observed on Earth and from space over seismically active regions. In the next section, we review several physical mechanisms of ionospheric perturbations associated with seismic activity. Among them are acoustic-gravity waves resulting from the propagation of seismic waves and tsunamis and ionospheric perturbations caused by vertical acoustic resonance in the atmosphere. In the remainder of this paper, we discuss whether variations in radon activity and vertical seismogenic currents in the atmosphere can affect the ionosphere.
Article
Full-text available
Here, we suggest a procedure through which one can identify when the accumulation of stresses before major earthquakes (EQs) (of magnitude M 8.2 or larger) occurs. Analyzing the seismicity in natural time, which is a new concept of time, we study the evolution of the fluctuations of the entropy change of seismicity under time reversal for various scales of different length i (number of events). Although the stress might be accumulating throughout the entire process of EQ preparation due to tectonic loading, here we find that the proposed complexity measure reveals different stress accumulation characteristics from those in the long-term background when the system approaches the critical stage. Specifically, we find that anomalous intersections between scales of different i are observed upon approaching a major EQ occurrence. The investigation is presented for the seismicity in Japan since 1984 including the M9 Tohoku EQ on 11 March 2011, which is the largest EQ ever recorded there, as well as for the seismicity before 2017 Chiapas M8.2 EQ, which is Mexico’s largest EQ in more than a century. Based on this new complexity measure, a preprint submitted on 5 December 2023 anticipated the 1 January 2024 M7.6 EQ in Japan.
Conference Paper
Subsurface fractures play an important role in reservoir characterization of any hydrocarbon reservoir. Traditionally, fracture data from image logs and drilling data are used for modeling the entire reservoir, which can result in a suboptimal history match. 3D seismic data are the densest data available. However, fractures being subseismic in nature, explicitly modeling them becomes a challenge because of the scale difference between lineaments extracted from the seismic volume and the geo-cellular model. Fractures observed in outcrops and reservoirs are often fractal in nature. This research therefore attempts to downscale seismic lineaments to geologic-scale fractures using the well-known Sierpinski carpet. A set of simulations is then run by considering three cases: seismic lineaments, a fractal-fracture model, and a matrix-only model. It is shown that fractures at a model scale yield the highest recovery and that the matrix-only case shows much lower recovery numbers compared to the ones that consider fractures in the reservoir.
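The Sierpinski carpet used for downscaling in this paper is a standard construction and easy to generate; a minimal sketch follows (the recursion depth is an arbitrary illustrative choice, and this is the generic carpet, not the paper's specific downscaling workflow).

```python
import numpy as np

def sierpinski_carpet(level):
    """Binary Sierpinski carpet of side 3**level (1 = solid, 0 = hole);
    its fractal dimension is log 8 / log 3 ~ 1.893."""
    carpet = np.ones((1, 1), dtype=int)
    for _ in range(level):
        carpet = np.block([
            [carpet, carpet, carpet],
            [carpet, np.zeros_like(carpet), carpet],
            [carpet, carpet, carpet],
        ])
    return carpet

c = sierpinski_carpet(4)            # 81 x 81 grid
print(c.sum() / c.size)             # occupied fraction = (8/9)**4
```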
Article
Ground-penetrating radar (GPR) is a vital tool in the domain of non-destructive testing. However, its capability to accurately discern subsurface targets faces challenges from substantial background clutter. Current methods aimed at clutter suppression often leave residual clutter or distort the hyperbolic tails of target-scattered signals, particularly in heterogeneous soil conditions. This study endeavors to tackle the complexities of clutter suppression in practical scenarios. To this end, we introduce a novel framework for GPR clutter suppression utilizing stationary graph signal (SGS) processing techniques. In our proposed approach, GPR B-scan images are treated as graph signals and transformed into the graph frequency domain via graph Fourier transform (GFT). This framework incorporates B-scan images containing both targets and clutter, alongside clutter-only B-scan images gathered within the same testing environment. B-scan images featuring both targets and clutter serve as reference data samples for constructing a graph shift operator (GSO), with clutter and targets’ scattering signals interpreted as SGS. Following the establishment of weakly stationary graph signals with respect to the GSO, a variant of the graph-based Wiener filter tailored for GPR applications is applied to effectuate clutter suppression. Through our proposed SGS processing-based filtering method, clutter can be effectively suppressed, thereby facilitating the restoration of target scattering signals. Extensive experiments conducted on both numerical simulation data and field test data underscore the efficacy of the proposed approach, which can be further applied to the general non-destructive testing realm.
Article
Full-text available
The Gutenberg–Richter law establishes a log-linear relationship between the number of earthquakes that have occurred within some spatiotemporal volume and their magnitude. This similarity property presumably reflects fractal structure of the fault system in which earthquake sources are formed. The Gutenberg–Richter law plays a key role in the problems of seismic hazard and risk assessment. Using the Gutenberg–Richter relationship, we can estimate the average recurrence period of strong earthquakes from the recurrence rate of weaker earthquakes. Since the strongest earthquakes occur infrequently, with intervals of a few hundred years or more, it is not possible to directly assess their recurrence. From indirect geologic and paleoseismic estimates it often seems that strong earthquakes on individual faults occur more frequently than expected in accordance with the Gutenberg–Richter law. Such estimates underlie the hypothesis of the so called characteristic earthquakes. This hypothesis is in many cases additionally supported by the form of the magnitude–frequency distributions for individual faults, constructed from the data of modern earthquake catalogs. At the same time, an important factor affecting the form of the magnitude–frequency distribution is the choice of the spatial domain in which the distribution is constructed. This paper investigates the influence of this factor and determines the conditions under which the Gutenberg–Richter law is applicable for estimating the recurrence of strong earthquakes.
Chapter
The study of relationships between seismic events using a large number of statistical models has revealed correlations, based on selected criteria, between events in the catalogs under consideration. Correlations between events give rise to nonlocality in time (hereditary, or memory, properties) and in space in these sequences. The representation of the seismic process as a stream of independent random dislocation changes, described by the standard Poisson process with an exponential waiting-time function, then becomes incorrect. A logical generalization of this approach is to describe the seismic deformation process with a fractional Poisson process that takes these properties into account. The fractional model treats the deformation process probabilistically as a transition from one regime (or state) to another. To describe the probability that the deformation process remains in a given regime, the Mittag-Leffler function is used as a fractional generalization of the exponential function. This function accounts for the hereditary properties (i.e., the history of the process) and the non-stationarity of the event stream, and its fractional parameters are determined by the parameters of the medium. Studying the activation regime in the foreshock phase made it possible to construct foreshock sequences (clusters) based on criteria related to the earthquake energy and the medium of the earthquake preparation area. Using the superposed-epoch method, empirical waiting-time distribution functions of the foreshocks as a function of time before the mainshock are obtained. Varying the fractional parameters and the scale factor of the Mittag-Leffler function made it possible to approximate the empirical functions with greater accuracy than the exponential function. The fractional parameters take values less than one, which indicates hereditary and non-stationary properties in the foreshock sequences. The parameter values approach one as the energy class of the mainshock increases. Such trends in the behavior of foreshock sequences for stronger mainshocks can be interpreted as a difference in hereditary properties for mainshocks of different energies. The weakening of hereditary properties in the foreshock sequences of high-energy mainshocks may also reflect the inclusion of events that are unrelated to the mainshock but fall into its large spatio-temporal window. It should be noted that, because of the small amount of data, seismology lacks the statistics to settle the choice of a deformation-process model rigorously. The fractional Poisson process model is nevertheless preferable because it is more universal in nature.
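A minimal sketch of the waiting-time survival function used in such fractional-Poisson descriptions, evaluated through the series definition of the Mittag-Leffler function; the truncation order and the parameter values are assumptions, and the plain series is reliable only for moderate arguments:

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, n_terms=100):
    """One-parameter Mittag-Leffler function:
    E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).

    Plain truncated series: adequate for moderate |z|; dedicated
    algorithms are needed for large negative arguments."""
    k = np.arange(n_terms)
    return float(np.sum(z ** k / gamma(alpha * k + 1)))

def survival(t, alpha=0.9, tau=1.0):
    """P(waiting time > t) for a fractional Poisson process:
    E_alpha(-(t/tau)^alpha); alpha = 1 recovers exp(-t/tau)."""
    return mittag_leffler(-((t / tau) ** alpha), alpha)

# Sanity check: alpha = 1 must reproduce the exponential law.
print(survival(0.5), survival(0.5, alpha=1.0), np.exp(-0.5))
```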
Article
Full-text available
This work uses the frequency-magnitude relation for earthquakes to analyze seismicity and return periods in the Makran subduction zone and its adjacent region. The data used for the seismicity and return-period analysis span 1934 to 2022, with moment magnitudes from 4.0 to 8.1 and depths from 0 to 115 km. The b-values for eastern and western Makran, obtained with the maximum-likelihood technique, are 0.70 ± 0.04 and 0.764 ± 0.04, respectively. The temporal decrease of the b-value in both the eastern and western vicinities of Makran suggests that stress levels in both regions are increasing, indicating that large earthquakes may occur in the future; the spatial variation of the b-value likewise suggests increasing stress in both areas. Notably, the b-value varies inversely with the stress level in the region: if the stress level increases, the b-value decreases, and vice versa. The return periods for earthquakes of magnitude 5 and 5.5 vary from 1 to 2.7 and 2 to 8 years, for magnitude 6 and 6.5 from 5 to 30 and 10 to 60 years, for magnitude 7 and 7.5 from 20 to 120 and 50 to 300 years, and for magnitude 8 and 8.5 from 100 to 600 and 200 to 1200 years, respectively. The findings demonstrate that both the eastern and western vicinities of the Makran fault zone carry high tectonic stress, and large earthquakes may occur there in the future.
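The b-value estimate and the return periods can be sketched with an Aki-type maximum-likelihood estimator; the synthetic catalog, the completeness magnitude, and the annual rate below are illustrative assumptions, not the Makran data:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c;
    dm is the catalog binning (dm/2 is the usual bin correction).
    Returns (b, approximate standard error)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(m.size)

def return_period(M, b, rate_mc, m_c):
    """Mean return period (yr) of events >= M, given the annual rate of
    events >= m_c and the Gutenberg-Richter b-value."""
    return 1.0 / (rate_mc * 10.0 ** (-b * (M - m_c)))

# Illustrative synthetic catalog with true b = 0.75:
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=1.0 / (0.75 * np.log(10)), size=2000)
b, se = b_value_mle(mags, m_c=4.0, dm=0.0)  # continuous magnitudes: no bin correction
print(f"b = {b:.2f} +/- {se:.2f}")
print(f"T(M6) ~ {return_period(6.0, b, rate_mc=5.0, m_c=4.0):.0f} yr")
```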
Article
In this article we examine the effects of eclogitization of slab rocks on the subduction regime under a continent. Eclogitization of rocks in high-pressure metamorphic complexes occurs only in areas of hydrous-fluid penetration. In the absence of hydrous fluid, the kinetic delay of eclogitization preserves low-density rocks under the P‒T conditions of eclogite metamorphism, delaying the densification of the slab and reducing the efficiency of the slab-pull mechanism that promotes steep subduction into the deep mantle. The results of numerical petrological–thermomechanical 2D modeling of subduction under a continent across a wide range of eclogitization parameters of oceanic crustal rocks (discrete eclogitization) are presented. The effects of a smaller kinetic delay of eclogitization in the water-bearing basalt layer, compared with the drier underlying gabbro layer, were tested. Based on the results of 112 numerical experiments with 7 variants of eclogitization ranges (400–650°C for basalt and 400–1000°C for gabbro) at different potential mantle temperatures (ΔT = 0–250°C above the modern value), steep, flat, and transitional subduction regimes were identified. The steep subduction regime occurs under modern conditions (ΔT = 0°C) for all ranges of eclogitization. It is characterized by an increase in the slab dip as the plate descends, with flattening and/or tucking of the slab above the boundary of the mantle transition zone. Subduction is accompanied by the formation of felsic and mafic volcanics and their plutonic analogues. At elevated mantle temperatures (ΔT ≥ 150°С) and discrete eclogitization over a wide range, the flat subduction regime is observed, with periodic detachment of its steeper frontal eclogitized part. The flat subduction regime is accompanied by significant serpentinization of the mantle wedge and sporadic, scarce magmatism (from mafic to felsic) occurring at a significant distance (≥500 km) from the trench. In the transitional regime, which is also attained in models with elevated mantle temperatures, a characteristic change from flat to steep subduction occurs, resulting in a stepped slab shape. As the kinetic shift of eclogitization increases, flat subduction develops. An increase in the thickness of the continental lithosphere from 80 to 150 km favors steep subduction, while the influence of the convergence rate (5–10 cm/year) is ambiguous. Discrete eclogitization of thickened oceanic crust and depletion of the lithospheric mantle in the oceanic plate are the main drivers of flat subduction. Under modern conditions their influence becomes insignificant because of the decrease in the thickness of the oceanic crust and the degree of depletion of the oceanic mantle lithosphere; as a result, the less frequent flat motion of slabs is determined by other factors.
Article
Full-text available
Brittle deformation in the upper crust is thought to occur primarily via faulting. The fault length‐frequency distribution determines how much deformation is accommodated by numerous small faults versus a few large ones. To evaluate the amount of deformation due to small faults, we analyze the fault length distribution using high‐quality fault maps spanning a wide range of spatial scales, from a laboratory sample to an outcrop to a tectonic domain. We find that the cumulative fault length distribution is well approximated by a power law with a negative exponent close to 2. This agrees with the earthquake magnitude‐frequency distribution (the Gutenberg‐Richter law with a b‐value of 1), at least for faults smaller than the thickness of the seismogenic zone. It follows that faulting is a self‐similar process, and a substantial fraction of tectonic strain can be accommodated by faults that do not cut through the entire seismogenic zone, consistent with inferences of “hidden strain” from natural and laboratory observations. Continued accumulation of tectonic strain may eventually result in a transition from distributed fault networks to localized mature faults.
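A continuous maximum-likelihood estimate of the power-law exponent of a length sample (a Clauset-style estimator; the synthetic lengths and the cutoff are assumptions) can be sketched as:

```python
import numpy as np

def powerlaw_exponent_mle(lengths, l_min):
    """MLE of alpha for a density p(l) ~ l^(-alpha), l >= l_min
    (continuous case): alpha = 1 + n / sum_i ln(l_i / l_min).
    A cumulative-distribution exponent of 2, as found for fault
    lengths, corresponds to a density exponent alpha = 3.
    Returns (alpha, approximate standard error)."""
    l = np.asarray(lengths, dtype=float)
    l = l[l >= l_min]
    alpha = 1.0 + l.size / np.log(l / l_min).sum()
    return alpha, (alpha - 1.0) / np.sqrt(l.size)

# Synthetic sample with true alpha = 3, via inverse-CDF sampling:
rng = np.random.default_rng(1)
u = rng.uniform(size=5000)
lengths = 1.0 * (1.0 - u) ** (-1.0 / 2.0)
print(powerlaw_exponent_mle(lengths, l_min=1.0))   # ~ (3.0, 0.03)
```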
Article
The spherical radial basis function (SRBF) approach, widely used in gravity modeling, is surveyed theoretically from the viewpoint of random field theory. Let the gravity potential be a random field represented as an integral functional of another random field, namely an isotropic Gaussian random field (IGRF) on a sphere inside the Bjerhammar sphere, with the SRBF as the integral kernel. When the integration is approximated by a discrete sum within a local region, one obtains the widely applicable SRBF model. Two findings emerge from this theoretical study. First, the IGRF implies a Gaussian prior on the spherical harmonic coefficients (SHCs) of the gravity potential; under this prior the SHCs are independent of each other and their variances depend only on degree. This is reminiscent of two well-known priors, the power-law Kaula rule and the asymptotically power-law Tscherning–Rapp model. Second, the IGRF-SRBF representation is non-unique. Benefiting from this redundant representation, one can employ a simple IGRF, e.g., the simplest white field, and design the SRBF accordingly to represent a potential with the desired prior statistical properties. This can simplify the corresponding SRBF modeling significantly; specifically, the regularization matrix in the parameter estimation of SRBF modeling can be chosen to be diagonal, or even the identity matrix.
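The two priors alluded to above are commonly quoted in the following form (standard rules of thumb, reproduced for orientation rather than from the paper):

```latex
% Kaula's rule of thumb for the RMS size of fully normalized degree-l SHCs
\sigma\!\left(\bar{C}_{lm},\bar{S}_{lm}\right) \;\approx\; \frac{10^{-5}}{l^{2}},
\qquad
% degree-only-dependent Gaussian prior with independent coefficients
\bar{C}_{lm},\,\bar{S}_{lm} \;\sim\; \mathcal{N}\!\left(0,\sigma_l^{2}\right),
\quad \sigma_l^{2} \text{ a function of } l \text{ only.}
```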
Article
The study of the fragmentation phenomenon and the divisibility of rocks is of great importance for understanding the essence of the seismic process, solving problems of the mining industry, and identifying the parameters and mechanisms of fault-fracture tectonics. A fundamentally significant contribution to this line of research was made by Academician Mikhail Alexandrovich Sadovsky. The problem of divisibility is usually considered in the context of fault-fracture tectonics, but divisibility is determined not only by linear structures (faults, lineaments) but also by the existence of three-dimensional geological provinces with structural and material content characteristic only of those provinces. This type of divisibility has received almost no study. The article describes areas of areal rifting: the Basin and Range Province (North America), the Selengino-Vitim zone (Western Transbaikalia), and the Sunda shelf (Malay Archipelago), which are elements of the volumetric divisibility of the earth's crust while also possessing an internal divisibility of higher rank. The study is based on comparative structural-tectonic analysis using both the authors' own and published material. The objects considered form tectonically isolated lenticular-planar geological bodies with individual morphostructure, geological history, and geodynamic conditions of formation. They are bounded by fault zones and are represented by a structural paragenesis of brittle-plastic diffuse (distributed) shear, characterized by the fragmentation of rock masses into a system of linearly ordered uplifts and troughs. The existence of such morphostructural ensembles (parageneses) reflects 3D tectonic divisibility at the crustal scale.
Article
Full-text available
Explosive volcanic eruptions inject hot mixtures of solid particles (tephra) and gases into the atmosphere. Entraining ambient air, these mixtures can form plumes rising tens of kilometers until they spread laterally, forming umbrella clouds. While the largest clasts tend to settle in proximity to the volcano, the smallest fragments, commonly referred to as ash (≤2 mm in diameter), can be transported over long distances, forming volcanic clouds. Tephra plumes and clouds pose significant hazards to human society, affecting infrastructure and human health through deposition on the ground or airborne suspension at low altitudes. Additionally, volcanic clouds are a threat to aviation, both during high-risk phases such as take-off and landing and at standard cruising altitudes. The ability to monitor and forecast tephra plumes and clouds is fundamental to mitigating the hazard associated with explosive eruptions. To that end, various monitoring techniques, ranging from ground-based instruments to sensors on board satellites, and forecasting strategies, based on running numerical models to track the position of volcanic clouds, are efficiently employed. However, some limitations still exist, mainly due to the high unpredictability and variability of explosive eruptions, as well as the multiphase and complex nature of volcanic plumes. In the coming decades, advances in monitoring and computational capabilities are expected to address these limitations and significantly improve the mitigation of the risk associated with tephra plumes and clouds.
Article
Full-text available
We present an analysis of magnitude clustering of microfractures inferred from acoustic emissions (AEs) during stick‐slip (SS) dynamics of faulted Westerly granite samples in frictional sliding experiments, with and without fluids, under triaxial loading with constant displacement rate. We investigate magnitude clustering in time across periods during, preceding and after macroscopic slip events on laboratory faults. Our findings reveal that magnitude clustering exists such that subsequent AEs tend to have more similar magnitudes than expected. Yet, this clustering only exists during macroscopic slip events and is strongest during major slip events in fluid‐saturated and dry samples. We demonstrate that robust magnitude clustering arises from variations in frequency‐magnitude distributions of AE events during macroscopic slip events. These temporal variations indicate a prevalence of larger AE events right after (0.3–3 s) the SS onset. Hence, magnitude clustering is a consequence of non‐stationarities.
Chapter
Time series analysis is used to investigate the temporal behavior of a variable x(t). Examples include investigations into long-term records of mountain uplift, sea level fluctuations, orbitally induced insolation variations (and their influence on the glacial–interglacial cycles), millennium-scale variations in the atmosphere–ocean system, the effect of the El Niño–Southern Oscillation on tropical rainfall and sedimentation, and tidal influences on noble gas emissions from boreholes. The temporal pattern of a sequence of events can be random, clustered, cyclic, or chaotic.
Article
Full-text available
Clustering and machine learning-based predictions are increasingly used for environmental data analysis and management. In fluvial geomorphology, examples include predicting channel types throughout a river network and segmenting river networks into a series of channel types, or groups of channel forms. However, when relevant information is unevenly distributed throughout a river network, the discrepancy between data-rich and data-poor locations creates an information gap. Combining clustering and predictions addresses this information gap, but the challenges and limitations remain poorly documented. This is especially true considering that predictions are often achieved with two approaches that differ meaningfully in terms of information processing: decision trees (e.g., RF: random forest) and deep learning (e.g., DNN: deep neural networks). This presents challenges for downstream management decisions and when comparing clusters and predictions within or across study areas. To address this, we investigate the performance of RF and DNN with respect to the information gap between clustering data and prediction data. We use nine regional examples of clustering and predicting river channel types, stemming from a single clustering methodology applied in California, USA. Our results show that prediction performance decreases when the information gap between field-measured data and geospatial predictors increases. Furthermore, RF outperforms DNN, and their difference in performance decreases when the information gap between field-measured and geospatial data decreases. This suggests that mismatched scales between field-derived channel types and geospatial predictors hinder sequential information processing in DNN. Finally, our results highlight a sampling trade-off between uniformly capturing geomorphic variability and ensuring robust generalization.
Article
Full-text available
A characteristic feature of many deposits of critical mineral raw materials is the spatially chaotic variation of the useful-component content (lognormal distribution, absence of clear geological boundaries, etc.), which significantly limits the use of traditional methods of geological-economic assessment, especially geostatistics. A possible solution is the application of various fractal methods, which are based on the principle of repeated divisibility of a geological object down to the size of its elementary particle, called a fractal. This article considers the number-size fractal method for separating potentially ore-bearing zones and mineralized lithium geochemical anomalies from background values, using the example of the Polokhivske deposit of metasomatically altered pegmatites. The potential of this method for guiding exploration work and for performing geological-economic assessment is demonstrated.
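A minimal sketch of the number-size separation described above: the cumulative number of samples with grade at least c is plotted against c on log-log axes, and a breakpoint between two power-law segments separates background from mineralization. The two-segment fit and the breakpoint search are assumptions of this illustration:

```python
import numpy as np

def number_size_threshold(grades):
    """Return the grade at the breakpoint of a two-segment log-log fit
    of N(>= c) versus c (number-size fractal method: N(>= c) ~ c^(-D),
    with different D for background and mineralized populations).
    Grades must be positive."""
    c = np.sort(np.asarray(grades, dtype=float))
    N = np.arange(c.size, 0, -1)              # number of samples >= c
    x, y = np.log10(c), np.log10(N)
    best_resid, best_c = np.inf, None
    for k in range(5, c.size - 5):            # candidate breakpoints
        resid = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            resid += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if resid < best_resid:
            best_resid, best_c = resid, c[k]
    return best_c
```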
Article
In this paper, we present the results of a careful investigation into the correlational pattern of various pollutants over Kolkata during the pre-monsoon months, corresponding to the pre-lockdown (2019) and lockdown (2020) periods. When ozone, NOx, SO2, and surface temperature were subjected to detrended fluctuation analysis, we found that SO2 exhibited long-term positive autocorrelation in the 2019 pre-monsoon. During the lockdown, however, the Hurst exponent (H) dropped below 0.5, which we interpret as the lockdown causing neighbouring pairs to transition from long-term high to low autocorrelation. Additionally, although the autocorrelation function for NOx resembles a roughly sinusoidal pattern, the lockdown has caused a change in H. Using H, the fractal dimension, and climate predictability, we have analysed the predictability behaviour of the pollutants and the temperature under consideration.
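A compact sketch of the detrended fluctuation analysis used to obtain the Hurst exponent; the window sizes and the first-order (linear) detrending are assumptions:

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Hurst exponent via DFA-1: the slope of log F(n) versus log n,
    where F(n) is the RMS fluctuation of the integrated series after
    removing a linear trend in each window of length n."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 12).astype(int))
    F = []
    for n in scales:
        m = len(y) // n
        segments = y[: m * n].reshape(m, n)
        t = np.arange(n)
        f2 = 0.0
        for s in segments:                            # detrend each window
            coef = np.polyfit(t, s, 1)
            f2 += np.mean((s - np.polyval(coef, t)) ** 2)
        F.append(np.sqrt(f2 / m))
    H, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return H

# White noise should give H ~ 0.5; H > 0.5 indicates persistence.
print(dfa_hurst(np.random.default_rng(2).normal(size=4000)))
```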
Article
Full-text available
We present a detailed analysis of the dynamical behavior of an inhomogeneous Burridge-Knopoff model, a simplified mechanical model of an earthquake. Regardless of the size of seismic faults, their surfaces rarely present a continuous appearance; instead, they have complex structures. The model we suggest therefore keeps the full Newtonian dynamics, with the inertial effects of the original model, while incorporating the inhomogeneities of seismic fault surfaces through a stick-slip friction force that depends on the local structure of the contact surfaces, as shown in recent experiments. The numerical results of the proposed model show that the cluster-size and moment distributions of earthquake events agree with the Gutenberg-Richter law without introducing any relaxation mechanism. The exponent of the power-law size distribution we obtain falls within a realistic range of values without fine-tuning any parameter. We further show that the size distribution of both localized and delocalized events obeys a power law, in contrast to the homogeneous case; thus no crossover behavior between small and large events occurs.
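For orientation, a minimal homogeneous Burridge-Knopoff chain with a velocity-weakening friction law is sketched below; the parameter values, the friction form, and the event bookkeeping are assumptions, and the surface inhomogeneity that is the point of the paper is deliberately omitted:

```python
import numpy as np

def bk_chain(nblocks=64, kc=1.0, kp=0.1, v_drive=1e-2, f_s=1.0,
             f0=1.0, vc=0.5, dt=1e-2, nsteps=200_000, seed=3):
    """Minimal 1-D Burridge-Knopoff chain (sketch).

    Blocks coupled to neighbours by springs kc are pulled through plate
    springs kp by a slow driver. A block sticks until its net spring
    force exceeds f_s, then slides against the velocity-weakening
    dynamic friction f0 / (1 + v/vc). Returns the total slip of each
    event (a contiguous run of steps during which any block moves)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.5, 0.5, nblocks)   # small heterogeneity to desynchronize
    v = np.zeros(nblocks)
    drive = f_s / kp                      # start loaded near the threshold
    events, slip = [], 0.0
    for _ in range(nsteps):
        drive += v_drive * dt
        force = kp * (drive - x)
        force[1:] += kc * (x[:-1] - x[1:])
        force[:-1] += kc * (x[1:] - x[:-1])
        active = (v > 0) | (force > f_s)               # sliding or just unstuck
        a = np.where(active, force - f0 / (1.0 + v / vc), 0.0)
        v = np.maximum(v + a * dt, 0.0)                # forward slip only
        dx = v * dt
        x += dx
        moved = dx.sum()
        if moved > 0.0:
            slip += moved
        elif slip > 0.0:
            events.append(slip)                        # an event just ended
            slip = 0.0
    return np.array(events)

sizes = bk_chain()
print(len(sizes), "events; largest:", sizes.max() if sizes.size else 0.0)
```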
Article
Full-text available
Fragmentation is a common phenomenon in complex rock‐avalanches. The intensity and process of fragmentation determine the exceptional spreading of such mass movements. However, studies focusing on the simulation of fragmentation are still limited, and no operational dynamic simulation model of fragmentation has yet been proposed. By enhancing the mechanically controlled landslide deformation model (Pudasaini & Mergili, 2024, https://doi.org/10.1029/2023jf007466), we propose a novel, unified dynamic simulation method for rock‐avalanche fragmentation. The model includes three important aspects: mechanically controlled rock mass deformation, momentum loss as the rock mass violently impacts the ground, and the energy transfer during fragmentation that generates dispersive lateral pressure. We reveal that dynamic fragmentation, resulting from the impact on the ground overcoming the tensile strength, leads to enhanced spreading, thinning, run‐out and hypermobility of rock‐avalanches. The elastic strain energy released by fragmentation thereby becomes an important process. Energy conversion between the front and rear parts caused by fragmentation enhances the forward movement of the front and hinders the motion of the rear of the rock‐avalanche. The new model describes this by amplifying the lateral pressure gradient in opposite directions: enhanced for the frontal particles and reduced for the rear particles after fragmentation. The main principle is the switching between compressional stress and tensile stress, and therefore from controlled deformation to substantial spreading of the frontal part in the flow direction, with backward stretching of the rear part of the rock mass. Laboratory experiments and field events support our simulation results.
Article
Full-text available
Asteroids smaller than 10 km are thought to be rubble piles formed from the reaccumulation of fragments produced in the catastrophic disruption of parent bodies. Ground-based observations reveal that some of these asteroids are today binary systems, in which a smaller secondary orbits a larger primary asteroid. However, how these asteroids became binary systems remains unclear. Here, we report the analysis of boulders on the surface of the stony asteroid (65803) Didymos and its moonlet, Dimorphos, from data collected by the NASA DART mission. The size-frequency distribution of boulders larger than 5 m on Dimorphos and larger than 22.8 m on Didymos confirms that both asteroids are piles of fragments produced in the catastrophic disruption of their progenitors. The sizes of Dimorphos boulders smaller than 5 m are best fit by a Weibull distribution, which we attribute to a multi-phase fragmentation process occurring either during coalescence or during surface evolution. The density per km² of Dimorphos boulders ≥1 m is 2.3 times that obtained for (101955) Bennu and 3.0 times that for (162173) Ryugu. These factors increase when Dimorphos boulders ≥5 m are compared with Bennu (3.5x), Ryugu (3.9x) and (25143) Itokawa (5.1x). This is of interest in the context of asteroid studies because it means that, contrary to the single bodies visited so far, binary systems may be affected by subsequent fragmentation processes that largely increase their block density per km². Direct comparison between the surface distribution and the shapes of the boulders on Didymos and Dimorphos suggests that the latter inherited its material from the former. This finding supports the hypothesis that some asteroid binary systems form through spin-up and mass shedding of a fraction of the primary asteroid.
Article
Full-text available
In a little-known series of papers beginning in 1966, Tokunaga introduced an infinite class of tree graphs based on the Strahler ordering scheme. As recognized by Tokunaga (1984), these trees are characterized by a self-similarity property, so we refer to them as self-similar trees, or SSTs. SSTs are defined in terms of a generator matrix which acts as a "blueprint" for constructing different trees. Many familiar tree constructions are absorbed as special cases. In Tokunaga's work, however, an additional assumption is imposed which restricts from SSTs to a much smaller class; we refer to this subclass as Tokunaga's trees. This paper presents several new and unifying results for SSTs. In particular, the conditions under which SSTs have well-defined Horton-Strahler stream ratios are given, as well as a general method for computing these ratios. It is also shown that the diameters of SSTs grow like m^β, where m is the number of leaves. In contrast to many other tree constructions, β need not equal 1/2; thus SSTs offer an explanation for Hack's law. Finally, it is demonstrated that large discrepancies exist between the predictions of Shreve's well-known model and detailed measurements for large river networks, whereas other SSTs fit the data quite well. Other potential applications of the SST framework include diffusion-limited aggregation (DLA), lightning, bronchial passages, neural networks, and botanical trees.
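A short sketch of the Tokunaga bookkeeping: with side-branching numbers T_k = a c^(k-1), the number of streams of each Horton-Strahler order satisfies a simple recursion from which the Horton bifurcation ratio can be read off; the generator values a and c below are assumptions (a = 1, c = 2 are Tokunaga's classic choices):

```python
import numpy as np

def tokunaga_stream_numbers(order, a=1.0, c=2.0):
    """N_i, the number of streams of order i, for a Tokunaga tree of
    the given order: each order-j stream is formed by 2 streams of
    order j-1 and receives T_{j-i} = a * c**(j-i-1) side tributaries
    of each lower order i."""
    N = np.zeros(order + 1)
    N[order] = 1.0
    for i in range(order - 1, 0, -1):
        N[i] = 2.0 * N[i + 1]
        for j in range(i + 1, order + 1):
            N[i] += a * c ** (j - i - 1) * N[j]
    return N[1:]

N = tokunaga_stream_numbers(8)
print(N[:-1] / N[1:])   # Horton bifurcation ratios -> 4 for a = 1, c = 2
```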
Article
Full-text available
The linear and circular polarizations of infinite dielectric cylinders of the graphite-silicate mixture with a truncated power-law size distribution proposed by Mathis, Rumpl and Nordsieck (1977) as a model of interstellar particles are calculated. Particles above a specified minimum size are assumed to have perfect Davis-Greenstein alignment, and an index of refraction of 1.6 is used. Results are found to be in good agreement with observations of interstellar extinction between 0.1 and 1 micron, the wavelength dependence of linear and circular polarization, and the albedo and phase function of diffuse galactic light between 0.18 and 0.6 microns, for a maximum particle size of 0.28 micron, a minimum particle size of 0.005 micron and a minimum aligned particle size of 0.08 micron. Observations of mass loss in carbon- and oxygen-rich stars are also noted in support of the proposed graphite-silicate composition of the interstellar grain mixture.
Article
Full-text available
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great 1993 flood of the Mississippi River has a recurrence interval on the order of 100 years using power-law statistics applied to the partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to the annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River, comparing power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics thus predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher, and more conservative dam designs and land-use restrictions may be required.
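A sketch of the power-law recurrence estimate from a partial-duration series; the synthetic discharges, the record length, and the plotting positions are assumptions, not the Keokuk or Lees Ferry data:

```python
import numpy as np

def powerlaw_recurrence(peaks, years, Q):
    """Recurrence interval (yr) of discharge >= Q: fit log N(>= q)
    versus log q to a partial-duration series of flood peaks observed
    over `years` years, then extrapolate the power law to Q."""
    q = np.sort(np.asarray(peaks, dtype=float))[::-1]
    rate = np.arange(1, q.size + 1) / years          # exceedances per year
    slope, intercept = np.polyfit(np.log10(q), np.log10(rate), 1)
    return 1.0 / 10.0 ** (intercept + slope * np.log10(Q))

# Illustrative Pareto-tailed synthetic peaks over a 50-yr record:
rng = np.random.default_rng(4)
peaks = 1e3 * rng.pareto(2.0, size=150) + 1e3
print(f"T(Q = 2e4) ~ {powerlaw_recurrence(peaks, years=50, Q=2e4):.0f} yr")
```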
Article
The solute transport equation describes dispersion as a diffusion-like process characterized by a constant dispersivity, implying that small pulses of contaminant grow as the square root of the distance traveled. Measured dispersivities, however, tend to be proportional to the scale of the measurement, suggesting that the pulses grow in linear proportion to distance. An exponent r, defined by the relationship d ~ L^r, where d is the standard deviation of the position of a contaminant particle and L is the distance traversed, may be used to characterize dispersion; it takes the value 1/2 in the first case and 1 in the second. The movement of a contaminant particle is analyzed here as a random walk through a fracture network composed of fissures of varying transmissivity. If large-transmissivity fractures are sufficiently few, one calculates r = 1/2. If large-transmissivity fractures are numerous, one obtains 2r = q + 1, where q is an exponent describing the relationship between fracture spacing and transmissivity. Inserting the reasonable value q = 1 gives r = 1. This theory thus provides a physical basis for the observed scale-dependence of dispersivity.
Article
The commodities considered are mercury, copper and its byproducts gold and silver, and petroleum; the production and discovery data are for the US. The results indicate that the cumulative return per unit of effort, measured here as the grade of metal ores and the discovery rate of recoverable petroleum, is proportional to a negative power of the total effort expended, measured here as total ore mined and total exploratory wells or footage drilled. This power relationship can be extended to some limiting point (a lower ore grade or a maximum number of exploratory wells or footage), and the apparent quantity of available remaining resource at that limit can be calculated.
Article
It was in the context of floods that Hurst introduced the concept of the rescaled range. This was subsequently extended by Mandelbrot and his colleagues to the concepts of fractional Gaussian noises and fractional Brownian walks. These studies introduced the controversial possibility that the extremes of floods and droughts could be fractal. An extensive study of flood gauge records at 1,200 stations in the United States indicates a good correlation with fractal statistics. It is convenient to introduce the parameter F, the ratio of the 10-yr flood to the 1-yr flood; for fractal statistics F is also the ratio of the 100-yr flood to the 10-yr flood and the ratio of the 1,000-yr flood to the 100-yr flood. The parameter F is found to have strong regional variations associated with climate.
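A constant flood-frequency factor F per decade of recurrence interval is equivalent to a power-law flood-frequency curve (standard algebra, shown for clarity):

```latex
F \;=\; \frac{Q_{10}}{Q_{1}} \;=\; \frac{Q_{100}}{Q_{10}} \;=\; \frac{Q_{1000}}{Q_{100}}
\quad\Longleftrightarrow\quad
Q(T) \;=\; Q_{1}\,T^{\,\log_{10} F}.
```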
Article
Taylor dispersion is studied on a tree and on a Sierpinski gasket. On a tree, the exact expression of the probability is obtained, from which the m-adic global moments are derived; various temporal behaviours are then exhibited, depending upon a geometrical parameter. On a Sierpinski gasket, numerical calculations are performed: the flow "field" is first discussed, and Taylor dispersion is analysed with the help of the first two moments. The influence of the finite character of the network is clearly pointed out; the main conclusion is that Taylor dispersion is almost completely independent of the flow field, even for a low number of generations.
Article
The equivalent thermal resistance and permeability of a Leibniz packing with small gaps between the disks are calculated in the lubrication approximation. Power laws are obtained for both processes. For conduction, the packing of the interstices yields an isotropic equivalent network, whatever the original configuration; this is related to the scale invariance of the resistances of the gaps and is used to derive the exponent of the power law with excellent precision. The equivalent permeability cannot be reduced to such a simple result, since a memory of the original configuration always remains; a good estimate of the exponent is derived by a simple argument.
Article
Quantitative geomorphic methods developed within the past few years provide means of measuring size and form properties of drainage basins. Two general classes of descriptive numbers are (1) linear scale measurements, whereby geometrically analogous units of topography can be compared as to size; and (2) dimensionless numbers, usually angles or ratios of length measures, whereby the shapes of analogous units can be compared irrespective of scale. Linear scale measurements include length of stream channels of given order, drainage density, constant of channel maintenance, basin perimeter, and relief. Surface and cross-sectional areas of basins are length products. If two drainage basins are geometrically similar, all corresponding length dimensions will be in a fixed ratio. Dimensionless properties include stream order numbers, stream length and bifurcation ratios, junction angles, maximum valley-side slopes, mean slopes of watershed surfaces, channel gradients, relief ratios, and hypsometric curve properties and integrals. If geometrical similarity exists in two drainage basins, all corresponding dimensionless numbers will be identical, even though a vast size difference may exist. Dimensionless properties can be correlated with hydrologic and sediment-yield data stated as mass or volume rates of flow per unit area, independent of total area of watershed.
Article
A method is proposed to utilize the geologic record of fault displacement along with fault length to estimate the maximum magnitude earthquake that can be expected on an active fault. The method uses seismic moment as the link between geologic data and earthquake statistics. Several examples illustrate that faults of comparable length but different total slip in Holocene time have significantly different earthquake potentials.
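The moment-based link can be sketched as follows: the geologically estimated rupture area and displacement yield a seismic moment, which converts to a moment magnitude. The rigidity and the Hanks-Kanamori constants below are standard values assumed for illustration, not taken from this paper:

```python
import numpy as np

def max_magnitude(fault_length_km, seismo_width_km, slip_m, mu=3.0e10):
    """Maximum-magnitude estimate from fault geometry and displacement:
    M0 = mu * A * D (in N*m) with A = length x seismogenic width,
    then Mw = (2/3) * (log10 M0 - 9.1) (Hanks-Kanamori)."""
    area = (fault_length_km * 1e3) * (seismo_width_km * 1e3)   # m^2
    m0 = mu * area * slip_m                                    # seismic moment
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

# Example: a 100-km fault, 15-km seismogenic width, 3 m of slip.
print(f"Mw ~ {max_magnitude(100.0, 15.0, 3.0):.1f}")   # ~ 7.4
```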
Article
The distribution of asteroids with diameters greater than 130 km is analyzed using a maximum-likelihood method. The mass distribution index of these asteroids for various diameter ranges, taxonomic classes, and distance zones is estimated together with the large-sample errors and confidence intervals. This method suffers none of the major defects of the least-squares linear regression used by previous authors. The C- and S-type asteroids were found to have very similar mass distribution indices. Distance zones I and IV also have indices similar to those of zones II and III, though the differences between zones are not statistically significant. All asteroids with diameters greater than 130 km can be adequately fitted by a single mass distribution function of index 2.018 ± 0.091. It is suggested that collisional fragmentation is the most likely process to have produced this distribution.
Article
The construction of a fractal capillary network is briefly recalled. The nonlinear transfer function of the basic graph is calculated. Then, a general method is given to obtain the transfer function of a fractal; it depends upon its algebraic description, the nonlinear character of the fluid and the basic graph. The threshold for a Bingham fluid to flow is also determined. Power-law fluids and Bingham fluids are then studied on a Sierpinski gasket.
Article
General fractal capillary networks are constructed and described via a systematic use of algebraic graph theory. The Stokes flow problem is then addressed in this contribution; the matrix relating the flow rates to the pressures is obtained through a general iteration formula, which is deduced from the algebraic description of the geometrical structure. This formula is generally nonlinear; it is only in the limit of a large number of iterations that it can be linearized and a fractal exponent obtained. Two examples are provided to illustrate the formal developments. Various aspects of the results are then discussed, and the permeability of a spatially periodic network whose unit cell is a fractal is calculated.
Article
The spectra of the earth's gravitational potential and topography, as represented by spherical harmonic expansions to degree 180, have been computed. Modeling the decay in the form A l^(-β), values of A and β were computed for several degree (l) ranges. For the degree range 5-180, β was 2.54 for the potential and 2.16 for equivalent rock topography. The potential decay was somewhat slower than that implied by Kaula's rule. However, at high degree ranges the β values were larger, agreeing better with recent determinations from terrestrial gravity data and with geoid undulations implied by satellite altimetric data. These values imply that the potential decays faster at higher l.
Article
Global spectra are available for topography and geoid on the earth, Venus, Mars, and the moon. If the spectral energy density has a power law dependence on wave number, a fractal is defined. The topography spectrum for the earth is a well-defined fractal with D = 1.5; this corresponds to Brown noise with the amplitude proportional to the wavelength. Although there is more scatter for the other planetary bodies, the data for Mars and the moon correlate well with the data for the earth. Venus topography also exhibits a Brown noise behavior but with a smaller amplitude. The power law dependence of the earth's geoid is known as Kaula's law. It is shown that uncompensated Brown topography gives a geoid with a power law dependence that is in quite good agreement with Kaula's law. However, the required amplitude is only 8 percent of the observed topography. A similar result is found for the other bodies, with the ratio of the amplitude of topography required to explain the geoid to the observed topography increasing to 72 percent for the moon.
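The link between the spectral decay and the quoted fractal dimension is, for a self-affine topographic profile (a standard relation in the fractal-geophysics literature, stated here for clarity):

```latex
S(k) \;\propto\; k^{-\beta},
\qquad
D \;=\; \frac{5-\beta}{2},
\qquad
\beta = 2 \;(\text{Brown noise}) \;\Longrightarrow\; D = 1.5 .
```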
Article
Geographical curves are so involved in their detail that their lengths are often infinite or, rather, undefinable. However, many are statistically "selfsimilar," meaning that each portion can be considered a reduced-scale image of the whole. In that case, the degree of complication can be described by a quantity D that has many properties of a "dimension," though it is fractional; that is, it exceeds the value unity associated with the ordinary, rectifiable, curves.
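Richardson's empirical law, which this paper reinterprets through the fractional dimension D, reads (with G the length of the measuring stick and M a constant):

```latex
L(G) \;=\; M\,G^{\,1-D},
```

so the measured length grows without bound as G → 0 whenever D > 1, consistent with the scale invariance of statistically self-similar curves.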