Article

The Fractal Geometry of Nature.

Taylor & Francis
The American Mathematical Monthly
... Fractals represent a new field of mathematics and art. We recall that fractal geometry is one of the great advances in mathematics [13]. Researchers have realized that fractal geometry is an excellent tool for discovering some of the secrets of a large variety of systems and for solving important problems in various branches of science and engineering [13,14,15,16,17]. ...
... Thus, they are created by repetition of a simple process z_{n+1} = z_n^2 + c over and over. Fractals are images of dynamic systems [13]. ...
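The iteration referred to here is the quadratic map z_{n+1} = z_n^2 + c that generates the Mandelbrot and Julia sets. A minimal escape-time sketch (the iteration cap of 100 and the bailout radius 2 are the conventional choices, not taken from the source):

```python
def mandelbrot_escape(c, max_iter=100):
    """Iterate z_{n+1} = z_n**2 + c from z_0 = 0 and return how many
    iterations pass before |z| exceeds the bailout radius 2; points that
    never escape are conventionally assigned to the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

inside = mandelbrot_escape(0j)       # c = 0 never escapes
outside = mandelbrot_escape(1 + 0j)  # c = 1 escapes after 3 iterations
```

Colouring each point of the complex plane by its escape count yields the familiar fractal images.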
... In contrast, our Formula-driven Supervised Learning and the generated formula-driven image dataset have significant potential to automatically generate an image pattern and a label. For example, we consider using fractals, a sophisticated natural formula (Mandelbrot 1983). Generated fractals can differ drastically following a slight change in the parameters, and can often be distinguished in the real world. ...
... One of the best-known formula-driven image projections is fractals. Fractal theory has been discussed for many years (e.g., Mandelbrot 1983; Landini et al. 1995; Smith et al. 1996). Fractal theory has been applied to rendering a graphical pattern with a simple equation (Barnsley 1988; Monro and Dudbridge 1995; Chen and Bi 1997) and to constructing visual recognition models (Pentland 1984; Varma and Garg 2007; Xu et al. 2009; Larsson et al. 2017). ...
... Since the success of these studies relies on the fractal geometry of naturally occurring phenomena (Mandelbrot 1983; Falconer 2004), our assumption that fractals can assist learning image representations for recognizing natural scenes and objects is supported. Other methods, namely those involving Bezier curves (Farin 1993) or Perlin noise (Perlin 2002), have also been discussed in terms of computational rendering. ...
Article
Full-text available
Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding? The paper proposes a novel concept, Formula-driven Supervised Learning (FDSL). We automatically generate image patterns and their category labels by assigning fractals, which are based on a natural law. Theoretically, the use of automatically generated images instead of natural images in the pre-training phase allows us to generate an infinitely large dataset of labeled images. The proposed framework is similar yet different from Self-Supervised Learning because the FDSL framework enables the creation of image patterns based on any mathematical formulas in addition to self-generated labels. Further, unlike pre-training with a synthetic image dataset, a dataset under the framework of FDSL is not required to define object categories, surface texture, lighting conditions, and camera viewpoint. In the experimental section, we find a better dataset configuration through an exploratory study, e.g., increase of #category/#instance, patch rendering, image coloring, and training epoch. Although models pre-trained with the proposed Fractal DataBase (FractalDB), a database without natural images, do not necessarily outperform models pre-trained with human annotated datasets in all settings, we are able to partially surpass the accuracy of ImageNet/Places pre-trained models. The FractalDB pre-trained CNN also outperforms other pre-trained models on auto-generated datasets based on FDSL such as Bezier curves and Perlin noise. This is reasonable since natural objects and scenes existing around us are constructed according to fractal geometry. Image representation with the proposed FractalDB captures a unique feature in the visualization of convolutional layers and attentions.
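As a concrete illustration of formula-driven image generation: the published FractalDB work renders its fractal categories from iterated function systems (IFS), whose sampled parameters define each category. The sketch below runs the standard chaos game; the Sierpinski-triangle maps, equal selection weights, and point count are illustrative choices, not the paper's sampled parameters:

```python
import random

def chaos_game(maps, n_points=10000, seed=0):
    """Generate fractal points by repeatedly applying a randomly chosen
    affine map (x, y) -> (a*x + b*y + e, c*x + d*y + f)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n_points):
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

# Three contractions of the Sierpinski triangle (a standard example IFS).
sierpinski = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]
points = chaos_game(sierpinski)
```

Rasterizing `points` onto a pixel grid produces one image; perturbing the six coefficients of each map produces new category instances.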
... In order to study the properties of systems with fractal geometry, Mandelbrot defined some random sets that are generated recursively from a given initial set as a percolation process [2,4]. These sets are formed by successive application of a defined set of rules that either divide the initial set and discard some subsets in each partition, or enlarge the initial set by substituting the existing subsets with multiple replicas of themselves. ...
... The generation of the sequences studied in this work is a one-dimensional example of the classical, well-known Mandelbrot percolation process [4]. Contrary to the classical formulation, where an initial segment of length L is subdivided into k subsegments of length L/k and, with probability p, some of them are chosen to be further divided, the model we study here considers an initial digit, which we denote by 1, that is substituted by a word of length k formed by digits {0, 1}. ...
... Motivated by the classical percolation process defined by Mandelbrot [4], we have studied a class of random substitutions of constant length that maps 0 → 〈0,0,…,0〉 and 1 → 〈Y_1, Y_2, …〉 ...
Article
Full-text available
We study some properties of binary sequences generated by random substitutions of constant length. Specifically, assuming the alphabet {0,1}, we consider the following asymmetric substitution rule of length k: 0→〈0,0,…,0〉 and 1→〈Y1,Y2,…,Yk〉, where Yi is a Bernoulli random variable with parameter p∈[0,1]. We obtain by recurrence the discrete probability distribution of the stochastic variable that counts the number of ones in the sequence formed after a number i of substitutions (iterations). We derive its first two statistical moments, mean and variance, and the entropy of the generated sequences as a function of the substitution length k for any successive iteration i, and characterize the values of p where the maxima of these measures occur. Finally, we obtain the parametric entropy-variance curves for each iteration and substitution length. We find two regimes of dependence between these two variables that, to our knowledge, have not been previously described. Moreover, this allows us to compare sequences with the same entropy but different variance, and vice versa.
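A small simulation makes the substitution rule concrete. Starting from a single 1, the number of ones evolves as a branching process with offspring mean kp, so the recurrence gives E[N_i] = (kp)^i; the parameter values below are illustrative, not the paper's:

```python
import random

def substitute(seq, k, p, rng):
    """One substitution step: 0 -> k zeros; 1 -> k Bernoulli(p) digits."""
    out = []
    for digit in seq:
        if digit == 0:
            out.extend([0] * k)
        else:
            out.extend(1 if rng.random() < p else 0 for _ in range(k))
    return out

def mean_ones(k=3, p=0.8, iterations=4, trials=2000, seed=1):
    """Monte Carlo estimate of the expected number of ones after the given
    number of iterations, starting from the single-digit sequence [1]."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seq = [1]
        for _ in range(iterations):
            seq = substitute(seq, k, p, rng)
        total += sum(seq)
    return total / trials
```

With k = 3, p = 0.8 and four iterations the estimate should sit near (3 x 0.8)^4 = 33.18.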
... However, the mechanistic mental model is limited when it comes to design or creation, as the goodness of designed or created things is sidelined as an opinion or personal preference rather than a matter of fact (Alexander 2002-2005). Under the Newtonian absolute and Leibnizian relational views of space, a geographic space is represented as a collection of geometric primitives such as points, lines, polygons, and pixels (cf. Figure 2 for illustration), which tend to be "cold and dry" (Mandelbrot 1982), so it is not seen as a living structure. ...
... (where e1, e2, e3, …, e10 are very small values), the first dataset does not follow Zipf's law, while the second does. Zipf's law is a major source of inspiration for fractal geometry (Mandelbrot 1982). In his autobiography, Mandelbrot (2012) made the following remark while describing the first time he was introduced to a book review on Zipf's law: "I became hooked: first deeply mystified, next totally incredulous, and then hopelessly smitten … to this day." ...
... The new city science is a science of living structure, substantially based on living structure that resembles yet exceeds fractal geometry (Mandelbrot 1982). Like the new science of cities, fractal geometry belongs to the camp of mechanistic thought. ...
Article
Full-text available
The third view of space states that space is neither lifeless nor neutral, but a living structure capable of being more living or less living, which was formulated by Christopher Alexander under the organismic world view that was first conceived by the British philosopher Alfred Whitehead (1861-1947). The living structure is defined as a physical and mathematical structure, or simply characterized by the recurring notion (or inherent hierarchy) of far more small substructures than large ones. The more substructures, the more living or more beautiful structurally; and the higher the hierarchy of the substructures, the more living or more beautiful structurally. This paper seeks to lay out a new kind of city science based on the notion of living structure and on the third view of space. The new city science aims not only to better understand geographic forms and processes but also, maybe more importantly, to make geographic space or the Earth's surface living or more living. We introduce two fundamental laws of living structure: Tobler's law on spatial dependence or homogeneity and the scaling law on spatial interdependence or heterogeneity. We further argue that these two laws favor statistics over exactitude, because statistics tends to make a structure more living than exactitude does. We present the concept of living structure through some working examples and make it clear how a living structure differs from a non-living structure. In order to make a structure or space living or more living, we introduce two design principles, differentiation and adaptation, using two paintings and two city plans as working examples. The new city science is a science of living structure, dealing with a wide range of scales, from the smallest scale of ornaments on walls to the scale of the entire Earth's surface.
... It is shown that porous media exhibit fractal properties and their pore spaces are statistically self-similar over several length scales (e.g., Mandelbrot, 1982; Katz and Thompson, 1985). Theory on fractal porous media has attracted much attention in different areas (e.g., Mandelbrot, 1982; Feder and Aharony, 1989). Therefore, models based on capillary tubes in combination with fractal theory have been applied to study transport phenomena in both fully and partially saturated porous media (e.g., Li and Horne, 2004; Guarracino, 2007; Cai et al., 2012a,b; Liang et al., 2014; Guarracino and Jougnot, 2018; Soldi et al., 2017, 2019, 2020; Thanh et al., 2018, 2019, 2020a) or to study hydraulic conductivity and biological clogging in bio-amended variably saturated soils (e.g., Rosenzweig et al., 2009; Samsó et al., 2016; Carles et al., 2017). ...
Article
Full-text available
Predicting the permeability of porous media in saturated and partially saturated conditions is of crucial importance in many geo-engineering areas, from water resources to vadose zone hydrology or contaminant transport predictions. Many models have been proposed in the literature to estimate the permeability from properties of the porous media such as porosity, grain size or pore size. This study develops a model of the permeability for porous media saturated by one or two fluid phases with all physically based parameters using a fractal upscaling technique. The model is related to microstructural properties of porous media such as fractal dimension for pore space, fractal dimension for tortuosity, porosity, maximum radius, ratio of minimum pore radius and maximum pore radius, water saturation and irreducible water saturation. The model is favorably compared to existing and widely used models from the literature. Then, comparison with published experimental data for both unconsolidated and consolidated samples shows that the proposed model estimates the permeability from the medium properties very well.
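As a rough illustration of the capillary-bundle idea behind such models (a generic textbook-style sketch, not the paper's model, which also includes a tortuosity fractal dimension and saturation terms), one can combine Poiseuille flow with a fractal pore-number distribution N(≥ r) = (r_max/r)^{D_f} and Darcy's law:

```python
import math

def fractal_permeability(phi, D_f, r_max, beta):
    """Permeability (m^2) of a bundle of straight capillaries whose radii
    follow the fractal pore-number law N(>= r) = (r_max / r)**D_f on
    [beta*r_max, r_max].  Integrating Poiseuille flow per tube over -dN
    and applying Darcy's law gives the closed form below (illustrative
    only; tortuosity and partial saturation are deliberately omitted)."""
    # total pore area per sample cross-section, from integrating pi*r^2 dN
    pore_area = math.pi * D_f * r_max**2 * (1 - beta**(2 - D_f)) / (2 - D_f)
    A = pore_area / phi   # sample cross-sectional area = pore area / porosity
    # integral of (pi*r^4 / 8) dN, i.e. total Poiseuille conductance factor
    flow = math.pi * D_f * r_max**4 * (1 - beta**(4 - D_f)) / (8 * (4 - D_f))
    return flow / A

# Illustrative parameter values (assumptions, not fitted to any dataset).
k = fractal_permeability(phi=0.3, D_f=1.8, r_max=1e-4, beta=0.01)
```

Note how the result scales with phi and r_max squared, the qualitative behaviour shared by the full models compared in the paper.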
... Bazell & Desert 1988; Falgarone, Phillips & Walker 1991; Elmegreen 1997; Stutzki et al. 1998; Elia et al. 2014). Here D_E is the fractal dimension of an image in E-dimensional space, and is essentially a measure of how efficiently structures fill that space (Mandelbrot & Cannon 1984). The fractal dimension is a non-integer number, with possible values ranging from D_E = E − 1 to D_E = E (e.g. the perimeter-area dimension), or from D_E = E to D_E = E + 1 (e.g. the box-counting dimension). ...
... Various methods have been adopted to estimate D_2, in particular perimeter-area analysis of contoured images to estimate D_2:PA (Bazell & Desert 1988; Dickman, Margulis & Horvath 1990; Falgarone et al. 1991; Williams, Blitz & McKee 2000; Marchuk et al. 2021), and box-counting analysis to measure D_2:BC (Mandelbrot & Cannon 1984; Sanchez, Alfaro & Perez 2005; Federrath, Klessen & Schmidt 2009; Elia et al. 2018). These two fractal dimensions should be related by D_2:PA = D_2:BC − 1 (Voss 1988; Vogelaar & Wakker 1994; Stutzki et al. 1998). ...
Article
Full-text available
We construct Convolutional Neural Networks (CNNs) trained on exponentiated fractional Brownian motion (xfBm) images, and use these CNNs to analyse Hi-GAL images of surface-density in the Galactic Plane. The CNNs estimate the Hurst parameter, H (a measure of the power spectrum), and the scaling exponent, S (a measure of the range of surface-densities), for a square patch comprising N × N = 128 × 128, 64 × 64 or 32 × 32 pixels. The resulting estimates of H are more accurate than those obtained using Δ-variance. We stress that statistical measures of structure are inevitably strongly dependent on the range of scales they actually capture, and difficult to interpret when applied to fields that conflate very different lines of sight. The CNNs developed here mitigate this issue by operating effectively on small fields (small N), and we exploit this property to develop a procedure for constructing detailed maps of H and S. This procedure is then applied to Hi-GAL maps generated with the ppmap procedure. There appears to be a bimodality between sightlines with higher surface-density (≳ 32 M⊙ pc⁻²), which tend to have higher H (≳ 0.8) and S (≳ 1); and sightlines intercepting regions of lower surface-density (≲ 32 M⊙ pc⁻²), which tend to have lower H (≲ 0.8) and S (≲ 1); unsurprisingly the former sightlines are concentrated towards the Galactic Midplane and the Inner Galaxy. The surface-density PDF takes the form dP/dΣ ∝ Σ⁻³ for Σ ≳ 32 M⊙ pc⁻², and on most sightlines this power-law tail is dominated by dust cooler than ∼20 K, which is the median dust temperature in the Galactic Plane.
... Fractal theory can scientifically reflect the uniformity and dispersion trend of material particles, and it has gradually become widely used in describing the PSD characteristics of brittle materials [15,18,37,38]. Therefore, this study preliminarily assumes that the PSD of CGS has fractal characteristics in the grinding process and calculates the fractal dimension of the PSD of CGS at different grinding times according to Equation (5) [39]. ...
Article
Full-text available
Based on the test results of a laser particle size analyzer, a specific surface area analyzer and an infrared spectrometer, the grinding kinetics of coal gasification slag (CGS) was systematically described by using Divas Aliavden grinding kinetics, the Rosin-Rammler-Bennett (RRB) distribution model and particle size fractal theory. The influence of grinding time and particle group of CGS on the strength activity index of mortar was studied by using the strength activity index of mortar and grey correlation analysis. The results show that the particles are gradually refined during mechanical grinding of CGS for up to 75 min. When the mechanical grinding time is greater than 75 min, the agglomeration of fine CGS particles leads to a decrease in various properties. Divas Aliavden grinding kinetics, the RRB model and the fractal dimension can quantitatively characterize the change of CGS particle size in the grinding process. The strength activity index of CGS at different curing ages is positively correlated with grinding time, and the influence on the later strength activity index is the most obvious. The relationship between CGS particle size distribution and strength activity index was probed using grey correlation analysis. The CGS particle groups with particle sizes of 20~30 μm and 10~20 μm have the greatest impact on the early and late strength activity index, respectively. Therefore, the optimal grinding time of CGS as an auxiliary cementing material is 75 min, considering factors such as economy and performance, and the specific surface area (SSA) is 4.4874 m²·g⁻¹.
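For reference, the RRB (Rosin-Rammler-Bennett) distribution mentioned above has a simple closed form; the sketch below evaluates it (the parameter values used in the example are illustrative assumptions, not fitted to CGS data):

```python
import math

def rrb_retained(d, d_e, n):
    """Rosin-Rammler-Bennett cumulative distribution: fraction of particles
    retained on (i.e. coarser than) sieve size d, with characteristic size
    d_e (the size at which exp(-1) ~ 36.8% is retained) and uniformity
    exponent n controlling the spread of the distribution."""
    return math.exp(-((d / d_e) ** n))

# At d = d_e the retained fraction is exp(-1) by construction.
retained_at_de = rrb_retained(10.0, d_e=10.0, n=1.2)
```

Fitting d_e and n to sieve data at each grinding time is how the model tracks the refinement of the PSD.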
... The concept of a fractal dimension (D), which was defined and popularized by Benoit Mandelbrot [16,17], is a mathematical measure of the relative diversity and density of geometric data in an image (where 1.0 < D < 2.0) or object (where 2.0 < D < 3.0). This property, which could also be thought of as 'statistical roughness' or 'characteristic complexity', is simply a measure of the volume and distribution of geometry in a form. ...
... The box-counting method is well known in mathematics [17]. In its standard architectural application, there are four stages: (i) data preparation, (ii) data representation, (iii) pre-processing and (iv) processing. ...
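The processing stage of box counting reduces to counting occupied grid cells at several scales and fitting a log-log slope. A minimal numeric version for 2-D point data (the grid sizes and the simple least-squares fit are illustrative choices, not the architectural pipeline's settings):

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a set of 2-D points in the
    unit square: count occupied boxes N(s) on grids of s x s boxes, then
    fit the slope of log N(s) against log s by least squares."""
    logs, logN = [], []
    for s in sizes:
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        logs.append(math.log(s))
        logN.append(math.log(len(boxes)))
    n = len(sizes)
    mean_x, mean_y = sum(logs) / n, sum(logN) / n
    return sum((lx - mean_x) * (ly - mean_y)
               for lx, ly in zip(logs, logN)) \
        / sum((lx - mean_x) ** 2 for lx in logs)

# Sanity check: a densely sampled straight line should give dimension ~1.
line = [(i / 10000, i / 10000) for i in range(10000)]
d_line = box_count_dimension(line)
```

For an architectural elevation, `points` would come from the black pixels of the prepared line drawing, and D should land strictly between 1 and 2.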
Article
Full-text available
Frank Lloyd Wright, one of the world’s most famous architects, produced several masterworks in his career, possibly the most celebrated of which is the Kaufmann House, better known as Fallingwater. One of the common arguments historians make about this house is that it is unique in Wright’s oeuvre, as it is not similar to other designs he produced in the three major styles that dominated his career: the Prairie, Textile-Block and Usonian styles. In this paper, fractal dimensions (D) are derived for the elevations and plans of Fallingwater using the standard architectural variation and application of the box-counting method. Using the measurements derived from a set of 15 Prairie, Textile-Block and Usonian houses, this paper tests whether Fallingwater is indeed an outlier in his body of work, as some historians suggest. The results indicate that, contrary to the standard view, Fallingwater has D measures that are broadly similar to those of his other styles, and on average, Fallingwater has formal parallels to several aspects of Wright’s Usonian style. Full text available at https://doi.org/10.3390/fractalfract6040187
... Here, we show the benefits of bursting activity in learning sequences generated by a special class of random walks observed in various animal behaviors. We investigate whether and how bursting neurons improve the ability of neural network models to learn the dynamical trajectories of Lévy flight, which is a random walk with step sizes obeying a heavy-tailed distribution [17][18][19] . As a consequence, Lévy flight consists of many short steps and rare long-distance jumps. ...
... rvs(), in the SciPy library of Python for scientific calculations. This function generates a series of random numbers that obey the Lévy distribution 17,18. In short, a stable distribution has the characteristic function of the form φ(t) = exp[ iµt − |ct|^α (1 − iβ sgn(t) Φ) ], where α, β, c, and µ are the characteristic exponent, skewness parameter, scale parameter, and location parameter, respectively, and ...
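The defining behaviour of Lévy flight, many short steps punctuated by rare long jumps, can be sketched without SciPy by drawing step lengths from a heavy-tailed Pareto law via inverse-transform sampling. This is a simple stand-in for the full stable distribution that the snippet's SciPy rvs() call draws from, not the same distribution; alpha and the step-length floor of 1 are illustrative choices:

```python
import math
import random

def levy_flight(n_steps, alpha=1.5, seed=0):
    """2-D random walk whose step lengths follow the heavy-tailed Pareto
    law P(L > x) = x**(-alpha) for x >= 1, with uniformly random step
    directions; returns the list of visited (x, y) positions."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                # u in (0, 1], avoids log/pow at 0
        length = u ** (-1.0 / alpha)          # inverse-transform sampling
        angle = rng.uniform(0.0, 2.0 * math.pi)
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = levy_flight(1000)
```

Smaller alpha makes the tail heavier, so the rare jumps dominate the walk's total displacement.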
Article
Full-text available
Isolated spikes and bursts of spikes are thought to provide the two major modes of information coding by neurons. Bursts are known to be crucial for fundamental processes between neuron pairs, such as neuronal communications and synaptic plasticity. Neuronal bursting also has implications in neurodegenerative diseases and mental disorders. Despite these findings on the roles of bursts, whether and how bursts have an advantage over isolated spikes in the network-level computation remains elusive. Here, we demonstrate in a computational model that not isolated spikes, but intrinsic bursts can greatly facilitate learning of Lévy flight random walk trajectories by synchronizing burst onsets across a neural population. Lévy flight is a hallmark of optimal search strategies and appears in cognitive behaviors such as saccadic eye movements and memory retrieval. Our results suggest that bursting is crucial for sequence learning by recurrent neural networks when sequences comprise long-tailed distributed discrete jumps.
... Reichert et al. (2017) emphasized "the power of 3D fractal dimensions" for comparing shapes in an objective way. Based on Mandelbrot (1982) and his concept of fractal geometry, another more secret "revolution in morphometrics" may pick up speed despite the criticism "that a fractal cow is often not much better than a spherical cow" (Buldyrev, 2012). Quite a few studies across (paleo-) biological disciplines have demonstrated the potential of fractals for morphometrics (Aiello et al., 2007;Bruno et al., 2008;Isaeva et al., 2006;Klinkenbuß et al., 2020;Lutz & Boyajian, 1995). ...
... Voxelization transforms surface mesh data into a 3D image represented in Cartesian space required for box counting. The box counting algorithm is then applied to calculate the fractal dimension or Minkowski-Bouligand dimension D MB (Doube et al., 2010;Mandelbrot, 1982;Parkinson & Fazzalari, 2000). ...
Article
Full-text available
Morphometrics are fundamental for the analysis of size and shape in fossils, particularly because soft parts or DNA are rarely preserved and hard parts such as shells are commonly the only source of information. Geometric morphometrics, that is, landmark analysis, is well established for the description of shape but it exhibits a couple of shortcomings resulting from subjective choices during landmarking (number and position of landmarks) and from difficulties in resolving shape at the level of micro‐sculpture. With the aid of high‐resolution 3D scanning technology and analyses of fractal dimensions, we test whether such shortcomings of linear and landmark morphometrics can be overcome. As a model group, we selected a clade of modern viviparid gastropods from Lake Lugu, with shells that show a high degree of sculptural variation. Linear and landmark analyses were applied to the same shells in order to establish the fractal dimensions. The genetic diversity of the gastropod clade was assessed. The genetic results suggest that the gastropod clade represents a single species. The results of all morphometric methods applied are in line with the genetic results, in that no specific morphotype could be delimited. Apart from this overall agreement, landmark and fractal dimension analyses do not correspond to each other but represent data sets with different information. Generally, the fractal dimension values quantify the roughness of the shell surface, with the resolution of the 3D scans determining the level. In our approach, we captured the micro‐sculpture but not the first‐order sculptural elements, which explains why fractal dimension and landmark data are not in phase. We can show that analyzing fractal dimensions of gastropod shells opens a window to more detailed information that can be considered in evolutionary and ecological contexts.
We propose that using low‐resolution 3D scans may successfully substitute landmark analyses because it overcomes the subjective landmarking. Analyses of 3D scans with higher resolution than used in this study will provide surface roughness information at the mineralogical level. We suggest that fractal dimension analyses of a combination of differently resolved 3D models will significantly improve the quality of shell morphometrics.
... Under atypical climatic conditions during a hurricane, the water movement in the back reef greatly exceeds the levels of energy common to corals inhabiting this zone (Drost et al. 2019; Lugo-Fernández et al. 1994), making them more vulnerable to the impact of hurricanes and therefore to breakage or dislodgement. Given that fractal or repeated structure arrangements across different spatial scales are common in nature (Cannon 1984), including coral reefs (Zawada and Brock 2009), we expect that aggregated spear-like colonies give the patches they form the same structural characteristics that allow them to better resist hurricane impacts, as a repeated spatial trait from the colony level. ...
Article
Full-text available
Acropora palmata is the species that contributes the most to the structural complexity of Caribbean reefs. Information concerning the complexity of its populations at the landscape level is relevant to determine how the reef system responds to disturbances, such as cyclonic events. This study examines the repercussions of hurricanes Gamma and Delta (2020) on the patches of A. palmata in Limones reef, one of the best-preserved reefs in the Caribbean. Two orthomosaics were generated using programmed drone flights, one before and one after the passage of both hurricanes. Visually identified polygon files representing A. palmata patches were delineated in both orthomosaics. Regression models were used to analyze the influence of spatial characteristics of those patches, measured through landscape ecology indices, on the probability of patch permanence and, for those patches that remained, the remaining area. Our results show that the A. palmata population suffered a total loss of 25% due to hurricanes. More compact and complex patches at shallower depths exhibited a higher persistence probability. Furthermore, the spatial location of the patches in relation to each other (proximity and size of their neighbors) did not significantly affect the permanence probability. The metrics used were not a good indicator of the area loss of the patches that remained. Here, the damages suffered could mainly be explained by the reef zone, which we attribute to the phenotypic plasticity of A. palmata colonies in high-energy zones, affecting growth characteristics that allow them to better withstand the impact of hurricanes. Overall, we show that using landscape indices to understand the drivers of change in the spatial structure of reefs is an effective method to evaluate and even predict the modifications suffered after disturbance events, information that could be readily available for management and conservation strategies.
... The concept of fractal patterns suggests that structures and behaviors observed at one scale can be mirrored at other scales, exhibiting self-similarity across different levels of complexity (Cannon, 1984). In the context of human development and societal evolution, this implies that the stages an individual undergoes from conception to maturity may be thought of as an unconscious basic template or pattern for the broader developmental trajectory of human societies. ...
Preprint
This article examines the broader societal implications of blockchain technology and crypto-assets, emphasizing their role in the evolution of humanity as a "superorganism" with decentralized, self-regulating systems. Drawing on interdisciplinary concepts such as Nate Hagens' "superorganism" idea and Francis Heylighen's "global brain" theory, the paper contextualizes blockchain technology within the ongoing evolution of governance systems and global systems such as the financial system. Blockchain's decentralized nature, in conjunction with advancements like artificial intelligence and decentralized autonomous organizations (DAOs), could transform traditional financial, economic, and governance structures by enabling the emergence of collective distributed decision-making and global coordination. In parallel, the article aligns blockchain's impact with developmental theories such as Spiral Dynamics. This framework is used to illustrate blockchain's potential to foster societal growth beyond hierarchical models, promoting a shift from centralized authority to collaborative and self-governed communities. The analysis provides a holistic view of blockchain as more than an economic tool, positioning it as a catalyst for the evolution of society into a mature, interconnected global planetary organism.
... We also confirm previous studies showing that CCM identifies corneal nerve degeneration in patients with MS, [34][35][36][37] differentiates subtypes, [38][39][40] and predicts disease progression. 41,42 FD describes a shape that is self-similar across space or time, 43 and as neurodegeneration progresses, one would predict this disrupts tissue structure and shape to affect FD. We have employed the widely used box counting method 44 to estimate corneal nerve FD in CCM images 25 and utilized ACNFrD as an objective measure of geometric complexity of the corneal subbasal nerve plexus in patients with diabetic neuropathy. ...
Article
Purpose: To assess whether corneal nerve analysis can identify and differentiate patients with multiple sclerosis (MS) from those with epilepsy. Methods: Participants with MS (n = 83), participants with epilepsy (n = 50), and healthy controls (HCs) (n = 20) underwent corneal confocal microscopy (CCM) and quantification of automated corneal nerve fiber length (ACNFL), automated corneal nerve fractal dimension (ACNFrD), and ACNFrD/ACNFL ratio of the subbasal nerve plexus. Results: ACNFL (MS: P < 0.0001; epilepsy: P = 0.002) and ACNFrD (MS: P < 0.0001; epilepsy: P = 0.025) were significantly lower and the ACNFrD/ACNFL ratio (MS: P < 0.0001; epilepsy: P = 0.018) was significantly higher compared to HCs. ACNFL (P = 0.001), ACNFrD (P = 0.0003), and ACNFrD/ACNFL ratio (P = 0.006) were significantly lower in patients with MS compared to those with epilepsy. ACNFL had the highest diagnostic utility for identifying patients with MS (sensitivity/specificity 0.86/0.85, area under the curve [AUC] 0.90, P < 0.0001), and ACNFrD had the highest diagnostic utility for identifying patients with epilepsy (sensitivity/specificity 0.78/0.75, AUC 0.76, P = 0.0008). ACNFrD had the highest diagnostic utility for differentiating patients with MS from epilepsy (sensitivity/specificity 0.66/0.65, AUC 0.70, P < 0.0001). Conclusions: Corneal neurodegeneration occurs in both MS and epilepsy and is characterized by a distinct pattern that differentiates patients with MS from those with epilepsy. Translational relevance: CCM identifies and differentiates patients with MS and epilepsy, albeit with moderate performance. Further validation, with a larger sample size, is needed.
... This dimension is a measure of how completely a fractal fills space as you zoom in on it. Euclidean geometry, named after the ancient Greek mathematician Euclid, deals with the properties and relationships of points, lines, surfaces, and solids in 1D, 2D, and 3D spaces, while fractal geometry studies shapes and structures that exhibit self-similarity at different scales and are often too irregular to be described by traditional Euclidean geometry [31][32][33][34][35][36][37]. ...
Preprint
Full-text available
The electron configuration of atoms is known for its well-defined hierarchy of progressive models with guaranteed convergence to the exact result and its significant contributions to fundamental sciences in terms of applications. In traditional electron configurations of atoms, the principal quantum numbers are always positive integers and describe the energy levels of the electrons in an atom. The concept of noninteger (effective) principal quantum numbers, first proposed by Slater, has appeared in some advanced and theoretical contexts, often involving the true total energy of systems or extensions beyond standard approximation models. This phenomenon, the origin of which was long unclear, has recently been explained as a consequence of the fractal structure of atoms and molecules, an explanation that is also considered evidence that atoms and molecules have a fractal structure. In this paper a new electron configuration of atoms is proposed; its central concept is based on the fractal nature of atoms and molecules, so that the general configuration admits noninteger principal quantum numbers. We believe the proposed model demonstrates that this electron configuration provides outstanding results in dealing with design problems in the natural sciences.
... For example, clusters of houses in Benin city and its surrounding villages (in present-day Nigeria) are laid out in fractal patterns [Egl]. However, it was not until the 1970s that Mandelbrot [Man1;Man2] coined the term 'fractal' (from the Latin word 'fractus, ' meaning 'fractured' or 'broken'), and widely popularised the concept. Today, fractal geometry is a flourishing branch of mathematics which puts fractals into a rigorous framework [Bar2;Bar3], and has particular relevance to chaotic dynamical systems [PJS]. ...
Preprint
Full-text available
Hausdorff and box dimension are two familiar notions of fractal dimension. Box dimension can be larger than Hausdorff dimension because, in the definition of box dimension, all sets in the cover have the same diameter, whereas for Hausdorff dimension there is no such restriction. This thesis focuses on a family of dimensions parameterised by θ ∈ (0,1), called the intermediate dimensions, which are defined by requiring that diam(U) ≤ (diam(V))^θ for all sets U, V in the cover. We begin by generalising the intermediate dimensions to allow for greater refinement in how the relative sizes of the covering sets are restricted. These new dimensions can recover the interpolation between Hausdorff and box dimension for compact sets whose intermediate dimensions do not tend to the Hausdorff dimension as θ → 0. We also use a Moran set construction to prove a necessary and sufficient condition, in terms of Dini derivatives, for a given function to be realised as the intermediate dimensions of a set. We proceed to prove that the intermediate dimensions of limit sets of infinite conformal iterated function systems are given by the maximum of the Hausdorff dimension of the limit set and the intermediate dimensions of the set of fixed points of the contractions. This applies to sets defined using continued fraction expansions, and has applications to dimensions of projections, fractional Brownian images, and general Hölder images. Finally, we determine a formula for the intermediate dimensions of all self-affine Bedford-McMullen carpets. The functions display features not witnessed in previous examples, such as having countably many phase transitions. We deduce that two carpets have equal intermediate dimensions if and only if the multifractal spectra of the corresponding uniform Bernoulli measures coincide.
... Instead, they exhibit self-similarity, meaning that structures repeat at different scales. Fractal dimensions provide a way to quantify this self-similarity [59][60][61][62][63]. ...
Preprint
Full-text available
It is well known that a good theory will make accurate predictions about the behavioral aspects of the phenomenon under study, suggesting experiments to test its overall explanatory power. The purpose of the present study is to provide an analysis of recent developments in the evaluation of the structures of atoms, molecules, and materials using currently employed advanced quantum mechanical and semiempirical theories. It is well known that Slater-type orbitals with noninteger principal quantum numbers (NSTOs) provide a more flexible basis than Slater-type orbitals with integer principal quantum numbers (ISTOs) in advanced quantum mechanical evaluations. To the best of our knowledge, a detailed investigation and physical interpretation of this effect has not been carried out to date. Recent historical developments show that, from an application point of view, the internal structures of all the microscopic and macroscopic systems we examine have a fractal structure, which has prompted a deep redesign of how these systems are described. Considering these facts, this study explains the physical reason for the effectiveness of noninteger quantum number basis functions. Evidence from the results of a wide range of applications has revealed relationships between advanced quantum theories and the fractal structure.
... The PSD analysis is suitable for fractal characterization of surfaces [9]. The fractal dimension is a self-affine feature [16]. In the case of technical surfaces, self-affinity has been demonstrated [14,15]; however, the determination of the fractal dimension of surfaces, and especially the use of fractal characterization, involves many contradictions [17,18]. ...
Article
Full-text available
When describing the tribological behaviour of technical surfaces, the need for full-length scale microtopographic characterization often arises. The self-affinity of surfaces, its characterisation using a fractal dimension, and the implementation of that dimension in tribological models are commonly exploited. The goal of our present work was to determine the frequency range of fractal behaviour of surfaces by analysing the microtopographic measurements of an anodised aluminium brake plunger. We also wanted to know whether bifractal and multifractal behaviour can be detected in real machine parts. As a result, we developed a new methodology for determining the fractal range boundaries to separate nano- and micro-roughness. To reach our goals, we used an atomic force microscope (AFM) and a stylus instrument to obtain measurements in a wide frequency range (19 nm–3 mm). Power spectral density (PSD)-based fractal evaluation found that the examined surface could not be characterised by a single fractal dimension. A new method capable of separating nano- and micro-roughness has been developed for investigating multifractal behaviour. The presented procedure separates nano- and micro-roughness based on the geometric characteristics of surfaces. In this way, it becomes possible to specifically examine the relationship between the micro-geometry that can be measured in each wavelength range and the effects of the cutting technology and material structure that create them.
... To evaluate the scaling of variability with timescale τ, we assume the power-law scaling behaviour often observed in geophysical time series (Cannon & Mandelbrot, 1984; Corral & González, 2019; Fedi, 2016; Lovejoy & Schertzer, 1986; Malamud & Turcotte, 1999; Pelletier & Turcotte, 1999) such that for a timescale-dependent metric S(τ) and a general power-law scaling exponent a the following is approximately true: S(τ) ∝ τ^a. If S(τ) is an estimate of the power spectrum, then the exponent a is the scaling exponent traditionally known as β, whereas if S(τ) is an estimate of the HSF, then the exponent a corresponds to the fluctuation exponent H; the two can be related by the approximate (or exact in the case of Gaussian processes) relation β ≈ 1 + 2H (Hébert et al., 2021; Lovejoy & Schertzer, 2012). Therefore, a flat, white-noise-like behaviour of β = 0 corresponds to H = −0.5, reflecting the fact that averaging white noise over n points decreases its amplitude by a factor of √n, and a so-called "1/f noise", that is, β = 1, corresponds to H = 0. ...
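The power-law relation S(τ) ∝ τ^a described in this excerpt is typically estimated as the slope of a straight-line fit in log-log coordinates. A minimal sketch with synthetic, noise-free data (function and variable names are ours, not from the cited article):

```python
import numpy as np

def scaling_exponent(timescales, S):
    """Fit S(tau) ~ c * tau**a in log-log space and return the
    power-law exponent a (the slope of the fitted line)."""
    slope, _ = np.polyfit(np.log(timescales), np.log(S), 1)
    return slope

# Synthetic structure-function values with a known exponent H = 0.3.
tau = np.logspace(1, 3, 20)        # timescales from 10 to 1000
S = 2.0 * tau**0.3                 # exact power law, no noise
H = scaling_exponent(tau, S)
beta = 1 + 2 * H                   # approximate spectral exponent
print(round(H, 3), round(beta, 3))   # 0.3 1.6
```

With real, noisy structure functions the same fit applies, but the slope then carries an estimation uncertainty that should be reported.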
Article
Full-text available
Global climatic changes expected in the next centuries are likely to cause unparalleled vegetation disturbances, which in turn impact ecosystem services. To assess the significance of disturbances, it is necessary to characterize and understand typical natural vegetation variability on multi-decadal timescales and longer. We investigate this in the Holocene vegetation by examining a taxonomically harmonized and temporally standardized global fossil pollen dataset. Using principal component analysis, we characterize the variability in pollen assemblages, which are a proxy for vegetation composition, and derive timescale-dependent estimates of variability using the first-order Haar structure function. We find, on average, increasing fluctuations in vegetation composition from centennial to millennial timescales, as well as spatially coherent patterns of variability. We further relate these variations to pairwise comparisons between biome classes based on vegetation composition. As such, higher variability is identified for open-land vegetation compared to forests. This is consistent with the more active fire regimes of open-land biomes fostering variability. Needleleaf forests are more variable than broadleaf forests on shorter (centennial) timescales, but the inverse is true on longer (millennial) timescales. This inversion could also be explained by the fire characteristics of the biomes, as fire disturbances would increase vegetation variability on shorter timescales but stabilize vegetation composition on longer timescales by preventing the migration of less fire-adapted species.
... The sandpile model proposed by Bak, Tang and Wiesenfeld in 1987 revealed that the frequent occurrence of fractal structures is a generic spatial characteristic of naturally evolving dynamical systems with many spatial degrees of freedom [8,15,22]. They stressed the importance of this discovery with the claim that "it develops complexity out of simplicity, in contrast with the attempt to reduce complexity to simplicity". ...
... However, the items within facets describe even more specific personality nuances that do not map onto error (Mõttus et al., 2017). Furthermore, items might be composite, in turn suggesting an analogy with fractals (Mandelbrot, 1983): patterns within patterns within patterns. For depression, Fried (2017a) illustrated how seven common measurement scales overlap partially and comprise different (partly overlapping) subsets of symptoms. ...
Article
Full-text available
Many constructs in psychological science are highly abstract and considered sources of uncertainty in published research findings, which has yielded dissatisfaction as manifested in two seemingly opposing trends. First, many researchers have moved away from constructs toward specific observables and effects. Second, others have moved toward constructs, calling for sharper definitions and improved measurement. Both treat uncertainty as something to reduce. We believe that uncertainties reflect the essential complex reality of psychological phenomena. From this perspective, psychological constructs should be reformulated to better accommodate and explain psychological phenomena, including their essential complexity and uncertainties. We first describe common approaches to defining and measuring psychological constructs, and briefly discuss their history and epistemology. We then formulate construct desiderata, and we propose a reformulation of constructs as (1) composite in nature, to allow for heterogeneous content (represented as an area), versus unitary, to reflect a sharp definition and absence of heterogeneity (represented as a point), (2) organized in hierarchical systems with overlap, and (3) variable in nature as reflected in variable measurements. This reformulation is more consistent with observed variability of findings, which aligns with ongoing and newer methodological avenues that emphasize generalization efforts while preserving and facilitating the integrative and explanatory role of constructs. We close with considerations for further discussion and debate. Questioning Psychological Constructs: Current Issues and Proposed Changes. The notion of constructs is fundamental to theory and research in psychology and its allied disciplines. Constructs are defined operationally in empirical studies and findings are interpreted in terms of constructs.
Despite the fundamental role constructs serve in research, the notion of constructs has received minimal attention in the bulk of contemporary literature. Slaney's (2017) discussion is an exception. We observe a silent trend in much of the literature characterized by moving away from constructs in psychological science with little explicit criticism of their value. This move away implies a downscaling of constructs favoring observables and notions closer to observables over unobservable abstract constructs. Those moving away from constructs seek greater specificity by staying closer to observables (e.g., Borsboom et al., 2021) and specific effects (Open Science Collaboration, 2015). This trend arises from dissatisfactions with the vagueness of constructs, variable results, lack of progress in accumulating knowledge, and numerous calls for direct rather than conceptual replications in addressing failures to replicate. Conversely, in some corners of the literature, explicit calls to move toward upscaling operational definitions (ODs) for better measurement of constructs have appeared. Most of these calls for better measurement acknowledge the problems noted above, and inspired by dissatisfactions with these problems, they recommend tightened definitions of constructs and improved measurement (e.g., Flake & Fried, 2020). Our position is neither of the two. We accept the issues outlined above as sensible reflections of uncertainty in psychological science-an essential feature of most psychological phenomena, mainly due to various types of complexities of human behavior. In response, and for the sake of discussion, we are setting up a controversy. What is to blame for these issues, the research practices or reality? We choose the latter and call for an approach and methods that accord with this reality. 
Koch (1992) explains that dissatisfactions are inherent to studying a complex reality, and that the frustrations repeat from time to time in psychological science and are followed by efforts to improve methods and to define univocal concepts. Rather than directly acting on the frustrations, we believe that construct abstractness and the related heterogeneity of content reflect the true complexity of psychological phenomena that leads to the frustration, and that accepting complexity-based uncertainty and devising
... Fractal dimension analysis has been widely used to study the geometric arrangements of various substances and materials, including tissues during development [30][31][32][33]. Some fractals are self-similar, meaning they exhibit geometric similarity at any scale, and there are several techniques and mathematical approaches that can be used to generate and describe these self-similar geometries [34]. However, many natural architectures do not show self-similarity, but instead exhibit a scale-limited similar pattern, making them pseudo-fractals [35]. ...
Article
Full-text available
Spatial patterning of different cell types is crucial for tissue engineering and is characterized by the formation of sharp boundary between segregated groups of cells of different lineages. The cell−cell boundary layers, depending on the relative adhesion forces, can result in kinks in the border, similar to fingering patterns between two viscous partially miscible fluids which can be characterized by its fractal dimension. This suggests that mathematical models used to analyze the fingering patterns can be applied to cell migration data as a metric for intercellular adhesion forces. In this study, we develop a novel computational analysis method to characterize the interactions between blood endothelial cells (BECs) and lymphatic endothelial cells (LECs), which form segregated vasculature by recognizing each other through podoplanin. We observed indiscriminate mixing with LEC−LEC and BEC−BEC pairs and a sharp boundary between LEC−BEC pair, and fingering-like patterns with pseudo-LEC−BEC pairs. We found that the box counting method yields fractal dimension between 1 for sharp boundaries and 1.3 for indiscriminate mixing, and intermediate values for fingering-like boundaries. We further verify that these results are due to differential affinity by performing random walk simulations with differential attraction to nearby cells and generate similar migration pattern, confirming that higher differential attraction between different cell types result in lower fractal dimensions. We estimate the characteristic velocity and interfacial tension for our simulated and experimental data to show that the fractal dimension negatively correlates with capillary number (Ca), further indicating that the mathematical models used to study viscous fingering pattern can be used to characterize cell−cell mixing. 
Taken together, these results indicate that the fractal analysis of segregation boundaries can be used as a simple metric to estimate relative cell−cell adhesion forces between different cell types.
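The box-counting method used in the abstract above can be sketched in a few lines: cover a binary image with boxes of decreasing size, count the occupied boxes N(s), and fit log N(s) against log s. This illustrative implementation (names are ours, not the authors') recovers the expected dimensions of ~1 for a sharp line-like boundary and ~2 for a fully mixed region:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box
    counting: count occupied boxes N(s) at several box sizes s,
    then fit log N(s) = -D * log(s) + c."""
    counts = []
    n = img.shape[0]
    for s in sizes:
        k = n // s
        # Partition the image into k x k blocks of size s x s and
        # mark each block that contains at least one True pixel.
        boxes = img[:k * s, :k * s].reshape(k, s, k, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

n = 256
line = np.eye(n, dtype=bool)          # a diagonal line, expected D ~ 1
square = np.ones((n, n), dtype=bool)  # a filled region, expected D ~ 2
print(round(box_counting_dimension(line), 2))    # 1.0
print(round(box_counting_dimension(square), 2))  # 2.0
```

Fingering-like boundaries between cell populations would land between these two extremes, matching the 1 to 1.3 range reported in the abstract.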
... In 1974, Oldham and Spanier [1] co-authored the first monograph on fractional calculus, 'The Fractional Calculus', which paved the way for applications of the subject. In the 1970s, Mandelbrot [2] pointed out that many fractional dimensions exist in nature. Since then, the theory of fractional calculus developed rapidly, and numerous applications emerged [3]- [9]. ...
Article
Full-text available
In this paper, the dynamic characteristics of the fractional Duffing system are analyzed and studied using the improved short memory principle method. This method requires little computation, offers high precision, and effectively mitigates the heavy computational burden caused by the memory property of fractional-order operators. The influence of frequency change on the dynamic performance of the fractional Duffing system is studied using nonlinear dynamic analysis methods such as the phase portrait, Poincaré map and bifurcation diagram. Moreover, the dynamic behaviour of the fractional Duffing system as the fractional order and excitation amplitude change is investigated. The analysis shows that when the excitation frequency changes from 0.43 to 1.22, the bifurcation diagram contains four periodic and three chaotic motion regions. Periodic motion windows are found in the three chaotic motion regions. It is confirmed that the frequency and amplitude of the external excitation and the fractional order of damping have a considerable impact on system dynamics. Thus, attention should be paid to them in the design and analysis of system dynamics.
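The short memory principle truncates the Grünwald-Letnikov (GL) sum, which otherwise grows with the full history of the signal. A minimal sketch, assuming the standard GL discretization rather than the paper's improved variant; it checks the half-derivative of f(t) = t against the analytic result t^(1/2)/Γ(3/2):

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)**j * binom(alpha, j),
    computed via the recurrence w_j = w_{j-1} * (1 - (alpha + 1)/j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(f, t, alpha, h=1e-3, memory=None):
    """Approximate the GL fractional derivative of order alpha at t.
    memory: truncation horizon in seconds (short memory principle);
    None keeps the full history back to t = 0."""
    n = int(round(t / h))
    if memory is not None:
        n = min(n, int(round(memory / h)))  # drop the distant past
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h**alpha

# Half-derivative of f(t) = t at t = 1: exact value is 1 / Gamma(1.5).
exact = 1.0 / math.gamma(1.5)
full = gl_derivative(lambda t: t, 1.0, 0.5)               # full memory
short = gl_derivative(lambda t: t, 1.0, 0.5, memory=0.5)  # truncated
print(exact, full, short)
```

The truncated call uses only the most recent 0.5 s of history, trading a small, bounded error for a fixed per-step cost, which is the appeal of the short memory principle in long simulations.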
... Fractal geometry has unique characteristics such as self-similarity and scale invariance, and many porous media have fractal scale characteristics. First proposed by Mandelbrot [10], fractal theory is widely used to describe complex geometric bodies with self-similarity. Most porous media in nature satisfy a fractal distribution in the statistical sense, and fractal theory is a potentially effective tool for analyzing the transport characteristics of random and disordered porous media [11]. ...
Experiment Findings
Full-text available
This expression of concern involved a first case of explicit data manipulation, given that the authors of the disputed paper fabricated "experimental data" in order to confirm the validity of their fractal models for transport properties of porous materials. When confronted with this allegation, they conceded, and the paper was retracted by the journal.
... The technique quantifies and correlates the uneven surface topography of the powder with a mathematical value. The concept of fractal dimension was introduced by Mandelbrot [48,49], who illustrated a technique to calculate the distance between two points on the surface of the powder. Later, other techniques such as the box-counting, mass-radius, pixel-dilation, or caliper methods were used by many researchers [47]. ...
Article
Full-text available
Additive manufacturing refers to the fabrication of three-dimensional products by adding materials layer by layer to get the required shape and size. Although there are numerous additive manufacturing processes available based on the type of feedstock used, the powder-based additive method is the most popular technique as it is suitable for printing a wide range of materials such as polymers, metals, and ceramic components with superior quality. Over the years, as the technology matured, the transition from prototyping to commercial applications has gathered attention to producing metal powders with stringent and consistent quality. It is a generally accepted fact that the quality of the raw material used in manufacturing is critical in determining the physical, mechanical, and microstructural properties of the finished part which applies to the additive manufacturing processes as well. However, comprehensive knowledge of the correlation between the characteristics of the powder and the quality of additive manufactured parts is scarce. Hence, this review attempts to summarize different powder characterization techniques and understand the relationship between the powder quality and properties of additively manufactured parts by reviewing the recent literature.
... This has a large implication for the resulting rheological behaviour, since the incorporated fluid increases the enclosed volume and the roughness of the clusters changes the way they interact with each other and the surrounding fluid (Heath, Bahri, Fawell, & Farrow, 2006; Quemada, 1998). Subsequently, fractal geometry (Barthelmes et al., 2003; Flesch et al., 1999; Heath, Bahri, Fawell, & Farrow, 2006; Mandelbrot, 1982; Mohtaschemi et al., 2014; Yang et al., 2017) was used to describe the structure of clusters. The scaling relation between the mass of the aggregate and a characteristic length (in this case, its radius) is controlled by the mass fractal dimension, D_f: m ∝ R^{D_f} ...
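The mass-radius scaling m ∝ R^{D_f} quoted above implies that D_f can be recovered as the slope of log m versus log R. A minimal sketch with synthetic aggregate data (names and numbers are illustrative, not from the thesis):

```python
import numpy as np

def mass_fractal_dimension(radii, masses):
    """Fit m ~ k * R**Df in log-log space; the slope of the fitted
    line is the mass fractal dimension Df of the aggregates."""
    slope, _ = np.polyfit(np.log(radii), np.log(masses), 1)
    return slope

# Synthetic cluster data obeying m = 5 * R**2.1 exactly.
R = np.linspace(1.0, 50.0, 25)   # characteristic cluster radii
m = 5.0 * R**2.1                 # corresponding aggregate masses
print(round(mass_fractal_dimension(R, m), 3))   # 2.1
```

A D_f below 3 reflects the porous, fluid-enclosing cluster structure discussed in the excerpt; denser clusters push D_f toward 3.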
Thesis
Concrete is one of the most produced substances in the world. Its existence therefore comes with gargantuan economic and environmental implications. Any improvement along any stage of the material’s life cycle can consequently offer a very significant payoff. Cement -the hydraulic binder that makes the transition from a fluid to a hardened state possible- is arguably the most important and most complex component of a concrete mixture. This, in addition to the ascendance of non-traditional aggregates and chemical admixtures, as well as the increasing desire to reduce the overall quantities of cement employed in a given application, have created a number of engineering challenges. Among these challenges, rheology control is of particular importance given that it is critical during the initial fresh state and that it also plays an important role in the eventual structural performance of the hardened material. For this reason, the industry is rapidly approaching a new consensus: the information provided by traditional empirical testing methods is no longer sufficient. Consequently, the implementation of mechanistic models with reliable predictive power is seen as a new goalpost. These models are expected to provide a refined understanding of the dynamics driving rheological behaviour and ultimately to be valuable for the optimization of one or more properties of interest. This research is thus focused on the development of mechanistic models aimed at reproducing the rheological response of fresh cement pastes. To achieve this, a multiscale approach was preferred since it is known that interactive forces of molecular origin are crucial for the determination of the aggregation and breakage dynamics among the micrometre-sized granules in suspension that make up a cement paste. In turn, these dynamics decide the overall evolution of the suspension’s microstructure, which ends up driving the macroscopically observed rheological properties. 
In addition to these phenomena, the transient onset of hydration reactions gradually transforms the fussy colloidal network created between individual granules into a more permanent one. This eventually leads to a transition to the solid state. In order to maintain the tractability of the problem and considering the novelty of the modelling methodology proposed in this work, this part of the puzzle was not considered. Accordingly, the domain of applicability was restricted to the first hours after water addition, when these reactions are not yet significantly impacting rheology. As a starting point, a rheological and spectroscopic study of a series of cement clinker samples was performed as a data collection stage that allowed for the construction of a good picture of both typical flow behaviours as well as the phase composition of clinker granules. Subsequently, the population balance framework was leveraged to construct a rheological model capable of relating the cluster size distribution of the paste with its expected rheology. This approach has not been previously applied to the system of interest, but it is ideal for the description of the flow in highly concentrated suspensions. The built model was shown to depend on physically based parameters, and it was observed that it is capable of reproducing experimentally observed flow curves using only an initial size distribution as its input. Taking into consideration the previously stated importance of molecular interaction forces and the fact that the developed model exhibits a large dependency on parameters related with these forces, molecular dynamics was successively applied to obtain a highly detailed image of the physicochemical environment existing at the nanoscale in cement suspensions. Unlike what can be usually accomplished with other more traditional modelling techniques, the influence of the different phases present in a typical clinker was directly investigated. 
These results were finally used as one of the bases of a new integrated rheological model. This model estimates the expected interactive forces as a function of an experimentally measured surface phase distribution and then uses these calculations to refine the predictions provided by population balances. As in the first model proposal, the obtained transient evolution of the cluster size distribution is used to predict the macroscopic rheological profile of the paste under given experimental conditions. Overall, this thesis presents a promising way in which the multiple complex phenomena driving cement rheology can be successfully modelled and reconciled. The population balance framework, in conjunction with other advanced techniques such as molecular dynamics, was shown to provide valuable tools that potentially open the door to a fully mechanistic understanding of these systems.
... Recently, fractal nanometric modulation has been reported [14]. According to the mathematical definition, fractals are homogeneous and self-similar geometric objects, which can be used to describe many physical phenomena [15]. Fractals exist everywhere [16]: in certain plants such as Romanesco broccoli, in coastlines, in the distribution of species, and in the growth of trees, rivers, and lungs. ...
Article
Full-text available
Thermoelectric effects have attracted wide attention in recent years from physicists and engineers. In this work, we explore the self-similar patterns in the thermoelectric effects of mono-layer graphene-based structures, by using the quantum relativistic Dirac equation. The transfer matrix method has been used to calculate the transmission coefficient. The Landauer-Büttiker formalism and the Cutler-Mott formula were used to calculate the conductance, the Seebeck coefficient, and the power factor. We find self-similar behavior and the scale factors between generations in transport and thermoelectric properties. Furthermore, we implement these scale invariances as general scaling rules. We present a new analytical demonstration of self-similarity in the Seebeck coefficient. These findings can open outstanding perspectives for experimentalists to develop thermoelectric devices.
... In this case X has a power spectral density 1/f^{2H−1}. To adequately model a 1/f process, a fractional-order process has to be used, such as the fractional Brownian motion model (Mandelbrot [1982]). For fractal analysis it is helpful to understand the difference between fractional Brownian motion (fBm) and fractional Gaussian noise (fGn). ...
Article
Full-text available
This article explores the sentiment dynamics present in narratives and their contribution to literary appreciation. Specifically, we investigate whether a certain type of sentiment development in a literary narrative correlates with its quality as perceived by a large number of readers. While we do not expect a story's sentiment arc to relate directly to readers' appreciation, we focus on its internal coherence as measured by its sentiment arc's level of fractality as a potential predictor of literary quality. To measure the arcs' fractality we use the Hurst exponent, a popular measure of fractal patterns that reflects the predictability or self-similarity of a time series. We apply this measure to the fairy tales of H.C. Andersen, using GoodReads' scores to approximate their level of appreciation. Based on our results we suggest that there might be an optimal balance between predictability and surprise in a sentiment arc's structure that contributes to the perceived quality of a narrative text.
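The Hurst exponent mentioned above can be estimated by classical rescaled-range (R/S) analysis: average R/S over windows of several sizes and fit E[R/S] ∝ n^H in log-log coordinates. A rough illustrative sketch (not the authors' pipeline; raw R/S is known to be biased upward for short windows):

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent via rescaled-range analysis:
    E[R/S] ~ c * n**H, fitted in log-log coordinates."""
    x = np.asarray(x, dtype=float)
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())   # mean-adjusted profile
            r = z.max() - z.min()         # range of the profile
            s = w.std()                   # scale of the window
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # uncorrelated noise: theory says H = 0.5
print(round(hurst_rs(white), 2))
```

Persistent series (H > 0.5) would correspond to highly predictable sentiment arcs, while H near 0.5 reflects unstructured, noise-like development.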
... With increasing particle breakage, the particle size tended to exhibit a self-similar fractal distribution. Regarding the fractal distribution of the particles, the number of particles N(d) and the particle size d satisfy the relationship N(d) ∝ d^(−D) [37]. In the double logarithmic coordinate system, the number of particles versus particle size should therefore follow a straight line with a slope of −D. In the test, the number of particles could not be accurately determined, but the particle mass in each particle size range could be obtained by sieving. ...
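Because particle counts cannot be measured directly, the fractal dimension is commonly recovered from sieve masses via the equivalent mass-based relation M(<d)/M_T = (d/d_max)^(3−D), which follows from the number-size law above under an assumed particle density. A minimal sketch with synthetic sieve data (names and sieve sizes are illustrative):

```python
import numpy as np

def fractal_dimension_from_sieving(diameters, cum_mass_fraction):
    """Fit the mass-based fractal relation
        M(<d)/M_T = (d / d_max)**(3 - D)
    in log-log coordinates and return the fractal dimension D."""
    slope, _ = np.polyfit(np.log(diameters), np.log(cum_mass_fraction), 1)
    return 3.0 - slope

# Synthetic sieve data consistent with D = 2.5 and d_max = 2 mm.
d = np.array([0.075, 0.15, 0.3, 0.6, 1.18, 2.0])   # sieve sizes, mm
frac = (d / 2.0)**(3.0 - 2.5)                      # cumulative passing fraction
print(round(fractal_dimension_from_sieving(d, frac), 3))   # 2.5
```

With real sieve data the points scatter around the fitted line, and the quality of the fit itself indicates how close the gradation is to a true fractal distribution.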
Article
Full-text available
When granular soil particles are compressed, particles are broken, especially under high-stress conditions, and the particle size distribution is altered, which greatly increases the compressibility of particles. In this paper, through confined compression tests of gypsum and calcareous sand under a high stress, the influence of the initial void ratio and initial particle size of uniform graded sand on the yield stress and compression index of samples is explored. With increasing vertical stress, the particle size distribution transitions from an initial uniform distribution into a fractal distribution. The particle size further attains an obvious fractal distribution under a high stress, and the fractal dimension obtained at each stress level exhibits a linear relationship with the relative crushing rate. The fractal dimension depends on the particle breakage probability. According to Weibull statistics, the relationship between the fractal dimension of particle breakage and the input energy per unit mass is derived. With increasing input energy, the fractal dimension of particle breakage approaches the ultimate fractal dimension. However, for the same material, maintaining a constant maximum stress, the fractal dimension of the particles obtained after reorganization compression continues to increase.
... The measurements of the 3D fractal dimension and the degree of anisotropy are conducted by an algorithm implemented in Avizo [80]. The fractal dimension (i.e. an index for characterizing fractal patterns by quantifying their complexity as a ratio of the change in detail to the change in scale [81]) is used for quantification of the crack complexity, i.e. the bigger the fractal dimension, the more complex the cracks. As presented in Fig. 18 (c), the fractal dimension increases with an increasing number of impacts, and contrasting the different microcrack types shows matrix crack > interfacial crack > transgranular crack, consistent with the crack area properties, as reported in previous research [37]. ...
Article
Concrete materials are frequently exposed to extreme environments, such as high confining pressures (e.g., deep underground support), dynamic loadings (e.g., natural phenomena and human-induced events), and coupled confinement and dynamic loadings. The behaviour of concrete materials under such conditions creates challenges for the diagnosis and prognosis of structural changes, from local damage to catastrophic failure, which is critical to the safety and sustainability of civil infrastructure. This paper aims to explore the mechanical properties and progressive fracturing of concrete materials subjected to biaxial confinement and repetitive dynamic loadings. A triaxial Hopkinson bar (Tri-HB) system is used to apply the coupled loading conditions and obtain the dynamic stress-strain information by interpreting recorded stress-wave signals. Non-destructive evaluation (NDE) techniques, including ultrasonic measurement and synchrotron-based micro-computed tomography (micro-CT), are utilised to quantify progressive damage evolution and fracture characteristics. Digital volume correlation (DVC) and image-processing techniques are further applied to compute volume deformation fields and to classify microcrack types (i.e., matrix cracks, interfacial cracks and transgranular cracks). Results show that, with an increasing number of impacts, the dynamic peak stress decreases along the impact direction but increases along the lateral direction, while the peak strain values increase in both directions. The microcracks first initiate at the middle of the rear end of the specimen, continuously propagate along the impact direction, then develop at the top and bottom of the specimen, and eventually coalesce with the occurrence of shear sliding. The observations of microcracks are well validated by ultrasonic measurement.
The formation of shear bands was highly dependent on the propagation and coalescence of interfacial and matrix cracks, while transgranular cracks induced by compressive strain localization as displayed in DVC deformation fields play an essential role in the fracture energy under repetitive dynamic loadings.
... In this way, non-smooth self-similar functions were constructed in the last decades of the twentieth century. Nowadays fractal functions constitute a method of approximation of non-differentiable mappings, providing suitable tools for the description of irregular signals ([6,15-19]). Navascués [6] defined the fractal convolution of two maps defined on a real interval, on the basis of a special type of fractal interpolation function, as follows: Let Δ : t_0 < t_1 < ⋯ < t_N be any partition of the interval I = [t_0, t_N] and let I_n = [t_(n−1), t_n], for all n ∈ ℕ_N := {1, 2, ... ...
Article
Full-text available
The theory of metric spaces is a convenient and very powerful way of examining the behavior of numerous mathematical models. In a previous paper, a new operation between functions on a compact real interval, called fractal convolution, was introduced. The construction was done in the framework of iterated function systems and fractal theory. In this article we extract the main features of this association, and consider binary operations in metric spaces satisfying properties such as idempotency and inequalities relating the distance between operated elements with the same right or left factor (side inequalities). Important examples are the logical disjunction and conjunction in the set of integers modulo 2 and the union of compact sets, besides the aforementioned fractal convolution. The operations described are called, in the present paper, convolutions of two elements of a metric space E. We deduce several properties of these associations, derived from the initial conditions considered. Thereafter, we define self-operators (maps) on E by using the convolution with a fixed component. When E is a Banach or Hilbert space, we add some hypotheses inspired by the fractal convolution of maps, and construct in this way convolved Schauder and Riesz bases, Bessel sequences and frames for the space.
... In the East, mean Hölder exponents h̄ = −1.06 ± 0.02 point to antipersistent but slightly nonstationary behavior (9). Comparison of the Hölder exponents estimated before the year 2000 and after 2004 reveals that whereas no statistically significant changes in h̄ are observed (pseudo-P > 0.05), their SD decreased by a factor of 5 and 2 in the West and Great Plains, respectively, and nearly doubled in the East (fig. ...
Article
Full-text available
Recent fires have fueled concerns that regional and global warming trends are leading to more extreme burning. We found compelling evidence that average fire events in regions of the United States are up to four times the size, triple the frequency, and more widespread in the 2000s than in the previous two decades. Moreover, the most extreme fires are also larger, more common, and more likely to co-occur with other extreme fires. This documented shift in burning patterns across most of the country aligns with the palpable change in fire dynamics noted by the media, public, and fire-fighting officials.
... A fractal is an object with a non-integer dimension and a self-similar structure; in general, fractals are defined by an iterative process rather than by an explicit mathematical formula [1,2]. The fractal dimension of a self-similar object composed of N copies of itself, each scaled by a ratio s < 1, is defined by D = log(N) / log(1/s) ...
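For sets that are not exactly self-similar, the same idea is applied numerically by box counting. A minimal numpy sketch on a synthetic straight line, whose dimension should come out close to 1 (grid size and box sizes are arbitrary choices):

```python
import numpy as np

def box_count_dimension(grid, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension estimate for a square 2-D boolean occupancy grid."""
    n = grid.shape[0]
    counts = []
    for s in sizes:
        # Group cells into s-by-s boxes and count boxes holding any occupied cell.
        boxes = grid[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Dimension = slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log([1.0 / s for s in sizes]), np.log(counts), 1)
    return slope

# Sanity check on a straight (diagonal) line, whose dimension should be ~1.
n = 256
grid = np.zeros((n, n), dtype=bool)
grid[np.arange(n), np.arange(n)] = True
D_line = box_count_dimension(grid)
```

A filled square would give a slope near 2, and a fractal set such as a rasterized Koch curve would land strictly between 1 and 2.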
Preprint
Full-text available
In this short note we consider a possible application of fractal theory and fractional dynamics in the physics of viruses. In particular, we investigate the special case of the novel coronavirus. We discuss some physical characteristics of this virus, emphasize the decreasing fractal dimension of the virus as a powerful strategy for a possible nanodrug design, and finally present a fractional modification of the dynamical model of transmission of this virus.
... Supposing Ψ = Φ(Z), Φ converges to a finite limit as Z goes to zero [37]. Therefore, equation (5) corresponds to a self-similar solution of the second kind, defined as one in which the power exponents of the self-similar variables cannot be determined by dimensional analysis and which mathematically corresponds to a fractal [39]. ...
Preprint
Full-text available
In this Letter, a crossover of scaling laws is described as a result of the interference of a self-similar variable of a higher class of self-similarity with the dynamical impact of a solid sphere onto a viscoelastic surface. All the physical factors, including the size of the spheres and the impact velocity, are successfully summarized into primal dimensionless numbers, which construct a self-similar solution of the second kind representing the balance between dimensionless numbers. The self-similar solution gives two different scaling laws by the perturbation method, describing the crossover. These theoretical predictions are compared with experimental results and show good agreement. It is suggested that a hierarchical structure of similarity plays a fundamental role in the crossover, which offers a fundamental insight into self-similarity in general.
... There has to be some surface quality for this part of the visual system to recognize an object. It has been shown that forms in nature have a fractal quality, that fractal images have an aesthetic quality and that the visual system has evolved to respond to natural conditions [24,25,26]. Therefore, it can be inferred that the part of the visual system that calculates balance is also most sensitive to fractal forms. ...
Preprint
Full-text available
This paper identifies a specific pattern of luminance in pictures that creates a low-level, non-subjective neuro-aesthetic effect and provides a theoretical explanation for how it occurs. Pictures evoke both a top-down and a bottom-up visual percept of balance. Through its effect on eye movements, balance is a bottom-up conveyor of aesthetic feelings of unity and harmony in pictures. These movements are influenced by the large effects of saliency, so that it is difficult to separate out the much smaller effect of balance. Given that balance is associated with a unified, harmonious picture, and that there is a historically documented pictorial effect known to painters that does just that, it was thought that such pictures are perfectly balanced. Computer models of these pictures were found to have bilateral quadrant luminance symmetry with the lower half lighter by a factor of ~1.07 ± ~0.03. A top-weighted center of quadrant luminance calculation is proposed to measure imbalance. A study was done comparing identical pictures in two different frames with respect to whether they appeared different, given that the sole difference is balance. Results show that among observers, mostly painters, there was a significant correlation between average pair imbalance and observations that two identical pictures appeared different, indicating at a minimum that the equation for calculating balance was correct. For those who can disregard saliency, the effect is the result of the absence of forces on eye movements created by imbalance. This unaccustomed force of imbalance causes fatigue when viewing pictures carefully. A model is presented of how an equation of balance could be derived from any luminous object so that its approximate size, location and motion can be followed. Using this model, the center of balance in non-rectangular pictures was determined.
... They are progressively replacing the use of distribution-dependent techniques such as probability or quantile-quantile plots. The number-size (N-S) methodology was first proposed by Mandelbrot (1983) based on the power-law relationship between the magnitude of values (ρ) (such as stream sediment geochemical concentrations) and the cumulative number of samples (N) or observations at or below each value for ρ: N(ρ) ∝ ρ^(−D), where D is the fractal dimension ...
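The fitting step behind an N-S analysis can be sketched as follows; the synthetic heavy-tailed "concentrations", the thresholds, and the count cutoff are all illustrative, not taken from the surveys discussed here:

```python
import numpy as np

# Synthetic stream-sediment 'concentrations' (heavy-tailed, power-law-like).
rng = np.random.default_rng(42)
conc = rng.pareto(1.6, 5000) + 1.0   # classical Pareto tail with exponent 1.6

# Number-size model: N(rho) = number of samples with magnitude >= rho.
rho = np.logspace(0, np.log10(conc.max()), 30)[:-1]
N = np.array([(conc >= r).sum() for r in rho])

# In log-log coordinates the N-S relation is a straight line of slope -D.
mask = N > 10                        # keep only well-populated counts
slope, _ = np.polyfit(np.log(rho[mask]), np.log(N[mask]), 1)
D = -slope
```

In practice the log-log plot often shows several straight-line segments, and the breakpoints between them serve as the thresholds separating background from anomalous populations.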
Article
In regional exploration for Au mineralization using stream sediment geochemistry, multielement analysis following cyanide leaching of bulk samples or aqua regia digestion of the <180 μm fraction are the two most common sampling and analytical methods. Using data from extensive regional stream sediment and rock chip surveys in Western Turkey, a comparison is made of the efficiency of fractal modelling of various spatial and frequency-based statistical methods to isolate patterns or populations related to known mineral deposits. This includes different combinations of pre-processing of raw data, such as logistic transformations and principal component analysis, together with fractal, U-spatial statistics and singularity index modelling. Due to variations in regional geology and the effects of a substantial number of base and precious metal deposits in the region, all approaches indicate the presence of multiple geochemical populations with clearly defined thresholds. The efficiency in detecting known deposits is higher for spatially based methods than for simpler frequency distribution-based techniques, especially if weak geochemical anomalies are excluded to limit the risk of generating false positives. Maximum efficiency (between 70 and 80%) in linking the spatial distribution of anomalous geochemical populations with known Au ± Ag mineral deposits, in both BLEG and the <180 μm fraction, was obtained using classification based on singularity indices, or on logistic-transformed data with fractal analysis or U-spatial statistics. The results demonstrate the advantage of spatially based anomaly detection methods applied to multivariate data over simpler frequency distribution methods applied to univariate data.
... With respect to ore-forming processes, fractal and multifractal concepts (Mandelbrot, 1983) have been widely used and their practical benefits have been demonstrated (e.g., Cheng et al., 1994;Cheng, 2007;Wang and Zuo, 2019). The concentration-area (C-A; Cheng et al., 1994) fractal model has been widely used to classify spatial exploration data such as geochemical anomalies. ...
Article
Identification of geochemical anomalies using singularity theory is a topic of interest in the field of mineral exploration. The ordinary singularity mapping technique does not consider the critical role of geological structures in the ore-forming environment. Therefore, models of spatially weighted singularity have been developed through the incorporation of fault systems, which might play an important role in ore-forming processes and, thus, are key factors in spatial analysis of mineralization indicators. In this study, we proposed a strengthened singularity mapping technique that was spatially enhanced and weighted by mineralization-efficient fault systems. For this, we applied distance distribution analysis to distinguish "efficient" (linked to mineralization) and "inefficient" (not linked to mineralization) fault systems, and then used the former to generate the strengthened spatially weighted singularity models of geochemical indicator elements. The rationale is that faults with different orientations are the result of various geological processes, of which only some are linked to the ore-forming processes. We also integrated the strengthened models of geochemical anomalies by using random forest (RF) and principal component analysis (PCA) techniques to produce a stronger geochemical clue. Comparison of the integration results demonstrated that the former, a strengthened anisotropic geochemical singularity modeling technique proposed in this paper through the incorporation of efficient fault systems and the RF method, is superior to the latter, an existing anisotropic geochemical singularity modeling technique through the incorporation of all faults and the PCA method. Data related to porphyry copper mineralization in the Sarduiyeh district, Iran, were used to illustrate the procedure applied.
Article
Full-text available
Urban traffic congestion continues to be a major problem in urban areas and is considered a perennial issue for transport planners and traffic engineers when designing mitigation measures and strategies. For any mitigation measure to be proposed or designed, it is necessary to measure the congestion and identify the factors that cause it. The urban road network and the factors causing urban traffic congestion are not regular in character; instead, they are inconsistent and irregular, exhibiting scale variance when measured at smaller scales. The capacities of the road lanes fall short of handling the traffic efficiently, and the actual capacities of the lanes are much lower than the practical capacities. The approach of fractal analysis best fits the analysis of features showing such characteristics. Hence, this paper presents a study in which an existing link in an urban road network is analyzed using the concepts of fractal geometry and fractal dimensions. The complete link is divided into smaller segments, and fractal dimensions are calculated for the lane capacities. The fractal dimensions so determined show that the lane capacity within a link is inconsistent and irregular.
Article
Full-text available
This paper introduces a different perspective on Neutrosophic Fractals and Neutrosophic Soft Fractals, merging the principles of Neutrosophic Logic, Soft Set theory, and Fractal Geometry to address indeterminacy in complex, self-similar structures, specifically the Von Koch curve and the Sierpinski triangle. It explores the complex qualities of Neutrosophic soft sets by incorporating attributes of falsity, indefiniteness, and truth into union and intersection operations. The research elucidates the interplay between Neutrosophic Logic and fractal geometry, leading to more precise modeling of complex systems. By proving theorems and providing examples, it examines the intricate interactions between membership characteristics in these fractal structures, demonstrating self-similarity. Fractal geometry is applied innovatively to improve the representation of uncertainty, indeterminacy, and falsity in Neutrosophic Logic, enhancing mathematical modeling techniques. Results show that the Sierpinski triangle provides a better representation than the Koch curve.
Article
Full-text available
Modern astronomical and advanced wireless communication systems necessitate the utilization of array antennas that provide programmable multibeam capabilities, broadband coverage, high-end coverage range, high gain, reduced side-lobe levels with broader side-lobe level angles, enhanced signal-to-noise ratio, and compact dimensions. This has led to many array antenna theories, including fractal array antennas. To enhance comprehension of the operational mechanisms of fractal antennas, an introductory exposition of the underlying theoretical principles is provided. This paper then offers an in-depth analysis of current developments in fractal array antenna design. In addition, comparative research on the present state of the art in antenna miniaturisation, gain, and bandwidth augmentation with fractal arrays is presented.
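A classic illustration of why fractal arrays are attractive is the triadic Cantor linear array, whose array factor factorizes self-similarly, so a large array's pattern is a product of scaled copies of a two-element pattern. A small numerical check (uniform excitation, unit spacing, all parameters illustrative):

```python
import numpy as np

def cantor_positions(order):
    """Element positions of a triadic Cantor linear array (2**order elements)."""
    pos = np.array([0])
    for m in range(order):
        pos = np.concatenate([pos, pos + 2 * 3 ** m])  # keep outer thirds, drop the middle
    return pos

def array_factor(positions, u):
    """Array factor at normalized angular variable u, uniform excitation."""
    return np.sum(np.exp(1j * np.outer(u, positions)), axis=1)

order = 4
u = np.linspace(-np.pi, np.pi, 501)
af_direct = array_factor(cantor_positions(order), u)

# Self-similarity: the factor (1 + exp(j * 2 * 3**m * u)) reappears at every scale,
# so the 16-element pattern is a product of `order` scaled two-element patterns.
af_factored = np.ones(len(u), dtype=complex)
for m in range(order):
    af_factored *= 1.0 + np.exp(1j * 2 * 3 ** m * u)
```

The factorized form is what gives fractal arrays their rapid pattern computation and multiband behaviour relative to dense uniform arrays of the same aperture.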
Preprint
Full-text available
When describing the tribological behaviour of technical surfaces, the need for full-length-scale microtopographic characterization often arises. The self-affinity of surfaces, its characterization by a fractal dimension, and the implementation of that dimension in tribological models are commonly used. The goal of our present work was to determine the frequency range of fractal behaviour of surfaces by analysing microtopographic measurements of a brake plunger. We also wanted to know whether bifractal and multifractal behaviour can be detected in real machine parts. As a result, we developed a new methodology for determining the fractal range boundaries to separate nano- and micro-roughness. To reach our goals, we used an atomic force microscope (AFM) and a stylus instrument to provide measurements over a wide frequency range (19 nm - 3 mm). As a result of the power spectral density (PSD) based fractal evaluation, we found that the examined surface could not be characterized by a single fractal dimension, and we developed a methodology for establishing the limits of validity of each fractal dimension.
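A PSD-based fractal evaluation of a profile can be sketched as follows; the synthetic self-affine trace and the 1-D relation D = (5 − β)/2 for a PSD ∝ f^(−β) are textbook assumptions, not the authors' measured data:

```python
import numpy as np

# Build a synthetic self-affine profile with a prescribed spectral exponent beta:
# random phases, amplitude ~ f^(-beta/2), so PSD ~ f^(-beta).
rng = np.random.default_rng(7)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)[1:]          # skip the DC bin
beta = 2.0                                      # target spectral exponent (illustrative)
amps = freqs ** (-beta / 2.0)
phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
spectrum = np.concatenate(([0.0], amps * np.exp(1j * phases)))
profile = np.fft.irfft(spectrum, n)

# PSD-based evaluation: slope of log PSD vs log f gives -beta; for a 1-D
# self-affine profile the fractal dimension is D = (5 - beta) / 2.
psd = np.abs(np.fft.rfft(profile)) ** 2
slope, _ = np.polyfit(np.log(freqs[:-1]), np.log(psd[1:-1]), 1)  # skip DC and Nyquist
D = (5.0 + slope) / 2.0
```

Bifractal behaviour of the kind the abstract describes would show up here as two distinct straight-line segments in the log PSD, each fitted over its own frequency band with its own D.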
Article
Full-text available
In this paper, a comprehensive review of the wireless body area network (WBAN) is provided. A review of the WBAN architectures, standard network topologies, and WBAN communication protocols is discussed in detail. Also, the security requirements of WBAN, security threats and types of attacks, and the authentication methods used in WBAN are discussed. The paper also includes very detailed coverage of antenna types, antenna designs, and flexible antennas used in WBAN, with some design considerations and comparisons. Some new energy harvesting technologies, materials used for energy harvesting, and energy management are also discussed. Energy harvesting and power management is an ever-growing area of research. Despite the fact that there are many nanogenerator-based energy harvesting methods, the demand for more efficient energy harvesting mechanisms is ever-increasing. The paper has an extensive discussion of energy harvesting and power management methods. Subsequently, some reviews of recent developments in wearable sensors and novel materials for developing wearable sensors are discussed. Finally, the application areas of WBAN are discussed.
Preprint
Full-text available
Filling mining technology is an important representative technology for realizing green and low-carbon mining. The filling body exhibits distinct rheological characteristics under the long-term action of formation loads and groundwater seepage. In order to study the creep characteristics of the filling body under different moisture contents and reveal its aging-mechanical properties, an improved Bingham fractional creep model was established to describe the whole creep process, based on the traditional Bingham model. Based on the experimental data of gangue cemented backfill under different moisture contents, the parameters of the creep model are obtained by user-defined function fitting and the least squares method. The results show that the improved Bingham fractional creep model can describe well the whole creep process of the filling body under different moisture contents. Compared with the traditional Bingham model, the fitting degree is higher, which solves the problem that the Bingham model cannot describe the nonlinear creep stage. Model parameters α and ξ increase with increasing axial stress and moisture content. Under the same moisture content, η gradually increases with increasing axial stress. This work provides a useful reference for studying the mechanical properties and creep constitutive models of water-bearing filling bodies.
Article
Objective: Automatic human alertness monitoring has recently become an important research topic with applications in many areas such as the detection of driver fatigue, the monitoring of monotonous tasks that require a high level of alertness (such as traffic control and nuclear power plant monitoring), and sleep staging. In this study, we propose that the balanced dynamics of electroencephalography (EEG), the so-called EEG temporal complexity, is a potentially useful feature for identifying human alertness states. Recently, a new signal entropy measure, called Range Entropy (RangeEn), was proposed to overcome some limitations of two of the most widely used entropy measures, namely Approximate Entropy (ApEn) and Sample Entropy (SampEn), and showed its relevance for the study of time-domain EEG complexity. In this paper, we investigated whether RangeEn holds discriminating information associated with the human alertness states, namely Awake, Drowsy, and Sleep, and compared its performance against those of SampEn and ApEn. Approach: We used EEG data from 60 healthy subjects of both sexes and different ages acquired during whole-night sleep. Using a 30-second sliding window, we computed the three entropy measures of EEG and performed statistical analyses to evaluate the ability of these entropy measures to discriminate among the different human alertness states. Main results: Although all three entropy measures contained useful information about human alertness, RangeEn showed a higher discriminative capability compared to ApEn and SampEn, especially when using EEG within the Beta frequency band. Significance: Our findings highlight the evolution of EEG temporal complexity through the human alertness states. This relationship can potentially be exploited for the development of automatic human alertness monitoring systems and diagnostic tools for different neurological and sleep disorders, including insomnia.
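For reference, the Sample Entropy (SampEn) baseline the study compares against can be sketched directly from its definition; the window length m, tolerance r, and the test signals below are conventional but arbitrary choices, not the study's parameters:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn: -log of the conditional probability that sequences matching
    for m points (within tolerance r, Chebyshev distance) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def match_count(length):
        # Use the same number of templates for both lengths so counts are comparable.
        templates = np.array([x[i:i + length] for i in range(len(x) - m - 1)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += np.sum(dist <= r) - 1   # exclude the self-match
        return total

    b = match_count(m)       # matched pairs of length m
    a = match_count(m + 1)   # matched pairs of length m + 1
    return -np.log(a / b)

rng = np.random.default_rng(1)
se_regular = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 600)))  # predictable: low
se_noise = sample_entropy(rng.standard_normal(600))                   # irregular: high
```

Low values indicate a regular, predictable signal and high values an irregular one, which is the complexity axis along which the alertness states are separated.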
Article
Full-text available
We consider the statistical properties of solutions of the stochastic fractional relaxation equation and its fractionally integrated extensions that are models for the Earth's energy balance. In these equations, the highest-order derivative term is fractional, and it models the energy storage processes that are scaling over a wide range. When driven stochastically, the system is a fractional Langevin equation (FLE) that has been considered in the context of random walks where it yields highly nonstationary behaviour. An important difference with the usual applications is that we instead consider the stationary solutions of the Weyl fractional relaxation equations whose domain is -∞ to t rather than 0 to t. An additional key difference is that, unlike the (usual) FLEs – where the highest-order term is of integer order and the fractional term represents a scaling damping – in the fractional relaxation equation, the fractional term is of the highest order. When its order is less than 1/2 (this is the main empirically relevant range), the solutions are noises (generalized functions) whose high-frequency limits are fractional Gaussian noises (fGn). In order to yield physical processes, they must be smoothed, and this is conveniently done by considering their integrals. Whereas the basic processes are (stationary) fractional relaxation noises (fRn), their integrals are (nonstationary) fractional relaxation motions (fRm) that generalize both fractional Brownian motion (fBm) as well as Ornstein–Uhlenbeck processes. Since these processes are Gaussian, their properties are determined by their second-order statistics; using Fourier and Laplace techniques, we analytically develop corresponding power series expansions for fRn and fRm and their fractionally integrated extensions needed to model energy storage processes. We show extensive analytic and numerical results on the autocorrelation functions, Haar fluctuations and spectra. We display sample realizations. 
Finally, we discuss the predictability of these processes which – due to long memories – is a past value problem, not an initial value problem (that is used for example in highly skillful monthly and seasonal temperature forecasts). We develop an analytic formula for the fRn forecast skills and compare it to fGn skill. The large-scale white noise and fGn limits are attained in a slow power law manner so that when the temporal resolution of the series is small compared to the relaxation time (of the order of a few years on the Earth), fRn and its extensions can mimic a long memory process with a range of exponents wider than possible with fGn or fBm. We discuss the implications for monthly, seasonal, and annual forecasts of the Earth's temperature as well as for projecting the temperature to 2050 and 2100.
Article
Full-text available
This paper presents a short history from Newton's Philosophiae Naturalis Principia Mathematica to the skyrmions of Skyrme. It is shown that classical mechanics does not exclude skyrmions (topologically stable field configurations of a certain class of non-linear sigma models, for example the nucleon model). Under certain conditions, Newtonian theory becomes fundamental to the building of modern physics theories (such as quantum mechanics, field theories, etc.).
Article
Full-text available
Assimilating any complex economic system with a fractal, in Mandelbrot's most general sense, non-differentiable behaviors in its economic dynamics are analyzed. As such, economic dynamics in the form of various Schrödinger-type regimes imply "holographic implementations" of the economic processes through group invariance of SL(2R) type. Then, by means of the previous group invariance as a synchronization group between the entities of any economic system, both the phases and the amplitudes of the entities are affected from a homographic perspective. The usual "synchronization", manifested through the delay of the amplitudes and phases of the entities of the economic system, represents here only a particular case. In a special case of synchronization of economic system entities, given by a Riccati-type gauge, period doubling, damped oscillations, self-modulation and chaotic regimes emerge as natural behaviors in the dynamics of the economic processes.