Article

Mathematical Biophysics: Physico-Mathematical Foundations of Biology


Abstract

3rd ed. Vol. 1, 1960: 26 + 488 + 15 pp. Vol. 2, 1960: 12 + 462 + 15 pp.


... In this paper, we develop an approach that is focused on a different aspect of information warfare, namely choosing the positions of the individuals in the confrontation. It is based on the model proposed in [25], built on Rashevsky's neurological scheme [26,27]. ...
... The model is based on Rashevsky's neurological scheme [26,27], which describes the formation of an individual's reaction to incoming stimuli, taking his attitude into account. With regard to the subject of propaganda warfare between two parties, the reaction is the manifest position of the individual, i.e. his participation in the spread of information in support of one of the parties. ...
... Here X₀ is the given number of supporters of the party at the initial moment of time, and the parameters b₁ > 0, b₂ > 0 characterize the intensity of the parties' media propaganda; for clarity, in this work it is assumed that b₁ > b₂. The positive constants a, A, C, … are introduced in the neurological model [26,27], which provides a neurological sense for these parameters. We can also propose a sociological interpretation. ...
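The cited papers formulate the confrontation with integro-differential equations; as a rough, hypothetical illustration of how propaganda intensities like b₁ and b₂ act, here is a minimal one-equation sketch (the function name, the recruitment/defection form and all numerical values are assumptions for illustration, not the authors' model):

```python
def simulate_support(x0, b1, b2, n_total=1000.0, dt=0.01, steps=5000):
    """Euler integration of a toy two-party support model:
    dX/dt = b1*(N - X) - b2*X, where b1, b2 play the role of the
    two sides' propaganda intensities and N is the population size."""
    x = x0
    for _ in range(steps):
        x += dt * (b1 * (n_total - x) - b2 * x)
    return x

# With b1 > b2, support settles near the equilibrium N*b1/(b1 + b2).
x_final = simulate_support(x0=100.0, b1=0.4, b2=0.2)
```

The equilibrium split N·b₁/(b₁ + b₂) makes the "for clarity, b₁ > b₂" assumption concrete: the side with the stronger propaganda ends up with the larger share.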
... It is fair to say that a large fraction of human intellectual activity over the past several millennia has been concerned with explicating the nature of modeling as an essence of understanding itself (see for instance references [1][2][3][4][5][6][7][8][9][10][11][12][13][14] as a representative sample of scholarly work). Arguably a vast majority (if not all) of these efforts can be roughly divided into four different academic or cultural trends that address four different kinds of questions. ...
... The lack of clear answers to these questions, and a reputation as a descriptive "non-quantitative" science, have hounded biology for all these centuries, during which physicists boasted that their measurements were relevant to electricity, heat, the atom and even the entire Universe. Today, it is clear that the problem of measurement is not simple even in physics¹². In very general terms, the kernel of this problem is assigning an appropriate metric space to selected representations of the system and then using a properly chosen metric as a measure of distance between pseudo-states. ...
... See references [24,36,40,44] for more details on these issues in both methodological and historical context. ¹² Of course, measuring the parameters of simple (non-convoluted) mechanisms that serve as models in physics is easier to handle than even imagining what parameters should characterize a convoluted (complex) organic system. However, this apparent ease of physical measurement is due to the relative facility of finding measurable parameters, not to any simplicity of the measurement problem itself. ...
Article
Full-text available
The theory of surrogacy is briefly outlined as one of the conceptual foundations of systems biology that has been developed over the last 30 years in the context of the Hertz-Rosen modeling relationship. Conceptual foundations of modeling convoluted (biologically complex) systems are briefly reviewed and discussed in terms of current and future research in systems biology. New as well as older results that pertain to the concepts of modeling relationship, sequence of surrogacies, cascade of representations, complementarity, analogy, metaphor, and epistemic time are presented together with a classification of models in a cascade. Examples of anticipated future applications of surrogacy theory in the life sciences are briefly discussed.
... Artificial life [Rennard, 2002] is a world in which one speaks of biomorphs, in which authors title their articles How I created Life in a virtual universe [Ray, 1993], in which one 'learns' about Life by simulating highly artificial systems. It is also this connection between self-organized physico-chemical processes and the presence of regular patterns in living things that already fascinated D'Arcy Thompson, Nicolai Rashevsky and then Alan Turing, among many others, pushing them to propose physico-chemical causes for the generation of living forms and to model them as such [D'Arcy Thompson, 1917; Rashevsky, 1940 and 1948; Turing, 1952; Hodge, 1992; Murray, 1993]; it is also what has continued to fascinate generations of students and researchers who, inspired by the work of these pioneers of theoretical biology, keep manipulating reaction-diffusion processes, virtual agents, geometric transformations (morphing), and so on. To obtain such self-organization by simulation is already to obtain a piece of life! I cannot help but recall the anecdote of Louis Bec, who recounted having lost one of his (artificial) 'creatures' that he kept 'alive' in one of his virtual 'aquariums' [Comm. SFBT, 2014]. ...
... I would, however, like to offer a suggestion for the structuring of space at the origins of life: spatial self-organization. In 1952 [Turing, 1952], after the precursor works of Kolmogorov and Rashevsky [Kolmogorov, 1937; Rashevsky, 1940 and 1948], Alan Turing was the first to theorize the spatio-temporal self-organization of stationary chemical patterns, proposing that particular chemical reactions (namely an activator that activates its own formation and that of an inhibitor which, in turn, inhibits the formation of the activator), coupled with differential diffusion of the chemical species in question, can generate self-organized patterns in the concentrations of the species present. Beyond Turing patterns, many reaction/interaction phenomena coupled with transport of matter act in nature, causing structuring and compartmentalization: sedimentation coupled with reactions, excitable media such as the Belousov-Zhabotinsky or Bray reactions [Bray, 1921; Belousov, 1951 and 1958; Zhabotinski, 1967], trail systems (see [Gardner, 1970]), etc. ...
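The activator-inhibitor mechanism described above can be made concrete numerically. The following is a minimal, hypothetical Gierer-Meinhardt-style sketch (the kinetics, coefficients and grid size are my own assumptions, chosen only so that the fast-diffusing inhibitor destabilizes the uniform state, which is the essence of Turing's mechanism):

```python
import numpy as np

def turing_step(a, h, da=0.2, dh=5.0, dt=0.02):
    """One explicit Euler step of a Gierer-Meinhardt-style
    activator (a) / inhibitor (h) pair on a 1-D periodic lattice.
    The inhibitor diffuses 25x faster than the activator."""
    def lap(u):  # discrete Laplacian with periodic boundaries
        return np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
    ra = a * a / h - a        # activator: autocatalytic, damped by inhibitor
    rh = a * a - 2.0 * h      # inhibitor: produced by activator, decays
    return a + dt * (da * lap(a) + ra), h + dt * (dh * lap(h) + rh)

rng = np.random.default_rng(0)
a = 2.0 + 0.01 * rng.standard_normal(128)  # perturb the uniform state a = h = 2
h = 2.0 * np.ones(128)
for _ in range(4000):
    a, h = turing_step(a, h)
# a now carries a spatial pattern grown from near-uniform noise
```

Linearizing around the homogeneous steady state (a, h) = (2, 2) shows it is stable without diffusion but unstable to a band of finite wavenumbers once dh ≫ da, so the tiny initial noise is amplified into a stationary pattern.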
Thesis
I address in this manuscript theoretical, methodological and philosophical considerations based on my current and past, experimental and theoretical research on the organisation of life, its origins and its evolution. Of particular interest is how the theoretical representations of the very nature of living matter and the origins of life have consequences when using modeling to understand and control living matter. This reflection and research are carried out by my students, collaborators and myself in the context of the emergence and complexification of life. 
Biological systems from prebiotic ages (e.g. proto-metabolisms, proto-genetics) to the present day (e.g. gene regulation networks, cellular and subcellular organisations, neural networks) have evolved but continue to depend on existing systems. The study of structural and dynamical properties (complexity, robustness) but also evolutionary properties of formal systems such as Boolean networks or other artificial life systems, and the search for sets of systems with common structural or dynamical properties, allows us to introduce a holistic vision of biological systems and to think of their evolution and variations in terms of trajectories in a morphogenetic landscape, i.e. a meta-network. An evolutionary or functional path from one system to another is allowed by their structural and dynamical proximities. In our work, using such formal systems as abstractions of biological systems, we study (i) how they evolve while preserving ancestral traits and behaviours, (ii) what determines the evolvability of these networks, their respective robustness to structural changes and their relation to structural and functional complexity, (iii) how trajectories can exist under viability constraints in such morphogenetic landscapes, and (iv) how systems and their associated behaviours can combine during these evolutions. The essay I propose falls within the general framework of theoretical biology and the philosophy of science. I try to address questions as varied as scientific methodology, humans in science, the organisation of life, the nature of models, control, purpose in life and many others. While using machine-based formalisms such as Boolean networks, I aim to legitimize the need to move away from mechanistic thinking and the usual conception of the biological world, i.e. Life as a Machine, and the associated theoretical and experimental tools we use to study it. 
I actually propose to consider an alternate view of the nature and functioning of living matter - from its origins to the present day - based more on what I call the rare, the weak and the amorphous.
... Integral Biomathics considers itself a continuation and extension of the research line traced by Rashevsky [15][16][17][18][19][20], Waddington-Goodwin [21][22][23], Varela-Maturana-Uribe [24], Rosen-Louie [25][26][27][28][29][30][31] and others [39][40][41][42]. Its core insight is that the key to understanding living systems is their structured development as 'organic' multi-level complexes, captured by means of appropriate biomathematical and biocomputational formalisms. ...
... In particular, this will be provided through the careful examination of the main principles and characteristics of, successively, MES, WLI and WLIMES. Let us note that, in the past, CT has been criticized for its limited capability to model emergent and quantum phenomena in living systems using organizational closures (causal entailments) and modelling relations known from the works of Rashevsky [15][16][17][18][19][20] and Rosen [25][26][27][28][29] on Relational Biology and (M,R)-systems. Yet recently, CT has made significant progress with new extensions and syntheses suggested, e.g. in the research of Letelier [196], Kineman [197], Louie [30][31] and Longo [198][199], and in particular in the work of A. C. Ehresmann and J.-P. ...
Article
Full-text available
Forty-two years ago, Capra published "The Tao of Physics" (Capra, 1975). In this book (page 17) he writes: "The exploration of the atomic and subatomic world in the twentieth century has … necessitated a radical revision of many of our basic concepts" and that, unlike 'classical' physics, sub-atomic and quantum "modern physics" shows resonances with Eastern thought and "leads us to a view of the world which is very similar to the views held by mystics of all ages and traditions." This article stresses an analogous situation in biology with respect to a new theoretical approach for studying living systems, Integral Biomathics (IB), which also exhibits some resonances with Eastern thought. Building on earlier research in cybernetics(1) and theoretical biology,(2) IB has been developed since 2011 by over 100 scientists from a number of disciplines, who have been exploring a substantial set of theoretical frameworks. From that effort, the need was identified for a robust core model utilizing advanced mathematics and computation adequate for understanding the behavior of organisms as dynamic wholes. To this end, the authors of this article have proposed WLIMES (Ehresmann and Simeonov, 2012), a formal theory for modeling living systems integrating both the Memory Evolutive Systems (Ehresmann and Vanbremeersch, 2007) and the Wandering Logic Intelligence (Simeonov, 2002b). Its principles will be recalled here with respect to their resonances with Eastern thought.
... In this paper we consider models of information warfare [1,2] based on Rashevsky's neurological scheme [3]. These models address the simplest case of an agitation or advertising campaign, in which individuals choose one of two positions on some issue, for example, which of two parties, L or R, to support in an election. ...
Conference Paper
Full-text available
In this paper a discrete modification of the continuous model of information warfare based on Rashevsky's neurological scheme is proposed. The model is obtained by replacing integro-differential equations with cellular automata. The new model allows for the influence of small groups on an individual's opinion, as well as the internalization of public opinion by an individual. The simulation system based on the proposed model was used to search for optimal control in certain scenarios of information warfare, namely the problems of optimal distribution of propaganda intensity in the case of one-time destabilization and of delayed (or advanced) reactions to changes in the propaganda intensity of the opposing side.
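A cellular-automaton treatment of opinion spread, in the spirit of the abstract above, might look like the following toy sketch (the update rule, the propaganda bias b1 - b2 and all parameter values are illustrative assumptions of mine, not the paper's automaton):

```python
import numpy as np

def step(opinions, b1, b2, rng):
    """One synchronous update of a toy opinion cellular automaton.

    Each cell holds opinion 0 or 1.  A cell adopts opinion 1 with a
    probability that grows with the local share of 1-neighbours,
    biased by the propaganda intensities b1 (for side 1) and b2."""
    left, right = np.roll(opinions, 1), np.roll(opinions, -1)
    local = (left + right + opinions) / 3.0       # local share of side 1
    p_one = np.clip(local + b1 - b2, 0.0, 1.0)    # propaganda shifts the odds
    return (rng.random(opinions.size) < p_one).astype(int)

rng = np.random.default_rng(1)
state = rng.integers(0, 2, size=500)   # random initial opinions on a ring
for _ in range(200):
    state = step(state, b1=0.15, b2=0.05, rng=rng)
share = state.mean()                   # final share of side 1
```

Because b1 > b2 tilts every cell's adoption probability, the automaton drifts toward the state where side 1 dominates, mirroring the role of propaganda intensity in the continuous model.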
... Theoretical developments within mathematical biology by McCulloch and Pitts (1943) revealed a first major result: the units of cognition, neurons, could be described within a formal framework. Formal neurons were described in terms of threshold units, largely inspired by the state-of-the-art knowledge of real neurons (Rashevsky, 1960). Over the last decades, major quantitative advances have been obtained by combining neuron-inspired models with multilayer architectures (LeCun et al., 2015) and the physics of neuromorphic computing (Indiveri and Liu, 2015; Marković et al., 2020). ...
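The threshold units mentioned above are easy to make concrete: a McCulloch-Pitts formal neuron fires exactly when the weighted sum of its binary inputs reaches a threshold, and the classic logic gates follow immediately (a standard textbook construction, sketched here for illustration):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts formal neuron: outputs 1 iff the weighted
    sum of the binary inputs reaches the threshold, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Logic gates emerge from weights and thresholds alone:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)
```

Since AND, OR and NOT suffice for any Boolean function, networks of such units can in principle compute anything a digital circuit can, which is the sense in which neurons became formally describable "units of cognition".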
Article
Full-text available
Ordinary computing machines prohibit self-reference because it leads to logical inconsistencies and undecidability. In contrast, the human mind can understand self-referential statements without necessitating physically impossible brain states. Why can the brain make sense of self-reference? Here, we address this question by defining the Strange Loop Model, which features causal feedback between two brain modules, and circumvents the paradoxes of self-reference and negation by unfolding the inconsistency in time. We also argue that the metastable dynamics of the brain inhibit and terminate unhalting inferences. Finally, we show that the representation of logical inconsistencies in the Strange Loop Model leads to causal incongruence between brain subsystems in Integrated Information Theory.
... In the framework of D'Arcy Thompson, closely related species differ phenotypically only by a rescaling. This emphasizes the relational aspect of biological transformations, which later became the central point of the relational biology developed by Nicolas Rashevsky (1938) and Robert Rosen (1991). Relational biology studies biological systems and their transformations from the standpoint of the 'organization of relations', considered as entailment relations independent of any particular physical mechanism or material realization (Louie, 2019, 2020). ...
Article
The celebrated 1917 work "On Growth and Form" of D'Arcy W. Thompson established a landmark for mathematical biology, introducing new perspectives of study and research in biology and providing mathematical methods for the morphology of biological systems. In this brief historical essay, we recall the novelty and relevance of the work from a retrospective stance, above all pointing out the crucial role it played in the dawning of the epigenetic standpoint. The role of underlying epigenetic processes in the generation of biological forms via similarity transformations is analyzed within the framework of D'Arcy Thompson. The significance of D'Arcy Thompson as a predecessor of relational biology and of epigenetic concepts of evolution is discussed.
... In this paper we consider models of informational confrontation [7], [8] based on Rashevsky's neurological scheme [9]. ...
... The mathematical modelling of intra-cellular biological processes has relied on nonlinear ordinary differential equations since the early days of mathematical biophysics in the 1940s and 50s (Rashevsky, 1960). A standard modelling choice for cellular circuitry is to use chemical reactions with mass-action kinetics, leading to polynomial differential equations. ...
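The mass-action construction mentioned here is mechanical: every reaction contributes one monomial rate term to each species it touches, so the right-hand sides are automatically polynomial. A minimal generic sketch (the data layout and function name are my own, not code from the cited papers):

```python
def mass_action_rhs(reactions, conc):
    """Right-hand side of the mass-action ODE system.

    reactions: list of (reactants, products, k), where reactants and
    products map species name -> stoichiometric coefficient.  Each
    reaction fires at rate k * prod(conc[s] ** coeff over reactants)."""
    rhs = {s: 0.0 for s in conc}
    for reactants, products, k in reactions:
        rate = k
        for s, coeff in reactants.items():
            rate *= conc[s] ** coeff          # monomial in concentrations
        for s, coeff in reactants.items():
            rhs[s] -= coeff * rate            # reactants are consumed
        for s, coeff in products.items():
            rhs[s] += coeff * rate            # products are produced
    return rhs

# A + B -> C with k = 2.0 at concentrations A=1, B=3, C=0:
rhs = mass_action_rhs([({"A": 1, "B": 1}, {"C": 1}, 2.0)],
                      {"A": 1.0, "B": 3.0, "C": 0.0})
# rate = 2*1*3 = 6, so dA/dt = dB/dt = -6 and dC/dt = +6
```

Because every rate is a monomial, each dX/dt is a polynomial in the concentrations, which is precisely what makes these models amenable to symbolic (semi-algebraic) analysis.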
Article
We consider a problem from biological network analysis: determining regions in a parameter space over which there are multiple steady states for positive real values of variables and parameters. We describe multiple approaches to the problem using tools from Symbolic Computation, explain how progress was made toward semi-algebraic descriptions of the multistationarity regions of parameter space, and compare symbolic results to numerical methods. The biological networks studied are models of the mitogen-activated protein kinase (MAPK) network, which has already consumed considerable effort exploiting special insights into the structure of the corresponding models. Our main example is a model with 11 equations in 11 variables and 19 parameters, 3 of which are of interest for symbolic treatment. The model also imposes positivity conditions on all variables and parameters. We apply combinations of symbolic computation methods designed for mixed equality/inequality systems, specifically virtual substitution, lazy real triangularization and cylindrical algebraic decomposition, as well as a simplification technique adapted from Gaussian elimination and graph theory. We are able to determine multistationarity of our main example over a 2-dimensional parameter space. We also study a second MAPK model and a symbolic grid sampling technique which can locate such regions in 3-dimensional parameter space.
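For intuition about multistationarity as counting positive real roots of polynomials, a far smaller system than MAPK already exhibits the phenomenon: the classic Schlögl model, whose steady states are the roots of a cubic. This numerical sketch (not the symbolic machinery of the paper, just an illustration of the root-counting question) finds the positive steady states for one parameter point:

```python
import numpy as np

def positive_steady_states(k1, k2, k3, k4):
    """Steady states of the Schlögl model, whose mass-action rate
    equation dx/dt = -k2*x^3 + k1*x^2 - k3*x + k4 is polynomial;
    multistationarity = several positive real roots of the cubic."""
    roots = np.roots([-k2, k1, -k3, k4])
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 0)

# In a suitable parameter region the cubic has three positive roots
# (here the cubic factors as -(x - 1)(x - 2)(x - 3)):
states = positive_steady_states(k1=6.0, k2=1.0, k3=11.0, k4=6.0)
```

Deciding for which (k1, k2, k3, k4) this count jumps from one to three is exactly the kind of semi-algebraic question that virtual substitution and cylindrical algebraic decomposition answer exactly, where numerical root-finding only samples points.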
... The simplest, canonical model is based on an assembly of two-state agents [22,23]. These are denoted as ...
Article
Full-text available
Liquid neural networks (or ‘liquid brains’) are a widespread class of cognitive living networks characterized by a common feature: the agents (ants or immune cells, for example) move in space. Thus, no fixed, long-term agent-agent connections are maintained, in contrast with standard neural systems. How is this class of systems capable of displaying cognitive abilities, from learning to decision-making? In this paper, the collective dynamics, memory and learning properties of liquid brains are explored from the perspective of statistical physics. Using a comparative approach, we review the generic properties of three large classes of systems, namely: standard neural networks (solid brains), ant colonies and the immune system. It is shown that, despite their intrinsic physical differences, these systems share key properties with standard neural systems in terms of formal descriptions, but strongly depart in other ways. On one hand, the attractors found in liquid brains are not always based on connection weights but instead on population abundances. However, some liquid systems use fluctuations in ways similar to those found in cortical networks, suggesting a relevant role for criticality as a way of rapidly reacting to external signals. This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.
... The mathematical modelling of intra-cellular biological processes has relied on nonlinear ordinary differential equations since the early days of mathematical biophysics in the 1940s and 50s (Rashevsky, 1960). A standard modelling choice for cellular circuitry is to use chemical reactions with mass-action kinetics, leading to polynomial differential equations. ...
Preprint
Full-text available
We consider a problem from biological network analysis: determining regions in a parameter space over which there are multiple steady states for positive real values of variables and parameters. We describe multiple approaches to the problem using tools from Symbolic Computation, explain how progress was made toward semi-algebraic descriptions of the multistationarity regions of parameter space, and compare symbolic results to numerical methods. The biological networks studied are models of the mitogen-activated protein kinase (MAPK) network, which has already consumed considerable effort exploiting special insights into the structure of the corresponding models. Our main example is a model with 11 equations in 11 variables and 19 parameters, 3 of which are of interest for symbolic treatment. The model also imposes positivity conditions on all variables and parameters. We apply combinations of symbolic computation methods designed for mixed equality/inequality systems, specifically virtual substitution, lazy real triangularization and cylindrical algebraic decomposition, as well as a simplification technique adapted from Gaussian elimination and graph theory. We are able to determine multistationarity of our main example over a 2-dimensional parameter space. We also study a second MAPK model and a symbolic grid sampling technique which can locate such regions in 3-dimensional parameter space.
... In recent years, big data and the availability of low-cost parallel computation have led to rapidly growing interest in artificial neural networks. Although neural network models have been developed and studied since the 1930s [15], and many currently popular network models reflect ideas that were well established several decades ago [1], only in recent years has the convergence of computational capability, connectivity, and data started to create visible breakthroughs in neural AI. The remarkable successes of "deep learning" now suggest that learning theorists may learn something important from neural network research and its algorithms. ...
Chapter
Full-text available
In this paper we revisit Vygotsky’s developmental model of concept formation, and use it to discuss learning in artificial neural networks. We study learning in neural networks from a learning science point of view, asking whether it is possible to construct systems that have developmental patterns that align with empirical studies on concept formation. We put the state-of-the-art Inception-v3 image recognition architecture in an experimental setting that highlights differences and similarities in algorithmic and human cognitive processes.
... The mathematical modelling of intra-cellular biological processes has relied on nonlinear ordinary differential equations since the early days of mathematical biophysics in the 1940s and 50s [28]. A standard modelling choice for cellular circuitry is to use chemical reactions with mass-action kinetics, leading to polynomial differential equations. ...
Conference Paper
We investigate models of the mitogen-activated protein kinase (MAPK) network, with the aim of determining where in parameter space there exist multiple positive steady states. We build on recent progress which combines various symbolic computation methods for mixed systems of equalities and inequalities. We demonstrate that those techniques benefit tremendously from a newly implemented graph-theoretical symbolic preprocessing method. We compare computation times and quality of results of numerical continuation methods with our symbolic approach before and after the application of our preprocessing.
... In our model development we employed a 'top-down' empirical approach based on Dimensional Analysis (DA) of observed data from our Solar System. We chose DA as an analytic tool because of its ubiquitous past successes in solving complex problems of physics, engineering, mathematical biology, and biophysics [16][17][18][19][20][21]. To our knowledge DA has not previously been applied to constructing predictive models of macro-level properties such as the average global temperature of a planet; thus, the following overview of this technique is warranted. ...
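Dimensional Analysis can be mechanized: by the Buckingham pi theorem, the exponent vectors of the dimensionless products span the null space of the dimension matrix. A small sketch of that computation (the pendulum example and the helper name are illustrative choices of mine, not taken from the cited study):

```python
import numpy as np

def pi_groups(dim_matrix, tol=1e-10):
    """Exponent vectors of dimensionless products (Buckingham pi):
    the null space of the dimension matrix, computed via SVD.
    Rows = base dimensions, columns = physical variables."""
    m = np.asarray(dim_matrix, dtype=float)
    _, s, vt = np.linalg.svd(m)
    rank = int(np.sum(s > tol))
    return vt[rank:]          # each row is one dimensionless group

# Pendulum example: variables (mass m, length L, gravity g, period T),
# rows are the base dimensions M, L, T.
dims = [[1, 0, 0, 0],    # mass
        [0, 1, 1, 0],    # length
        [0, 0, -2, 1]]   # time
groups = pi_groups(dims)
# Nullity is 1, so DA predicts a single dimensionless group,
# proportional to g * T^2 / L (exponents (0, -1, 1, 2) up to scale).
```

With 4 variables and a rank-3 dimension matrix, 4 - 3 = 1 group survives, which is how DA collapses a multi-variable problem to a one-parameter relationship before any regression is attempted.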
Article
Full-text available
A recent study has revealed that the Earth’s natural atmospheric greenhouse effect is around 90 K or about 2.7 times stronger than assumed for the past 40 years. A thermal enhancement of such a magnitude cannot be explained with the observed amount of outgoing infrared long-wave radiation absorbed by the atmosphere (i.e. ≈ 158 W m-2), thus requiring a re-examination of the underlying Greenhouse theory. We present here a new investigation into the physical nature of the atmospheric thermal effect using a novel empirical approach toward predicting the Global Mean Annual near-surface equilibrium Temperature (GMAT) of rocky planets with diverse atmospheres. Our method utilizes Dimensional Analysis (DA) applied to a vetted set of observed data from six celestial bodies representing a broad range of physical environments in our Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a moon of Saturn), and Triton (a moon of Neptune). Twelve relationships (models) suggested by DA are explored via non-linear regression analyses that involve dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent variables. One non-linear regression model is found to statistically outperform the rest by a wide margin. Our analysis revealed that GMATs of rocky planets with tangible atmospheres and a negligible geothermal surface heating can accurately be predicted over a broad range of conditions using only two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure. The hereto discovered interplanetary pressure-temperature relationship is shown to be statistically robust while describing a smooth physical continuum without climatic tipping points. This continuum fully explains the recently discovered 90 K thermal effect of Earth’s atmosphere. 
The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unknown to science that has important theoretical implications. A key entailment from the model is that the atmospheric ‘greenhouse effect’, currently viewed as a radiative phenomenon, is in fact an adiabatic (pressure-induced) thermal enhancement analogous to compression heating and independent of atmospheric composition. Consequently, the global down-welling long-wave flux presently assumed to drive Earth’s surface warming appears to be a product of the air temperature set by solar heating and atmospheric pressure. In other words, the so-called ‘greenhouse back radiation’ is globally a result of the atmospheric thermal effect rather than a cause for it. Our empirical model also has fundamental implications for the role of oceans, water vapour, and planetary albedo in global climate. Since produced by a rigorous attempt to describe planetary temperatures in the context of a cosmic continuum using an objective analysis of vetted observations from across the Solar System, these findings call for a paradigm shift in our understanding of the atmospheric ‘greenhouse effect’ as a fundamental property of climate.
... They can be cast as infinite-dimensional dynamical systems that arise as spatial discretizations of continuum models. But more importantly, LDEs arise naturally in modeling systems with intrinsic discrete structure, such as solidification of alloys, interactions on a single strand of DNA, cellular neural networks, propagation of pulses in myelinated axons, waves in lattice gases, and dispersal in patchy media or environments, with many other examples in chemical reactions, pattern recognition, image processing, etc. [6,7,8,16,17,20,30,31,32,34,37,39]. Lattice differential equations (LDEs) have been studied extensively over the past two decades, for their interesting mathematical properties and plethora of applications. ...
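A minimal concrete LDE is the discrete Nagumo equation, a standard prototype for pulse propagation on lattices such as myelinated axons, where the coupling is between discrete sites rather than a discretization artefact. The sketch below is a toy illustration (the parameters, boundary handling and step counts are my own choices):

```python
import numpy as np

def nagumo_lattice(u, d=0.2, a=0.3, dt=0.05, steps=4000):
    """Explicit Euler integration of the discrete Nagumo lattice
        du_n/dt = d*(u_{n+1} - 2 u_n + u_{n-1}) + u_n (1 - u_n)(u_n - a),
    a bistable LDE with stable rest states u = 0 and u = 1."""
    u = np.array(u, dtype=float)
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        lap[0] = lap[-1] = 0.0   # decouple the ends so the front has
                                 # a fixed 1-side and 0-side
        u += dt * (d * lap + u * (1.0 - u) * (u - a))
    return u

# A step initial condition relaxes toward a front between the states:
u0 = np.where(np.arange(100) < 50, 1.0, 0.0)
u = nagumo_lattice(u0)
```

Whether such a front travels or becomes pinned to the lattice as the coupling d shrinks ("propagation failure") is one of the intrinsically discrete phenomena, absent from the continuum PDE, that motivates studying LDEs in their own right.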
Chapter
Full-text available
This is an expository article on asymptotic dynamics of stochastic lattice differential equations. In particular, we investigate the long-term behavior of stochastic lattice differential equations, by using the concept of global random pullback attractor in the framework of random dynamical systems. General results on the existence of global compact random attractors are first provided for general random dynamical systems in weighted spaces of infinite sequences. They are then used to study the existence of global pullback random attractors for various types of stochastic lattice dynamical systems with white noise.
... Then, if the sum exceeds a predefined threshold value, the output is 1, otherwise 0. Following the suggestion, in the 1930s, that the brain could be seen as an organization of 0s and 1s (the first edition of Rashevsky's book Mathematical Biophysics appeared in 1938), discrete-time neural networks with Boolean variables were introduced as a model of how the brain works. This brings us to the connectionist theory, which rests on two observations: on the one hand, combining simple units provides vast information-processing capacities [Rashevsky, 1960]; on the other hand, [Hebb, 1949] introduced a learning rule describing how the interactions between several neurons modify the behaviour of the whole. ...
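Hebb's rule, as referenced above, fits in a few lines: strengthen the weight between units that are active together, then recover stored activity with the same threshold dynamics. A small illustrative sketch (a one-pattern Hopfield-style network on ±1 states; all details are assumptions for demonstration):

```python
import numpy as np

def hebb_train(patterns, eta=1.0):
    """Hebb's rule: the weight between two units grows when they are
    co-active.  W += eta * x x^T per pattern, zero diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for x in patterns:
        w += eta * np.outer(x, x)
    np.fill_diagonal(w, 0.0)   # no self-connections
    return w

def recall(w, x, steps=5):
    """Synchronous threshold dynamics on +/-1 states."""
    for _ in range(steps):
        x = np.where(w @ x >= 0, 1, -1)
    return x

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = hebb_train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]            # corrupt one unit
restored = recall(w, noisy)     # the stored pattern is recovered
```

This is exactly the pairing of the two observations in the excerpt: simple threshold units (Rashevsky's Boolean picture) plus Hebb's weight-change rule yield collective behaviour, here content-addressable memory, that no single unit possesses.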
Article
Full-text available
Within the context of enaction and a global approach to perception, we focused on the characteristics of neural computation necessary to understand the relationship between structures in the brain and their functions. We first considered computational problems related to the discretization of differential equations that govern the studied systems, and the synchronous and asynchronous evaluation schemes. We then investigated a basic functional level: the transformation of spatial sensory representations into temporal motor actions within the visual-motor system. We focused on the visual flow from the retina to the superior colliculus to propose a minimalist model of automatic encoding of saccades to visual targets. This model, based on simple local rules (CNFT and logarithmic projection) in a homogeneous population and using sequential processing, reproduces and explains several results of biological experiments. It is therefore considered a robust and efficient basic model. Finally, we investigated a more general functional level by proposing a computational model of the basal ganglia motor loop. This model integrates sensory, motor and motivational flows to perform a global decision based on local assessments. We implemented an adaptive process for action selection and context encoding through an innovative mechanism that forms the basic circuit for other cortico-basal loops. This mechanism makes it possible to create internal representations according to the enactive approach, which opposes the computer metaphor of the brain. Both models have interesting dynamics to study, from either a biological or a computational-numerical point of view.
... The last three decades have seen renewed emphasis on limblessness. Papers have dealt with such topics as the description of locomotion patterns (Wiedemann, 1932; Mosauer, 1932, 1932a; Boker, 1935), the osteology and degeneration of appendages (Duerden and Essex, 1922; Essex, 1927; Sewertzoff, 1931; Stokeley, 1947), the myology (Buffa, 1904; Mosauer, 1935; Auffenberg, 1958, 1961), proprioceptive or coordinative sequences (Gray, 1946; Gray and Lissmann, 1950; Lissmann, 1950), and the theory of movement in limbless forms (Rashevsky, 1960). Other authors (Bogert, 1947; Brain, … It is a pleasure to acknowledge the aid of the National Science Foundation (NSF G-9054) which supports my studies. ...
Chapter
How did all the trillions of different organisms on Earth come about? And why are so many of these organisms more complex than their bacterial ancestors? The theory by which scientists try to answer these questions is the theory of evolution. In essence, the theory uses only a few basic processes. Organisms produce (many) offspring. These inherit variable characteristics. As a result, they differ in their performance in a given environment. The differences in performance lead to non-random elimination. The result is a select group of survivors. Survivors start producing offspring again. And then the story repeats itself. The result is a pedigree, a pattern that results from underlying processes. From this simple basis, many questions about evolution can be addressed.
Chapter
Predictions based on biological evolution will always involve the descendants of organisms, the descendants of those descendants, and so on, for endless generations into the future. The characteristics of the organisms may change, as may their genes. But the offspring will always be organisms. Before there were organisms, however, there already was abiotic evolution. The operators that evolved before cells were molecules. The step from molecules to cells implied a transition from abiotic to biotic evolution. By analogy, a future dual closure could cause a shift from biotic evolution to something else. This may seem a far-fetched idea. So it is better to infer the properties of future operator types by extrapolating the operator hierarchy. In this way, we can learn more about how organisms are likely to evolve in the future. The challenge is to answer the question of who, or what, will be our successor in evolution?
Article
Full-text available
When computers started to become a dominant part of technology around the 1950s, fundamental questions about reliable designs and robustness were of great relevance. Their development gave rise to the exploration of new questions, such as what made brains reliable (since neurons can die) and how computers could draw inspiration from neural systems. In parallel, the first artificial neural networks came to life. Since then, the comparative view between brains and computers has been developed in new, sometimes unexpected directions. With the rise of deep learning and the development of connectomics, an evolutionary look at how both hardware and neural complexity have evolved or been designed is required. In this paper, we argue that important similarities have resulted both from convergent evolution (the inevitable outcome of architectural constraints) and from inspiration by hardware and software principles guided by toy pictures of neurobiology. Moreover, dissimilarities and gaps originate from the lack of major innovations that have paved the way to biological computing (including brains) but are completely absent within the artificial domain. As occurs within synthetic biocomputation, we can also ask whether alternative minds can emerge from A.I. designs. Here, we take an evolutionary view of the problem and discuss the remarkable convergences between living and artificial designs and what the preconditions are for achieving artificial intelligence.
Chapter
Thermodynamic theory predicts that the universe develops towards maximum energy dispersal. Meanwhile, complex systems continue to form. The search for an explanation of these seemingly opposing trends has inspired many scientists. The theory of nonequilibrium thermodynamics brought much progress, allowing subsystems to become more complex at the cost of external energy gradients. But energy gradients may not tell the whole story, because, while they explain the existence of cells, gradients alone cannot explain the existence of complex organisms such as plants, tigers, or humans. Contributing to our understanding of the relationships between complexity and thermodynamics, this study focuses on a hierarchical subset of all complex systems. The systems in this subset have formed, in a step-by-step way, through a series of “dual-closure” processes. Every system produced through dual closure is called an “operator,” and their stringent complexity hierarchy is called the “operator hierarchy.” It is demonstrated that the operators can be grouped into three major classes with fundamentally different thermodynamics: (1) abiotic operators resulting from condensation reactions, (2) organisms resulting from contained autocatalysis and competition, and (3) neural network organisms driven by autocatalysis, learning, and competition. To these three groups a fourth group of rapidly evolving systems that are not operators can be added: “artifacts” made by organisms, notably humans. While normally being viewed as the result of self-organization, the design of artifacts may in fact be the product of “allo-organization.” Keywords: Operator hierarchy, O-theory, Thermodynamics, Big evolution, System science, Hierarchy theory
Article
Elongated snake-like bodies associated with limb reduction have evolved multiple times throughout vertebrate history. Limb-reduced squamates (lizards and snakes) account for the vast majority of these morphological transformations, and thus have great potential for revealing macroevolutionary transitions and modes of body-shape transformation. Here we present a comprehensive review on limb reduction, in which we examine and discuss research on these dramatic morphological transitions. Historically, there have been several approaches to the study of squamate limb reduction: (i) definitions of general anatomical principles of snake-like body shapes, expressed as varying relationships between body parts and morphometric measurements; (ii) framing of limb reduction from an evolutionary perspective using morphological comparisons; (iii) defining developmental mechanisms involved in the ontogeny of limb-reduced forms, and their genetic basis; (iv) reconstructions of the evolutionary history of limb-reduced lineages using phylogenetic comparative methods; (v) studies of functional and biomechanical aspects of limb-reduced body shapes; and (vi) studies of ecological and biogeographical correlates of limb reduction. For each of these approaches, we highlight their importance in advancing our understanding, as well as their weaknesses and limitations. Lastly, we provide suggestions to stimulate further studies, in which we underscore the necessity of widening the scope of analyses, and of bringing together different perspectives in order to understand better these morphological transitions and their evolution. In particular, we emphasise the importance of investigating and comparing the internal morphology of limb-reduced lizards in contrast to external morphology, which will be the first step in gaining a deeper insight into body-shape variation.
Preprint
Full-text available
The energetic cost of transport for walking is highly sensitive to speed but relatively insensitive to changes in gravity level. Conversely, the cost of transport for running is highly sensitive to gravity level but not much to speed. Gait optimization with a minimally constrained bipedal model predicts a similar differential energetic response for walking and running even though the same model parameters and cost function are used for both gaits. This challenges previous assertions that the converse energetic responses are due to fundamentally different energy saving mechanisms in each gait. Our results suggest that energetics of both gaits are highly influenced by dissipative losses occurring as leg forces abruptly alter the center of mass path. The observed difference in energetic consequence of the performance condition in each gait is due to the effect the movement strategy of each gait has on the dissipative loss. The optimization model predictions are tested directly by measuring metabolic cost of human subjects walking and running at different speeds in normal and reduced gravity using a novel reduced gravity simulation apparatus. The optimization model also predicts other, sometimes subtle, aspects of gait such as step length changes. This is also directly tested in order to assess the fidelity of the model’s more nuanced predictions.
Article
Full-text available
Continuous models of information confrontation based on the traditional neurological scheme are considered. On their basis, using the method of replacing differential relations with a cellular automaton, a discrete variant of the information-confrontation model has been developed. It was used to model an agitation campaign of two parties; on the basis of the proposed model a simulation system was built, with which a number of computational experiments were carried out. These experiments showed that the macrodynamics of the new model corresponds to the macrodynamics of the original one, while the discrete model has a wider domain of applicability. For some problems of the confrontation of two parties within an agitation campaign, results analogous to those given by the continuous model were obtained. The discrete model made it possible to study the problem of the optimal use, by one of the parties, of a single destabilization of the course of the agitation campaign. Within this study, original results were obtained, in particular the existence of a critical value of the coefficient of the influence of public opinion on the opinion of an individual, which determines in which period of time it is more advantageous for one of the parties to raise the intensity of its propaganda.
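The replacement of differential relations by a cellular automaton can be illustrated with a minimal two-party campaign sketch. This is a hedged toy version, not the paper's model: the grid size, the probabilistic update rule, and the roles given here to the media intensities b1, b2 and the public-opinion weight k are our own simplifications.

```python
import random

# Toy cellular automaton for a two-party agitation campaign (illustrative only).
random.seed(1)
N = 50                 # side of the toroidal grid of individuals
b1, b2 = 0.08, 0.05    # media propaganda intensities of the two parties (b1 > b2)
k = 0.3                # weight of local public opinion on an undecided individual

grid = [[0] * N for _ in range(N)]   # 0 = undecided, 1 = party one, 2 = party two

def step(grid):
    """One synchronous update: undecided cells adopt a position probabilistically."""
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j] != 0:
                continue             # decided individuals keep their position
            nbrs = [grid[(i + di) % N][(j + dj) % N]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            p1 = b1 + k * nbrs.count(1) / 4   # media pressure + neighbour influence
            p2 = b2 + k * nbrs.count(2) / 4
            r = random.random()
            if r < p1:
                new[i][j] = 1
            elif r < p1 + p2:
                new[i][j] = 2
    return new

for _ in range(30):
    grid = step(grid)

x1 = sum(row.count(1) for row in grid)   # supporters of party one
x2 = sum(row.count(2) for row in grid)   # supporters of party two
```

With the stronger media intensity b1 > b2, party one ends up with the larger share of supporters, mirroring the macrodynamics described above.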
Article
Full-text available
Systems sciences address issues that cross-cut any single discipline and benefit from the synergy of combining several approaches. But interdisciplinary integration can be challenging to achieve in practice. Scientists with different disciplinary backgrounds often have different views on what count as good data, good evidence, a good model, or a good explanation. Accordingly, several scholars have reported on challenges encountered in interdisciplinary settings. This chapter outlines how some of the challenges play out in systems biology where disciplinary ideals and domain specific practices sometime collide. We focus on tensions arising due to differences in epistemic standards between modellers with a background in physics or systems engineering, on one hand, and experimenters with a background in molecular biology on the other. We propose that part of the problem of interdisciplinary integration can be understood as the result of unfounded "disciplinary imperialism" on both sides, in which standards from one discipline are uncritically applied to new domains without recognition of other valid or complementary perspectives. We suggest that addressing and explicating the disciplinary background for the different views can help facilitate interdisciplinary collaboration in science as well as serve to improve science education.
Preprint
Liquid neural networks (or “liquid brains”) are a widespread class of cognitive living networks characterised by a common feature: the agents (ants or immune cells, for example) move in space. Thus, no fixed, long-term agent-agent connections are maintained, in contrast with standard neural systems. How is this class of systems capable of displaying cognitive abilities, from learning to decision-making? In this paper, the collective dynamics, memory and learning properties of liquid brains are explored from the perspective of statistical physics. Using a comparative approach, we review the generic properties of three large classes of systems, namely: standard neural networks (“solid brains”), ant colonies and the immune system. It is shown that, despite their intrinsic physical differences, these systems share key properties with standard neural systems in terms of formal descriptions, but strongly depart in other ways. On one hand, the attractors found in liquid brains are not always based on connection weights but instead on population abundances. However, some liquid systems use fluctuations in ways similar to those found in cortical networks, suggesting a relevant role of criticality as a way of rapidly reacting to external signals.
Chapter
Nicolas Rashevsky was a pioneer in applying mathematics to biology. He is perhaps best known for his creation of neural net models and his applications of these models to a wide variety of problems in physiology and psychology.
Article
Full-text available
The goal of this paper is to advance an extensible theory of living systems using an approach to biomathematics and biocomputation that suitably addresses self-organized, self-referential and anticipatory systems with multi-temporal multi-agents. Our first step is to provide foundations for modelling of emergent and evolving dynamic multi-level organic complexes and their sustentative processes in artificial and natural life systems. Main applications are in life sciences, medicine, ecology and astrobiology, as well as robotics, industrial automation, man-machine interface and creative design. Since 2011 over 100 scientists from a number of disciplines have been exploring a substantial set of theoretical frameworks for a comprehensive theory of life known as Integral Biomathics. That effort identified the need for a robust core model of organisms as dynamic wholes, using advanced and adequately computable mathematics. The work described here for that core combines the advantages of a situation and context aware multivalent computational logic for active self-organizing networks, Wandering Logic Intelligence (WLI), and a multi-scale dynamic category theory, Memory Evolutive Systems (MES), hence WLIMES. This is presented to the modeller via a formal augmented reality language as a first step towards practical modelling and simulation of multi-level living systems. Initial work focuses on the design and implementation of this visual language and calculus (VLC) and its graphical user interface. The results will be integrated within the current methodology and practices of theoretical biology and (personalized) medicine to deepen and to enhance the holistic understanding of life.
Chapter
During the past five or six decades, ‘complexity’ has been defined in many different ways. Owing to the many definitions of complexity, the difference between ‘complex’ and ‘complicated’ problems and systems has become unclear and difficult to determine. The following is possibly the golden rule for distinguishing ‘complex’ from ‘complicated’ problems and systems. Complicated problems originate from causes that can be individually distinguished; they can be addressed piece by piece; for each input to the system there is a proportionate output; the relevant systems can be controlled, and the problems that they present admit permanent solutions. On the other hand, complex problems and systems result from networks of multiple interacting causes that cannot be individually distinguished; they must be addressed as entire systems, that is, they cannot be addressed in a piecemeal way; they are such that small inputs may result in disproportionate effects; the problems that they present cannot be solved once and for all, but require systematic management, and typically any intervention gives rise to new problems as a result of the actions taken to deal with them; and the relevant systems cannot be controlled – the best one can do is influence them, learn to “dance with them,” as Donella Meadows aptly said.
Chapter
This chapter is a subjective description of some important landmarks along the route that has brought the field of terrestrial (primarily) mammalian locomotion to its current position. It sets the stage for the discussion of newly arising opportunities presented in this book. The book focuses on the role of mechanics in understanding animal locomotion and, particularly, that of terrestrial mammals. To this end, the chapter traces the author's personal impression of key conceptual breakthroughs in the fields of animal or human biomechanics and physiology, robotics and the fundamental mechanics. The author adds his interpretation of events and the stimulus that led to them. The chapter includes discussions on the ancients and the contemplation of motion, European Renaissance, era of technological observation, physiology and mechanics of terrestrial locomotion, and gait studies. The biological cost of locomotion is also discussed.
Chapter
Brains can be considered as goal-seeking correlation systems that use past experience to predict future events so as to guide appropriate behavior. Brains can also be considered as neural signal processing systems that utilize temporal codes, neural timing architectures operating on them, and time-domain, tape-recorder-like memory mechanisms that store and recall temporal spike patterns. If temporal memory traces can also be read out in faster-than-real-time, then these can serve as an advisory mechanism to guide prospective behavior by simulating the neural signals generated from time courses of past events, actions, and the respective hedonic consequences that previously occurred under similar circumstances. Short-term memory stores based on active regeneration of neuronal signals in networks of delay paths could subserve short-term temporal expectancies based on recent history. Polymer-based molecular mechanisms that map time-to-polymer chain position and vice versa could provide vehicles for storing and reading out permanent, long-term memory traces.
Article
Full-text available
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
Chapter
The methods of biological cybernetics are used to observe living systems. The models of transport of substances through the biomembrane are discussed. Compartmental models of living systems in control theory are considered. The problem of the “minimal cell” is discussed. The place and role of models of the transport of substances in synthetic and systems biology are discussed. The “one ion—a transport system” algorithm and the game approach, which were previously proposed by the authors to model the transport of ions in cells, are described.
Chapter
Beginning as a simple, plain cell membrane in the primeval unicellular prokaryotes and progressing to the most advanced respiratory systems of the endothermic homeotherms, i.e., the bronchoalveolar lung of mammals and the parabronchial one of birds, the designs of gas exchangers have arisen based on remarkably similar bioengineering principles. The gas exchangers have developed under dynamic environmental conditions, especially those of shifting O2 and CO2 levels (Sects. 1.2 and 1.3). In its broadest context, respiration comprises spatiotemporally coordinated biomechanical, biophysical, behavioral, and physiological processes. Together, they effect movement of two vectorial quantities in opposite directions – influx of O2 from the environment into the organism and efflux of CO2 to the outside. More specifically, external respiration entails the acquisition of O2 and, in derived animals, its transport through properly configured airways and vasculature, while internal respiration involves the utilization of O2 at the cellular level, specifically in the mitochondria, to generate energy mainly in the form of ATP. Carbon dioxide (CO2) and water (H2O) are the secondary products of internal respiration.
Chapter
Until recently, there were no satisfactory models to account for complex physiological structures or processes that do not have characteristic scales of length and/or time. The concept of fractal offers new insights into multiple scaled structures such as the bronchial and coronary tree, His-Purkinje system and chordae tendineae as well as into the broadband, inverse power-law spectra associated with normal electrophysiological dynamics. In a broader biological context the notion of a fractal distribution may have implications regarding error-tolerance and evolution. These ideas are discussed and some supporting mathematical analysis and data are presented.
Chapter
Full-text available
That speech is the most highly developed motor skill possessed by all of us is a truism; but how is this truism to be understood? Although the investigation of speech production and that of motor behavior have proceeded largely independently of each other, they share certain conceptions of how skilled movements are organized. Thus, regardless of whether one refers to movement in general or to speech as a particular instance, it is assumed that for coordination to occur, appropriate sets of muscles must be activated in proper relationships to others, and correct amounts of facilitation and inhibition have to be delivered to specified muscles. That the production of even the simplest movement involves a multiplicity of neuromuscular events overlapping in time has suggested the need for some type of organizing principle. By far the most favored candidates have been the closed-loop servomechanism accounts provided by cybernetics and its allied disciplines, and the formal machine metaphor of central programs. The evidence for these rival views seems to undergo continuous updating (e.g., Adams, 1977; Keele, 1981) and so will not be of major concern to us here. It is sufficient to point out the current consensus on the issue, namely, that complex sequences of movement may be carried out in the absence of peripheral feedback, but that feedback can be used for monitoring small errors as well as to facilitate corrections in the program itself (e.g., Keele, 1981; Miles & Evarts, 1979).
Article
Full-text available
The electrodynamics of action potentials represents the fundamental level where information is integrated and processed in neurons. The Hodgkin-Huxley model cannot explain the nonstereotyped spatial charge density dynamics that occur during action potential propagation. Revealed in experiments as spike directivity, the non-uniform charge density dynamics within neurons carry meaningful information and suggest that fragments of information regarding our memories are endogenously stored in structural patterns at a molecular level and are revealed only during spiking activity. The main conceptual idea is that under the influence of electric fields, efficient computation by interaction occurs between charge densities embedded within molecular structures and the transient developed flow of electrical charges. This process of computation underlying electrical interactions and molecular mechanisms at the subcellular level is dissimilar from spiking neuron models that are completely devoid of physical interactions. Computation by interaction describes a more powerful continuous model of computation than the one that consists of discrete steps as represented in Turing machines.
Article
Tissue engineering and regenerative medicine (TERM) remains one of the fastest-growing fields, covering a wide scope of topics in both basic and applied biological research. This overview article summarizes advances in applied research in the TERM area, including stem-cell-mediated tissue regeneration, materials science, and TERM clinical trials. These achievements demonstrate the great potential of clinical regenerative therapy for tissue/organ disease or defects via stem cell and tissue engineering approaches.
Article
Full-text available
Mathematical models of two-dimensional objects or “organisms” are introduced. The objects are abstractions, created by the combination of pattern-theoretic generators. They are deformed by what are shown to be infinitesimal contact transformations. Analytical methods are utilized to determine some of the resulting forms and a computer is programmed to display graphically the developing patterns.
Article
A study has been made to incorporate the basics of biological growth into the mathematical representation, so that the GRID formalism can be improved and become more realistic on a detailed level. Some of the improvements that have been made include: the form of the switching function α(ξ,t) in genetic terms; the application of inference algorithms to many real growth images; the use of 3D with a focus on computational feasibility; the extension of both the thermodynamic limit and its use for inference; the mathematical analysis of probabilistic limit theorems for GRID; the need for a detailed specification and data exploration for pathogenesis; and finally, a deeper analysis for the construction of Darcyans.
Article
In this paper we first prove a rather general theorem about existence of solutions for an abstract differential equation in a Banach space by assuming that the nonlinear term is in some sense weakly continuous. We then apply this result to a lattice dynamical system with delay, proving also the existence of a global compact attractor for such system.
Article
I describe how stochastic dynamic programming (SDP), a method for stochastic optimization that evolved from the work of Hamilton and Jacobi on variational problems, allows us to connect the physiological state of organisms, the environment in which they live, and how evolution by natural selection acts on trade-offs that all organisms face. I first derive the two canonical equations of SDP. These are valuable because, although they apply to no system in particular, they share commonalities with many systems (as do frictionless springs). After that, I show how we used SDP in insect behavioral ecology. I describe the puzzles that needed to be solved, the SDP equations we used to solve the puzzles, and the experiments that we used to test the predictions of the models. I then briefly describe two other applications of SDP in biology: first, understanding the developmental pathways followed by steelhead trout in California, and second, skipped spawning by Norwegian cod. In both cases, modeling and empirical work were closely connected. I close with lessons learned and advice for young mathematical biologists.
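The canonical backward-induction equation of SDP can be sketched with a toy two-patch foraging model in the spirit of the behavioral-ecology applications mentioned above. All parameters, the two patches, and the terminal-fitness rule below are illustrative assumptions, not taken from the paper: the state is an energy reserve, each period the animal picks the patch maximizing expected terminal fitness, and fitness is propagated backwards in time.

```python
# Toy SDP: state x = energy reserves; each period choose patch 0 (safe, no food)
# or patch 1 (food with probability 0.6, small predation risk). Illustrative only.
T = 20          # time horizon (periods)
X_MAX = 10      # cap on energy reserves
gain   = [0, 2]       # food gained in each patch if successful
p_food = [0.0, 0.6]   # probability of finding food in each patch
cost   = [1, 1]       # metabolic cost per period in each patch
risk   = [0.0, 0.02]  # per-period predation risk in each patch

# Terminal fitness: survive the season iff reserves are still positive.
F = [1.0 if x > 0 else 0.0 for x in range(X_MAX + 1)]
policy = []     # policy[t'] = best patch for each state, filled backwards

for t in range(T - 1, -1, -1):          # backward induction over time
    F_new, best = [], []
    for x in range(X_MAX + 1):
        if x == 0:                       # dead: absorbing state, fitness 0
            F_new.append(0.0)
            best.append(None)
            continue
        values = []
        for i in range(2):
            x_fed   = min(X_MAX, x - cost[i] + gain[i])
            x_unfed = max(0, x - cost[i])
            # Expected fitness: survive predation, then weight the two food outcomes.
            v = (1 - risk[i]) * (p_food[i] * F[x_fed]
                                 + (1 - p_food[i]) * F[x_unfed])
            values.append(v)
        i_star = max(range(2), key=lambda i: values[i])
        F_new.append(values[i_star])
        best.append(i_star)
    F = F_new
    policy.append(best)
```

After the loop, `F[x]` is the maximal survival probability from initial reserves `x`, and `policy` records the state-dependent patch choice at each period; this state dependence of the optimal decision is exactly what SDP adds over static optimization.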
Conference Paper
Full-text available
This paper gives a brief presentation of history of Soft Computing considered as a mix of three scientific disciplines that arose in the mid of the 20th century: Fuzzy Sets and Systems, Neural Networks, and Evolutionary Computation. The paper shows the genesis and the historical development of the three disciplines and also their meeting in a coalition in the 1990s.
Article
A previous study (Bull. Math. Biophysics, 30, 735–749) is generalized to the case of active transport, which in general acts together with ordinary diffusion. The basic results obtained are the same except for an additional important conclusion. In principle it is possible to obtain sustained oscillations even when the secretions of the different glands do not affect the rates of formation or decay of each other at all, but affect the “molecular pumps” which are responsible for the active transports in various parts of the system. Thus no biochemical interactions need necessarily take place between the n metabolites to make sustained oscillations possible in principle. This adds to a previous finding (Bull. Math. Biophysics, 30, 751–760) that, due to effects of the secreted hormones on target organs, non-linearity of biochemical interactions is not needed for the production of sustained oscillations.
Article
For acyclic systems the center of a graph has been known to be either a single vertex or two adjacent vertices, that is, an edge. It has not been quite clear how to extend the concept of graph center to polycyclic systems. Several approaches to the graph center of molecular graphs of polycyclic systems have been proposed in the literature. Alternative approaches, while apparently equally plausible, in most cases gave the same results for many molecules, but occasionally differed in their characterization of the molecular center. In order to reduce the number of vertices that would qualify as forming the center of the graph, a hierarchy of rules has been considered in the search for graph centers. We reconsidered the problem of “the center of a graph” by using a novel concept of graph theory, the vertex “weights,” defined by counting the number of pairs of vertices at the same distance from the vertex considered. This approach often gives the same results for graph centers of acyclic graphs as the standard definition of graph center based on vertex eccentricities. However, in some cases when two nonequivalent vertices have been found as the graph center, the novel approach can discriminate between the two. The same approach applies to cyclic graphs without additional rules to locate the vertex or vertices forming the center of polycyclic graphs, vertices referred to as central vertices of a graph. In addition, the novel vertex “weights,” in the case of acyclic, cyclic, and polycyclic graphs, can be interpreted as vertex centralities, a measure of how close or distant vertices are from the center or central vertices of the graph. Besides illustrating the centralities of a number of smaller polycyclic graphs, we also report on several acyclic graphs showing the same centrality values of their vertices. © 2013 Wiley Periodicals, Inc.
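The vertex “weight” described above (the number of pairs of other vertices lying at the same distance from the vertex considered) can be computed directly. A minimal sketch, with the adjacency-dict representation and the example path graph being our own illustrative choices; on this path the conventional center happens to carry the maximal weight:

```python
from collections import deque, Counter

def bfs_distances(adj, src):
    """Shortest-path distances from src to every vertex of an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def vertex_weight(adj, v):
    """Count unordered pairs of other vertices at the same distance from v."""
    dist = bfs_distances(adj, v)
    shell_sizes = Counter(d for u, d in dist.items() if u != v)
    return sum(c * (c - 1) // 2 for c in shell_sizes.values())

# Path graph a-b-c-d-e: from the middle vertex c the distance shells are
# symmetric ({b,d} at 1, {a,e} at 2), giving it the largest weight.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'e'], 'e': ['d']}
weights = {v: vertex_weight(path, v) for v in path}
```

For this path the weights are a:0, b:1, c:2, d:1, e:0, so the novel criterion singles out c, agreeing with the eccentricity-based center.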
Article
Storage of dormant barley seeds between drying and irradiation, with the water content unchanged, led to a reduction in their radiosensitivity. To explain this behaviour it had to be assumed that the preceding drying process had caused a reversible sensitization of the seeds, which then became ineffective once the withdrawal of water ceased. Such sensitization occurred regardless of whether the seeds had taken up oxygen-containing or oxygen-free water after irradiation. After drying and storage of the seeds under vacuum, phenomena similar to those following the corresponding pretreatments in air were observed. The sensitization of the seeds by the drying process and its reversal during storage can therefore not be attributed to changes in the oxygen concentration at the site of the radiation damage.
Article
The concentration gradient of O2 outside cells and outside oxygen-consuming particles and fibres with immobilized glucose oxidase (EC 1.1.3.4) has been directly measured with oxygen microelectrodes. Measurements have been performed in systems with different relative velocities between particles and solvent. In systems with a relative velocity equal to zero (unstirred systems), the thickness of the diffusion layer was found to be approximately the radius of the particles, and much greater than the radius of the fibres. This is in agreement with the results predicted by theoretical analysis. The thickness of the diffusion layer was not negligible, even at the highest relative velocity (1.5 × 10⁻⁴ m s⁻¹) used in this study.
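The unstirred-system result (diffusion-layer thickness ≈ particle radius) is what the classical steady-state solution for diffusion to a sphere in a stagnant medium predicts: the Sherwood number is Sh = k·d/D = 2, so the equivalent film thickness δ = D/k = d/2 = r. A minimal sketch of this textbook relation (not code from the paper):

```python
def film_thickness_stagnant_sphere(radius):
    """Equivalent diffusion-film thickness around a sphere in a
    stagnant medium.  Steady-state diffusion gives Sh = k*d/D = 2,
    hence k = D/r and delta = D/k = r: the film thickness equals
    the particle radius, independent of the diffusivity D."""
    sherwood = 2.0
    diameter = 2.0 * radius
    # delta = d / Sh; with Sh = 2 this reduces to the radius itself.
    return diameter / sherwood

# A particle of radius 50 um carries a ~50 um stagnant diffusion layer.
delta = film_thickness_stagnant_sphere(50e-6)
```

This also suggests why the layer around thin fibres is much thicker than the fibre radius: the spherical Sh = 2 limit does not apply to the cylindrical geometry, where the stagnant-medium problem has no comparable finite-thickness solution.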
Article
Full-text available
This paper describes optimality principles for the design of an engineering bifurcating-tube tree consisting of convection and diffusion zones to attain the most effective gas transport. An optimality principle is formulated for the diffusion zone to maximize the total diffusion mass-transfer rate of gas across tube walls under a constant total-volume constraint. This optimality principle produces a new diameter distribution for the diffusion zone, in contrast to the classical distribution for the convection zone. In addition, this paper gives a length distribution for an engineering tree based on an optimality principle for minimizing the total weight of the tree under constraints of a finite surface and elastic criteria for structural stability. Furthermore, the optimum branching angles are evaluated based on local optimality principles for a single bifurcating-tube branch.
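The "classical distribution for the convection zone" mentioned above is commonly identified with Murray's cube law, under which the cube of a parent tube's diameter equals the sum of the cubes of its daughters' diameters; this is an assumption about which classical law is meant, and the function below is an illustrative sketch for a symmetric bifurcation, not a formula from the paper:

```python
def murray_child_diameter(d_parent, n_children=2):
    """Classical convection-zone (Murray's law) diameter distribution:
    d_parent**3 == sum(d_child**3).  For n equal daughters this gives
    d_child = d_parent * n**(-1/3)."""
    return d_parent * n_children ** (-1.0 / 3.0)

# Across one symmetric bifurcation the diameter shrinks by 2**(-1/3):
d0 = 1.0
d1 = murray_child_diameter(d0)
# The cube law is conserved: 2 * d1**3 == d0**3.
```

Successive generations then scale geometrically by 2^(-1/3) ≈ 0.794 per bifurcation; the diffusion-zone distribution derived in the paper departs from this law.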