Mathematical and Computer Modelling

Published by Elsevier
Print ISSN: 0895-7177
Publications
In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.
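For orientation, a single Cahn-Hilliard-type advection-reaction-diffusion equation for a cell-species volume fraction φ can be written in the following generic form (an illustrative sketch only; the paper's multispecies system couples several such equations with reaction-diffusion equations for the substrates, and the notation here is assumed rather than quoted):

```latex
% Generic Cahn-Hilliard-type equation (illustrative notation): \phi is a
% cell-species volume fraction, \mathbf{u} a velocity, M a mobility,
% S a source term, f a double-well potential, \epsilon the interface thickness.
\frac{\partial \phi}{\partial t} + \nabla \cdot (\mathbf{u}\,\phi)
  = \nabla \cdot \bigl( M(\phi)\, \nabla \mu \bigr) + S(\phi),
\qquad
\mu = f'(\phi) - \epsilon^{2} \nabla^{2} \phi .
```

Substituting the chemical potential μ into the first equation makes the fourth-order character of the system explicit, which is the source of the numerical stiffness mentioned above.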
 
We have extended our previously developed 3D multi-scale agent-based brain tumor model to simulate cancer heterogeneity and to analyze its impact across the scales of interest. While our algorithm continues to employ an epidermal growth factor receptor (EGFR) gene-protein interaction network to determine the cells' phenotype, it now adds an implicit treatment of tumor cell adhesion related to the model's biochemical microenvironment. We simulate a simplified tumor progression pathway that leads to the emergence of five distinct glioma cell clones with different EGFR density and cell 'search precisions'. The in silico results show that microscopic tumor heterogeneity can impact the tumor system's multicellular growth patterns. Our findings further confirm that EGFR density results in the more aggressive clonal populations switching earlier from a proliferation-dominated to a more migratory phenotype. Moreover, in analyzing the dynamic molecular profile that triggers the phenotypic switch between proliferation and migration, our in silico oncogenomics data display spatial and temporal diversity in documenting the regional impact of tumorigenesis, and thus support the added value of multi-site and repeated assessments in vitro and in vivo. Potential implications of this in silico work for experimental and computational studies are discussed.
 
Identification of detailed features of neuronal systems is an important challenge in the biosciences today. Cilia are long thin structures that extend from the olfactory receptor neurons into the nasal mucus. Transduction of an odor into an electrical signal occurs in the membranes of the cilia. The cyclic-nucleotide-gated (CNG) channels, which reside in the ciliary membrane and are activated by adenosine 3',5'-cyclic monophosphate (cAMP), allow a depolarizing influx of Ca(2+) and Na(+) and are thought to initiate the electrical signal. In this paper, a mathematical model consisting of two nonlinear differential equations and a constrained Fredholm integral equation of the first kind is developed to model experiments involving the diffusion of cAMP into cilia and the resulting electrical activity. The unknowns in the problem are the concentration of cAMP, the membrane potential, and, the quantity of most interest in this work, the distribution of CNG channels along the length of a cilium. A simple numerical method is derived that can be used to obtain estimates of the spatial distribution of CNG ion channels along the length of a cilium. Certain computations indicate that this mathematical problem is ill-conditioned.
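As an illustration of the third unknown, the constrained first-kind Fredholm problem for a channel density along a cilium of length L has the generic form below (the kernel K, data g, and density ρ are illustrative notation, not taken from the paper):

```latex
% First-kind Fredholm integral equation for an unknown channel density \rho(x)
% along a cilium of length L, with a nonnegativity constraint (generic form).
\int_{0}^{L} K(t, x)\, \rho(x)\, dx = g(t), \qquad \rho(x) \ge 0 .
```

Recovering ρ from noisy data g is a classically ill-posed inversion, which is consistent with the ill-conditioning the authors report.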
 
We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
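As a minimal sketch of the bootstrapping approach being compared (a residual bootstrap around a least-squares fit; the solver interface and all names here are assumptions, not the authors' code):

```python
import numpy as np
from scipy.optimize import least_squares

def fit(t, y, model, theta0):
    """Ordinary least-squares fit of model(t, theta) to observations y."""
    return least_squares(lambda th: model(t, th) - y, theta0).x

def bootstrap_uq(t, y, model, theta0, n_boot=1000, level=0.95, seed=0):
    """Residual bootstrap: refit on resampled residuals to obtain standard
    errors and percentile confidence intervals for the parameters."""
    theta_hat = fit(t, y, model, theta0)
    resid = y - model(t, theta_hat)
    rng = np.random.default_rng(seed)
    draws = np.empty((n_boot, theta_hat.size))
    for b in range(n_boot):
        y_star = model(t, theta_hat) + rng.choice(resid, size=resid.size, replace=True)
        draws[b] = fit(t, y_star, model, theta_hat)
    alpha = (1.0 - level) / 2.0
    se = draws.std(axis=0)                                  # bootstrap standard errors
    ci = np.quantile(draws, [alpha, 1.0 - alpha], axis=0)   # percentile intervals
    return theta_hat, se, ci
```

The asymptotic-theory alternative instead builds standard errors from an estimated parameter covariance based on the sensitivity (Jacobian) matrix of the residuals, which avoids the repeated refits and is typically much cheaper, consistent with the computational-time comparison described above.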
 
A new method to generate a bifurcating distributive system is presented. The method utilizes random points inside a given area and is sensitive to the global and local concentrations of the points. The algorithm is highly efficient compared to the current area-halving algorithms.
 
The inner ear contains sensory organs which signal changes in head movement. The vestibular sacs, in particular, are sensitive to linear accelerations. Electron microscopic images have revealed the structure of tiny sensory hair bundles, whose mechanical deformation results in the initiation of neuronal activity and the transmission of electrical signals to the brain. The structure of the hair bundles is shown in this paper to be that of the most efficient two-dimensional phased-array signal processors.
 
Recently we developed a model composed of five impulsive differential equations that describes the changes in drinking patterns (that persist at epidemic level) amongst college students. Many of the model parameters cannot be measured directly from data; thus, an inverse problem approach, which chooses the set of parameters that results in the "best" model to data fit, is crucial for using this model as a predictive tool. The purpose of this paper is to present the procedure and results of an unconventional approach to parameter estimation that we developed after more common approaches were unsuccessful for our specific problem. The results show that our model provides a good fit to survey data for 32 campuses. Using these parameter estimates, we examined the effect of two hypothetical intervention policies: 1) reducing environmental wetness, and 2) penalizing students who are caught drinking. The results suggest that reducing campus wetness may be a very effective way of reducing heavy episodic (binge) drinking on a college campus, while a policy that penalizes students who drink is not nearly as effective.
 
Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system.
 
We propose a model for HCMV infection in healthy and immunosuppressed patients. First, we present the biological model and formulate a system of ordinary differential equations to describe the pathogenesis of primary HCMV infection in immunocompetent and immunosuppressed individuals. We then investigate how clinical data can be applied to this model. Approximate parameter values for the model are derived from data available in the literature and from mathematical and physiological considerations. Simulations with the approximated parameter values demonstrate that the model is capable of describing primary, latent, and secondary (reactivated) HCMV infection. Reactivation simulations with this model provide a window into the dynamics of HCMV infection in (D-R+) transplant situations, where latently-infected recipients (R+) receive transplant tissue from HCMV-naive donors (D-).
 
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods.
 
A new variable structure longitudinal controller is designed and analyzed for an autonomous intelligent vehicle. The proposed controller guarantees not only individual vehicle stability but also platoon stability. Moreover, the achieved platoon stability is proven to be robust with respect to vehicle parameter uncertainties and unknown time-varying disturbances. Explicit transient bounds are obtained which indicate ways of choosing controller parameters for comfortable driving.
 
Under the assumption that each model operates at an update rate proportional to the model's assumed dynamics, a multirate interacting multiple model (MRIMM) algorithm is briefly introduced. Based on the multirate IMM, a distributed version, the DMRIMM algorithm, is proposed for multiplatform tracking. The MRIMM algorithm is first employed to perform each local/platform estimation. A global filter is then constructed to perform a fusion of the MRIMM estimations. The advantages of low computational load and improved performance are demonstrated through Monte Carlo simulations.
 
A model is presented for estimating the average response time of parallel programs. It is assumed that the underlying system has a number of processors and that all the processors have the same speed. The system is represented as a state transition diagram in which each state represents the number of processes in the system. A state transition can occur every Δt time units. If the system has U processors and i processes, each process will receive Δt × min(1, U/i) processor time before the number of processes in the system changes. A process is terminated when it receives the required processor time. A program leaves the system when all the corresponding processes are terminated. Methods based on the model are developed to estimate the average response time. Several examples are given to demonstrate these methods.
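A minimal sketch of the per-step accounting described above, taking the rule Δt × min(1, U/i) literally and assuming, for simplicity, that all programs are present from time zero (the paper itself derives estimates from the state-transition model rather than by brute-force simulation):

```python
def average_response_time(programs, U, dt=0.01):
    """programs: list of programs, each a list of required processor times.
    In each step of length dt, every active process receives dt * min(1, U/i)
    processor time, where i is the current number of active processes.
    A program leaves the system when all of its processes have finished."""
    remaining = [list(p) for p in programs]   # remaining demand per process
    finish = [None] * len(programs)
    t = 0.0
    while any(f is None for f in finish):
        active = sum(len(p) for p in remaining)
        share = dt * min(1.0, U / active)     # processor time granted per process
        t += dt
        for k, procs in enumerate(remaining):
            procs[:] = [r - share for r in procs if r - share > 0]
            if not procs and finish[k] is None:
                finish[k] = t                 # program k leaves the system
    return sum(finish) / len(finish)

# Example: two programs (with 2 and 1 processes) on U = 2 processors
print(average_response_time([[1.0, 0.5], [2.0]], U=2))
```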
 
Recently, there has been a growing interest in modeling financial time series using fractional ARIMA models with stable innovations; see, for example, [1]. In this paper, the corresponding nonparametric problem for regression with fractional ARIMA noise is studied. A recursive algorithm for estimating time-varying parameters is given. It is also shown that a number of existing algorithms are special cases of this proposed algorithm.
 
This paper describes a multibody dynamics approach to the modeling of rotorcraft systems and reviews the key aspects of the simulation procedure. The multibody dynamics analysis is cast within the framework of nonlinear finite element methods, and the element library includes rigid and deformable bodies as well as joint elements. No modal reduction is performed for the modeling of flexible bodies. The structural and joint element library is briefly described. The algorithms used to integrate the resulting equations of motion with maximum efficiency and robustness are discussed. Various solution procedures, static, dynamic, stability, and trim analyses, are presented. Postprocessing and visualization issues are also addressed. Finally, the paper concludes with selected rotorcraft applications.
 
A semi-implicit numerical model for the three-dimensional Navier-Stokes equations on unstructured grids is derived and discussed. The governing differential equations are discretized by means of a finite difference-finite volume algorithm which is robust, very efficient, and applies to barotropic and baroclinic, hydrostatic and nonhydrostatic, and one-, two-, and three-dimensional flow problems. The resulting model is relatively simple, mass conservative, and unconditionally stable with respect to the gravity wave speed, wind stress, vertical viscosity, and bottom friction.
 
We consider a network of three identical neurons whose dynamics is governed by Hopfield's model with delay to account for the finite switching speed of amplifiers (neurons). We show that in a certain region of the space of (α, β), where α and β are the normalized parameters measuring, respectively, the synaptic strength of self-connection and neighbourhood-interaction, each solution of the network converges to the set of synchronous states in the phase space, and this synchronization is independent of the size of the delay. We also obtain a surface, given as the graph of a continuous function τ = τ(α, β) (the normalized delay) in some region of (α, β), where Hopf bifurcation of periodic solutions takes place. We describe a continuous curve on such a surface where the system undergoes mode interaction, and we describe the change of patterns from stable synchronous periodic solutions to the coexistence of two stable phase-locked oscillations and several unstable mirror-reflecting waves and standing waves.
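A delayed Hopfield ring of three identical neurons of the kind described is commonly written in the normalized form below (indices taken mod 3; the exact normalization is assumed here rather than quoted from the paper):

```latex
% Three identical neurons with delayed self-connection (\alpha) and delayed
% neighbourhood interaction (\beta); f is a sigmoidal activation,
% \tau the normalized delay, indices taken mod 3.
\dot{x}_i(t) = -x_i(t)
  + \alpha\, f\!\bigl(x_i(t-\tau)\bigr)
  + \beta \Bigl[ f\!\bigl(x_{i-1}(t-\tau)\bigr) + f\!\bigl(x_{i+1}(t-\tau)\bigr) \Bigr],
\qquad i = 1, 2, 3.
```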
 
Q-conditional symmetries of the classical Lotka-Volterra system in the case of one space variable are completely described and a set of such symmetries in explicit form is constructed. The relevant non-Lie ansätze to reduce the classical Lotka-Volterra systems with correctly-specified coefficients to ODE systems and examples of new exact solutions are found. A possible biological interpretation of some exact solutions is presented.
 
This paper presents a detailed analysis of the computational complexity of Multiple Hypothesis Tracking (MHT). The result proves that the computational complexity of MHT is dominated by the number of hypotheses. The effects of track merging and pruning are also analyzed. Certain common design parameters of MHT, such as thresholds, are also discussed in detail. The results of this paper provide guidance for selecting parameters in an MHT tracker and predicting its performance. Among the design parameters discussed in this paper, track merging appears to be the most important means of controlling the computational complexity of MHT. Thresholds for track deletion are also critical. If not all measurements are allowed to initiate new tracks, the number of new tracks can also be used for tuning the computational requirements of MHT, but it is not as significant as the thresholds.
 
A new estimator for the index α of symmetric stable Paretian distributions is proposed. In addition to being numerically simple, fast, and reliable, simulations suggest that even in small samples, the estimator is unbiased for α ∈ [1, 2], and almost exactly normally distributed. Standard errors are also given, the magnitudes of which are shown to depend only on sample size and not on the true value of α.
 
In this paper, we show how the introduction of a new primitive constraint over finite domains in the constraint logic programming system CHIP allows us to find very good solutions for a large class of very difficult scheduling and placement problems. Examples on the cumulative scheduling problem, the 10 jobs × 10 machines problem, the perfect square problem, the strip packing problem and the incomparable rectangles packing problem are given, showing the versatility, the efficiency and the broad range of application of this new constraint. We point out that no other existing approach can address simultaneously all the problems discussed in this paper.
 
This article examines the finite sample behaviour of time and frequency domain versions of the tests of Robinson (1994) for testing roots on the unit circle, with integer or fractional orders of integration. In finite samples, the two versions differ, in some cases considerably. We show analytically that the difference between the two statistics is O_p(T^{-1/2}). Several Monte Carlo experiments are conducted, and a small empirical application illustrating this problem is also carried out at the end of the article.
 
Physicists working in two-dimensional quantum gravity invented a new method of map enumeration based on computation of Gaussian integrals over the space of Hermitian matrices. This paper explains the basic facts of the method and provides an accessible introduction to the subject.
 
A detailed analysis of mass non-conservation in the proximity of thermal contact discontinuities, when solving 1-D gas dynamic flow equations with finite difference numerical methods, is carried out in this paper. A wide spectrum of finite difference numerical methods has been applied to solve such conditions. Thermal contact discontinuities are very common in current diesel engines due to back-flow in the intake valves during the valve overlap period. Every method has been shown to be incapable of correctly solving the problem raised, each displaying a different behavior. Taking these analyses as a baseline, a study of mesh size reduction in the ducts has also been performed. This solution is suitable in that it makes the mass conservation problems disappear. Nevertheless, it is not advisable with the most widespread calculation structure in 1D gas dynamic models, owing to the increase in computational effort required. Thus, a new calculation structure for solving the governing equations in ducts is suggested. The proposed calculation structure is based on an independent time discretisation of every duct according to its CFL stability criterion. Its application to thermal contact discontinuities demonstrates its advantages with regard to computational demand, as the calculation time step of every duct is adapted to its mesh size.
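A minimal sketch of the per-duct time-step selection implied by the proposed calculation structure (variable names and sample values are assumptions; in an actual 1D gas-dynamics code the velocity and speed of sound would come from the current flow solution in each duct):

```python
def duct_time_step(dx, u, a, cfl=0.9):
    """CFL-limited time step for one duct: dt <= cfl * dx / max(|u| + a),
    where u is the local flow velocity and a the local speed of sound."""
    s_max = max(abs(ui) + ai for ui, ai in zip(u, a))
    return cfl * dx / s_max

# Each duct advances with its own dt: finely meshed ducts take many small
# steps while coarsely meshed ducts take fewer, larger ones, instead of
# forcing a single global time step on the whole system.
ducts = [
    {"dx": 0.010, "u": [50.0, 60.0], "a": [340.0, 345.0]},   # coarse duct
    {"dx": 0.002, "u": [80.0, 90.0], "a": [350.0, 355.0]},   # finely meshed duct
]
print([duct_time_step(d["dx"], d["u"], d["a"]) for d in ducts])
```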
 
The fluid structure interaction mechanism in vascular dynamics can be described by either 3D or 1D models, depending on the level of detail of the flow and pressure patterns needed for analysis. A successful strategy that has been proposed in the past years is the so-called geometrical multiscale approach, which consists of coupling both 3D and 1D models so as to use the former only in those regions where details of the fluid flow are needed and to describe the remaining part of the vascular network by the simplified 1D model. In this paper we review recently proposed strategies to couple the 3D and 1D models and, within the 3D model, to couple the fluid and structure sub-problems. The 3D/1D coupling strategy relies on the imposition of the continuity of flow rate and total normal stress at the interface. On the other hand, the fluid–structure coupling strategy employs Robin transmission conditions. We present some numerical results and show the effectiveness of the new approaches.
 
A convection-dispersion model for the uptake and elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in the liver is presented. The model is adapted from the general dispersion model of Roberts and Rowland and includes the dynamics of TCDD interaction with two intracellular proteins, the Ah receptor and cytochrome P450 IA2. A “well-mixed” compartment was added to describe the venous blood concentration of TCDD. The result is a nonlinear system of seven coupled partial and ordinary differential equations with time delays.
 
In this work, we present a bibliography on retrial queues which updates the bibliography published in this journal in Artalejo (1999) [7]. The bibliography is focused on the progress made during the last decade, 2000–2009. For the sake of completeness, a few papers published in 1999, and not cited in Artalejo (1999) [7], have also been included. To keep the length manageable we have excluded conference proceedings, theses, unpublished reports, and works in languages other than English. The bibliography is structured in the following way: Section 1 lists the recent specific book (Artalejo and Gomez-Corral (2008)) [1] and other monographs including any chapter or section devoted to retrial queues; Section 2 lists survey papers and bibliographic works; Section 3 lists papers published in scientific journals; Section 4 includes a few forthcoming papers already in press. The author hopes that this bibliography could be of help to anyone contemplating research on retrial queues.
 
In 2003, the United States launched a pre-emptive strike against Iraq which was largely defended by the Bush Administration as an act to protect national security. In the months leading up to the attack, however, the US was still in the decision-making process — should we work with the UN on enforcing sanctions? go in only with Allied support? launch a pre-emptive strike with mainly US forces? During this time, the Analytic Network Process [T.L. Saaty, The Analytic Network Process, Fundamentals of Decision Making and Priority Theory, second ed., RWS Publications, Pittsburgh, 2001] was used to determine the best course of action. Working with the UN to ensure weapons inspections was found to be the best choice; the model showed that other alternatives, such as a pre-emptive attack on Iraq or attacking Iraq with Allied help would increase the possibility of such risks as increased oil prices, increased terrorism, decreased domestic support for the war, and high economic costs of sustaining the war itself.
 
We study a large class of infinite variance time series that display long memory. They can be represented as linear processes (infinite order moving averages) with coefficients that decay slowly to zero and with innovations that are in the domain of attraction of a stable distribution with index 1 < α < 2 (stable fractional ARIMA is a particular example). Assume that the coefficients of the linear process depend on an unknown parameter vector β which is to be estimated from a series of length n. We show that a Whittle-type estimator βn for β is consistent (βn converges to the true value β0 in probability as n → ∞), and, under some additional conditions, we characterize the limiting distribution of the suitably rescaled differences βn − β0.
 
In this paper, information theoretic inference methodology for system modeling is applied to estimate the stationary distribution for the number of customers in single server queueing systems with service capacity utilized by a finite population. The customers demand i.i.d. service times. Three different models are considered. In Model I, a customer who finds the server busy can be queued, whereas in Models II and III, any customer finding the server busy upon arrival will make repeated attempts to enter service until he eventually finds the server free. Models II and III differ in the retrial policy. Numerical examples illustrate the accuracy of the proposed maximum entropy estimation when it is compared with the classical analysis.
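The maximum entropy step referred to has the standard form: among all distributions consistent with a set of known moment constraints, choose the one that maximizes the Shannon entropy, which leads to an exponential family (a generic statement; the specific constraints used for the three models are those of the paper and are not reproduced here):

```latex
% Maximum entropy estimate of a stationary queue-length distribution {p_n}
% subject to normalization and moment constraints; \lambda_j are Lagrange
% multipliers and Z a normalizing constant (generic form).
\max_{\{p_n\}} \; -\sum_{n} p_n \ln p_n
\quad \text{subject to} \quad
\sum_{n} p_n = 1, \quad \sum_{n} f_j(n)\, p_n = F_j ,
\qquad \Longrightarrow \qquad
p_n = \frac{1}{Z}\, \exp\!\Bigl( -\sum_{j} \lambda_j f_j(n) \Bigr).
```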
 
This paper considers the utility of a stochastic model of carcinogenesis, proposed earlier by Yakovlev and Polig, in the quantitative analysis of the incidence of radiation-induced osteosarcomas in beagles injected with various amounts of 239Pu. The original version of the model failed to provide a good fit to our experimental data. The model has been generalized by incorporating a simple mechanism of lesion elimination, which is likely to be mediated by the immune system. Two versions of the model were developed: the first version (Model 1) assumed malignant cells to be a target for the immune attack, while in Model 2 initiated cells were assumed to be such a target. Model 2 was rejected by the likelihood ratio test, thereby indicating that the competing model provides a more plausible explanation of the experimental data. Since in experiments with incorporated radionuclides the dose rate varies with time, dose-rate effects cannot be observed directly, and one must rely on mathematical models. The results of our numerical experiments show that, depending on the time of observation, both the direct and the inverse dose-rate effects may manifest themselves even at a fixed total dose level.
 
The spatio-temporal dynamics of a prey-predator community is described by two reaction-diffusion equations. It is shown that for a class of initial conditions the spatio-temporal system dynamics resembles a “phase transition” between a regular and an irregular phase, separated by a moving boundary. A simple approach to specify spatio-temporal chaos is proposed.
 
The aim of this paper is to investigate the conditions required to optimize the amount of chemotherapeutic drug absorbed by a solid tumour through a network of blood vessels. This work is based on a study of vascular networks generated from a discrete mathematical model of tumour angiogenesis, which describes the formation of a capillary network in response to chemical stimuli released by a solid tumour. Simulations of blood flow in the vasculature connecting the parent vessel to the solid tumour are then performed by adapting modelling techniques from the field of petroleum engineering to this biomedical application.
 
This paper deals with a computational study for evaluating the capability of 2D numerical simulation to predict the vortical structure around a quasi-bluff bridge deck. The laminar form, a number of RANS equation models, and the LES approach are evaluated. The study was applied to the deck section of the Great Belt East Bridge. The results are compared with wind-tunnel data and previously conducted computational simulations. The sensitivity of the results to the computational approaches applied for each model is discussed. Finally, the study confirms the importance of safety-barrier modelling in the analysis of bridge aerodynamics.
 
Certain decimal numbers have special characteristics unlike those of most others. These sometimes have enormous physical significance, very interesting mathematical/scientific properties, or both at the same time. The present article attempts to explore the significance of the decimal number 2^n, the logarithm to base 2, and the binary equivalence (representation) corresponding to 2^n, both in the physical/natural world and in the pure mathematical environment, specifically in the area of computer/computational science. Also, among the number systems in different bases, the status of the base 2^n, specifically for n = 1, in the realm of computational/computer science is stressed.
 
In this paper, we are interested in some basic investigations of properties of the relaxation schemes first introduced by Jin and Xin [1]. The main advantages of these schemes are that they neither require the use of Riemann solvers nor the computation of nonlinear flux Jacobians. This can be an important advantage when more complex models are considered where it is not possible to perform analytical calculations of Jacobians and/or when considering fluids with nonstandard equation of state. We apply the schemes (relaxing and relaxed) to a certain two-phase model where Jacobians cannot in general be calculated analytically. We first demonstrate that the original relaxation schemes of Jin and Xin produce a poor approximation for a typical mass transport example which involves transition from two-phase flow to single-phase flow. However, by introducing a slight modification of the original relaxation model by splitting the momentum flux into a mass and pressure part, we obtain some flux splitting relaxation schemes which for typical two-phase flow cases yield a more accurate and robust approximation.
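For reference, the Jin and Xin relaxation of a conservation law u_t + f(u)_x = 0 introduces an auxiliary variable v and a small relaxation time ε, giving a linear convective part whose upwinding needs no Riemann solver or flux Jacobian:

```latex
% Jin-Xin relaxation system for u_t + f(u)_x = 0; as \epsilon \to 0, v \to f(u).
% The constant matrix A must satisfy the subcharacteristic condition.
u_t + v_x = 0, \qquad
v_t + A\, u_x = \frac{1}{\epsilon}\bigl( f(u) - v \bigr).
```

The flux-splitting modification described above changes how the momentum flux enters a system of this type by treating its mass and pressure contributions separately.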
 
A natural 3D extension of the Steiner chains problem, original to the authors of this article, in which circles are replaced by spheres, is presented. Given three spheres such that either two of them are contained in (or intersect) the third one, chains of spheres are considered, each one externally tangent to its two neighbors in the chain and to the first and second given spheres, and internally tangent to the third given sphere. A condition for these chains to be closed has been stated, and the Steiner alternative (Steiner porism) has been extended to 3D. Remarkably, the process is of a symbolic-numeric nature. Using a computer algebra system is almost a must, because a theorem in the underlying constructive theory requires using the explicit general solution of a non-linear algebraic system. However, obtaining a particular solution requires computing concatenated processes involving trigonometric expressions. In this case, it is recommended to use approximate calculations to avoid obtaining huge expressions.
 
We propose in this paper a fractional-step A-ψ scheme for a quasi-magnetostatic 3D eddy current model by means of finite-element approximations. Bounds for the continuous and discrete errors in finite time are given, and it is verified that, provided the time step τ is sufficiently small, the proposed algorithm yields, for finite time T, an error bound in the L2-norm for the magnetic field H (= ν∇ × A) in terms of the time step τ and the mesh size h.
 
Solving the time-dependent Maxwell equations in an unbounded domain requires the introduction of artificial absorbing boundary conditions (ABCs) designed to minimize the amplitude of the parasitic waves reflected by the artificial frontier of the computational domain. The construction of such ABCs requires a rigorous mathematical and numerical analysis in order to obtain a problem that is well posed from a mathematical point of view and an algorithm that is stable from a numerical point of view. In a previous study, Joly and Mercier (1989) [8] proposed a new second-order ABC for Maxwell's equations in dimension 3, well adapted to a variational approach. In this paper, we present how to apply the second-order ABC proposed in [8] in the framework of a finite element method.
 
For polynomials P(x) = A_n x^n + A_{n−1} x^{n−1} + ⋯ + A_1 x + A_0 in a real scalar x, but with coefficients A_j that are rectangular matrices, a generalization of Newton's divided difference interpolatory scheme is developed. Instances of P(x) at nodes x_i may be interpreted as slices of a digital 3D object. Mathematica code for this machinery is given and its effectiveness illustrated for progressively-transmitted renderings. Analysis, with supporting Mathematica code, is extended to a piecewise matrix polynomial situation, to produce practicable software for a PC-based computational system. Two experiments about 3D progressive imaging, employing a 6 Mbyte data base consisting of 93 CT slices of a human head, are discussed along with PC-based performance evaluation. How a 3D object is decomposed into 2D subsets in preparation for progressive transmission, as well as their selected ordering for transmission, are seen to affect quality of the emerging reconstructions. Extension to 4D objects is also discussed briefly, to provide introduction to, for example, application of matrix polynomial machinery within the field of functional magnetic resonance imaging.
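A minimal Python sketch of the divided-difference machinery with matrix-valued coefficients (the paper's own implementation is in Mathematica; this is an illustrative translation in which small NumPy arrays stand in for image slices):

```python
import numpy as np

def matrix_divided_differences(xs, slices):
    """Newton divided-difference coefficients for matrix-valued data:
    slices[i] is the (rectangular) matrix observed at the scalar node xs[i]."""
    coeffs = [np.asarray(s, dtype=float) for s in slices]
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - level])
    return coeffs

def evaluate_newton(xs, coeffs, x):
    """Evaluate the matrix polynomial in Newton form at the scalar x."""
    result = coeffs[-1].copy()
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Example: three 2x3 "slices" at nodes 0, 1, 2 (entrywise values of x^2)
xs = [0.0, 1.0, 2.0]
slices = [np.zeros((2, 3)), np.ones((2, 3)), 4.0 * np.ones((2, 3))]
coeffs = matrix_divided_differences(xs, slices)
print(evaluate_newton(xs, coeffs, 1.5))   # interpolates between the given slices
```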
 
The problems of wave diffraction by a plane angular screen occupying an infinite 45 degree wedge sector, with Dirichlet and/or Neumann boundary conditions, are studied in Bessel potential spaces. Existence and uniqueness results are proved in such a framework. The solutions are provided for the spaces under consideration, and higher regularity of the solutions is also obtained in a scale of Bessel potential spaces.
 
We present generalized Rogers-Ramanujan identities which relate the Fermi and Bose forms of all the characters of the superconformal model SM(2, 4ν). In particular, we show that to each bosonic form of the character there is an infinite family of distinct fermionic q-series representations.
 
Here we develop the Delta Fractional Calculus on Time Scales. We then produce related integral inequalities of Poincaré, Sobolev, Opial, Ostrowski and Hilbert–Pachpatte types. Finally, we give applications of these inequalities on the time scale R.
 
A general scheme for parallel simulation of individual-based, structured population models is proposed. Algorithms are developed to simulate such models in a parallel computing environment. The simulation model consists of an individual model and a population model that incorporates the individual dynamics. The individual model is a continuous time representation of organism life history for growth with discrete allocations for reproductive processes. The population model is a continuous time simulation of a nonlinear partial differential equation of extended McKendrick-von Foerster type. As a prototypical example, we show that a specific individual-based, physiologically structured model for Daphnia populations is well suited for parallelization, and significant speed-ups can be obtained by using efficient algorithms developed along our general scheme. Because the parallel algorithms are applicable to generic structured populations, which are the foundation for populations in a more complex community or food-web model, parallel computation appears to be a valuable tool for ecological modeling and simulation.
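The population-level equation referred to is a size-structured balance law of McKendrick-von Foerster type; a generic form with a renewal boundary condition is shown below (the paper's extended version couples the growth, mortality and fecundity rates to the individual model, and the notation here is assumed):

```latex
% Size-structured McKendrick-von Foerster equation (generic form):
% u(x,t) population density, g growth rate, \mu mortality, \beta fecundity;
% the boundary condition supplies newborn individuals at the minimum size x_0.
\frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\bigl( g(x,t)\, u \bigr)
  = -\,\mu(x,t)\, u ,
\qquad
g(x_0, t)\, u(x_0, t) = \int_{x_0}^{x_{\max}} \beta(x, t)\, u(x, t)\, dx .
```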
 
Air quality leading up to the compressor face of a fighter aircraft has a considerable effect on engine performance. A deficiency in the quality could lead to flutter or stall in the engines. In this study, two statistical methods, the Taguchi Method (TM) and the Analysis of Variance (ANOVA), are used to evaluate airflow quality through the intake during fighter aircraft maneuvers. The three factors associated directly with aircraft maneuverability are the Mach number (M), the angle of incidence (α) and the angle of sideslip (β). Desirable air quality can be described as having high pressure recoveries as well as low distortion at the Aerodynamic Interface Plane (AIP). The intake studied is the port-side F-5E duct. Results show that an increase in the Mach number affects the streamwise diffusion of the fluid more than changing the angles of attack and sideslip does, resulting in lower pressure recovery. The secondary flow formation in the streamwise direction is unable to dissipate and increases in strength with increasing Mach number. The curvature in the z-axis is more pronounced than that in the x-axis, leading to more adverse pressure gradients and hence greater secondary flow strength. This results in a more distorted flow reaching the AIP. This observation is in tandem with the DC(60) values obtained. The F-5E Taguchi Method results show that the Mach number had the greatest effect on pressure recovery, while the angle of attack affected distortion most considerably. Results from ANOVA show that Factors A, B and C and Interactions AC and BC affect the distortion of the airflow; however, Factor B, the angle of attack, affects this distortion most significantly.
 
Millimeter-wave (MMW) systems are high frequency wireless systems with a center frequency of around 60 GHz. In this article we propose an adaptive channel–superframe allocation (ACSA) scheme for such a system and evaluate its throughput and delay performance. The ACSA algorithm is designed to serve real-time (RT) and non-real-time (NRT) flows separately in different channels instead of serving them at different times. We also propose to change the sliced superframe of IEEE 802.15.3 to an adaptive unsliced superframe in order to decrease the TCP round-trip time. A comparison of the IEEE 802.15.3 MAC with the ACSA MAC shows that throughput and delay can be improved with the ACSA MAC. We observed significant improvement in the throughput of NRT flows via the better distribution of bandwidth in the ACSA MAC. The channel access delay is also improved by providing an unsliced superframe. In brief, the simulation results support the analysis of the proposed adaptive channel–superframe allocation algorithm, which can generally improve the quality of service (QoS) in MMW systems.
 
According to previous work, the performance of the Distributed Coordination Function (DCF) (i.e., the basic access method of the IEEE 802.11 protocol) is far from optimal due to the use of the binary exponential backoff (BEB) scheme as its collision avoidance mechanism. There has been considerable discussion of DCF issues and its performance analysis. However, most schemes assume an ideal channel, which is contrary to realistic wireless environments. In this paper, we present a simple yet pragmatic distributed algorithm, designated the density based access method (DBM), which allows stations to dynamically optimize the network throughput based on run-time measurements of the channel status. Our simulation results demonstrate that the DBM is highly accurate. The performance in terms of throughput and fairness is nearly optimal when the proposed scheme is used.
 
In this paper, a deterministic inventory model with time-dependent backlogging rate is developed. The demand rate is a power function of the on-hand inventory down to a certain stock level, at which the demand rate becomes a constant. We prove that the optimal replenishment policy not only exists but also is unique. Furthermore, we provide simple solution procedures for finding the maximum total profit per unit time. Numerical examples have also been given to illustrate the model and we conclude the paper with suggestions for possible future research.
 
In this article, we present a survey of important results related to the mean and median absolute deviations of a distribution, both denoted by MAD in the statistical modelling literature and hence creating some confusion. Some up-to-date published results, and some original ones of our own, are also included, along with discussions on several controversial issues.
 
A thermodynamic theory of thermoelastic bodies at cryogenic temperatures is developed in the framework of a gradient generalization of thermodynamics with internal state variables. Compatibility of the model equations with the second law of thermodynamics is investigated. Finally, the corresponding theory for rigid bodies, developed in a series of papers by Kosiński and coworkers, is obtained as a particular case of the present one.
 
Top-cited authors
Lester Ingber
  • Physical Studies Institute LLC
Thomas L. Saaty
  • University of Pittsburgh
Yves Cherruault
Ravi Agarwal
  • Texas A&M University - Kingsville
Hsu-Shih Shih
  • Tamkang University