Co-simulation techniques have evolved significantly over the last 10 years. System simulation and hardware-in-the-loop testing are used to develop innovative products in many industrial sectors. Despite the success of these simulation techniques, their efficient application requires a systematic approach; in practice, the integration and coupling of heterogeneous systems still require enormous effort. To date, no unified process for the integration and simulation of DCP-based co-simulation scenarios is available. In this article we present ProMECoS, a process model for efficient, standard-driven distributed co-simulation. It defines the tasks required to prepare, instantiate, and execute distributed co-simulations according to the DCP standard. Furthermore, it enables the exploitation of front-loading benefits, thus reducing the overall system development effort. ProMECoS is based on the IEEE 1730 standard for the Distributed Simulation Engineering and Execution Process. It adopts the artefacts of the DCP specification and defines additional process artefacts. The DCP specification and its associated default integration methodology were developed by a balanced consortium in the context of the ITEA 3 project ACOSAR. The DCP is compatible with the well-adopted FMI standard, so both standards can be used together for seamless development using models, software, and real components. ProMECoS provides the necessary guidance for efficient product development and testing.
Ride-hailing, a popular shared-transportation mode, operates in many areas around the world. Researchers have studied cases worldwide and debated whether ride-hailing is an effective travel mode for emission reduction, reaching different conclusions; the detailed emission performance of a ride-hailing system depends on the case. There is therefore a strong demand to reduce the overall pick-up distance during dispatch. In this study, we address this demand by proposing an optimization method, combined with a prediction model, that minimizes the global void cruising distance when solving the dispatch problem. Using one day of Didi ride-hailing data for simulation, we found that our method reduces the pick-up distance by 7.51% compared with a baseline greedy algorithm and shortens the average passenger waiting time by more than 4 minutes. The statistical results also show that the performance of the method is stable: in almost all cases the metric stays within a low interval. In a day-to-day comparison, we found that, despite the different spatio-temporal distributions of orders and drivers on different days, the performance of the method varies little. We also provide a temporal analysis of the changing pattern of void cruising distance and order volume on weekdays and weekends. Our findings show that, compared with the traditional greedy algorithm, our method reduces more void cruising distance on average when ride-hailing demand is high, and that it stably reduces void cruising distance by about 4000 to 5000 m per order throughout the day. We believe these findings provide deeper insight into the mechanism of ride-hailing systems and can contribute to further studies.
Human activities in urban areas put a strain on local water resources. This paper introduces a method to accurately simulate the stress that urban water demand in Germany puts on local resources at the single-building level, scalable to regional levels without loss of detail. The method integrates building geometry (CityGML), building physics, census, socio-economic, and meteorological information to provide a general approach to assessing water demand that also overcomes obstacles to data aggregation and processing imposed by data privacy guidelines. Three German counties were used as validation cases to demonstrate the feasibility of the approach: on average, per capita water demand and aggregated water demand deviate by less than 7% from real demand data. Scenarios applied to the case region of Ludwigsburg, Germany, which take into account an increase in the water price, population aging, and climate change, show that residential water demand changes by −2%, +7%, and −0.4%, respectively. Industrial water demand increases by 46% due to economic development, as indicated by GDP per capita. The rise in precipitation and temperature raises the water demand of non-residential buildings (excluding industry) by 1%.
Simulation of coupled carbon-climate dynamics requires representation of ocean carbon cycling, but the computational burden of simulating the dozens of prognostic tracers in state-of-the-art biogeochemistry ecosystem models can be prohibitive. We describe a six-tracer biogeochemistry module of steady-state phytoplankton and zooplankton dynamics in Biogeochemistry with Light, Iron, Nutrients and Gas (BLING version 2), with particular emphasis on enhancements relative to the previous version, and evaluate its implementation in the Geophysical Fluid Dynamics Laboratory's (GFDL) fourth-generation climate model (CM4.0) with a ¼° ocean. Major geographical and vertical patterns in chlorophyll, phosphorus, alkalinity, inorganic and organic carbon, and oxygen are well represented. Major biases in BLINGv2 include overly intensified production in high-productivity regions at the expense of productivity in the oligotrophic oceans, overly zonal structure in tropical phosphorus, and intensified hypoxia in the eastern ocean basins, as is typical in climate models. Overall, while BLINGv2 structural limitations prevent sophisticated application to plankton physiology, ecology, or biodiversity, its ability to represent major organic, inorganic, and solubility pumps makes it suitable for many coupled carbon-climate and biogeochemistry studies, including eddy interactions in the ocean interior. We further overview the biogeochemistry and circulation mechanisms that shape carbon uptake over the historical period. As an initial analysis of the model's historical and idealized response, we show that CM4.0 takes up slightly more anthropogenic carbon than previous models, in part due to enhanced ventilation in the absence of an eddy parameterization. The CM4.0 biogeochemistry response to CO2 doubling highlights a mix of large declines and moderate increases consistent with previous models.
The paper demonstrates how a system dynamics approach can support strategic planning of health care services and, in particular, help balance cost-effectiveness considerations with budget impact considerations when assessing a comprehensive package of stroke care interventions in Singapore. A population-level system dynamics model is used to investigate 12 intervention scenarios based on six stroke interventions (a public information campaign, thrombolysis, endovascular therapy, an acute stroke unit (ASU), out-of-hospital rehabilitation, and secondary prevention). Primary outcomes included cumulative discounted costs and quality-adjusted life years (QALYs) gained, as well as cumulative net monetary benefit by 2030. All intervention scenarios result in an increase in net monetary benefit by 2030; much of these gains are realized through improved post-acute care. The findings highlight the importance of coordination of care and affirm the economic value of current stroke interventions.
Crop models can be used to estimate yield, water requirements, and plant nutrition requirements under different conditions. This study examines the performance of the SSM-iCrop2 model in predicting tuber yield, phenological stages, and water requirement of potato (Solanum tuberosum L.) under changing climatic conditions in Iran. Potato growth, tuber yield, and water requirement were simulated with the SSM-iCrop2 model for cultivars commonly grown in Iran (Agria, Marfona, Sante, and Arinda). Data from field experiments in the major potato-producing provinces were used for parameterization and evaluation. Parameterization of the SSM-iCrop2 model identified two maturity groups (early and late maturing), with thermal unit requirements of 1100 and 1500 °C days, respectively, in the important potato-producing provinces. The model was evaluated against independent experimental data that were not used in the parameterization step. Observed tuber yield ranged from 2013 to 5902 g m−2 (average 3542 g m−2) and observed water requirement from 3523 to 8547 m3 ha−1 (average 6178 m3 ha−1). Simulated tuber yield ranged from 2489 to 5881 g m−2 (average 3607 g m−2) and simulated water requirement from 2200 to 7149 m3 ha−1 (average 5901 m3 ha−1). For simulated versus observed tuber yield, the correlation coefficient (r), root mean square error (RMSE), and coefficient of variation (CV) were 0.80, 543 g m−2, and 14%, respectively; for water requirement they were 0.85, 944 m3 ha−1, and 15%. The model can therefore be used to estimate potential tuber yield, yield gap, and the effects of climate change.
The development of multicore hardware has created new opportunities for application software, particularly for computation-intensive algorithms. This paper presents a parallel PO-Dijkstra algorithm for multicore platforms that splits and parallelizes the classical Dijkstra algorithm using the multi-threaded programming tool OpenMP. Experiments show that the PO-Dijkstra algorithm is significantly faster: depending on the number of nodes, completion time improves by 20–40%. On the improved heterogeneous dual-core simulator, the Dijkstra algorithm from MiBench is partitioned into tasks. For the G.72 encoding process, partitioning "by function" requires 34% fewer running cycles than partitioning "by data," while consuming only 83% of the latter's power in the same situation. Partitioning "by data" reduces the cost and difficulty of real-time temperature management, whereas partitioning "by function" is a good choice for streaming-media data. Because the Dijkstra workload has little data correlation, the simpler data-based partitioning achieves good results. From the simulation results and the analysis of real-time power consumption, we conclude that for strongly correlated data, such as streaming-media types, partitioning "by function" performs better, whereas for data types with weak correlation, partitioning "by data" is more effective.
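The abstract does not include the OpenMP source; the following is a hypothetical Python sketch of the "divide by data" idea it describes, where each worker owns a block of vertices, per-block minima are reduced to a global minimum, and the relaxation step is split across the same blocks (Python threads stand in for the paper's OpenMP threads, and all names are illustrative).

```python
# Hypothetical illustration (not the paper's OpenMP code): a data-partitioned
# Dijkstra in Python. Each worker owns one block of vertices; every iteration
# reduces the per-block minima to a global minimum and then relaxes each block
# in parallel, mirroring the "divide by data" strategy. Python threads stand in
# for OpenMP threads, so this shows structure rather than speedup.
from concurrent.futures import ThreadPoolExecutor
import math

def parallel_dijkstra(weights, source, n_workers=4):
    """weights[i][j] is the edge cost or math.inf; returns shortest distances."""
    n = len(weights)
    dist = [math.inf] * n
    dist[source] = 0.0
    visited = [False] * n
    step = math.ceil(n / n_workers)                      # static data partition
    blocks = [range(k * step, min((k + 1) * step, n)) for k in range(n_workers)]

    def local_min(block):
        # Each worker scans only its own block of unsettled vertices.
        best, best_d = -1, math.inf
        for v in block:
            if not visited[v] and dist[v] < best_d:
                best, best_d = v, dist[v]
        return best, best_d

    def relax(block, u):
        # Each worker relaxes the edges from u into its own block.
        for v in block:
            if not visited[v] and dist[u] + weights[u][v] < dist[v]:
                dist[v] = dist[u] + weights[u][v]

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n):
            # Reduction over the per-block minima gives the next settled vertex.
            u, d = min(pool.map(local_min, blocks), key=lambda p: p[1])
            if u < 0 or math.isinf(d):
                break
            visited[u] = True
            # Parallel relaxation; list() waits for all workers to finish.
            list(pool.map(relax, blocks, [u] * len(blocks)))
    return dist
```

For example, `parallel_dijkstra([[0, 2, math.inf], [2, 0, 1], [math.inf, 1, 0]], 0)` returns `[0, 2, 3]`. An OpenMP version would replace the thread pool with parallel loops and a reduction over the per-thread minima.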
The compressibility of the vapour–liquid phases is indispensable when simulating cavitating flow of liquid hydrogen or liquid nitrogen. In this paper, a numerical simulation method that considers compressibility and incorporates the thermal effects of cryogenic fluids was developed. The method consists of a compressible thermal cavitation model and the RNG k–ε turbulence model with modified turbulent eddy viscosity. The cavitation model is based on the Zwart–Gerber–Belamri (ZGB) model and couples heat transfer with the vapour–liquid two-phase equations of state. The model was validated on a cavitating hydrofoil and an ogive, and the results agreed well with the experimental data of Hord at NASA. The compressibility and thermal effects are correlated during the phase change process, and accounting for compressibility improves the accuracy of numerical simulations of cryogenic cavitating flow based on thermal effects. Moreover, the thermal effects delay or suppress the occurrence and development of cavitation. The proposed modified compressible ZGB (MCZGB) model can predict compressible cryogenic cavitating flow under various conditions.
Modern aerospace applications require lightweight materials with exceptionally high strength and stiffness. Carbon nanotube (CNT)-reinforced composites have great potential in addressing these requirements. However, one critical factor limiting the potential of CNT-reinforced composites is the limited load transfer capability between CNTs through a polymer matrix, which arises due to low CNT–polymer interfacial shear strength at a molecular scale. While molecular dynamics (MD) simulations can be employed to investigate the CNT–polymer interface, such simulations are computationally expensive. It is thus intractable to explore a sufficiently large design space for interface modifications and optimization using MD simulations alone, motivating the use of surrogate models to efficiently map CNT–polymer interfacial configurations to interfacial strength. In this paper, we develop a surrogate machine learning model trained with MD models of functionalized CNT-epoxy and the corresponding interfacial shear strength. The proposed ML model consists of (i) a feature representation of CNT-epoxy based on radial distribution functions; (ii) a convolutional neural network model to map feature representations to a metric of interfacial shear strength; and (iii) a data augmentation method to maximize the use of limited training data. The proposed ML model can predict the CNT pullout force quite well despite a large variation associated with the pullout force.
Objective
Simulating electronic health record data offers an opportunity to resolve the tension between data sharing and patient privacy. Recent techniques based on generative adversarial networks have shown promise but neglect the temporal aspect of healthcare. We introduce a generative framework for simulating the trajectory of patients’ diagnoses and measures to evaluate utility and privacy.
Materials and Methods
The framework simulates date-stamped diagnosis sequences based on a 2-stage process that 1) sequentially extracts temporal patterns from clinical visits and 2) generates synthetic data conditioned on the learned patterns. We designed 3 utility measures to characterize the extent to which the framework maintains feature correlations and temporal patterns in clinical events. We evaluated the framework with billing codes, represented as phenome-wide association study codes (phecodes), from over 500 000 Vanderbilt University Medical Center electronic health records. We further assessed the privacy risks based on membership inference and attribute disclosure attacks.
Results
The simulated temporal sequences exhibited similar characteristics to real sequences on the utility measures. Notably, diagnosis prediction models based on real versus synthetic temporal data exhibited an average relative difference in area under the ROC curve of 1.6%, with a standard deviation of 3.8%, for 1276 phecodes. Additionally, the relative differences in the mean occurrence age and in the time between visits were 4.9% and 4.2%, respectively. The privacy risks in the synthetic data with respect to membership and attribute inference were negligible.
Conclusion
This investigation indicates that temporal diagnosis code sequences can be simulated in a manner that provides utility and respects privacy.
Naturally ventilated buildings' performance relies on their design and climatic conditions. Thus, when investigating the natural ventilation potential (NVP), meteorological data play a crucial role. This study discusses NVP assessment under two approaches: a general evaluation using only meteorological data, and a specific investigation through building simulation. Both analyses included in-situ measured data and an open-access weather file. When compared, the climatic datasets presented some differences in the frequency of wind direction/speed values, yet they showed similarities in the hourly data, which were reflected in the general NVP analysis. A multi-zone Airflow Network model was employed for the specific NVP examination. Following a structured calibration process using measured data from an experimental campaign, the validated model presented a final total Goodness-of-Fit (GOF) of 2.98%. Subsequently, an annual building simulation was carried out with a hybrid operation mode. The number of occupied hours with natural ventilation and the heating loads showed the most significant differences depending on the weather file used and on the zone. A larger discrepancy in the thermal comfort evaluation was observed depending on the adaptive standard employed. The paper discusses some challenges faced in the NVP analyses and the calibration process, ending with suggestions on how to address these issues.
Shallow-water temperature is an essential factor in various ecosystems, particularly in paddy fields, as damage to paddy rice from high temperatures may severely affect grain yields. Here, a calculation scheme for paddy field water temperature is proposed that comprises three components: (1) a two-layer heat balance model used to calculate water temperature, in which the heat balance of the paddy water accounts for the effect of the vegetation layer; (2) a novel method of estimating the plant growth status parameter, calculated from thermal storage variations of the paddy water, to quantify the effect of the vegetation layer on water temperature; and (3) a series of correction methods for meteorological parameters to ensure that parameters measured far from the paddy field are suitable for the model. Estimates of the plant growth status parameter obtained from model validation showed that the proposed method generalizes across different weather conditions for the same rice cultivars. Water temperature calculations showed high consistency, yielding root mean square errors between measured and calculated water temperatures of 1.65 and 0.91 °C for the full observation period and the nighttime heatwave period, respectively. Hence, the proposed scheme simulates the thermal conditions of paddy fields using only common meteorological factors from existing weather stations and fundamental paddy parameters, which effectively reduces the cost of water temperature calculations.
In 2015 and 2016, nine chlorine release experiments took place at Dugway Proving Ground (Utah, USA). Five trials conducted in 2015 featured a grid of CONEX shipping containers around the release tank to simulate an idealized urban area. The trials provided a valuable opportunity to collect data describing the dynamics of "high molecular weight" species in an idealized urban environment.
This paper presents a comparison of the PMSS (Parallel Micro-Swift Micro-Spray) suite against the experimental data from four of the trials, following the selection made by an international model inter-comparison team. PMSS combines two MPI-parallelized urban modelling subsystems: (1) PSWIFT, which phenomenologically accounts for 3D wind flow and turbulence around buildings, and (2) PSPRAY, which simulates airborne contaminant dispersion in urban areas using a stochastic Lagrangian particle dispersion model.
For this comparison, PMSS is first deployed on a hypothetical configuration and compared to the Code_Saturne CFD model, and then applied to the data of trials 1, 5, 6 and 7 performed during JRII (Jack Rabbit II) in 2015 and 2016. The initial momentum due to the tank discharge and the dense gas and droplet effects of the pressurized chlorine release are modelled using a dedicated radial jet source term and the Glendening equations. In trials 1 and 5, the influence of the CONEX shipping containers on the wind flow and the dispersion is explicitly considered. Although spread over a large interval, the chlorine concentrations are computed both in the near field and in the far field (up to 11 km) using the PMSS nesting capabilities. The results of the model are compared with the experimental data, and sensitivity tests are also presented.
This study explores the pattern and formation mechanism of tourists' spatiotemporal behaviour through modelling, which is crucial for tourism transportation planning and management. The tourism utility maximization principle and the tourism demand spillover effect are introduced to explain personal spatiotemporal behaviour. Based on a mathematical description of agent behaviour and the simulation environment, an Agent-Based Tourist Travel Simulation Model (ABTTSM) is systematically established, including an evaluation of the impact of a new high-speed rail operation in a region of high tourist attraction. Novel spatiotemporal data from social media are employed to test the simulation results. It is found that the transfer probability matrices of the simulation results and the social media data are highly correlated and, as a consequence, the tourism circle division is almost identical. This means the ABTTSM can effectively simulate tourists' spatiotemporal behaviour and be applied in the planning and management of tourism and transportation.
Testing biochemical data by simulation
Can a bacterial cell model vet large datasets from disparate sources? Macklin et al. explored whether a comprehensive mathematical model can be used to verify or find conflicts in the massive amounts of data that have been reported for the bacterium Escherichia coli, produced in thousands of papers from hundreds of labs. Although most data were consistent, some could not be reconciled with known biological results, such as insufficient output of RNA polymerases and ribosomes to produce the measured cell-doubling times. Other analyses showed that for some essential proteins, no RNA may be transcribed or translated in a cell's lifetime, yet viability can be maintained without certain enzymes through a pool of stable metabolites produced earlier.
Science, this issue p. eaav3751
In early architectural design and in energy-saving retrofits of buildings, typical meteorological year (TMY) data are essential for building performance analysis. The accuracy of the TMY data directly affects the energy simulation results. In order to obtain accurate TMY data for construction engineering applications, this paper does the following: (1) reviews the progress of typical meteorological year research; (2) compares several TMY generation methods using the same raw weather data and discusses their applicability for cities in the typical climate zones of China. Based on the most recent and comprehensive first-hand ground observation data, three methods are used to generate TMYs for typical cities covering the climate zones of China. Results show that different TMY generation methods are applicable in different regions and, based on building energy consumption simulations, the comprehensive method has clear advantages. This indicates that in a country with diverse regional climate characteristics such as China, a single method for selecting the TMY is not sufficient, and comprehensive methods should be adopted to meet the needs of different building types and energy consumption simulations.
Field operation and preparation (FO&P) processes in the outages of nuclear power plants (NPPs) involve tedious team coordination. This study proposes a predictive NPP outage control method based on computer vision and data-driven simulation. The proposed approach aims at automatically detecting abnormal human/team behaviors and predicting delays during outages. Abnormal human/team behaviors, such as prolonged task completion and long waiting times, can induce delays. Capturing these field anomalies in a timely manner and precisely predicting delays are critical for guiding schedule updates during outages. Current outage control relies heavily on manual observations and experience-based field adjustments, which require extensive management effort. Real-time field videos that capture abnormal human/team behaviors could provide information for supporting the prognosis of abnormal FO&P processes. However, manual video analysis can hardly provide timely information for diagnosing delays. Previous studies show the potential of using real-time videos for capturing field anomalies, but they fell short in examining automatic video analysis in compact work environments with significant occlusions. In addition, few studies have revealed how the captured field anomalies trigger delays during outages.
Computer vision techniques have the potential to automate field video analysis and the detection of prolonged task completions and long waiting times. This paper aims at automating the integrated use of 1) real-time computer vision and spatial analysis algorithms and 2) data-driven simulations of FO&P processes to support predictive outage control. The authors first use a video-based human tracking algorithm to detect human/team behaviors from field videos. They then formalize detailed human-task-workspace interactions to establish a simulation model of FO&P processes during outages. The simulation model takes the field anomalies captured from videos as inputs to adjust model parameters and achieve reliable predictions of workflow delays. Major observations show that 1) task delays often occur at the initial stage of the workflow, and 2) waiting lines accumulate due to excessive resource sharing during handoffs at the middle stage of the workflow. The simulation results show that tasks on the critical path are more sensitive to these anomalies and cause up to 5.53% delays against the as-planned schedule.
Background
Two common issues may arise in certain population-based breast cancer (BC) survival studies: I) missing values in a predictive variable for survival, such as "Stage" at diagnosis, and II) small sample size due to the class imbalance problem in certain subsets of patients, demanding data modeling/simulation methods.
Methods
We present a procedure, ModGraProDep, based on graphical modeling (GM) of a dataset to overcome these two issues. The performance of the models derived from ModGraProDep is compared with a set of frequently used classification and machine learning algorithms (Missing Data Problem) and with oversampling algorithms (Synthetic Data Simulation). For the Missing Data Problem we assessed two scenarios: missing completely at random (MCAR) and missing not at random (MNAR). Two validated BC datasets provided by the cancer registries of Girona and Tarragona (northeastern Spain) were used.
Results
In both MCAR and MNAR scenarios, all models showed poorer prediction performance compared to three GM models: the saturated one (GM.SAT) and two with penalty factors on the partial likelihood (GM.K1 and GM.TEST). However, GM.SAT predictions could lead to unreliable conclusions in BC survival analysis. Simulating a "synthetic" dataset derived from GM.SAT could be the worst strategy, but using the remaining GM models could be better than oversampling.
Conclusion
Our results suggest the use of the presented GM procedure for one-variable imputation/prediction of missing data and for simulating "synthetic" BC survival datasets. The "synthetic" datasets derived from GMs could also be used in clinical applications of cancer survival data, such as predictive risk analysis.
In the domain of optimal control for building HVAC systems, the performance of model-based control has been widely investigated and validated. However, the performance of model-based control depends heavily on an accurate system performance model and sufficient sensors, which are difficult to obtain for certain buildings. To tackle this problem, a model-free optimal control method based on reinforcement learning is proposed to control the building cooling water system. In the proposed method, the wet bulb temperature and system cooling load are taken as the states, the frequencies of fans and pumps are the actions, and the reward is the system COP (i.e., the comprehensive COP of chillers, cooling water pumps, and cooling towers). The proposed method is based on Q-learning. Validated with measured data from a real central chilled water system, a three-month measured data-based simulation is conducted under the supervision of four types of controllers: a basic controller, a local feedback controller, a model-based controller, and the proposed model-free controller. Compared with the basic controller, the model-free controller conserves 11% of the system energy in the first applied cooling season, which is greater than that of the local feedback controller (7%) but less than that of the model-based controller (14%). Moreover, the energy saving rate of the model-free controller reaches 12% in the second applied cooling season, after which the energy saving rate stabilizes. Although the energy conservation performance of the model-free controller is inferior to that of the model-based controller, the model-free controller requires less a priori knowledge and fewer sensors, which makes it promising for buildings where the lack of accurate system performance models or sensors is an obstacle. Moreover, the results suggest that for a central chilled water system with a designed peak cooling load close to 2000 kW, three months of learning during the cooling season is sufficient to develop a model-free controller with acceptable performance.
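As a rough illustration of the model-free control loop described above (not the authors' code; the state bins, frequency grid, and hyperparameters are assumptions), a tabular Q-learning sketch with the wet-bulb temperature and cooling load as the state, fan/pump frequency pairs as actions, and the measured system COP as the reward might look like this:

```python
# Illustrative sketch only (state bins, frequency grid, and hyperparameters are
# assumptions, not the paper's settings): tabular Q-learning for a cooling
# water system. State = discretized (wet-bulb temperature, cooling load),
# action = (fan frequency, pump frequency), reward = measured system COP.
import random
from collections import defaultdict

N_WB_BINS, N_LOAD_BINS = 10, 10                 # state discretization (assumed)
FAN_FREQS = [30, 35, 40, 45, 50]                # Hz, assumed action grid
PUMP_FREQS = [30, 35, 40, 45, 50]
ACTIONS = [(f, p) for f in FAN_FREQS for p in PUMP_FREQS]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1           # learning rate, discount, exploration

Q = defaultdict(float)                          # Q[(state, action)] -> value

def discretize(wet_bulb, load, wb_range=(15.0, 30.0), load_range=(0.0, 2000.0)):
    """Map wet-bulb temperature (deg C) and cooling load (kW) to a discrete state."""
    wb_bin = int((wet_bulb - wb_range[0]) / (wb_range[1] - wb_range[0]) * N_WB_BINS)
    load_bin = int((load - load_range[0]) / (load_range[1] - load_range[0]) * N_LOAD_BINS)
    return (min(max(wb_bin, 0), N_WB_BINS - 1), min(max(load_bin, 0), N_LOAD_BINS - 1))

def choose_action(state):
    """Epsilon-greedy selection of a fan/pump frequency pair."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step; the reward is the observed system COP."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In such a loop, each control interval would measure the wet-bulb temperature, cooling load, and resulting COP, call `update` for the previous state-action pair, and then apply `choose_action` to set the next fan and pump frequencies.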
The freud Python package is a library for analyzing simulation data. Written with modern simulation and data analysis workflows in mind, freud provides a Python interface to fast, parallelized C++ routines that run efficiently on laptops, workstations, and supercomputing clusters. The package provides the core tools for finding particle neighbors in periodic systems, and offers a uniform API to a wide variety of methods implemented using these tools. As such, freud users can access standard methods such as the radial distribution function as well as newer, more specialized methods such as the potential of mean force and torque and local crystal environment analysis with equal ease. Rather than providing its own trajectory data structure, freud operates either directly on NumPy arrays or on trajectory data structures provided by other Python packages. This design allows freud to transparently interface with many trajectory file formats by leveraging the file parsing abilities of other trajectory management tools. By remaining agnostic to its data source, freud is suitable for analyzing any particle simulation, regardless of the original data representation or simulation method. When used for on-the-fly analysis in conjunction with scriptable simulation software such as HOOMD-blue, freud enables smart simulations that adapt to the current state of the system, allowing users to study phenomena such as nucleation and growth.
Program summary
Program Title: freud
Program Files doi: http://dx.doi.org/10.17632/v7wmv9xcct.1
Licensing provisions: BSD 3-Clause
Programming language: Python, C++
Nature of problem: Simulations of coarse-grained, nano-scale, and colloidal particle systems typically require analyses specialized to a particular system. Certain more standardized techniques – including correlation functions, order parameters, and clustering – are computationally intensive tasks that must be carefully implemented to scale to the larger systems common in modern simulations.
Solution method: freud performs a wide variety of particle system analyses, offering a Python API that interfaces with many other tools in computational molecular sciences via NumPy array inputs and outputs. The algorithms in freud leverage parallelized C++ to scale to large systems and enable real-time analysis. The library’s broad set of features encode few assumptions compared to other analysis packages, enabling analysis of a broader class of data ranging from biomolecular simulations to colloidal experiments.
Additional comments including restrictions and unusual features:
1. freud provides very fast, parallel implementations of standard analysis methods like RDFs and correlation functions.
2. freud includes the reference implementation for the potential of mean force and torque (PMFT).
3. freud provides various novel methods for characterizing particle environments, including the calculation of descriptors useful for machine learning.
The source code is hosted on GitHub (https://github.com/glotzerlab/freud), and documentation is available online (https://freud.readthedocs.io/). The package may be installed via pip install freud-analysis or conda install -c conda-forge freud.
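As a brief usage sketch of the API described above (the box size, particle positions, and histogram parameters are arbitrary illustrations), a radial distribution function can be computed directly from NumPy arrays:

```python
# Minimal usage sketch (synthetic data, arbitrary histogram parameters; assumes
# the freud 2.x API described above): compute a radial distribution function
# directly from NumPy arrays.
import numpy as np
import freud

# A cubic periodic box and random positions standing in for one frame.
box = freud.box.Box.cube(10.0)
rng = np.random.default_rng(0)
points = rng.uniform(-5.0, 5.0, size=(1000, 3))

# Accumulate g(r) with 100 bins out to r_max = 4.0.
rdf = freud.density.RDF(bins=100, r_max=4.0)
rdf.compute(system=(box, points))

print(rdf.bin_centers[:5])  # histogram bin centers
print(rdf.rdf[:5])          # corresponding g(r) values
```

Because compute accepts a plain (box, points) pair, the same call works whether the frame comes from a trajectory reader or is generated on the fly during a simulation.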
Understanding how information systems (IS) architecture evolves and what outcomes can be expected from the evolution of IS architecture presents a considerable challenge for both research and practice. The evolution of IS architecture is marked by management’s efforts to keep local and short-term IS investments in line with enterprise-wide and long-term objectives, so they often employ coercive mechanisms to enforce enterprise-wide considerations on local actors. However, an organization is shaped by a multitude of heterogeneous local actors’ actions that pursue their own, sometimes conflicting, goals, norms, and values. This study offers a theory-informed simulation model that explores how IS architecture evolves and with what outcomes in various types of organizations. The simulation model is informed by institutional theory to capture various types of organizations that are characterized by different combinations of coercive, normative, and mimetic pressures, and by complex adaptive systems theory to capture the emergent character of IS architecture’s evolution. First, we outline the insights from simulation experiments. Then, building on the simulation model and theoretical insights, we discuss implications for both research and practice.
Gerotors are compact, inexpensive, and robust pumps generally used in hydraulic systems. Owing to their characteristic operation at low pressure (up to 30 bar), these units can be modelled using relatively simple tools considering only their fluid dynamic aspects. However, some commercially available Gerotors operate at higher pressures, so material deformation effects cannot be neglected under such conditions. This paper presents a comprehensive simulation approach for Gerotor units that considers the fluid–structure interaction effects at the lateral lubricating gaps and at the contacts between the rotors. The numerical tool consists of different submodels that allow the evaluation of the fluid dynamic features of the flow through the pump considering the micro-motions of the rotors. This paper particularly details the novel aspects of the simulation approach, namely the introduction of submodels that account for fluid–structure interaction effects: the lateral lubricating gap models and the rotor contact models, both based on the solution of the Reynolds equation considering the deformation of the solid parts. For model validation, a commercially available pump able to operate above 100 bar was tested at the authors' research center. The comparison between the simulation results and the experimental data clearly shows the good accuracy of the model in terms of volumetric flow and port pressure pulsation. In particular, the model can predict both the volumetric and the torque efficiency of the unit. Furthermore, the simulation results permit interpretation of the reasons for the instances of wear present in the unit.
During Parker Solar Probe's first orbit, the solar wind plasma was observed in situ closer to the Sun than ever before, with the perihelion on 2018 November 6 revealing a flow that is constantly permeated by large-amplitude Alfvénic fluctuations. These include radial magnetic field reversals, or switchbacks, that seem to be a persistent feature of the young solar wind. The measurements also reveal a very strong, unexpected azimuthal velocity component. In this work, we numerically model the solar corona during this first encounter, solving the MHD equations and accounting for Alfvén wave transport and dissipation. We find that the large-scale plasma parameters are well reproduced, allowing the solar wind sources at the Probe to be computed with confidence. We attempt to understand the dynamical nature of the solar wind in order to explain the amplitudes of both the observed radial magnetic field and the azimuthal velocities.
Understanding the hierarchical importance of the factors that influence sugarcane yield can support its modeling, thus contributing to the optimization of agricultural planning and crop yield estimates. The objectives of this study were to identify and rank the main variables that condition sugarcane yield according to their relative importance, as well as to develop mathematical models for predicting sugarcane yield using data mining (DM) techniques. Three DM techniques were applied to the analysis of databases from several sugar mills in the state of São Paulo, Brazil. Meteorological and crop management variables were analyzed using the following DM techniques: random forest, boosting, and support vector machine, and the resulting models were tested by comparison with an independent data set. Finally, the predictive performance of these models was compared with that of a simple agrometeorological model applied to the same data set. The results show that, among all the variables assessed, the number of cuts was the most important factor in all DM techniques. The comparison between the observed yields and those estimated by the DM models resulted in a root mean square error (RMSE) ranging between 19.70 and 20.03 t ha⁻¹, which is much better than the performance of the Agroecological Zone Model, which presented RMSE ≈ 34 t ha⁻¹.
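The study's datasets and exact model settings are not given in the abstract; the following hypothetical scikit-learn sketch (placeholder data, features, and hyperparameters) illustrates the kind of comparison described, fitting the three DM techniques and scoring them by RMSE on an independent test set:

```python
# Hypothetical sketch (placeholder data, features, and hyperparameters, not the
# study's dataset): fit the three data mining techniques named above and score
# them by RMSE on an independent test set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
# Placeholder predictors: e.g. number of cuts plus meteorological/management variables.
X = rng.normal(size=(500, 6))
y = 80.0 - 8.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(scale=5.0, size=500)  # yield, t/ha

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "boosting": GradientBoostingRegressor(random_state=0),
    "support vector machine": SVR(C=10.0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f} t/ha")
```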
A model simulation study on effects of intervention measures in Wuhan COVID-19 epidemic