Science topic
Model Development - Science topic
Explore the latest questions and answers in Model Development, and find Model Development experts.
Questions related to Model Development
A cutoff frequency in the high-frequency range (microwaves, etc.) is usually defined in a straightforward way for band-pass, low-pass, and high-pass components when they can be modeled with a circuit. On the other hand, when testing materials by means of antennas, or trying to obtain an effective value for the permittivity and permeability of a component, the experimental results must be processed starting from the S-parameters to obtain the effective quantities. The models developed by Nicolson-Ross-Weir and re-elaborated at NIST, or by technology providers like Rohde & Schwarz and Keysight in their instruments, make use of a known cutoff value derived directly from the device under test (DUT) or from a material under test (MUT), which are distributed structures. How can this value be determined before using the formulas in both situations, namely DUT and MUT?
Can I get updated allometric models developed for Ethiopia to estimate above-ground and below-ground biomass?
In a model I developed, most of the variable estimates are insignificant at the 95% confidence level.
Where could I find Python script examples, or information in general, on how heterogeneous reactor kinetic models have been developed? In particular, I am trying to replicate a theoretical model describing the catalytic process of biogas dry reforming.
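As a hedged illustration of what such a script might look like (the rate expression and all parameter values below are hypothetical placeholders, not taken from any published dry-reforming study), a Langmuir-Hinshelwood-type rate law integrated with forward Euler could be sketched as:

```python
# Illustrative sketch: a Langmuir-Hinshelwood-type rate for CH4 dry
# reforming (CH4 + CO2 -> 2 CO + 2 H2), integrated with forward Euler.
# All parameter values are hypothetical placeholders, not fitted data.

def rate(p_ch4, p_co2, k=1.0e-3, K_ch4=0.5, K_co2=0.8):
    """Reaction rate as a function of partial pressures (bar)."""
    denom = (1.0 + K_ch4 * p_ch4 + K_co2 * p_co2) ** 2
    return k * K_ch4 * K_co2 * p_ch4 * p_co2 / denom

def simulate(p_ch4=1.0, p_co2=1.0, w_cat=1.0, dt=1.0, steps=5000):
    """Integrate partial-pressure decay in a constant-volume batch system."""
    for _ in range(steps):
        r = rate(p_ch4, p_co2)
        p_ch4 = max(p_ch4 - r * w_cat * dt, 0.0)  # equimolar consumption
        p_co2 = max(p_co2 - r * w_cat * dt, 0.0)
    return p_ch4, p_co2

p_ch4_end, p_co2_end = simulate()
print(p_ch4_end, p_co2_end)
```

A real replication would swap in the rate law and kinetic constants reported in the paper being reproduced, and typically a stiff ODE integrator (e.g. `scipy.integrate.solve_ivp`) rather than forward Euler.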
I have developed a logistic regression based prognostic model in Stata. Is there any way to develop an app using this logistic regression equation (from Stata)?
Most of the resources I found require me to develop the model from scratch in Python/R and then develop the app using streamlit/Shiny etc.
However, I am looking for a resource where I could use the coefficients and intercept values from the Stata-based model rather than building the model from scratch in Python.
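One lightweight option is to skip refitting entirely: copy the intercept and coefficients from the Stata `logit` output into the app's code and evaluate the logistic equation directly. A minimal Python sketch (the variable names and coefficient values here are made-up placeholders standing in for your own Stata estimates):

```python
import math

# Hypothetical values copied from a Stata `logit` output table;
# replace with your own model's intercept and betas.
INTERCEPT = -2.3
BETAS = {"age": 0.04, "tumor_size": 0.8, "node_positive": 1.1}

def predict_risk(patient):
    """Predicted probability from the exported logistic equation."""
    lp = INTERCEPT + sum(BETAS[k] * patient[k] for k in BETAS)
    return 1.0 / (1.0 + math.exp(-lp))

p = predict_risk({"age": 60, "tumor_size": 2.5, "node_positive": 1})
print(round(p, 3))
```

A function like this can be dropped straight into a Streamlit or Shiny-for-Python app as the scoring step, so the model itself never has to be re-estimated outside Stata.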
Hey everybody!
I'm implementing a Bayesian negative binomial model using Stata 17. Because of some collinearity or convergence issues, I needed to put my variables in different blocks in the modeling process. Yet it is a bit confusing to choose the optimal number of blocks (and their exact sets of variables) for a model. Do you have any idea about this?
Apart from that, what criteria do you suggest (DIC, acceptance rate, efficiency, variable significance, etc.) for comparing models developed using various numbers of blocks?
I appreciate any help you can provide in advance.
Hi. I am struggling with a mouse model. I found that the subcutaneous tumors are extremely uneven in my model. I injected the same volume of tumor cell suspension (human cell line) s.c. into BALB/c nude mice, and the tumor sizes were uneven after 1-2 weeks. This has a great impact on the subsequent treatment. So I wonder, what is the best practice for subcutaneous tumor inoculation? Or are there any ways to prevent uneven tumor growth?
Thanks a lot!
Hello,
I was reading the following paper:
Oxygen transfer model development based on activated sludge and clean water in diffused aerated cylindrical tanks.
and I found that the authors calculated the Reynolds number and Froude number using the attached equations. They used the volumetric flow rate of the air, instead of the slip velocity of the two phases, to calculate the Reynolds and Froude numbers.
The second question is:
What is the significance of the Froude number in an aeration tank?
Thank you very much.
I am trying to establish a target forest revenue model using 21 years of past revenue data. A previously developed model tends to overestimate the figures, so I can discard the old model and use the new one, which is closer to and based on past generated revenue.
Hi!
I'm trying to develop a prediction model for a type of cancer. My end-point is cancer-specific survival. I have around 180 events, and an events-per-parameter analysis suggested that I can include about 9-10 parameters in the model.
Now, I have dozens of candidate variables, many of which are collinear and some are more truly independent from each other. Some variables are well known risk factors for cancer death, some are more novel but still merit more research.
In my field there is a well-established risk classification system that uses cancer histological grade, stage, lymph node status and tumor size. It classifies cases into 4 risk categories with increasing risk of death. These four variables have not previously been included in a survival model, and there is no published survival regression formula/function with beta coefficients and intercept for a model including them. Instead, the four risk categories are based mostly on clinical reasoning, expert opinion, experience, and the survival rates of the four groups.
My question: when developing my model, should I include these four variables as separate independent variables and add another 4-5 candidate variables that I want to investigate, or can/should I include them as a single composite four-tier categorical variable and thus save degrees of freedom to include more candidate variables? What are the pros and cons of each approach?
Hi,
I am developing deep learning model(s) for a binary classification problem. The model works with reasonable accuracy. Is there a reliable way to extract features from DL models built with the Keras pipeline? It seems that the feature contributions are distributed among several layers.
Thank You,
Partho
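In Keras, a common approach is to build a second model that shares the trained network's input but outputs an intermediate layer, e.g. `keras.Model(inputs=model.input, outputs=model.get_layer('dense_1').output)` (the layer name is an assumption; use your own layer's name), and call `predict` on it to get the feature matrix. The same idea in a self-contained numpy sketch, with random weights standing in for a trained network:

```python
import numpy as np

# Toy two-layer network; random weights stand in for a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def forward(x, return_hidden=False):
    """Forward pass; optionally return hidden activations as features."""
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU activations
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    return h if return_hidden else y

x = rng.normal(size=(3, 4))                      # batch of 3 samples
features = forward(x, return_hidden=True)        # (3, 8) feature matrix
print(features.shape)
```

The extracted penultimate-layer activations can then be fed to a simpler, more interpretable model, or analysed with attribution tools such as SHAP if per-input feature contributions are the real goal.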
I am looking for BL/6 mice to develop an HCC model. If anyone is already working on such a model, I need guidance from that PI.
Hi, fellows.
Have you heard of the SLIP (Shallow Landslides Instability Prediction) model? It is a simplified physical model developed by the University of Parma for evaluating the safety factor of slopes which are potentially at risk of a soil slip.
If you know it, could you share how to download this model?
In the process of energy model development I need the O&M costs of different transformers according to voltage level (i.e. EHV, HV, MV, LV). If anyone has some idea or knows of documents regarding this, please share them with me.
Thank you in advance for your help.
A PLAXIS user-defined code was written about 10 years ago by the Geo-Group at McGill in association with a research project related to modeling excavations in natural clays. The soil model takes into account the inherent and induced anisotropy, hardening, and bonding degradation effects under both loading and unloading stress conditions. The model was implemented into PLAXIS finite element program, successfully calibrated and used to simulate the construction of a tunnel in sensitive clay material.
Please let me know if you would be interested in incorporating the source code (written in Fortran 90) into PLAXIS to analyze a similar problem or modify the code to fit your own needs.
A version of the source code along with the associated paper are now posted on RG at:
I am a beginner with the DSSAT model and am getting deeply interested in it. Most DSSAT model users entitle their journal articles, book chapters, proceedings, etc. using the phrase "Modelling of...". To the best of my knowledge, this is wrong because they are not modellers (model developers) but model users (secondary). Therefore, they should have begun with "Simulation of..." or "Model simulation of..." etc. This is becoming a trend (knowingly or unknowingly) in most cases and is confusing young researchers in the modelling field. Am I right?
I have some questions about the main principles of developing drug-resistant cell models.
After determining the IC50 (whatever cytotoxicity assay is used), I know there are two schools: those who prefer a dose escalation of anti-cancer drug concentrations, which takes a lot of time (increasing the drug concentration gradually);
and those who prefer to start with a high dose (higher than the IC50) and then keep changing to drug-free medium every 1-2 weeks.
So how can one be sure that during these weeks (with drug-free medium) the cells won't lose the resistance they developed? In other words, why don't we keep adding drug to the cells every week - so as not to kill them all, or just to mimic what happens in the clinic?
In drug-resistant model development, is there a transient vs. a stable phenotype of drug resistance? How can we really distinguish them?
Also, are there any methods other than the ones I mentioned?
Thank you!
The Boundary Equilibrium Generative Adversarial Network (BEGAN) is supposed to have partially controlled the diversity of the generator by influencing the discriminator. Is there any model developed since then with a better capability to control the generator's diversity?
I invite you all to give your views on the following finding
Experiment Findings Conceptual HydroGIS Model development framework
Dear Scholars,
I am working on my PhD at the moment, and part of my model development involves the Elaboration Likelihood Model (ELM). I have gone through several research articles but have been unable to find a solid one that critically discusses the use of the ELM in advertising. If anyone happens to come across one, could you please share it with me?
Regards
For instance, in a situation where we are measuring the motivations behind ratings and reviews of products on e-commerce portals, the normative beliefs may not be important at all (my presumption), since it is an individual act on one's internet device (cell phone, laptop or desktop) in which external/social influence may play no role at all. So, in such a scenario, can we test only behavioral beliefs in a model based on the TRA? What would be the justification for omitting the normative beliefs from the model altogether? Please share any relevant citations.
Although I have successfully managed to run a simulation of a model already developed in OpenModelica from within the MATLAB environment, it is very challenging to do the same with a model developed using another library such as SolarTherm. I was wondering whether someone has already done this and would like to share his/her experience in this field.
Best regards
Navid
#OpenModelica #MATLAB #SolarTherm
1) Check the job diagnostics.
Open the odb and select Tools > Job Diagnostics. Job diagnostics gives all warnings and errors, as well as residual and contact information. One of the most useful features is the "highlight selection in viewport" check box. In the warnings tab, the user can see the location of numerical singularities and zero pivots (if applicable), which may give an idea of what causes these warnings. In the residuals tab, the node with the largest residual can be visualised. Looking at this node for the iteration in which convergence difficulties arise often shows the region of the model that is causing problems. Is anything unexpected happening in this region? In the contact tab, the locations of the maximum contact force error and the maximum penetration error can be viewed. If contact is causing the problems, this will likely show where.
2) Pay attention to warning messages.
Look at when the warning is issued and whether it is likely to point towards the problem. For example, if the solver tries a first attempt with a big increment and gives a warning related to a negative eigenvalue and then cuts back the time increment and obtains convergence in the next increments without any difficulties or warnings, it is likely that the warning was simply a consequence of trying a too big time step. If the warning message repeats itself and repeated cut-backs occur, it may indicate a stability issue (see point 6).
Some warnings are very specific, others can occur with different underlying causes and require more experience to work out the problem.
3) Check boundary conditions
One cause of non-convergence is inadequate boundary conditions. Unreasonable boundary conditions can lead to local extreme deformations. A model can also be over or under constrained. With an under constraint, not all rigid body motion is suppressed, leading to one or more degrees of freedom with zero stiffness and usually zero-pivot warnings. Over constraints also tend to cause zero-pivot warnings. Though Abaqus checks for over constraints and tries to solve them, this is not always possible, for example if the over constraint starts occurring after some time due to contact. It is recommended to check all warning messages related to over constraints. Do not assume Abaqus will correctly resolve the over constraint, but correctly define the constraint yourself. Also, look at the location of zero-pivot warnings (are there over or under constraints there?).
4) Check contact
Contact is also a major contributor to convergence difficulties. Come to think of it, this is not so strange, as the onset of contact gives a discontinuity in the force-displacement relationship, which increases the difficulty of finding a solution with Newton’s method. That’s why Abaqus uses separate severe discontinuity iterations when contact is changing.
One possible source of contact non-convergence is the initial state of the contact. If a problem relies on the contact for stability and initially no contact is present, the simulation may have trouble starting. This is especially the case when load control is used: basically, a load is applied to something without stiffness, and rigid body motion can occur. (Initially) using displacement control to ensure contact occurs usually resolves the convergence issues. Abaqus also offers contact stabilization to help automatically control rigid body motion in static problems before contact.
This can be defined within contact controls, by using automatic stabilization. It is necessary to specify that the contact controls are to be used in the interaction definition. With automatic stabilization, damping is applied when the surfaces are close to each other but not in contact, so there is a resistance to displacement of the loaded part and rigid body motion is no longer possible. Because this is meant to allow surfaces to get into contact, the damping is ramped down, by default to 0, during the step in which it is applied. It is recommended to check if viscous dissipation is not too large, e.g. compare ALLSD to ALLIE. The techniques to resolve instabilities mentioned in point 6 can also be applied.
Another potential source of contact non-convergence is that no contact is defined for surfaces which are actually in contact, which can lead to unrealistic results, very large deformations and non-convergence. A self-contact can for example easily be overlooked. This normally does not happen when Abaqus’ powerful general contact is applied.
5) Check material definition
Convergence issues can occur when the stress in the material does not increase when the strain increases (the stiffness is not positive). This could happen when experimental data including damage are used to define the model, without including a damage model. Check the (maximal) stresses and strains in the model to see whether damage is expected to occur.
If Abaqus’ material fitting options for hyperelastic models are used, there may be limits to the stability of the material. By right-clicking on the material and selecting ‘evaluate’ it is possible to view the stability limits calculated by Abaqus.
When a plastic material model is used and the loading reaches the end of the defined curve, Abaqus extrapolates the curve with a horizontal line: the (plastic) strain can increase, but the stress does not (perfect plasticity). The stiffness is zero in this case. If this happens in a single element, often the simulation will run without problems. When large parts of the model undergo perfect plasticity it can become a problem. This often indicates the load is too much for the material.
6) Include damping to resolve instabilities
Possibly the most common cause of non-convergence is the presence of an instability. One of the principles of model development is that a model should not be more complex than necessary to describe the behaviour of interest. With this in mind, it seems reasonable to decrease the complexity of a model by assuming it behaves statically when the process is slow. Interestingly, however, this simplification can make the model more difficult to solve. In general, the behaviour of material under load is described by Newton's second law:
F = m × a (force equals mass times acceleration).
When static behaviour is assumed, the acceleration equals zero, so the sum of all forces must equal zero: there is force equilibrium. The static assumption is valid when the system moves from one equilibrium state to the next and all in between states are also in equilibrium. But is this always the case?
Take the example of two parts not initially in contact, under load control. Why is this situation possible in reality? Because the initial displacement of the loaded part will be determined by its inertia. The inertia, the effect we had simplified out, actually stabilizes the problem. Including some kind of inertia or damping effect can often help to obtain a converged solution. There are several methods to do this.
And if nothing works?
Try Explicit. Though the simulation may take long, in some extremely non-linear cases it is just not realistic to obtain a converged solution with Abaqus/Standard. With Abaqus/Explicit, at least you can be certain you won’t have any convergence issues.
And who knows? It may be more efficient to let your computer spend more time solving your actual problem, than to keep modifying the model hoping that this last change will be the trick that gets you to the end of the step.
Are you ready to throw in the towel and need help with your convergence challenge?
Hello, I am looking for a model (developed in COMSOL Multiphysics) for salt rejection using membranes/nanoparticles.
Thanks
As N-methyl-N-nitrosourea is a carcinogen used to induce mammary tumors in rats for breast cancer model development, and we have run out of the carcinogen, what could be a possible alternative, or what promoters could help us with tumor induction while imparting no other side effects?
I have a dataset showing the CTR degradation of a commercial 4N35 optocoupler under a gamma field. My aim is to find a data-driven/hybrid degradation model to which a particle filter will be applied for remaining useful life (RUL) prediction. How to go about this model development is my primary question. Other questions related to the same topic are:
a) Is it necessary to make a new model for each component, or can a parametric linear/exponential degradation model be used for estimation + RUL?
b) What data-driven methods are preferred, e.g. ARIMA?
Any pointers or links to literature will be highly appreciated.
Two PLS regression models are developed from a set of spectral calibration data. For each model, the RMSE of cross-validation (RMSECV) and the RMSE of prediction (RMSEP) are calculated. Which of the two models should finally be chosen (i.e. the model with the lowest RMSECV or the lowest RMSEP, or somewhere in between)?
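As a sketch of how the two figures are computed and why they can disagree (synthetic data, with ordinary least squares standing in for PLS), RMSECV measures internal consistency of the calibration set while RMSEP measures performance on truly unseen samples, which is usually the fairer basis for the final choice:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "spectra": 40 samples x 6 variables with a linear response.
X = rng.normal(size=(40, 6))
beta_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2, -0.1])
y = X @ beta_true + 0.1 * rng.normal(size=40)

X_cal, y_cal = X[:30], y[:30]          # calibration set
X_val, y_val = X[30:], y[30:]          # independent prediction set

def fit(Xm, ym):
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

# RMSECV: leave-one-out cross-validation on the calibration set.
errs = []
for i in range(len(y_cal)):
    mask = np.arange(len(y_cal)) != i
    b = fit(X_cal[mask], y_cal[mask])
    errs.append(y_cal[i] - X_cal[i] @ b)
rmsecv = np.sqrt(np.mean(np.square(errs)))

# RMSEP: error on the held-out set using the full-calibration fit.
b_full = fit(X_cal, y_cal)
rmsep = np.sqrt(np.mean((y_val - X_val @ b_full) ** 2))
print(rmsecv, rmsep)
```

If RMSEP is much larger than RMSECV, the model is likely overfitting the calibration set, which is itself useful diagnostic information when choosing between the two candidates.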
Can anyone suggest the number of urban air monitoring stations required for land-use regression model development?
Hi,
Can anybody please suggest whether there are open-access data for cyber security, especially for energy delivery systems or smart grid systems? Data from PMUs or testbeds would be very helpful. I want to use the data only for model development purposes in my PhD work.
Thanks,
Md Ariful Haque
PhD candidate,
Computational Modeling and Simulation Engineering,
Old Dominion University
I am going to build causal loop diagrams about obesity and participants are not English speakers (Arabic participants).
Can anyone suggest software that supports Arabic language?
THX
Hello,
Does anyone have the questionnaire for the Information Acceptance Model developed by Erkan and Evans (2016)? I'm trying to apply it to hotel booking intention and am struggling to develop a good questionnaire. Thank you!
I am working on a data mining project that will involve using a neural network to predict some atmospheric variables. The work involves big atmospheric data, and I am trying to adopt a data structure that will be suitable.
For descriptor-based QSAR model development, we need IC50 values. So my question is: how can we obtain or calculate the IC50 value for a hypothetical structure?
Dear all,
My research is on the influence of ICT on family functioning. I adopted a research model developed by Hertlein. In total, 3 main factors and 6 sub-factors (for 2 of the main factors, i.e. 2x3=6) are present in the model. But no standard questions were found for the factors mentioned in the model, so I developed the questionnaire for the factors myself. I did exploratory factor analysis and got the required result. When I proceeded to the SEM model, AMOS did not give an acceptable model fit. So I am unsure whether to use AMOS or SmartPLS, because in some books I read that SmartPLS is suitable when there is no established questionnaire. I wonder whether the lack of an established questionnaire is the reason for the poor fit in AMOS?
In my case, the relationships between the factors are present in the model; only the established questionnaires are not.
In my research I have one independent and 7 dependent variables, and I want to check the influence of the independent variable on the dependent variables.
So kindly suggest which software to proceed with: AMOS or SmartPLS?
Thanks in advance
Regards,
J Dinesh Kumar.
What are the main components for designing a Projectification process model for developing corporate entrepreneurship?
Dear researchers, what software tools are best for teaching mathematics (construction of dynamic models, development of spatial thinking, etc.)? What are the advantages and disadvantages of using GeoGebra? Please recommend some in your comments. Sincerely
In less developed (developing) countries, urban traffic offers only a small range of transport modes (that is, most transport mode alternatives are not well represented in the city).
In such a case, mode choice is not willingness-based but rather obligation-based.
So, is it realistic to build a transport mode choice model for urban traffic in developing countries?
I'm looking for the publication or specification document where I can learn about the main characteristics of the scenarios/models developed by the above-listed research centres. First of all, I mean the methodology and input data utilized in elaborating the climate change models.
I came up with this topic for my research proposal
I am using a Multi-Gene Genetic Programming model to develop an equation for a given problem. How can I determine the relative importance of each input parameter?
Innovation itself is not easy to measure but innovation in research, implementation and investment projects is measured instrumentally. The simplest method in the absence of advanced measurement models is the development of a scoring system based on the diagnosed determinants of innovation, for which point scales are created. In this way, innovation, i.e. a qualitative concept, acquires a quantitative dimension and can be quantified. The indicated quantitative dimension can be used to evaluate innovation as a subject of the transaction and a key factor of production in innovative startups and technology companies in which research works are carried out and new innovative technological, process, product and other solutions are created.
I invite you to the discussion
My research area is Entrepreneurship. I have a new research proposal on how Entrepreneurs can finance their projects through crowd funding. Literature search revealed that U.S.A is already adopting this novel finance model. Therefore, I need to collaborate with researcher(s) from U.S.A, Europe or other International organizations, for research grants/aids especially in area of data collection since I am comparing the model in developed and developing countries.
Urban lakes are among the most abused. They are natural preserves/reserves in the urban context, but the development of lakes is done for recreational purposes with little concern for natural processes. A more comprehensive approach would serve both the natural processes and the recreational aspect. Any suggestions of studies along this line of thought?
Please, can you tell me the coordinates (x, y and z from bregma) at which to inject the cells for orthotopic GBM model development using the U87 cell line?
I want to examine the relationship between A and B. I did some qualitative work and found that there are some variables that are worth examining as mediators and moderators. Indeed, examining these variables is my contribution. The problem is that the structure followed in my university is as follows:
1- intro
2- Q and O
3- Literature
4- Model development
5- Qual
6- Quan
Now, when I present the model in the "Model development" section, the moderators and mediators look as if they come from nowhere! I tried to mention that these variables were added based on the qualitative findings and that more explanation would be provided in the qualitative section. However, I got feedback suggesting that the structure was difficult to follow and rather confusing! Additionally, I found myself repeating some sections when discussing the mediators and moderators in both the Model Development section and the Qualitative section.
Any suggestions on how I can structure my thesis, please!
I'm developing a bio-optical model for water quality monitoring. As much as I would like to follow standard procedure, the availability of laboratory instruments and time and financial constraints have limited my model development to water quality parameters measured using an AAQ Rinko and water spectral reflectance data measured using Ocean Optics. To develop my optical model, I'm planning to use the chlorophyll-a and turbidity (NTU) measurements from the AAQ and statistically relate them to water reflectance from the Ocean Optics. However, as the two instruments differ in variable type, one being discrete (AAQ measurements) and the other continuous (spectral measurements), developing a statistical relationship could be difficult or even impossible. With this, I would like to ask for possible methodologies to transform the variables into comparable variable types, or for related studies that have developed a relationship between these measurements (AAQ and Ocean Optics hyperspectral measurements). Thank you!
I developed a regression model using remote sensing data to predict volume at plot level. I had 120 sample plots (105 plots of 50 x 50 m; 15 plots of 25 x 50 m in size).
If I want to apply the model at grid/pixel level (10 x 10 m size), what would be the best method to scale down the model to pixel level?
Suppose a variable A depends directly on variables B and C and indirectly on variables D and E. At the same time, B depends on D, and C and D depend on E.
Initially, this model is developed using observed data. Then, if I want to see the effect on A of changing E, is it possible to do so? If so, can I use the AMOS software to do that?
Thank you very much in advance
How do I create a model for Zener diode in Simulink? Is it possible using a MATLAB file, or configuring an existing diode model, and if so, then how do I proceed?
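In Simulink, one common route is to start from the piecewise diode equations, implemented in a MATLAB Function block or by configuring a standard diode block with a reverse-breakdown region. As a language-neutral sketch of the characteristic itself (all parameter values below are illustrative, not from any datasheet), here it is in Python:

```python
import math

# Piecewise Zener I-V characteristic (illustrative parameter values):
# forward: Shockley diode equation; reverse: near-zero leakage until
# the breakdown voltage Vz, then steep conduction.
IS = 1e-12     # saturation current (A)
N_VT = 0.0259  # n * thermal voltage (V), with ideality n = 1
VZ = 5.1       # breakdown voltage (V)
RZ = 10.0      # dynamic resistance in breakdown (ohm)

def zener_current(v):
    """Diode current (A) for an applied voltage v (V)."""
    if v >= 0:                        # forward conduction
        return IS * (math.exp(v / N_VT) - 1.0)
    if v > -VZ:                       # reverse blocking (leakage only)
        return -IS
    return (v + VZ) / RZ              # reverse breakdown region

print(zener_current(0.7), zener_current(-3.0), zener_current(-6.0))
```

The same three-region logic maps directly onto a MATLAB Function block, after which the block can be wired into a circuit model or validated against a datasheet curve.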
The most promising combination of models for freight transport forecasting and evaluation, in my view, is a fast and relatively straightforward policy analysis model, developed as a system dynamics model and/or a model integrating outputs from other, more detailed models.
Can you share research papers related to this?
Hi,
I'm looking for global-scale stem area index data in order to test a new model development which improves the representation of the stem area index in earth system models. Here the stem area index represents any above-ground biomass that does not assimilate carbon; thus branches, stems and dead leaves are all included. Thanks a lot.
Ming
Many small animals are sacrificed for scientific research. Though they are bred separately for this purpose, the pain the animals experience during experiments and the loss of resources for such activity had better be avoided.
Software-based alternatives to the use of animals for such purposes have been discussed for more than a decade.
What is the present status of the use of this technology?
What are the problems in adopting this technology?
Dear friends,
I am submitting a manuscript with the following highlights. Would you please take a look at them and give me your opinion?
· The energy flux and mixing process mechanistic model i-Tree Cool River was assessed under steady and unsteady conditions.
· The warming effect of urban storm sewer on the river during storm events.
· The cooling effect of riparian vegetation shading and subsurface inflow in the summertime.
· Linear interpolation, Gaussian Elimination function, in C++ for matrix operation.
Best,
Reza
I would like to model the development of AFm phases in low-water environments with the addition of metakaolin, but I don't know how to input non-crystalline materials in the GEMS software.
I need to implement a multilateral model in my research, which focuses on generation revenue. I need more mathematical models to develop my innovative economic transaction. I would be grateful if you could recommend practical references such as papers/journals on this kind of modelling.
Hi, I am a master's student at SDSU. I am currently looking for a course project to do this semester in my modeling and simulation class using MATLAB.
I find your topic interesting. Do you have a publication, updates or a report on it? I need to know everything from the literature review and introductory part to how the mathematical model is developed, and whether you used numerical methods and Simulink. I haven't used Simulink before, but I intend to learn. Hopefully every detail and support will count. Thanks.
What precise value of the coefficient of friction should be used in all stages of rolling a wire rod for 3D model development? What is the method to calculate the coefficient of friction for all stands?
What is the minimum concentration of carbon dioxide or carbonate ion that leads to carbonation in concrete? Or the minimum concentration of Ca(OH)2?
As in the article "Carbonation in concrete infrastructure in the context of global climate change - Part 1: Experimental results and model development" by S. Talukdar, N. Banthia and J.R. Grace.
Dear all, in membrane ultrafiltration, I see that the fouling models are well established for constant-pressure operation rather than constant flux. I guess the constant-pressure models cannot be used for constant flux, simply based on the relation J = TMP/(μ·R). Could you please suggest a dedicated model developed for constant flux? Or let me know if there is a way to convert the constant-pressure model.
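One common starting point is to reuse the resistance-in-series picture and simply invert it: hold J fixed and solve J = TMP/(μ·R_total) for TMP, so that a growing fouling resistance appears as a rising TMP over time. A minimal sketch with illustrative values (this is only the inverted relation, not a dedicated constant-flux fouling model):

```python
# Under constant flux, the resistance-in-series relation
# J = TMP / (mu * R_total) is solved for TMP instead of J, so fouling
# shows up as a rising TMP at fixed J. All values are illustrative.
MU = 1e-3          # permeate viscosity, Pa*s (water at ~20 C)
J = 2.8e-5         # imposed flux, m/s (~100 L/m2/h)
R_MEMBRANE = 1e12  # clean-membrane resistance, 1/m

def tmp_required(r_fouling):
    """Transmembrane pressure (Pa) needed to hold the set flux."""
    return MU * J * (R_MEMBRANE + r_fouling)

# Clean membrane vs. 50% added fouling resistance
print(tmp_required(0.0), tmp_required(5e11))
```

A full constant-flux model would then supply r_fouling as a function of time (e.g. from a cake-filtration or pore-blockage law), turning the TMP-vs-time curve into the fitted quantity.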
Request help in updating myself on the latest developments in DEA. Are there any comparable models that have been developed beyond DEA? SFA and DEA are productivity assessment tools. Kindly suggest any developments in the area of productivity measurement.
I am looking for a model to estimate the solar radiation of a place using its meteorological data as input, such as maximum and minimum temperature, humidity, wind direction, etc.
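One widely used temperature-based option is the Hargreaves-Samani relation, which estimates daily solar radiation from the diurnal temperature range and the extraterrestrial radiation Ra. A sketch (the example Ra value and coefficient are placeholders; Ra must be computed for your latitude and day of year, e.g. from the FAO-56 formulas):

```python
import math

def hargreaves_rs(tmax, tmin, ra, k_rs=0.16):
    """Hargreaves-Samani solar radiation estimate (same units as ra).

    tmax, tmin: daily max/min air temperature (deg C)
    ra: extraterrestrial radiation for the site and day (e.g. MJ/m2/day)
    k_rs: empirical coefficient (~0.16 interior, ~0.19 coastal sites)
    """
    return k_rs * math.sqrt(max(tmax - tmin, 0.0)) * ra

# Example: Tmax 32 C, Tmin 18 C, Ra = 38 MJ/m2/day (placeholder value)
rs = hargreaves_rs(32.0, 18.0, 38.0)
print(round(rs, 2))
```

If humidity and sunshine-hour data are also available, the Angstrom-Prescott equation is the other classical choice; either way, k_rs (or the Angstrom coefficients) should be calibrated against any local pyranometer data you can obtain.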
I want to model vehicle speed using a multiple linear regression model. For instance, I have 457 samples. I used 300 samples for model development and kept 157 samples to check model accuracy. Based on the proposed model, I compute predicted speeds. Which test can I then perform to compare the predicted and observed speeds?
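A common choice is to compute the RMSE and mean bias of the holdout residuals, and apply a paired test (e.g. a paired t-test on observed minus predicted) to check for systematic over- or under-prediction. A sketch on synthetic data mimicking the 300/157 split (the predictors, coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for 457 speed observations with 3 predictors.
X = np.column_stack([np.ones(457), rng.normal(size=(457, 3))])
y = X @ np.array([50.0, 4.0, -2.0, 1.5]) + rng.normal(scale=3.0, size=457)

X_fit, y_fit = X[:300], y[:300]        # model-development sample
X_val, y_val = X[300:], y[300:]        # 157 held-out observations

beta = np.linalg.lstsq(X_fit, y_fit, rcond=None)[0]
pred = X_val @ beta

resid = y_val - pred                   # observed minus predicted
rmse = np.sqrt(np.mean(resid ** 2))
bias = resid.mean()
# Paired t statistic on the residuals: tests whether the mean
# prediction error differs systematically from zero.
t_stat = bias / (resid.std(ddof=1) / np.sqrt(len(resid)))
print(rmse, bias, t_stat)
```

Comparing |t_stat| against the t distribution with 156 degrees of freedom gives the paired-test p-value; RMSE and bias then summarize the magnitude and direction of the remaining error.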
Dear Researcher,
I am developing a regression (prediction) model at a location.
Situation A:
n = 1100, 7 variables.
Situation B:
n = 2600, 9 variables.
The question is: do I have to have the same n at the same location, or is the central limit theorem (CLT) applicable when dealing with large data for model development (for the normality and validity of the data)?
Sediment bed load transport: apart from the particle characteristics, will there be any other changes?
Hi all. For my final thesis I have to evaluate a model to predict the in-plane errors of a plunger approximated with a cylindrical shape. In particular, I have to analyse the shape deformation. For this reason, I collected a lot of data in both closed form and open form; the model will be created on the open-form data. What kind of model can I use in order to create a suitable one?
I will send my data if someone can help me. Thank you.
I've been working with rainfall-runoff modeling via HEC-HMS. In the simulation, when I tried to optimize my model against the observed discharge flow, the difference was very high: my model overestimates compared to the observed flows. I used one year of daily rainfall data for a drainage basin, together with the same year's discharge data, for the model optimization. Kindly let me know if there is any possible solution for this.
I need some help to integrate BIM Server with open source BCF server (as a plugin in wordpress).
I am involved in a research project to develop an integrated water resource management model for Dublin City. Selection of a software package is crucial to the model development, and strong justification has to be made to back such a selection. The project suggests the use of the Water Evaluation and Planning (WEAP) software package; however, before proceeding I would like to ensure that WEAP strongly stands out against other available software. I am posing this question to investigate and narrow down possible software of comparable performance and scope to WEAP, in order to do a deeper trade-off analysis and determine the best fit for the project. Any suggestions or papers would help. The main emphases are the ability to place demand issues on an equal footing with the supply side, and to represent water quality and ecosystem preservation and requirements.
Is there any model developed for interpreting the provenance of detritus in foreland basins using DZ U-Pb geochronological data? I have DZ U-Pb data and want to use it in such modelling, if available.
Hi, I want to try to model custom particle shapes (cubes, tetrahedrons, etc.), make them capable of self-assembly, and then test the larger structures they form by applying forces such as shear, strain, pressure, etc. I am currently exploring MATLAB, OpenFOAM & Fluidix.
Can a model of an organizational phenomenon be built in such a way that it has some antecedent variables, mediating variables and consequent variables? Also, though all the variables in the proposed model are important, the mediating variables are of greatest importance. Is this a good conceptualization to be tested empirically?