Science topic

Computational Modeling - Science topic

Explore the latest questions and answers in Computational Modeling, and find Computational Modeling experts.
Questions related to Computational Modeling
  • asked a question related to Computational Modeling
Question
3 answers
Most pharmacy students rated the use of complex molecular modelling or computational tools as useful for improving engagement and learning outcomes. Further, based on significant improvements in post-test scores, these tools significantly improve students' understanding of the pharmacological concepts necessary for competency in medication management.
My question is whether all the positive outcomes mentioned above, generated through the use of these pedagogical tools, in some way help to reduce cognitive load. Do we have reasons to believe this? I am trying to establish the link between 2D or 3D visualisation technologies and cognitive load management.
Relevant answer
Answer
Thanks, Muhammed Yaseen Alzahawi, I am of the same opinion; however, I am more interested in the underlying rationale.
Dear Wajd Alkhawaldeh, thanks for your elaboration on the query. With my limited understanding and what the literature outlines, it depends on the visual-spatial ability (VSA) of the individual. Individuals with high VSA are able to process spatially complex procedures while still retaining sufficient cognitive resources to benefit from using 3D visualisation while learning. Individuals with low VSA use more cognitive resources when performing a spatially complex task, which results in an increase in cognitive load as they learn. Therefore, for individuals with low VSA, it may be possible to compensate for the higher cognitive demands by improving instructional methods.
  • asked a question related to Computational Modeling
Question
1 answer
I want to do computational modelling of microstructure followed by property prediction. However, I don't know where to start or which methods to learn at a beginner level.
Relevant answer
Answer
You should first collect data, and then you can use different regression approaches such as machine learning, PCA, etc.
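To make the "collect data, then regress" advice concrete, here is a minimal, self-contained sketch; the grain-size/strength numbers are hypothetical, and in practice a library such as scikit-learn would handle this (and nonlinear models) for you.

```python
# Minimal sketch: fit a linear model mapping a microstructure descriptor
# (e.g. a Hall-Petch-like grain-size term) to a measured property
# (e.g. yield strength). All data below are hypothetical.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical dataset: descriptor vs. yield strength (MPa)
x = [0.10, 0.15, 0.20, 0.25, 0.30]
y = [120.0, 135.0, 150.0, 165.0, 180.0]   # exactly 90 + 300*x here

a, b = fit_linear(x, y)
print(round(a, 2), round(b, 2))  # → 90.0 300.0
```

Once you have more descriptors per sample, the same idea generalises to multivariate regression, PCA for dimensionality reduction, or machine-learning regressors.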
  • asked a question related to Computational Modeling
Question
1 answer
I was wondering which computational model best fits the dopamine neuron. Suggesting some references would also be much appreciated!
Relevant answer
  • asked a question related to Computational Modeling
Question
5 answers
Does anyone know a simple and inexpensive assay that, with the help of a spectrophotometer, would allow me to evaluate the biological activity of several components of an essential oil on the same enzyme? I am trying to correlate computational models (docking; see attached document) with the biological activity of essential-oil compounds.
Relevant answer
Answer
I think you are asking about how to tell if two or more different inhibitors in the essential oil are binding to the enzyme simultaneously in different sites.
To answer this question, you will have to fractionate the oil to purify the individual inhibitory components. This would probably be done using some type of chromatography. Individual fractions from the chromatography would then be tested for inhibition of the enzyme in an enzyme activity assay.
Once you have the individual inhibitors purified, you will be able to use steady-state kinetics, equilibrium binding, and/or structural biology to determine whether any pair of inhibitors can bind at the same time.
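The simultaneous-binding question in that last step has a classic quantitative form (a Yonetani-Theorell-type analysis). As a sketch, with concentrations expressed in hypothetical Ki-normalized units: mutually exclusive binders and independent binders predict different fractional activities, which is what the kinetics experiment distinguishes.

```python
# Sketch of the two limiting cases for a pair of competitive inhibitors,
# with i = [I]/Ki (Ki values hypothetical):
#   mutually exclusive binding:  v/v0 = 1 / (1 + i1 + i2)
#   independent binding (two sites): v/v0 = 1 / ((1 + i1)(1 + i2))

def frac_activity_exclusive(i1, i2):
    return 1.0 / (1.0 + i1 + i2)

def frac_activity_independent(i1, i2):
    return 1.0 / ((1.0 + i1) * (1.0 + i2))

# Both inhibitors at their Ki ([I]/Ki = 1):
print(round(frac_activity_exclusive(1, 1), 3))    # → 0.333
print(round(frac_activity_independent(1, 1), 3))  # → 0.25
```

Measuring activity over a grid of the two inhibitor concentrations and comparing against these two predictions tells you which binding mode fits.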
  • asked a question related to Computational Modeling
Question
4 answers
I would like to study the apo (lipid-free) form of a protein that has only been crystallized with lipids. I want to explore whether it is possible to generate a reasonable structure with molecular dynamics, subtracting the lipids in several steps until the apo form is obtained. Likewise, I don't know whether it is possible to make lipids disappear during a molecular dynamics trajectory. I am thinking of using programs like GROMACS, AMBER, etc.
Relevant answer
Answer
You need to remove the lipids before the MD simulation. You cannot delete or add any atom/residue/molecule during or after an MD simulation, as doing so would destroy your trajectory data.
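As a concrete illustration of that pre-MD cleanup, here is a small sketch that filters lipid records out of the PDB text before system setup. The lipid residue names are assumptions; check the names actually used in your structure file, and re-solvate/re-minimize afterwards.

```python
# Strip lipid residues from a PDB file's text before MD setup.
# LIPID_RESNAMES is a hypothetical list -- adapt it to your file.

LIPID_RESNAMES = {"POP", "POPC", "POPE", "DPPC"}

def strip_lipids(pdb_text):
    kept = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            resname = line[17:20].strip()   # PDB residue-name columns 18-20
            if resname in LIPID_RESNAMES:
                continue
        kept.append(line)
    return "\n".join(kept)

example = "\n".join([
    "ATOM      1  N   ALA A   1      11.104  13.207   2.100  1.00  0.00",
    "HETATM    2  C1  POPC  901      0.000   0.000   0.000  1.00  0.00",
    "END",
])
print(strip_lipids(example).count("POPC"))  # → 0
```

Tools like GROMACS (`gmx editconf` plus manual editing) or VMD can do the same job; the point is that the deletion happens on the input structure, never mid-trajectory.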
  • asked a question related to Computational Modeling
Question
4 answers
I want to start some chemical reaction calculations, but I need some basic reference to get started. Thank you in advance. Sincerely.
Relevant answer
Answer
When dealing with QM calculations on chemical reactions, one basic problem is thermochemistry. The evaluation of thermodynamic parameters allows a theoretical determination of the enthalpy, free energy, and entropy variations associated with the reaction, and hence the reaction constant. If this is your goal, I strongly suggest this paper, written for Gaussian but whose concepts can easily be generalised:
I hope this may be a good starting point (as it was for me!).
  • asked a question related to Computational Modeling
Question
5 answers
Hi,
I have three separate pdb (Protein Data Bank) files that each represent a different segment of my protein (e.g. pdb1 has residues 1-20, pdb2 has residues 20-80 and pdb3 has residues 80-100), and I want to connect these to form one pdb file (pdb4, with residues 1-100). Also note that I want to keep the secondary structure defined in my "middle" pdb file (pdb2). Can PyMOL "attach" my 3 pdb files? If so, how can I do this? I'm new to PyMOL, so any help would be greatly appreciated.
Thanks,
Thomas
Relevant answer
Answer
Superimpose them using SPDBV or PyMOL, or you can stitch the fragments together with a Python script.
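For the Python route, here is a minimal sketch of the stitching step. It assumes the fragments are already in a consistent coordinate frame (e.g. after superposition) and that the junction residues (20 and 80) are duplicated between files, as described in the question; the demo data are synthetic minimal ATOM records.

```python
# Concatenate ATOM records from PDB fragments, keeping each residue once
# and renumbering atom serial numbers (PDB columns: serial 7-11, resSeq 23-26).

def read_atoms(pdb_text):
    return [l for l in pdb_text.splitlines() if l.startswith("ATOM")]

def merge_fragments(frag_ranges):
    """frag_ranges: list of (pdb_text, first_res, last_res) to keep."""
    merged, serial = [], 0
    for text, first, last in frag_ranges:
        for line in read_atoms(text):
            resid = int(line[22:26])
            if first <= resid <= last:
                serial += 1
                merged.append(line[:6] + str(serial).rjust(5) + line[11:])
    return merged

# Synthetic demo: frag1 holds residues 19-20, frag2 holds 20-21.
atom = lambda serial, resid: ("ATOM  " + str(serial).rjust(5) +
                              "  CA  ALA A" + str(resid).rjust(4))
frag1 = "\n".join(atom(i + 1, r) for i, r in enumerate([19, 20]))
frag2 = "\n".join(atom(i + 1, r) for i, r in enumerate([20, 21]))
merged = merge_fragments([(frag1, 1, 19), (frag2, 20, 80)])
print(len(merged))  # junction residue 20 kept only from frag2 → 3
```

In PyMOL itself, loading the three files and running something like `create pdb4, frag1 or frag2 or frag3` merges the objects, but you would still need to resolve the duplicated junction residues and check the backbone geometry at the seams.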
  • asked a question related to Computational Modeling
Question
4 answers
Computational modelling
Machine learning: artificial neural networks, support vector machines, etc.
  • asked a question related to Computational Modeling
Question
4 answers
I am working on a synthetic biology project in which I am redesigning the pathways in photosynthesis of Synechococcus elongatus to become more efficient, evaluated by increase in biomass. We want to computationally model this in R but we are struggling to find packages that can help us with this. I've looked at deSolve::aquaphy which models C and N assimilation but photosynthesis rate is an input and we want to model this also.
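If no existing package fits, the model structure is simple enough to code directly. Below is a minimal sketch (shown in Python; the same two functions translate directly to R with deSolve::ode): instead of taking photosynthesis rate as an input, it is modelled with a saturating light-response curve and fed into a biomass ODE. All parameter values are hypothetical placeholders, not Synechococcus data.

```python
import math

def photosynthesis_rate(I, p_max=2.0, alpha=0.05):
    """Jassby-Platt style light-response curve: rises roughly linearly at
    low irradiance I and saturates at p_max (values hypothetical)."""
    return p_max * math.tanh(alpha * I / p_max)

def simulate(biomass0=1.0, I=100.0, loss=0.1, dt=0.01, t_end=10.0):
    """Forward-Euler integration of dB/dt = (P(I) - loss) * B."""
    b, t = biomass0, 0.0
    while t < t_end:
        b += dt * (photosynthesis_rate(I) - loss) * b
        t += dt
    return b

print(simulate() > 1.0)  # net growth while P(I) exceeds the loss rate → True
```

Pathway-level redesigns would then enter as changes to `p_max`/`alpha` (or as a more mechanistic rate expression), letting you compare predicted biomass trajectories between the wild-type and redesigned systems.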
  • asked a question related to Computational Modeling
Question
6 answers
Hi, I've been an experimentalist throughout my career, and I want to add to my knowledge and expertise in Materials Science by delving into computation, modeling and simulation. However, I'm very confused about where to begin, as the field of modeling and simulation appears quite vast and expansive. If I had to choose a starting material type, I'd say composites and the modeling of composite failure modes. Can anyone guide me on where to begin? I'm a complete novice when it comes to anything computational, so where do I begin if I'm starting from scratch?
Relevant answer
Answer
  • asked a question related to Computational Modeling
Question
4 answers
Greetings all,
I am investigating the topic of emotional contagion. While there are numerous papers that refer to it as having the potential to spread similarly to an infectious disease, I have not yet been able to find any computation models that try to predict the contagion rate.
Essentially what I'm looking for is "are there any computational models that can predict the potential viral impact of a negative mood".
Thanks in advance for any assistance.
Relevant answer
Answer
Dear Erika, there is something better than Rt, which has many limitations. In the case of mood, the major limitation is the ergodicity assumed at the basis of Rt, which is seldom verified even for epidemics and never verified for mood spreading, which happens on networks or, in any case, with very different spreading potentials across the different 'infected' units.
This is why you should look at the papers related to the spreading of a signal across a social network, see for example:
Ciao
Alessandro
  • asked a question related to Computational Modeling
Question
1 answer
What has changed since this Pezzulo et al Paper?
Relevant answer
Answer
One place to get a sense (census?) of things related to the Pezzulo paper is to do a forward literature search: https://scholar.google.com/scholar?cites=14415125872999652537&as_sdt=400005&sciodt=0,14&hl=en
244 citations in 10 years. You can get a sense of the movement in the field by classifying these citations in various ways.
  • asked a question related to Computational Modeling
Question
4 answers
All,
I am trying to equilibrate an ice Ih crystalline structure at 250 K in LAMMPS. When I tried to do this at 250 K directly, the structure lost its crystalline shape and became disordered after equilibration.
Based on the advice given by previous researchers who have worked with ice structures in LAMMPS, this should be done in small incremental steps from 10, 100, 150, 200, to finally 250 K, using a 0.5 fs timestep. A minimum of 60 ps relaxation is recommended at each temperature.
Can anyone kindly help me understand what a script for this part of the simulation should look like? Below I post the first few steps of the equilibration using fix npt (chosen after careful consideration of other ensembles and trials with fix nvt), as per my understanding. Please let me know if this looks correct. Any insight is appreciated.
#First step equilibration at 50K
timestep 0.5
velocity all create 50 34455 dist gaussian mom yes rot yes
fix 1 all shake 1e-6 10 1000 b 1 a 1
fix 2 all npt temp 50 50 $(100.0*dt) iso 0 0 $(1000.0*dt)
thermo_style custom step temp pe ke etotal press vol density
thermo 1000
run 120000
unfix 2
#Second step equilibration to 100K
fix 2 all npt temp 50 100 $(100.0*dt) iso 0 0 $(1000.0*dt)
thermo_style custom step temp pe ke etotal press vol density
thermo 1000
run 5000
unfix 2
#Third step equilibration at 100K
fix 2 all npt temp 100 100 $(100.0*dt) iso 0 0 $(1000.0*dt)
thermo_style custom step temp pe ke etotal press vol density
thermo 1000
run 120000
unfix 2
Relevant answer
Answer
Hi Abhay Vincent, I'm not sure if you have been able to resolve this yet, but there are a few things that need to be clear:
1) How do you initialize the configuration, since dealing with water would be different from initializing your particles in any specific lattice.
2) What potential are you using? You can go through sample water simulation scripts here in the tutorial by Christopher Brien: https://sites.google.com/a/ncsu.edu/cjobrien/tutorials-and-guides/working-with-water-in-lammps
Once you have that, the npt ensemble will allow you to ramp the temperature linearly. I have typically worked with cooling rates of 1e0 to 1e2 K/ps. You might want to reduce your heating rate, but do check the potential you're using and the way the configuration is defined for your simulation.
Dealing with the H and O atoms separately, along with an SPC or TIP3P potential, will also help you identify other effects that may or may not be the target of your current analysis.
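Since the staged protocol in the question repeats the same ramp-then-hold block for each temperature, one way to keep it consistent is to generate the blocks programmatically. This is a small helper sketch (a workflow convenience, not part of LAMMPS itself); the fix parameters mirror the question's script and are assumptions to adapt.

```python
# Emit staged NPT equilibration blocks (ramp to each target, then hold),
# following the structure of the script in the question above.

def npt_stage(t_start, t_stop, steps):
    return "\n".join([
        f"fix 2 all npt temp {t_start} {t_stop} $(100.0*dt) iso 0 0 $(1000.0*dt)",
        "thermo_style custom step temp pe ke etotal press vol density",
        "thermo 1000",
        f"run {steps}",
        "unfix 2",
    ])

def staged_protocol(temps, ramp_steps=5000, hold_steps=120000):
    blocks, prev = [], temps[0]
    blocks.append(npt_stage(prev, prev, hold_steps))   # hold at first T
    for t in temps[1:]:
        blocks.append(npt_stage(prev, t, ramp_steps))  # ramp prev -> t
        blocks.append(npt_stage(t, t, hold_steps))     # hold at t
        prev = t
    return "\n".join(blocks)

script = staged_protocol([50, 100, 150, 200, 250])
print(script.count("unfix 2"))  # 1 initial hold + 4 ramps + 4 holds → 9
```

Note that with a 0.5 fs timestep, 5000 ramp steps is only 2.5 ps; if you want a gentler heating rate (K/ps), increase `ramp_steps` accordingly.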
  • asked a question related to Computational Modeling
Question
3 answers
I am looking for a research paper about the mathematical or computational modelling of protein oxidation (caused by reactive oxygen species). I would really appreciate it if someone could help me with this.
Relevant answer
  • asked a question related to Computational Modeling
Question
3 answers
Hi, I am trying to create a computation model of acoustic plane wave propagation through multiple layers of fluid. What should be the appropriate boundary conditions in my fluid-fluid interface? Thank you.
Relevant answer
Answer
The pressure and the normal components of the velocity have to be continuous across the boundaries. See for example in Fundamentals of Acoustics by Kinsler, Frey, Coppens, and Sanders Chapter 6.
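Those two continuity conditions (pressure and normal particle velocity) pin down the classic normal-incidence reflection and transmission coefficients in terms of the specific acoustic impedances Z = rho * c. A quick numerical sketch, with hypothetical fluid properties:

```python
# Normal-incidence pressure reflection/transmission at a fluid-fluid
# interface, derived from continuity of pressure and normal velocity.

def reflection_transmission(rho1, c1, rho2, c2):
    z1, z2 = rho1 * c1, rho2 * c2
    r = (z2 - z1) / (z2 + z1)   # reflected/incident pressure amplitude
    t = 2.0 * z2 / (z2 + z1)    # transmitted/incident pressure amplitude
    return r, t

# Water (1000 kg/m^3, 1480 m/s) over a denser hypothetical fluid:
r, t = reflection_transmission(1000.0, 1480.0, 1200.0, 1600.0)
print(abs((1.0 + r) - t) < 1e-12)  # pressure continuity: 1 + R = T → True
```

For oblique incidence, the same continuity conditions applied with Snell's law give the angle-dependent coefficients; for multiple layers, these per-interface relations chain into a transfer-matrix calculation.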
  • asked a question related to Computational Modeling
Question
7 answers
We have used DOT to dock DNA to proteins but increasingly need to examine computational models for RNA docking onto proteins. Would appreciate advice and examples where this has proven successful.
Relevant answer
Answer
  • asked a question related to Computational Modeling
Question
5 answers
I have been working on a project that requires a protein-protein docking structure to generate a computational model of a protein interaction. I used pyDockWEB and have been trying to find some literature to help me understand their scoring method. One question I have is this: when I add restrictions to the model, the total binding energy is around -115, whereas when I just submit the PDB files for a global search, the best model has a total binding energy closer to -20. That seems strange considering the number of orientations they evaluate without producing a model of lower energy. Can anyone tell me why such a discrepancy exists?
Relevant answer
Answer
It is certainly possible that you simply need to use different software (as suggested by the other answers), or perhaps the scoring functions used by restrained vs. global docking are different for this particular docking algorithm (making it difficult to truly compare the different outputs); however, I think there is an important point to address in your question regarding the two sampling approaches (i.e. restrained vs. global).
If you perform a restrained search such that you are sampling fewer completely new protein-protein docking orientations, then your iterations are used to optimize the local interface. In other words, you are sampling deeply in a confined region of space. If you perform a completely global search, then you will identify many more completely different protein-protein docking orientations, but none of them will likely have optimal local interactions. In other words, you are sampling broadly in a large region of space.
If you have data that you can use for restraints (e.g. known protein-protein contacts, co-evolved residues for a conserved interaction, experimental mutagenesis, NMR chemical shifts, etc.), then use them as it will almost certainly improve the prediction. If you do not have any knowledge or data to guide you, and you are truly performing "blind" docking, then I recommend you take the different solutions from the blind/global docking experiments and choose a subset with which to perform restrained docking (e.g. by identifying different clusters, ranking by energy, fitting to known data, etc.).
If you are looking to have more control over your docking experiments than a webserver provides, I strongly encourage using the Rosetta macromolecular modeling software package. There are many protocols available and extensive online documentation. Rosetta is very useful for a number of tasks including protein-protein docking.
I hope this helps. Best,
Ben
  • asked a question related to Computational Modeling
Question
4 answers
I am starting a new lab this fall at Florida International University (FIU). Our lab will focus on understanding the neurocognitive processes that allow for the emergence of cognitive control (how the human brain/mind monitors and adapts itself over time to achieve task goals). Moreover, we will seek to understand how this system develops across adolescence, and its relations to social behavior and social anxiety. Towards this end, the methods that I currently employ include: (single-trial) ERP analyses, time-frequency analyses of EEG (power and phase relations), source localization of EEG, traditional fMRI approaches (GLM-based), and basic computational modeling (drift-diffusion models). Our lab is currently purchasing a high-density EEG system, and FIU houses an fMRI scanner.
I am seeking collaborators that may or may not currently work in the fields of psychology or neuroscience, but that have at least some expertise in one or more of the following domains: data science, advanced signal processing, machine learning, computational modeling, graph-theoretic/network analysis. I am most interested in finding collaborators that can help generate the best science; location, status, affiliation, or degrees earned are not important. I also intend to take on at least one PhD student this fall and welcome responses from prospective students.
The scientific goal of this collaboration will be to combine skillsets in order to test novel hypotheses regarding the human cognitive control system, its developmental trajectory across adolescence, and relations to social behavior and social anxiety. At a practical level, we would seek to produce high-impact publications and to generate pilot data for pursuing collaborative grant proposals. Depending on the situation, initial funding may be available for potential collaborators, consultants, or contractors.
For examples of recent studies that will inform the work in our lab, please refer to the following publications:
Relevant answer
Answer
Congratulations on completing your PhD program; I would be happy to collaborate on your study. I am a research assistant at the Rush Alzheimer's Disease Center and may be able to provide a large pool of data, and analysis, if that would help.
  • asked a question related to Computational Modeling
Question
3 answers
Hello,
How can the induction effect of a molecule be quantified? What parameters obtained from a quantum chemical analysis of a molecule would help evaluate its induction effect?
Relevant answer
Answer
I reckon that cDFT should be a reasonable alternative to separate induction from charge transfer:
DOI: 10.1021/acs.jctc.6b00155
J. Chem. Theory Comput. 2016, 12, 2569−2582
  • asked a question related to Computational Modeling
Question
3 answers
Say I am interested in examining individual differences in cognition and behavior, and in how specific survey scores and parameters predict or covary with performance on a task. How would I analyze these data based on the literature?
Are there conventional methods for analyzing differences in psychological phenomena across individuals? Is that exactly what uni/multivariate statistics is for, or are there alternative methods? Is that where advanced statistics comes in?
Is it more compelling and/or informative to analyze individual differences with a single-subject design, an aggregate model/submodel (GLM), or as a dynamical system?
What does the basic and current literature say? What papers or books explicitly discuss this?
Thanks,
JD
Relevant answer
Answer
I recommend taking AI methods into consideration. More specifically, machine-learning methods like k-NN (k nearest neighbors) might bring deeper insight into your problem.
What I recommend is to study similar problems already solved by this and similar ML methods, and try to understand how to adapt the implementation to your specific problem.
The best way to tackle the problem is to use Python and the relevant libraries, which already contain all the necessary ML methods. If you are new to the field, try to find someone around you who is familiar with Python programming; he/she can speed up your learning curve substantially.
Some review papers on this topic would be an excellent start.
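To make the k-NN suggestion concrete, here is a dependency-free sketch (in practice scikit-learn's KNeighborsClassifier is the standard route). The survey-score vectors and performance labels below are hypothetical.

```python
# From-scratch k-nearest-neighbors classifier: each "individual" is a
# vector of survey scores, labelled by a task-performance category.

from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k nearest neighbors by squared Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical survey scores -> performance category
train = [([1.0, 1.0], "low"), ([1.2, 0.9], "low"),
         ([4.0, 4.2], "high"), ([3.8, 4.1], "high"), ([4.1, 3.9], "high")]
print(knn_predict(train, [4.0, 4.0]))  # → high
```

For individual-differences questions, k-NN is most useful as an exploratory check of whether survey profiles cluster with performance at all; regression-based or mixed-effects models remain the conventional inferential tools.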
  • asked a question related to Computational Modeling
Question
11 answers
Dear Colleaugues,
A researcher proposed that I write a paper that would more or less extend the poster (link below) published at ICCB 2016 in Prague. This all happened by accident: I did not want to come to a conference without a presentation, so I quickly put the ideas that had been resonating in my head for years onto that poster, to allow other researchers to benefit from them. Surprisingly, this poster is getting a great deal of attention. Therefore, I am thinking of writing a review (prescription) on how to design self-organizing and emergent systems, with a rich set of examples. If you like the idea, then visit the poster and leave a comment about it (below the poster).
The whole project is meant as a service to the community of biological and medical researchers who would like to know more but have no time to study mathematics and programming in depth.
All the best at your research,
Jiri
Relevant answer
Answer
Yes, especially the application of the general theory of CAS, which requires further development in relation to the specific conditions of management of organizations.
Kind regards,
Dragoljub
  • asked a question related to Computational Modeling
Question
2 answers
I would like to know the best computational modelling approach for simulating a masonry arch with backfill using the Discrete Element Method (DEM).
Relevant answer
Answer
3DEC software is a very useful tool for discrete element modelling of masonry walls.
  • asked a question related to Computational Modeling
Question
1 answer
I would like to ask whether there exists a computational model that explains the complex interactions between the brainstem structures and the limbic system.
Relevant answer
Answer
Dear Dr Tomic,
I like your question. I am sharing with you the following video link:
Notice also the very close anatomical intimacy and "hugging" between the limbic system and the ventricular system (with its mysterious CSF waves).
Regards,
Omer
  • asked a question related to Computational Modeling
Question
5 answers
I am new to hyper-elastic modelling. I have performed curve fitting for the Neo-Hookean and Mooney-Rivlin models using experimental data obtained from a shear test. Now I need to approximate the material parameters of the Holzapfel-Gasser-Ogden (HGO) model. From this literature, https://pdfs.semanticscholar.org/d166/eab636ec656cec025a28671285f0909c17bc.pdf , I could find equation 3.27, but I have a few questions:
In equation 3.27, how do I find the values of (B + m ⊗ m), I4 and I1?
My material is an incompressible biological tissue.
How can we find the material parameters of the HGO model?
Relevant answer
Answer
Hello Libin,
as the equation is nonlinear, you cannot use a simple least squares estimator anymore, because the parameters you want to find are not linearly separable.
So if you want to fit measured data to this model, you have a nonlinear least squares problem. There are several ways to solve this problem:
1. The simplest one: use the gradient descent method (also known as backpropagation in ANNs). This method only finds a locally optimal solution (depending on the function and initialization point), so there might be a set of parameters which fits the problem better. ( https://en.wikipedia.org/wiki/Gradient_descent )
2. You can also use the Levenberg-Marquardt algorithm, which converges faster than gradient descent but also only finds a local optimum. ( https://en.wikipedia.org/wiki/Levenberg–Marquardt_algorithm )
3. Use a global optimization algorithm like particle swarm optimization or an evolutionary algorithm, which can find the globally optimal set of parameters with very high probability. ( https://en.wikipedia.org/wiki/Particle_swarm_optimization )
Your optimization goal for 1, 2 and 3 is to minimize the residual sum of squares ( https://en.wikipedia.org/wiki/Residual_sum_of_squares ), which is defined by the deviation of the measured data from the model output with the current parameter set.
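Option 1 can be sketched in a few lines. The saturating exponential below is a hypothetical stand-in for the HGO strain-energy expression: once you code equation 3.27 as `model(x, params)`, the same loop fits its parameters to your shear-test data.

```python
import math

def rss(params, model, xs, ys):
    """Residual sum of squares for a candidate parameter set."""
    return sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

def grad_descent(model, xs, ys, params, lr=0.01, steps=5000, h=1e-6):
    """Plain gradient descent on the RSS, with a central-difference
    numerical gradient so any model function can be plugged in."""
    params = list(params)
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            hi = params[:]; hi[i] += h
            lo = params[:]; lo[i] -= h
            grad.append((rss(hi, model, xs, ys) -
                         rss(lo, model, xs, ys)) / (2 * h))
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

# Toy saturating model standing in for the HGO strain-energy function:
model = lambda x, p: p[0] * (1.0 - math.exp(-p[1] * x))
xs = [0.5, 1.0, 2.0, 4.0]
ys = [model(x, [2.0, 1.5]) for x in xs]   # synthetic "measurements"
fit = grad_descent(model, xs, ys, [1.0, 1.0])
print(round(fit[0], 1), round(fit[1], 1))  # parameters recover a ≈ 2.0, b ≈ 1.5
```

For real data, scipy.optimize's `curve_fit` or `least_squares` implement option 2 (Levenberg-Marquardt/trust-region) and are the practical choice; the sketch above just makes the mechanics of option 1 visible.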
  • asked a question related to Computational Modeling
Question
4 answers
Hello
I have decided to work on a computational model of motivation for my project. I am having trouble finding a proper dataset; could you help me find one?
Best Regards
Fateme Saberi
Relevant answer
Answer
It can be hard to find a specific dataset to use for building computational models, or even to experiment on.
The following list contains great datasets for experimentation, along with descriptions, usage examples, and in some cases the code used. It is focused on machine learning problems:
1. Kaggle datasets. This is by far my favourite dataset location. Basically, each dataset is a small community where you can find discussions about the data, find some public code, or create your own projects in Kernels. Sometimes you can find notebooks with algorithms that solve the prediction problem for that specific dataset.
2. Google's dataset search engine. This is another of my favourite dataset locations. Last year, Google launched this great service: a toolbox that can search for datasets by name. Its main objective is to unify thousands, or maybe millions, of different dataset repositories around the world and make that data discoverable.
3. Microsoft datasets. Also last year, Microsoft launched its "Microsoft Research Open Data". It contains a data repository in the cloud dedicated to facilitating collaboration across the global research community. It offers a considerable number of datasets that were used in published research studies.
4. GitHub collections. This is another great source of datasets, organized by topics such as economics, finance, education, and transportation, among others. Most of the datasets listed there are free, but you should always check the licensing requirements before using any dataset.
5. Government datasets. Finally, it's easy to find government-related datasets as well. Many countries have shared datasets with the public as an exercise in transparency. For instance:
There are also datasets on some university webpages that you can search. In conclusion, there is a global movement towards making more data available to the research community. These dataset communities will continue to grow and make data easily accessible, so that the crowdsourcing and computer science communities can continue to innovate quickly and bring more creative solutions to every problem.
  • asked a question related to Computational Modeling
Question
2 answers
Hello all,
I am working on computational modeling of bone internal microstructure. I was wondering if any sample micro-CT data sets are available/shared.
Thanks,
Relevant answer
Answer
Go through the links I have shared; you should find sample datasets there.
  • asked a question related to Computational Modeling
Question
4 answers
I want to model the interaction between different oxidation states of iron and a ligand, for geometry optimization and energy calculation, and I wonder how I can define these states in Spartan, GaussView, etc.
All I can do is define whether the complex is tetrahedral or octahedral, but not the oxidation state. Any ideas?
Relevant answer
Answer
Thanks Martin
I have tried this option, and hope the calculations will be fine.
  • asked a question related to Computational Modeling
Question
5 answers
Hi,
I am R. Thiruchelvi, currently working on red algae in my research. My research is to characterise sulphated polysaccharides from red algae and to study their antitumor activity both in vitro and in vivo.
I have come across computational modelling in algae research. May I know in depth where I should start? What concepts are involved in modelling for algae research?
Could you explain? It would be very useful for me to get up to date in this field.
Relevant answer
Hello, you can analyze the red algae or its polysaccharides by GC-MS. Meanwhile, obtain the tumour-responsible protein of interest, retrieve it from the PDB database, determine its active site with Q-SiteFinder, and dock the ligands and protein using AutoDock, ParDOCK, GEMDOCK or PatchDock software. Then interpret the binding energies.
All the best
  • asked a question related to Computational Modeling
Question
2 answers
I am performing a sequence of modeling steps in Spartan that takes a long time, and I wonder if I can pause the calculations and resume them later from the same point.
Relevant answer
Answer
Dr. Magnus
Thanks for your reply.
Yes, I follow the same steps for geometry optimisation using Spartan Student version 6; however, I am running conformer distribution computations, and I can't find a way to pause them and continue later.
  • asked a question related to Computational Modeling
Question
4 answers
I can only export stress at nodal/integration points, but I want to find the volume average of the stress over the element. I was wondering, can elemental stress be obtained directly from ABAQUS?
Relevant answer
Answer
You can do it with a script. Below is part of the algorithm that was the basis of my own Python script for extracting Abaqus field output. You should take into account that stresses are determined at the integration points, not at the nodes; for viewing them at nodes, Abaqus uses an extrapolation algorithm.
Therefore, my advice is to extract stresses at the integration points or at the element centroid (as done in this script).
I hope you'll find it useful; it took some hard work. You'll only need to add the file writing.
# Get the ABAQUS scripting interface
from abaqus import *
from abaqusConstants import *
# User-specific names -- replace these placeholders with your own
odbPath = 'myJob.odb'
stepName = 'Step-1'
instanceName = 'PART-1-1'
# Open the odb
myOdb = session.openOdb(name=odbPath)
session.viewports['Viewport: 1'].setValues(displayedObject=myOdb)
# Get the frame repository for the step, find number of frames (starts at frame 0)
frames = myOdb.steps[stepName].frames
numFrames = len(frames)
# Isolate the instance, get the number of nodes and elements
myInstance = myOdb.rootAssembly.instances[instanceName]
numNodes = len(myInstance.nodes)
numElements = len(myInstance.elements)
# Use the last frame here; wrap in a frame loop if you need the history
fr = numFrames - 1
for el in range(0, numElements):
    # Stress tensor at the centroid of the current element
    # (elementType must match your mesh, e.g. 'C3D8H')
    Stress = frames[fr].fieldOutputs['S'].getSubset(region=myInstance.elements[el], position=CENTROID, elementType='C3D8H').values
    sz = len(Stress)
    for ip in range(0, sz):
        # Component order in the odb: S11, S22, S33, S12, S13, S23
        Sxx = Stress[ip].data[0]
        Syy = Stress[ip].data[1]
        Szz = Stress[ip].data[2]
        Sxy = Stress[ip].data[3]
        Sxz = Stress[ip].data[4]
        Syz = Stress[ip].data[5]
        # ... write the components to a file here ...
myOdb.close()
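If you extract at the integration points instead of the centroid, the element's volume average the question asks for can be formed by weighting each point's stress with its integration-point volume (which Abaqus can write out as the 'IVOL' field output; check availability for your element type). The averaging step itself is plain Python, shown here with hypothetical numbers:

```python
# Volume-weighted average of one stress component over an element,
# given per-integration-point stresses and volumes.

def element_average_stress(ip_stresses, ip_volumes):
    """sum(S_i * V_i) / sum(V_i) over the element's integration points."""
    total = sum(ip_volumes)
    return sum(s * v for s, v in zip(ip_stresses, ip_volumes)) / total

# e.g. Sxx at the 8 integration points of a C3D8 element (made-up values):
sxx = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0, 103.0, 97.0]
vol = [1.0] * 8                          # equal volumes -> plain mean
print(element_average_stress(sxx, vol))  # → 100.0
```

For a reduced-integration element with a single integration point, the centroid value from the script above already is the element value.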
  • asked a question related to Computational Modeling
Question
10 answers
We believe that in the future, data science methods will be considered a powerful tool that supports an interchange between experiment, computer simulation, and engineering calculation. We invite those who are interested in the creation of Multifactor Computational Models of Energetic Materials Combustion and Detonation to join the research.
Relevant answer
Answer
I have received it and will study it. I will inform you shortly.
  • asked a question related to Computational Modeling
Question
1 answer
This is needed for a computational model (multi-agent) of behaviour of smokers in NSW Australia.
Relevant answer
Answer
Borland and colleagues did wonderful work on this topic so if you are interested have a look at the following papers:
1. Borland R, Partos TR, Yong H-H, Cummings KM, Hyland A. How much unsuccessful quitting activity is going on among adult smokers? Data from the International Tobacco Control Four Country cohort survey. Addiction. 2012;107(3):673-82.
2. Herd N, Borland R. The natural history of quitting smoking: findings from the International Tobacco Control (ITC) Four Country Survey. Addiction. 2009;104(12):2075-87.
3. Herd N, Borland R, Hyland A. Predictors of smoking relapse by duration of abstinence: findings from the International Tobacco Control (ITC) Four Country Survey. Addiction. 2009;104(12):2088-99.
4. Partos TR, Borland R, Yong HH, Hyland A, Cummings KM. The quitting rollercoaster: how recent quitting history affects future cessation outcomes (data from the International Tobacco Control 4-country cohort study). Nicotine Tob Res. 2013;15(9):1578-87.
  • asked a question related to Computational Modeling
Question
1 answer
In order to model how energy distributions on nerve endings become percepts about surfaces and objects, there is a hard 'binding problem'. There are no computational models of binding that I know of. Will this project attempt one?
Relevant answer
Answer
If your question concerns binding, or rather visual links, then in a brief and very general (broad) sense the answer is yes. However, the proposition goes much deeper and attempts to resolve the visual generalization dilemma using relative positional potentials of visible objects. In our approach, time and energy challenge more classical models that follow visual-feature aspects of objects, which indeed require various unrealistic prerequisites (e.g., initial conditions, sampling, ...). Any assumption based on pre-processing of sensory data falls short in resolving hard problems of general visual transformations. This includes relational binding.
  • asked a question related to Computational Modeling
Question
3 answers
Dear all,
Please provide some good help/reference documents or texts for understanding Finite Element Analysis using the getDP tool. It would be great if anyone could provide a detailed step-by-step procedure for working with getDP, as I am a beginner in the field of Finite Element Analysis.
Thanks,
Raghu
Relevant answer
Answer
Refer to the book by Logan, A First Course in the Finite Element Method.
  • asked a question related to Computational Modeling
Question
3 answers
Greetings,
Can you recommend any modern articles on interactions and steric forces between two or more magnetic nanoparticles with brushes in nonpolar solvents? Preferably a computational model, and more mathematical/physical.
Thank you,
Relevant answer
Answer
Dear Rahman,
I see that you are looking for the steric effects that the chemists we collaborate with use for obtaining nanoparticles, complementing the sol-gel technique. Perhaps if you look for the papers of Arturo Lopez-Quintela you would surely find applications.
  • asked a question related to Computational Modeling
Question
1 answer
Factors
· Formalization
· Structure
· A first attempt
· A simple transducer
· The ultimate computing model
· transition function
· auxiliary Memory
Relevant answer
Answer
These are the factors I found: formalization, structure, a first attempt, a simple transducer, the ultimate computing model, the transition function, and auxiliary memory.
  • asked a question related to Computational Modeling
Question
5 answers
I'd like to analyse asymptotic data using the nlme package in R but cannot figure out how to specify two crossed random effects (instead of nested random effects).
The data comes from a computational modelling study. The dependent variable is test accuracy as predicted by time (iteration) and the random intercepts should be network and trial. I'm using the following code for the base model:
base_model <- nlme(accuracy ~ SSlogis(iter, Asym, xmid, scal),
data = test,
fixed = list(Asym ~ 1, xmid ~ 1, scal ~ 1),
random = Asym + xmid + scal ~ 1 | network,
start = initialParams)
I know that it's easier to specify nested random effects in nlme so I tried to create a dummy variable (with the same value for all observations) to then specify two random effects, which are both nested in the dummy variable. However, I couldn't really figure out the syntax for that either.
I'd be happy to hear any thoughts on that. Or if you have an idea on how to analyse the data without using nlme, please let me know.
Thanks a lot in advance.
Relevant answer
Answer
The R script below illustrates the nested versus non-nested (crossed) random effects functionality in the R packages lme4 and nlme. Note that crossed random effects are difficult to specify in the nlme framework. Thus, I've included a back-of-the-envelope (literally a scanned image of my scribble) interpretation of the 'trick' to specifying crossed random effects for nlme functions (i.e., nlme and lme).
## This script illustrates the nested versus non-nested
## random effects functionality in the R packages lme4 (lmer)
## and nlme (lme).
library("lme4")
library("nlme")
data("Oxide")
Oxide <- as.data.frame(Oxide)
## In the Oxide data, Site is nested in Wafer, which
## is nested in Lot. But, the latter appear crossed:
xtabs(~ Lot + Wafer, Oxide)
## So, create a variable that identifies unique Wafers
Oxide$LotWafer <- with(Oxide, interaction(Lot,Wafer))
## For continuous response 'Thickness',
## fit nested model E[y_{ijk}] = a + b_i + g_{ij}
## for Lot i = 1:8 and Wafer j = 1:3 and Site k = 1:3
## where b_i ~ N(0, \sigma_1)
## g_{ij} ~ N(0, \sigma_2)
## and b_i is independent of g_{ij}
## The following four models are identical:
## lme4
lmer(Thickness ~ (1 | Lot/Wafer), data=Oxide)
lmer(Thickness ~ (1 | Lot) + (1 | LotWafer), data=Oxide)
## Note: the equivalence of the above formulations makes
## clear that the intercept indexed by Wafer within Lot
## has the same variance across Lots.
## nlme
lme(Thickness ~ 1, random= ~1 | Lot/Wafer, data=Oxide)
lme(Thickness ~ 1, random=list(~1|Lot, ~1|Wafer), data=Oxide)
## Note: the second formulation illustrates that lme assumes
## nesting in the order that grouping factors are listed. I
## think that this was a poor implementation decision, and
## that the latter should indicate non-nested grouping.
## Fit non-nested model E[y_{ijk}] = a + b_i + g_j
## for Lot i = 1:8 and Wafer j = 1:3 and Site k = 1:3
## where b_i ~ N(0, \sigma_1)
## g_j ~ N(0, \sigma_2)
## and b_i is independent of g_j
## lme4
lmer(Thickness ~ (1 | Lot) + (1 | Wafer), data=Oxide)
lmer(Thickness ~ (1 | Wafer) + (1 | Lot), data=Oxide)
## nlme: There is no 'easy' way to do this with the nlme package,
## and I couldn't get this to work at all with the nlme function.
## This is a trick that gives a random slope for each level of the
## grouping variables, which are indexed by the levels of
## a dummy grouping variable with only one group. We also
## specify, for each grouping factor, that covariance
## matrix is proportional to the identity matrix.
Oxide$Dummy <- factor(1)
Oxide <- groupedData(Thickness ~ 1 | Dummy, Oxide)
lme(Thickness ~ 1, data=Oxide,
random=pdBlocked(list(pdIdent(~ 0 + Lot),
pdIdent(~ 0 + Wafer))))
The image below is my interpretation of the nlme (lme) trick for non-nested (crossed) random effects. The idea is to assign a random slope (no intercept) to each level of the grouping factors, which are each indexed by the levels of a dummy variable that has exactly one level. The pdIdent function ensures that these random effects are uncorrelated and have common variance. The pdBlocked function specifies that the random effects are also independent across the two grouping factors.
Simulation-based power analysis using proportional odds logistic regression
Consider planning a clinical trial where patients are randomized in permuted blocks of size four to either a 'control' or 'treatment' group. The outcome is measured on an 11-point ordinal scale (e.g., the numerical rating scale for pain). It may be reasonable to evaluate the results of this trial using a proportional odds cumulative logit model (POCL), that is, if the proportional odds assumption is valid. The POCL model uses a series of 'intercept' parameters, denoted alpha_1, ..., alpha_{k-1}, where k is the number of ordered categories, and 'slope' parameters beta_1, ..., beta_p, where p is the number of covariates. The intercept parameters encode the 'baseline', or control-group, frequencies of each category, and the slope parameters represent the effects of covariates (e.g., the treatment effect).
A Monte-Carlo simulation can be implemented to study the effects of the control group frequencies, the odds ratio associated with treatment allocation (i.e., the 'treatment effect'), and sample size on the power or precision associated with a null hypothesis test or confidence interval for the treatment effect.
In order to simulate this process, it's necessary to specify each of the following:
control group frequencies
treatment effect
sample size
testing or confidence interval procedure
Ideally, the control group frequencies would be informed by preliminary data, but expert opinion can also be useful. Once specified, the control group frequencies can be converted to intercepts in the POCL model framework. There is an analytical solution for this; see the link above. But, a quick and dirty method is to simulate a large sample from the control group population, and then fit an intercept-only POCL model to those data. The code below demonstrates this, using the polr function from the MASS package.
## load MASS for polr()
library(MASS)
## specify frequencies of 11 ordered categories
prbs <- c(1,5,10,15,20,40,60,80,80,60,40)
prbs <- prbs/sum(prbs)
## sample 1000 observations with probabilities prbs
resp <- factor(replicate(1000, sample(0:10, 1, prob=prbs)),
ordered=TRUE, levels=0:10)
## fit POCL model; extract intercepts (zeta here)
alph <- polr(resp~1)$zeta
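For reference, the analytical conversion mentioned above is just the logit of the cumulative control-group probabilities. A short Python sketch of that conversion (illustrative only, assuming the standard POCL parameterization alpha_j = logit(P(Y <= j))):

```python
import math

def pocl_intercepts(probs):
    """Intercepts alpha_j = logit(P(Y <= j)), j = 1..k-1, computed
    analytically from control-group category frequencies; the
    counterpart of fitting an intercept-only proportional-odds model."""
    total = float(sum(probs))
    cum = 0.0
    alphas = []
    for p in probs[:-1]:  # k - 1 intercepts for k categories
        cum += p / total
        alphas.append(math.log(cum / (1.0 - cum)))
    return alphas

prbs = [1, 5, 10, 15, 20, 40, 60, 80, 80, 60, 40]
alphas = pocl_intercepts(prbs)
print(len(alphas))  # 10 intercepts for 11 categories
```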
As in most other types of power analysis, the treatment effect can represent the minimum effect that the study should be designed to detect with a specified degree of power; or in a precision analysis, the maximum confidence interval width in a specified fraction of samples. In this case, the treatment effect is encoded as a log odds ratio, i.e., a slope parameter in the POCL model.
Given the intercept and slope parameters, observations from the POCL model can be simulated with permuted block randomization in blocks of size four to one of two treatment groups as follows:
## convenience functions
logit <- function(p) log(1/(1/p-1))
expit <- function(x) 1/(1/exp(x) + 1)
## block randomization
## n - number of randomizations
## m - block size
## levs - levels of treatment
block_rand <- function(n, m, levs=LETTERS[1:m]) {
if(m %% length(levs) != 0)
stop("length(levs) must be a factor of 'm'")
k <- if(n%%m > 0) n%/%m + 1 else n%/%m
l <- m %/% length(levs)
factor(c(replicate(k, sample(rep(levs,l),
length(levs)*l, replace=FALSE))),levels=levs)
}
## simulate from POCL model
## n - sample size
## a - alpha
## b - beta
## levs - levels of outcome
pocl_simulate <- function(n, a, b, levs=0:length(a)) {
dat <- data.frame(Treatment=block_rand(n,4,LETTERS[1:2]))
des <- model.matrix(~ 0 + Treatment, data=dat)
nlev <- length(a) + 1
yalp <- c(-Inf, a, Inf)
xbet <- matrix(c(rep(0, nrow(des)),
rep(des %*% b , nlev-1),
rep(0, nrow(des))), nrow(des), nlev+1)
prbs <- sapply(1:nlev, function(lev) {
yunc <- rep(lev, nrow(des))
expit(yalp[yunc+1] - xbet[cbind(1:nrow(des),yunc+1)]) -
expit(yalp[yunc] - xbet[cbind(1:nrow(des),yunc)])
})
colnames(prbs) <- levs
dat$y <- apply(prbs, 1, function(p) sample(levs, 1, prob=p))
dat$y <- unname(factor(dat$y, levels=levs, ordered=TRUE))
return(dat)
}
The testing procedure we consider here is a likelihood ratio test with 5% type-I error rate:
## Likelihood ratio test with 0.05 p-value threshold
## block randomization in blocks of size four to one
## of two treatment groups
## dat - data from pocl_simulate
pocl_test <- function(dat) {
fit <- polr(y~Treatment, data=dat)
anova(fit, update(fit, ~.-Treatment))$"Pr(Chi)"[2] < 0.05
}
The code below demonstrates the calculation of statistical power associated with a sample of size 50 and odds ratio 0.25, where the control-group frequencies of each category are as specified above. When executed, which takes some time, this gives about 80% power.
## power: n=50, OR=0.25
mean(replicate(10000, pocl_test(pocl_simulate(50, a=alph, b=c(0, log(0.25))))))
The figure below illustrates the power associated with a sequence of odds ratios. The dashed line represents the nominal type-I error rate 0.05.
  • asked a question related to Computational Modeling
Question
3 answers
I want to know which method or concept in complex mathematics you consider among the best: one that you use, or have used, to exploit or simplify problems, or to build computational models.
Relevant answer
Answer
Hi,
There are many applications in Civil Engineering that depend on complex variables and theories. These include seepage under dams and flow nets. There is an excellent book by Prof. Harr from Purdue University on this subject. Also, the dynamic response of structures to external loadings such as winds and earthquakes involves complex variables.
Amir Al-Khafaji
  • asked a question related to Computational Modeling
Question
3 answers
Dear researcher,
In regard to disease modeling methods,
A. Is there a precise taxonomy of disease modeling methods? If not, what are your suggestions?
B. How would you expand these methods, in detail, into a hierarchical structure?
For example:
1- in vivo model
    1-1 animal model
          1-1-1 rat model
          1-1-2 mice model
2- in vitro model
3- in silico model
    3-1 mechanistic model
          3-1-1 integrative model
          3-1-2 causal model
    3-2 quantitative model
          3-2-1 statistic model
.....
Best regards
Relevant answer
Answer
As a neurologist, I can give my opinion for neurological diseases. There is no hierarchical modeling for any of the neurological diseases. Neurological diseases are extremely complicated, and no single model can capture the entire complexity of a particular neurological disease. Instead of a hierarchy, a collective modeling that represents the various parts of the neurological disease is used; these models are all on the same plane. The pyramidal system, with the peak representing the model of the entire disease process, has not been shown to be feasible. Whenever such a "peak" was considered to represent a neurological disease, new information was found showing that the "peak" was incomplete. Thanks.
  • asked a question related to Computational Modeling
Question
22 answers
We developed an application for Mathematical and Computational Modelling of the Tumor Growth Based on Evolutionary Aspects of Hypoxia and Acidosis on the Microenvironment of Cancer Cells.
This application has the potential to be used in studying the growth patterns of different cancer phenotypes, the clonal evolution of tumors, tumor heterogeneity, multidrug resistance, and interactions of cancer cells with their microenvironment. You can find more details about the project in the link below.
I would like to ask: how can we label and define each cell in this kind of model, so as to know the origin of each cell, its lifetime, and the similar cells which create clones? And generally, how can we define clones in mathematical models?
We applied a promising idea to this question, but I didn't find any similar approaches. Therefore, I will be thankful to anyone who can help us get more information.
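Not knowing the idea the authors applied, one common bookkeeping scheme is to give every cell a unique id, record its parent's id and birth time, and propagate the founder's id, so that a clone is simply the set of cells sharing a founder. A toy Python sketch (all names are mine):

```python
import itertools

_ids = itertools.count()

class Cell:
    """Lineage-tracked cell: stores its own id, its parent's id,
    the founder ('clone') id it inherits, and its birth step."""
    def __init__(self, birth_step, parent=None):
        self.id = next(_ids)
        self.parent_id = parent.id if parent is not None else None
        self.clone_id = parent.clone_id if parent is not None else self.id
        self.birth_step = birth_step

    def divide(self, step):
        # A daughter inherits the founder id, defining clone membership
        return Cell(step, parent=self)

founder = Cell(birth_step=0)
child = founder.divide(step=3)
grandchild = child.divide(step=7)
print(founder.clone_id == child.clone_id == grandchild.clone_id)  # True
```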
Relevant answer
  • asked a question related to Computational Modeling
Question
3 answers
I would like to model each concentration of the same inhibitor. I've not really seen any article where this is done through molecular modelling software, though I'm thinking it might be possible by changing the solvent dielectric constant; I'm not sure. I need guidance, please. Thanks.
Relevant answer
Answer
Thank you Dr Abdallah
  • asked a question related to Computational Modeling
Question
2 answers
Hello. If you know of any references on heterogeneous-agent stochastic OLG models with idiosyncratic risk and incomplete markets, please let me know (other than Huggett 1996, JME).
A computational model is even better, but an analytical one is also good.
Relevant answer
Answer
De Nardi and Yang have been working with such a model for quite a bit. Here's one of their latest publications on this topic:
Perhaps the survey by Golosov and Tsyvinski also covers some relevant sources:
  • asked a question related to Computational Modeling
Question
4 answers
When is DMA (Dynamic Mechanical Analysis) experimental data not a good choice for predicting the exact viscoelastic behavior (relaxation and creep) of short-fiber-reinforced thermoplastics?
Relevant answer
Answer
Regardless of the type of composite material, the biggest problem with the DMA technique, concerning creep and relaxation experiments, is the size of the specimens. The specimens are too small, resulting in a distorted image of the viscoelastic behavior due to edge effects, etc.
The experimental results also depend on what viscoelastic behavior you are examining. Is it linear or nonlinear viscoelastic behavior? Depending on these parameters, the limitations vary.
In my opinion, if you wish to study the viscoelastic behavior of composite materials in depth, you need self-made apparatuses. They are easy to construct and can provide more accurate results. However, if this isn't possible, DMA is a good solution, as long as you provide all the information regarding the experimental procedure (loading speed, experimental time, etc.) and appropriate literature to support this procedure. Finally, verification of the experimental results with analytical modeling will help you better understand how the materials behave.
  • asked a question related to Computational Modeling
Question
2 answers
I was trying to model the behavior of concrete for possible crack propagation, and I was planning to use smeared crack and cohesive zone models in ABAQUS. Do I need experimental data to input the required parameters? If not, please suggest a suitable way to get the input data.
Relevant answer
Answer
How can I get the surface-based cohesive behavior stiffnesses Knn, Kss, and Ktt from a tensile test of chemical post-installed anchors?
  • asked a question related to Computational Modeling
Question
5 answers
While modeling amorphous materials computationally, I have seen papers using periodic boundary conditions. But glasses/amorphous materials don't have any periodic nature. Is this some kind of approximation? Or do we have some other way to model aperiodic materials?
Relevant answer
Answer
With my team, we have modeled electrolyte and fluoride glasses with periodic boundary conditions, but these conditions are used just to reproduce the macroscopic system from an elementary simulation box, as in MMC and MD simulations for example. The periodic boundary conditions are not necessarily related to the structure. If you have any technical questions on the use of periodic boundary conditions, do not hesitate.
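To illustrate the point above, that the simulation box tiles space without imposing periodicity on the structure itself, here is a minimal Python sketch of the minimum-image convention used with periodic boundary conditions (names are mine; for illustration only):

```python
def minimum_image_distance(x1, x2, box):
    """Particle-particle distance under periodic boundary conditions,
    taking the nearest periodic image along each axis."""
    d = [b - a for a, b in zip(x1, x2)]
    d = [di - L * round(di / L) for di, L in zip(d, box)]
    return sum(di * di for di in d) ** 0.5

# In a box of side 10, particles at x = 1 and x = 9 are only 2 apart
# through the boundary: the box repeats, the glass structure need not.
print(minimum_image_distance([1.0, 0.0, 0.0], [9.0, 0.0, 0.0],
                             [10.0, 10.0, 10.0]))  # 2.0
```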
  • asked a question related to Computational Modeling
Question
3 answers
Let's assume the data were pulled from an assumed normal distribution, although, due to there not being enough data points, the distribution is more or less skewed.
Is it unwise to combine raw data with averaged data to train models, and why?
Relevant answer
Answer
Can you describe the problem a little more? For example, in what domain are you averaging: over all the samples, over a subset of samples, or over some features?
  • asked a question related to Computational Modeling
Question
4 answers
Hi everyone!
I'm working with a motor imagery dataset and I want to extract features using ERS/ERD. Is that possible?
I'm trying to compute these features following the paper of Bernhard and Graimman (see attached file), but I don't understand how they could be used as features. Also, how can I plot the ERS/ERD?
I attach a piece of code that computes the ERS/ERD for the alpha band. (I'm using EEGLAB.)
Thanks!
Relevant answer
Answer
Jaime,
To my knowledge, within the basic EEGLAB package, you should use an event-related spectral perturbation calculation for this. It has different options, and the default is a wavelet-based decomposition rather than the FFT which if I remember correctly, Pfurtscheller and colleagues utilized in their initial papers describing the outcome. You have the option to use FFT for this data, which may help with lower frequencies. 
I'm afraid I cannot help you with a code for this, however, if you take advantage of the epoch and EEG data structure in EEGLAB, you should be able to write your own code. 
I hope this helps a little bit!
Kind regards,
Chris
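For what it's worth, the band-power ERD/ERS value in the Pfurtscheller tradition is just the percentage change of band power in the analysis window relative to a reference (baseline) interval; a minimal Python sketch (the function name is mine):

```python
def erd_ers_percent(power_trial, power_baseline):
    """ERD/ERS as percent change of band power relative to baseline:
    negative values indicate desynchronization (ERD), positive ones
    synchronization (ERS)."""
    return (power_trial - power_baseline) / power_baseline * 100.0

# Alpha power dropping from 12 to 9 (uV^2) is a 25% desynchronization
print(erd_ers_percent(9.0, 12.0))  # -25.0
```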
  • asked a question related to Computational Modeling
Question
6 answers
I am trying to create a threshold such that the difference between the feature vectors of two different images of the same person falls below the threshold, while two different people exceed it. What measure of comparison should I use to find this threshold? As of now, I subtract the two feature vectors and calculate the resultant's magnitude, which turns out to be a very big number even for the same person, and there is no pattern in it. Sometimes for different people it will be a smaller number. What mathematical function can I use for this? I would be thankful if you could provide me some help on this.
Regards
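One scale-invariant option is cosine similarity, which compares the direction of the feature vectors rather than their raw magnitudes, so matching vectors score near 1 regardless of how large the numbers are. A minimal Python sketch (illustrative; other metrics such as normalized Euclidean distance are equally valid candidates):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors; 1 = same
    direction, 0 = orthogonal, independent of vector magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Parallel vectors score ~1.0 regardless of magnitude, which removes
# the "very big number" problem of raw vector differences.
print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # ≈ 1.0
```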
  • asked a question related to Computational Modeling
Question
1 answer
I am using Brian Hears, a tool of a program called Brian (it runs on Python), to create a computational model of the ear and of part of the afferent fibers to the inner hair cells. At the end of the simulation I export a .dat file that contains the numbers of the neurons that generated spikes and the times at which they were generated. Unfortunately, the spike times are indicated in ms in some cases and in seconds in others. Is it possible to export the spike times with uniform measurement units, meaning all spike times in ms or all in seconds?
Relevant answer
Answer
  • asked a question related to Computational Modeling
Question
9 answers
Logistic growth describes some phenomena (for example, population growth, bubble markets, tumor growth, etc.), but the current models don't consider an abrupt fall ("crash"). Tsoularis et al. (1997) suggested the "Generalized Logistic Growth" with four or five parameters, but none of the linked equations (Blumberg, Bertalanffy, etc.) simulate these situations.
Relevant answer
Jansen, Mende's article is simply wonderful! In biology there's a field that studies exactly this interconnection between biological and ecological phenomena, called allometric scaling, and a lot of work has been done to explain why the exponents (from power laws) vary and tend to assume certain characteristic values. Mende deserves to be known, because he found some universal behavior in growth phenomena with strong mathematical justification (exponential towers, which I didn't know about), suggesting another explanation for the exponents and connecting micro-scale behavior of interconnected and independent phenomena to macro-scale outputs. It would be better if mathematicians tried to derive the Mende-Peschel differential equation for biological appreciation. Once more, thanks for your readiness, Jansen.
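As a baseline for this discussion, the plain logistic equation the question starts from can be integrated in a few lines; the trajectory saturates at the carrying capacity and, as noted, has no mechanism for a crash. A minimal forward-Euler sketch in Python (parameter values are arbitrary):

```python
def logistic_trajectory(n0, r, k, dt, steps):
    """Forward-Euler integration of dN/dt = r*N*(1 - N/K): the curve
    rises toward K and stays there, with no abrupt fall."""
    n = n0
    out = [n]
    for _ in range(steps):
        n = n + dt * r * n * (1.0 - n / k)
        out.append(n)
    return out

traj = logistic_trajectory(n0=1.0, r=0.5, k=100.0, dt=0.1, steps=400)
print(round(traj[-1], 2))  # 100.0 (saturates at the carrying capacity)
```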
  • asked a question related to Computational Modeling
Question
2 answers
Individuals are often taken as agents, with their specific characteristics and criteria. But is it possible to consider groups as individuals, and their aggregated characteristics as a single unit?
Relevant answer
Answer
In organization theory, organizations actually usually constitute the agent. See the long tradition of NK papers that started with Levinthal's 1997 paper: http://pubsonline.informs.org/doi/abs/10.1287/mnsc.43.7.934.
Whether a region can be a decision-making unit - well, depends on if it is regional decision making you want to simulate.
  • asked a question related to Computational Modeling
Question
14 answers
Should the neural network output be in the form of an equation or a number? In Response Surface Methodology (RSM), either a first-order or second-order equation can be used as the fitness function for PSO. But what about using an ANN output for PSO?
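One way to read the question: the PSO fitness function need not be a closed-form equation at all. Any callable mapping a candidate parameter vector to a scalar, including a trained ANN's prediction, can serve. A minimal PSO sketch in Python with a quadratic stand-in for the ANN (all names and parameter values are illustrative, not a reference implementation):

```python
import random

def pso_minimize(fitness, dim, n_particles=20, iters=200,
                 lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    """Bare-bones particle swarm optimizer for any scalar fitness callable."""
    random.seed(0)  # deterministic for the example
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in "ANN": any trained model's predict() would fit here too
ann_predict = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
best, val = pso_minimize(ann_predict, dim=2)
```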
  • asked a question related to Computational Modeling
Question
7 answers
Are there differences between a mathematical model and a regression equation? Is a mathematical model built on theory, while a regression equation is built on experimental results?
Relevant answer
Answer
The mathematical model is formulated on the basis of theory, review of literature and logical basis. The regression model is generated after formulating the mathematical model as a method for estimating the conditional Y value given the X(s) Value(s). It is important for the regression model to include the error as a result of the incomplete relationship to represent the effects of the other variables that were omitted from the equation.
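A tiny worked example of the distinction: the regression equation below is estimated from observed data, whereas a mathematical model would be written down from theory first. A minimal ordinary-least-squares sketch in Python (illustrative only):

```python
def fit_simple_ols(xs, ys):
    """Ordinary least squares for y = a + b*x + error, estimating the
    conditional mean of Y given X from data."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# The exact line y = 2 + 3x is recovered when the error term is zero
a, b = fit_simple_ols([0, 1, 2, 3], [2, 5, 8, 11])
print(a, b)  # 2.0 3.0
```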
  • asked a question related to Computational Modeling
Question
2 answers
I am working on a method for synthesizing functions using Majority and NOT gates. I have seen papers in the field of reversible circuits and QCA in which, to evaluate functions synthesized with Majority and NOT gates, the number of constant inputs is computed and then used in Landauer's formula. But in QCA, fanout is allowed, so with fanout only two constant inputs are needed: 0 and 1. Why must all constant inputs be counted?
Relevant answer
Answer
Thanks alot for your answer
  • asked a question related to Computational Modeling
Question
5 answers
My pharmacophore modeling experiment in Accelrys Discovery Studio constitute 36 actives and 43 inactives as validation set towards one pharmacophore hypothesis (2 HBA, 1HBD and 1 hydrophobe). I got predictions in 2 models:
Model 1: True Positives=34 True Negatives=11 False Positives=32 False negatives=2 Sensitivity=0.94 Specificity=0.25
Model 2: True Positives=36 True Negatives=2 False Positives=41 False negatives=0 Sensitivity=1.0 Specificity=0.04
Which pharmacophore model would you recommend I select for further analysis? Is specificity more important than sensitivity, or vice versa? Please give valuable suggestions.
Relevant answer
Answer
I am thankful to Ashish and Mohd. Athar for pointing out the Güner-Henry scoring method for evaluating a VS protocol. The paper suggested that they used 66 active molecules to make a database of 3606 decoys. But what would happen if you appended inactives of acetylcholinesterase inhibitors instead: might the VS protocol wrongly recognize the inactives as hits, since they are chemically similar to the actives? Give your take.
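Returning to the sensitivity/specificity trade-off in the original question, one common single-number criterion for comparing the two models is Youden's J statistic (sensitivity + specificity - 1); a quick Python check with the counts given above (only one possible criterion, and it ignores the screening context):

```python
def youden_j(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1 (0 = useless, 1 = perfect)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1.0

# Model 1: TP=34, FN=2, TN=11, FP=32; Model 2: TP=36, FN=0, TN=2, FP=41
print(round(youden_j(34, 2, 11, 32), 3))  # 0.2
print(round(youden_j(36, 0, 2, 41), 3))   # 0.047
```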
  • asked a question related to Computational Modeling
Question
11 answers
For instance, FLAME, MAMID, EMA, and GRACE are computational models of emotions. What are the different parameters on the basis of which the outputs of these models can be compared?
What are the different ways to validate such models?
Relevant answer
Answer
I think that the comparison between two computational models of emotions (whether or not the artificial agent faithfully expresses human emotions), as we are dealing with in this dialectical interaction, may be based on, or emanate from, "Conceptual Science Systems", under the agreement that these can provide a theoretical framework for understanding, or provide knowledge on the development of a theory or a mental model, using rigorous methodologies and a common "emotional" artificial-intelligence programming language (PIAe).
  • asked a question related to Computational Modeling
Question
8 answers
In Finite Element Modelling, what material model is best for modelling a steel frame and an RC frame?
Relevant answer
Answer
TNO DIANA.
For the steel frame, I used the von Mises yield criterion with no hardening, but it does not show perfectly plastic behavior. I have attached pictures for reference (the blue curve is the result from the software, and the black curve I assume to be the expected result shape only).
  • asked a question related to Computational Modeling
Question
3 answers
Is there any comprehensive computational or formal model (and/or visual representation) for Generalized Anxiety Disorder, or for any other mental disorder or disease?
Relevant answer
Answer
Maybe you should look for models of anxiety (http://www.sciencedirect.com/science/article/pii/S0893608099000994, for instance) and tweak them to your needs.
  • asked a question related to Computational Modeling
Question
5 answers
Suppose I have the dispersal kernel (the frequency distribution of individuals across space after a dispersal event, i.e., x-axis: distances from a common origin (binned); y-axis: frequency of individuals present in the corresponding distance bin) for two populations of one species. How can these two kernels be compared quantitatively, and how can one infer whether they are statistically the same or different?
Relevant answer
Answer
Dear Sudipta,
As Hein Van Gils suggested, I believe spatial metrics are a good way to understand not only quantitatively but qualitatively how the distributions are different/similar.
Measures of overlap between kernels are a nice option, since you can measure how much one distribution looks like the other. There are different ways of calculating kernel overlap: calculating the area/length (for 2D/1D kernels) of a given threshold (e.g., the 95% kernel) that overlaps between two distributions; calculating the volume/area that overlaps; etc.
A good review for that, in the context of animal movement (but it may be thought for other kinds of data), is the paper by Fieberg and Kochanny 2005:
Also, if you are working with kernels in R, there is a good package that has already operationalized the calculations of kernel overlap: package adehabitatHR, function kerneloverlap(). Take a look here:
or here:
Best,
Bernardo
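For 1D kernels defined on shared distance bins, the overlap idea above can be sketched in a few lines: normalize both frequency distributions and sum the bin-wise minima. A minimal Python illustration (not the adehabitatHR implementation):

```python
import numpy as np

def overlap_coefficient(p, q):
    """Overlap of two discrete kernels on the same distance bins:
    sum of bin-wise minima of the normalized frequencies
    (1 = identical, 0 = completely disjoint)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.minimum(p, q).sum())

# Proportionally identical kernels overlap completely
print(round(overlap_coefficient([5, 10, 3], [10, 20, 6]), 6))  # 1.0
```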
  • asked a question related to Computational Modeling
Question
12 answers
Dear all, 
there are different studies supporting the hypothesis that the vertebrate motor system produces movements by combining a set of building blocks named motor primitives or motor synergies. 
One year ago, Levine and colleagues identified classes of interneurons in the mouse spinal cord that could support motor primitives in mammals (http://www.nature.com/neuro/journal/v17/n4/full/nn.3675.html).
I'm developing a computational model of the spinal cord and I would like to take these kinds of networks into account, but it seems that at the moment no one knows how to implement motor primitives from a neurobiological point of view.
In particular, I want to investigate the role of this kind of spinal circuitry in the execution of reaching movements. d'Avella and colleagues have shown (just for example here https://www.researchgate.net/publication/5818579_Combining_modules_for_movement) how a reaching movement can be decomposed into a linear combination of muscle synergies, but it's a mathematical model.
Thank you for your support,
Antonio
Relevant answer
Answer
I know I'm late to the party on this, but I thought to provide an answer anyway.
The answer to this question is neither straightforward nor consistent across classes of movements. This is because a primitive represents a movement class; a synergy does not. A synergy represents a strategy to fulfill a movement class. More specifically, it specifies the control signal to realize a type of primitive. I try to explain a little more below.
By and large, movement primitives can be divided into several classes (à la Hogan and Sternad). Minimally, you could have discrete primitives, impedance primitives, or rhythmic primitives. What makes them primitives is that, presumably, any of them could be combined into a more complex movement.
I think the nature of your question is this: how might the central and peripheral nervous systems represent and institute such a primitive? That is, in what sense is it constrained by interneuron regulation by the CNS? And, on the other side of this coin, in what contexts are these synergies due to biomechanical or physiological constraints?
However, talking about synergies and primitives as one and the same is difficult. Let's take a simple reaching movement to a target. Let's ignore the shoulder and only consider the reach as requiring torques around a planar two-link arm. We assume two agonist-antagonist pairs of muscles, one for each link. Let's assume the CNS (e.g., the motor cortex) represents the required transformation to endpoint force (at the hand) as a trajectory. Let's further assume this is represented as a motor primitive in terms of a simple equation of motion: F(t) = -kX(t) + bX'(t) + u(t), a simple forced, damped mass-spring. The system simply needs to translate the desired force pattern into a control signal to follow some desired path (a difficult problem I won't discuss). Consider the following two ways this could happen: (1) the CNS represents the necessary activity for each muscle independently, therefore not really fulfilling the requirement of synergy; (2) the CNS represents the activity as modulating something at the level of interneurons, which individually might project to multiple motor neurons; this fulfills the synergy condition. In both scenarios, though, the same motor primitive is used. Thus, the distinction between synergy and primitive, I believe, is required.
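To make the mass-spring primitive concrete, here is a small, hypothetical Python sketch: a single damped mass-spring primitive driven by a weighted ("synergy-like") combination of a tonic and a rhythmic drive. All parameter values, and the sign convention on the damping term, are illustrative assumptions, not taken from the papers discussed above.

```python
import math

def simulate_primitive(k, b, m, u, x0=0.0, v0=0.0, dt=1e-3, t_end=3.0):
    """Euler-integrate m*x'' = -k*x - b*x' + u(t), a forced, damped mass-spring.
    (Damping sign chosen for stability; an assumption for this sketch.)"""
    x, v, t = x0, v0, 0.0
    xs = []
    while t < t_end:
        a = (-k * x - b * v + u(t)) / m
        v += a * dt
        x += v * dt
        t += dt
        xs.append(x)
    return xs

# A "synergy" as a fixed weighted combination of two primitive drives:
u_tonic = lambda t: 1.0                         # sustained (discrete-like) drive
u_rhythm = lambda t: math.sin(4 * math.pi * t)  # rhythmic drive
w1, w2 = 0.7, 0.3
combined = lambda t: w1 * u_tonic(t) + w2 * u_rhythm(t)

traj = simulate_primitive(k=10.0, b=2.0, m=1.0, u=combined)
print(round(traj[-1], 2))  # hovers near the tonic equilibrium w1/k = 0.07
```

Swapping which level owns the weights w1, w2 (cortex vs. interneurons) is exactly the distinction between scenarios (1) and (2) above; the primitive itself is unchanged.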
  • asked a question related to Computational Modeling
Question
17 answers
Are there any good books or resources on Agent-Based Modeling?
Relevant answer
Answer
Hi, if you are interested in how to design multi-agent systems (MAS), I think one of the most up-to-date books about agent-oriented design processes is:
Handbook on Agent-Oriented Design Processes
Editors: Cossentino, M., Hilaire, V., Molesini, A., Seidita, V. 
Springer
2014
  • asked a question related to Computational Modeling
Question
2 answers
Evaluation of a surface equation of state generally involves assessing its ability to represent raw experimental data, or information derived from such data. The parameters of the equation are first adjusted to optimize this representation; this is followed by an evaluation of the physical reasonableness of the optimized parameters in terms of the physical model on which the equation of state is based.
Relevant answer
Answer
An example of parameter fitting (with the VdW model, though the paper shows the difference between VdW and F to be quite small):
(unsure this answers your question, though)
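As a toy illustration of the two-step procedure described in the question (fit the parameters, then judge their physical reasonableness), here is a pure-Python sketch that fits a two-dimensional van der Waals surface equation of state, pi = kT/(A - A0) - a/A^2, to synthetic data by a least-squares grid search. The functional form, units, data, and parameter ranges are all illustrative assumptions.

```python
import itertools

kT = 4.11  # thermal energy at ~298 K in pN*nm; illustrative units

def vdw_pi(A, A0, a):
    """2-D van der Waals surface equation of state: pi = kT/(A - A0) - a/A^2."""
    return kT / (A - A0) - a / A ** 2

# Synthetic "experimental" isotherm generated from known parameters:
true_A0, true_a = 0.2, 5.0
data = [(A, vdw_pi(A, true_A0, true_a)) for A in [0.5 + 0.1 * i for i in range(30)]]

# Step 1: adjust the parameters to optimize the representation (least-squares grid search).
def sse(A0, a):
    return sum((pi - vdw_pi(A, A0, a)) ** 2 for A, pi in data)

A0_fit, a_fit = min(itertools.product(
    [0.05 * i for i in range(1, 9)],    # candidate co-areas A0
    [0.5 * i for i in range(1, 21)]),   # candidate attraction constants a
    key=lambda p: sse(*p))

# Step 2: judge the physical reasonableness of the optimized parameters.
print(A0_fit, a_fit)  # recovers ~0.2 and ~5.0 here
assert A0_fit > 0, "a negative co-area would be physically unreasonable"
```

With real data the optimum will not sit exactly on a grid point and a proper optimizer is preferable, but the logic (optimize the representation, then sanity-check the parameters against the physical model) is the same.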
  • asked a question related to Computational Modeling
Question
3 answers
I have a complex geometry with no node connectivity, which I think is necessary for transferring loads. There are more than 400 instances in it, so partitioning and meshing each one is very tedious. I have tried both dependent and independent meshing but have been unable to merge them. Is there a method by which they can be merged easily?
Thanks in advance
Relevant answer
Answer
In the Assembly module you can merge parts, with various forms of control. If you have already meshed your parts, and you have planned it so that the meshes of each part have nodes at the same places on the boundary, you can use Merge Mesh. Nodes that are recognised as coincident are highlighted in a different colour. If you don't see all the expected nodes highlighted, you might need to change the tolerance setting.
An alternative approach is to mesh the individual parts separately and then define a tied boundary. Abaqus then calculates the right loads to be transferred to the right nodes. Note that if you use this method, you should avoid placing the boundary in an area where you might expect to see interesting or critical stress levels.
  • asked a question related to Computational Modeling
Question
1 answer
CATIA has a Wireframe and Surface Design workbench and a Generative Shape Design workbench.
Relevant answer
Answer
  • asked a question related to Computational Modeling
Question
11 answers
Let me describe my task briefly.
I have created an electronic map of a forestry area, which includes a set of objects (polygons) of different kinds, such as forest plants, roads, lakes, etc. Each object is homogeneous. I have to write a program for mapping the dynamics of the fire contour on the map. I am going to use Rothermel's model, because I have all the input data for it. However, this model assumes homogeneous terrain, while my terrain is heterogeneous. I am looking for information on how Rothermel's model can be adjusted to solve my task. Can anyone help me with this question? Perhaps there are other models better suited to implementation... I would be grateful for any help.
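One common way to handle heterogeneity is to treat each polygon (or raster cell) as locally homogeneous: Rothermel's model is evaluated per cell with that cell's own fuel, moisture, and slope inputs, and a minimum-travel-time (Dijkstra-style) propagation stitches the cells together. The Python sketch below uses a placeholder rate-of-spread table rather than the actual Rothermel equations; the grid values and cell size are purely illustrative.

```python
import heapq

# Heterogeneous terrain: per-cell rate of spread (m/min), e.g. precomputed by
# running Rothermel's model with each cell's own fuel, moisture and slope inputs.
# 0 marks non-burnable cells (lakes, roads). Values below are illustrative only.
ROS = [
    [2.0, 2.0, 0.0, 1.0],
    [2.0, 3.0, 0.0, 1.0],
    [2.0, 3.0, 3.0, 1.0],
    [0.5, 0.5, 0.5, 0.5],
]
CELL = 10.0  # cell size in metres

def spread(ros, ignition):
    """Dijkstra-style minimum-travel-time propagation: crossing a cell takes
    cell size / that cell's own rate of spread, so each locally homogeneous
    cell uses its own Rothermel output. Returns fire arrival time per cell."""
    rows, cols = len(ros), len(ros[0])
    times = {ignition: 0.0}
    heap = [(0.0, ignition)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and ros[nr][nc] > 0:
                nt = t + CELL / ros[nr][nc]
                if nt < times.get((nr, nc), float("inf")):
                    times[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times  # the fire contour at time T is the set of cells with time <= T

arrival = spread(ROS, (0, 0))
print((2, 3) in arrival)  # the fire routes around the non-burnable barrier
print((0, 2) in arrival)  # non-burnable cells are never ignited
```

A production model would add anisotropy (wind and slope make spread direction-dependent, e.g. via elliptical fire shapes) and diagonal neighbours, but the per-cell decomposition is the standard answer to heterogeneity.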
Relevant answer
Answer
Alianna, I appreciate your kind help, thank you very much.
I didn't manage to find Dr. Adler via ResearchGate, so I wrote to him directly by e-mail.
Kindest regards,
Marina.
  • asked a question related to Computational Modeling
Question
3 answers
Tools, especially for trust evaluation based on log details and feedback.
Relevant answer
Answer
My expertise is in WARMF, a simulation model for the hydrology and water quality of a river basin. I want to develop an app version of the model for cloud computing so that stakeholders can log in, run the model, and see the results. I know how to do it, but I am having difficulty finding a business model for users to subscribe to the service.
  • asked a question related to Computational Modeling
Question
4 answers
Can anybody help me find the birefringence value of Paclitaxel (Taxol), i.e. its ordinary and extraordinary refractive indices, for computational modelling in COMSOL?
Thanks,
Mahendar.
Relevant answer
Answer
Hi Huseyin,
Thanks for the reply. That was a typo in my earlier message; I want to model an anisotropic medium. I am doing that modelling with an assumed birefringence of 0.001: nx = 1.64, ny = 1.639, and nz = 1.64. I am getting a good response of intensity variation with sample thickness.
Thanks,
Mahendar
  • asked a question related to Computational Modeling
Question
4 answers
I am looking for a way to connect the order parameter to the changes in the mass. 
Relevant answer
Answer
Laminar two-phase flow with a solute soluble in both phases experiences the Taylor dispersion effect; see Wikipedia.
  • asked a question related to Computational Modeling
Question
4 answers
I am modeling a steel bridge in OpenSees. I have to model it with ndm 3 and ndf 6, using ForceBeamColumn elements because I have to model frames. But I want to release the end moments of the other members (bracings) so they behave as truss elements.
I tried assigning a ZeroLength element to one member, and it gives the following error in the eigenvalue analysis:
"FullGenEigenSolver::solve() - the eigenvalue 83 is complex with magnitude 81246.8
ProfileSPDLinDirectSolver::solve() - aii < minDiagTol (i, aii): (81, -3.37889e-027)
WARNING NewtonRaphson::solveCurrentStep() -the LinearSysOfEqn failed in solve()
DirectIntegrationAnalysis::analyze() - the Algorithm failed at time 0.0001
OpenSees > analyze failed, returned: -3 error flag"
My question is how to release end moments with ZeroLength elements. Even if it can be done, it is difficult to assign ZeroLength elements because there is a large number of members in a bridge. Are there any other methods to release the ends?
Please advise me.
Relevant answer
Answer
There is no need to use a ZeroLength element. If it is an end element, you can release any of the reactions using the fix command:
But if you want to release the end of a beam or column that is connected to another element, then you need to define a new node exactly at the location of the end of the element. This means you are going to have two nodes at the same location. Then use one of the nodes as the end of one element and the other as the end of the other element. Then use the equalDOF command to constrain the chosen degrees of freedom of one node to the other:
You can easily release the moment between two nodes (i.e., two elements) using this method.
Hope it helps,
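To make the duplicate-node approach concrete, here is a sketch in OpenSees Tcl for an ndm 3 / ndf 6 model; the node tags and coordinates are purely illustrative:

```tcl
# Brace end at the same location as frame node 10: duplicate the node,
# tie only the translations, and leave the rotations unconstrained.
node 10  5.0 0.0 3.0   ;# frame node (retained node)
node 110 5.0 0.0 3.0   ;# duplicate node, used as the brace end (constrained node)

# ... frame elements connect to node 10; the brace element connects to node 110 ...

# equalDOF $rNode $cNode $dofs -- tying only DOFs 1-3 (translations)
# leaves rotational DOFs 4-6 free, i.e. the end moments are released.
equalDOF 10 110 1 2 3
```

Tying different subsets of DOFs 4-6 gives partial releases if you need them.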
  • asked a question related to Computational Modeling
Question
13 answers
I am working on an invariant object recognition problem. Now I need to compare my model with convolutional neural networks (CNNs). I am looking for open-source CNN code; please let me know if any is available.
Relevant answer
Answer
You can take a look in:
There are three implementations of convolutional NNs there.
Hope this helps
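If it helps to see the building blocks themselves before picking a library, here is a minimal pure-Python sketch of the three core CNN operations (convolution, ReLU, max pooling). The image and kernel values are illustrative; real implementations add learned multi-channel filters, biases, and backpropagation.

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: the core operation of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def relu(fmap):
    """Elementwise rectification, the usual CNN nonlinearity."""
    return [[max(0, x) for x in row] for row in fmap]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: the source of (small) translation invariance."""
    return [[max(fmap[i + u][j + v] for u in range(size) for v in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[0, 0, 1, 1]] * 4  # a tiny image with one vertical edge
kernel = [[-1, 1]]          # responds where intensity steps up, left to right
fmap = relu(conv2d(image, kernel))
print(max_pool(fmap))       # -> [[1], [1]]: the edge response survives pooling
```

The pooling stage is what gives CNNs the small-shift invariance relevant to invariant object recognition; stacking conv/ReLU/pool layers grows that invariance hierarchically.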
  • asked a question related to Computational Modeling