Science topic

Computational Science - Science topic

Explore the latest questions and answers in Computational Science, and find Computational Science experts.
Questions related to Computational Science
  • asked a question related to Computational Science
Question
2 answers
2024 5th International Conference on Computer Vision and Data Mining (ICCVDM 2024) will be held on July 19-21, 2024 in Changchun, China.
Conference Website: https://ais.cn/u/ai6bQr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Computer Science and Engineering
......
◕ Vision Science and Engineering
· Image/video analysis
· Feature extraction, grouping and division
· Scene analysis
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Robotics Science and Engineering
Image/video analysis
Feature extraction, grouping and division
Scene analysis
......
All accepted papers will be published by SPIE - The International Society for Optical Engineering (ISSN: 0277-786X) and submitted to EI Compendex and Scopus for indexing.
Important Dates:
Full Paper Submission Date: June 19, 2024
Registration Deadline: June 30, 2024
Final Paper Submission Date: June 30, 2024
Conference Dates: July 19-21, 2024
For More Details please visit:
Relevant answer
Answer
Thanks for sharing. Wishing you every success in your task.
  • asked a question related to Computational Science
Question
1 answer
Amrita School of Engineering, Bengaluru campus, is currently accepting applications from highly motivated researchers who possess a strong background in mathematics, computational physics, applied physics, fluid dynamics, or a closely related field. Proficiency in programming languages such as C/C++, MATLAB, or Python is advantageous. Candidates should actively contribute to the team's research efforts. For more details, you may contact: Dr. K. V. Nagaraja - kv_nagaraja@blr.amrita.edu - +91- 98452 23844 ; Dr. T. V. Smitha - tv_smitha@blr.amrita.edu - +91- 9611107480 ; Dr. Naveen Kumar R - r_naveen@blr.amrita.edu - +91- 78296 70202
Relevant answer
Answer
Job Boards:
University Websites:
  • Many universities advertise open PhD positions on their departmental websites. Look for the department of Computational Science, Mathematics, Physics, Engineering, or a related field depending on the specific area of research you're interested in.
Additional Resources:
Tips for your search:
  • Tailor your search: Use keywords that reflect your specific research interests within Computational Science (e.g., machine learning, materials science, astrophysics).
  • Consider funding: Some PhD positions come with scholarships or fellowships that cover tuition and living expenses. Look for keywords like "funded" or "scholarship" in your search.
  • Be proactive: Contact professors directly whose research aligns with your interests. Express your enthusiasm and inquire about potential PhD openings in their group.
  • asked a question related to Computational Science
Question
1 answer
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and will also be submitted to EI Compendex and Scopus for indexing. Conference proceedings papers must be no fewer than 4 pages.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on the submission system/registration gets you priority review and feedback.
Relevant answer
Please let me know if anyone is interested.
  • asked a question related to Computational Science
Question
2 answers
Using machine learning algorithms
Relevant answer
  1. Surrogate Modeling: Using ML to create simplified models of complex physical systems for faster simulations and improved understanding.
  2. Data-driven Discovery: Employing ML to find patterns and relationships in data to make new scientific discoveries and develop innovative products.
  3. Inverse Problems: Solving problems that infer underlying causes from observations using machine learning techniques.
  4. Computer-Aided Engineering (CAE): Enhancing accuracy and efficiency in engineering simulations and automating the design process with ML.
  5. Healthcare Applications: Utilizing ML in medical image analysis, drug discovery, and personalized medicine for advanced healthcare solutions.
These topics represent some of the exciting frontiers where machine learning is making significant contributions to computational science and engineering research.
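Topic 1 above can be made concrete with a tiny sketch. Everything here is hypothetical and for illustration only: `expensive_simulation` stands in for a costly solver, and a least-squares polynomial plays the surrogate (in practice one would more likely use Gaussian processes or neural networks):

```python
import math

def expensive_simulation(x):
    # Stand-in for a costly physics solver (hypothetical example).
    return math.sin(2.0 * x) + 0.5 * x

def fit_polynomial(xs, ys, degree=5):
    # Least-squares polynomial fit via the normal equations of the
    # Vandermonde system, solved by Gaussian elimination with pivoting.
    n = degree + 1
    A = [[x**j for j in range(n)] for x in xs]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(AtA[r][i]))  # partial pivot
        AtA[i], AtA[p] = AtA[p], AtA[i]
        Aty[i], Aty[p] = Aty[p], Aty[i]
        for r in range(i + 1, n):
            f = AtA[r][i] / AtA[i][i]
            for c in range(i, n):
                AtA[r][c] -= f * AtA[i][c]
            Aty[r] -= f * Aty[i]
    coeffs = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        coeffs[i] = (Aty[i] - sum(AtA[i][j] * coeffs[j]
                                  for j in range(i + 1, n))) / AtA[i][i]
    return coeffs

def surrogate(coeffs, x):
    # The cheap model: a polynomial evaluation instead of a full simulation.
    return sum(c * x**j for j, c in enumerate(coeffs))

# Sample the expensive model sparsely, then query the cheap surrogate.
xs = [i / 10.0 for i in range(11)]
ys = [expensive_simulation(x) for x in xs]
coeffs = fit_polynomial(xs, ys)
```

The payoff is that `surrogate` can be evaluated millions of times (e.g. inside an optimizer or uncertainty-quantification loop) at negligible cost compared with rerunning the simulation.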
  • asked a question related to Computational Science
Question
18 answers
Complex systems are becoming one of the most useful tools in the description of observed natural phenomena across all scientific disciplines. You are welcome to share with us hot topics from your own area of research.
Nowadays, no one can encompass all scientific disciplines. Hence, it would be useful to all of us to know hot topics from various scientific fields.
Discussions about the various methods and approaches applied to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems are highly encouraged.
Relevant answer
Formation of Zoonoses: Climate Change and Anthropogenic Factors Expanding the Area of Zoonosis
Hemorrhagic fever with renal syndrome is a non-transmissible viral zoonosis widespread in Russia. It is considered one of the most important natural focal diseases. We examined leptospirosis and tularemia as climate-dependent diseases of bacterial etiology. Leptospirosis is transmitted only non-transmissively, while tularemia, in addition to a number of non-transmissible pathways, is characterized by a facultative-transmissible pathway, and arthropods, primarily blood-sucking dipterans, act as mechanical carriers.
Among the transmissible natural focal diseases of various etiologies, we examined tick-borne encephalitis, ixodid mite-borne borreliosis (Lyme disease), mite-borne typhus (mite-borne rickettsiosis), and Crimean hemorrhagic fever.
West Nile fever is one of the most important natural focal diseases transmitted by mosquitoes. The most unfavorable situation was observed in Astrakhan, Volgograd, and Rostov regions. They account for the vast majority of cases of the disease.
A classic example of anthroponosis is malaria. The epidemiological situation at the present time can be considered favorable given that the number of imported cases is small (about a hundred per year) and local cases are rare. However, malaria is also a classic example of a disease that can quickly regain its position when control is loosened.
Model analysis of changes in the distribution of Ixodidae mites and malaria mosquitoes showed that their habitats were influenced by the observed climate change: they are expanding significantly in the northern and eastern regions, while any contraction is negligible.
The prerequisites for an increase in the incidence of these infections and their wider spread persist.
Climate-related risk factors include expansion of the areas of arthropod vectors and an increase in their numbers, as well as a similar increase in the number and expansion of the areas of vertebrates, mainly murine rodents, which are reservoirs of natural focal infections and carriers of vectors in nature. A new risk factor is the importation of exotic vectors into the territory of Russia, as well as their rooting, the importance of which increases with the expected warming.
Countermeasures (adaptation measures) against climate-dependent infections include prevention (vaccination), strengthening monitoring of the species composition and number of vectors and reservoirs of infections, and increasing the scale and effectiveness of combating them. These measures also include personal protective equipment against them. The enhancement of the effectiveness of these measures should be based not only on the actual improvement of these funds but also on the improvement of the sanitary and epidemiological
  • asked a question related to Computational Science
Question
5 answers
Finding optimal meta-heuristic parameters is one of the open problems in computational science today. However, the literature contains works using Design of Experiments, as well as so-called hyper-heuristics: meta-heuristics specialized in optimizing other meta-heuristics. What methods do you know? Which do you think is best?
Relevant answer
Answer
We have recently published two papers proposing different adaptive methods for hyperparameter tuning, as follows:
  • asked a question related to Computational Science
Question
4 answers
In experiments, the bond energy per atom rises quadratically with the number of bonds; why, then, does the bond energy per atom rise linearly with the number of bonds per atom in simulations?
Relevant answer
Answer
I will get back to you soon.
  • asked a question related to Computational Science
Question
13 answers
Dear Colleagues,
I have recently graduated with a BSc in Mechanical Engineering. During my BSc, I assisted research and projects in a variety of fields, ranging from nanomechanics of advanced materials (experimental), predictive analysis of stochastic data input for control (MATLAB), human balance control (theoretical), and dynamical modeling of fluid/solid coupling problems with the corresponding CFD in OpenFOAM, to computational aerodynamics with HPC. Upon my graduation, I joined a research team at ETH Zurich as a scientific assistant to work on vortex kinematics (theoretical and computational).
My main interest areas are:
  • Nonlinear Dynamics and Chaos, Stochastic Systems, Machine Learning of Dynamical Systems and Fluid Dynamics, Prediction, Nonlinear Control
  • Computational Finance, Financial Analytics
  • Numerical Methods, Computing and Algorithm Development
Clearly, all of the fields mentioned above require a decent knowledge of mathematical modeling, analysis, and computation (mostly by parallel computing over HPCs). One can also argue that these areas are not really far from each other as they can be all classified into an umbrella field of Dynamical Systems Theory.
I will soon start my MSc in Computational Science and Engineering at ETH Zurich. However, I am struggling to decide which specialization area I should choose.
As a part of the program I have to enroll at least in two of the following CORE SUBJECTS:
  • Advanced Numerical Methods for CSE
  • Optimization for Data Science
  • Computational Statistics
  • Advanced Systems Lab (Fast Numerical Codes)
Of these, I am planning to take all four, as they are rich in content, relevant to my multidisciplinary taste, and beneficial for my future plans. They are also fairly complementary to one another.
I will also have to take two mandatory subjects as a part of the admission requirement:
  • Numerical Methods for CSE
  • High-Performance Computing Lab for CSE
*The program requires me to take 5 courses in my selected specialization area. The rest of the credits necessary to graduate can be chosen freely from any department.
ETH is a top-notch institute for education and research in all three of Control & Robotics, Fluid Dynamics, and Applied/Computational Mathematics. This at least ensures that whatever I choose I will still get a quality education and have a chance to do quality research.
As we all know, modern areas such as robotics, data science, software engineering, neuroscience, computational biology, etc. have rather well-defined career paths. People in those areas would not have as much trouble as someone multidisciplinary (e.g. in my MSc program) deciding what subjects to take and what to focus on.
Now, I lost 2 years between high school and university, and I believe this has reduced some of my flexibility in this kind of decision, especially given that I am in a long-distance relationship which I also have to take care of. It is likely that I will prefer to stay at ETH for my PhD, or work here for some time before my PhD. I may also choose to do my PhD at one of the other top schools.
I really appreciate your opinions and advice!
Thank you for your time and patience!
Kind Regards
Relevant answer
Answer
Dear Mirlan,
My congratulations on your graduation. Regarding your question about future studies at ETH, I have looked at the outline of the courses mentioned in your question.
You are probably familiar with these sites but I have included the links just for the documentation:
Advanced numerical methods
Optimization for data science
Computational statistics
Advanced systems lab
Based on the analysis of the above, I will probably choose the Advanced Numerical Methods and the Advanced Systems Lab. (I really like these courses and I think they are very useful regardless of the future specialization).
I wonder how the current situation (related to Covid-19) with online courses is evolving at ETH? Will this constraint change schedules and plans?
In any case, my best wishes for the success of your program.
Kind Regards
  • asked a question related to Computational Science
Question
1 answer
Dear ResearchGate representatives, In 2004 I was the leader of and responsible for the CEPIMA research group (UPC), which I created that same year by linking together the TQG group and the LCMA, both of which I led. We had an intense research activity. Our work
Badell, M., Fernandez, E., Bautista, J., Puigjaner, L. “Empowering Financial Tradeoff in Joint Financial & Supply Chain Scheduling & Planning Modeling.” In International Conference of Computational Methods in Sciences and Engineering 2004 (Eds.George Maroulis and Theodore Simos), VPS, Attica, Greece, ISBN: 90-6764-418-8, pp.653-656 (2004).
where I was one of the authors (responsible author). I sent Dr. Mariana Badell, representing the work done by the four authors, to Attica to make the presentation (paid by CEPIMA). It was successful and was selected and invited for publication in the "Int. Journal of Production Economics":
Badell, M., Fernández, E., Bautista, J., Puigjaner, L. “Empowering Tradeoff in Joint Financial & Supply Chain Scheduling & Planning Modeling”, Lecture Series on Computer & Computational Sciences, 11, ISSN: 1573-4196, pp. 653-656 (2004).
With kind regards,
Luis Puigjaner
Relevant answer
Answer
This was only a letter addressed to ResearchGate in answer to its query, and by no means meant to be open to the public. So please, drop it.
  • asked a question related to Computational Science
Question
5 answers
The definition of a D-number is given by Y. Deng in his paper, " D-numbers: Theory and Applications" published in the Journal of Information & Computational Science 9: 9 (2012), pp. 2421-2428 is perhaps not correct.
Let us look at the definition:
Let Omega be a finite nonempty set. A D-number is a mapping D from Omega to [0, 1] such that
the sum of {D(B) : B is a subset of Omega} is less than or equal to 1.
By its definition, D has the domain Omega, so D applies to elements of Omega. How, then, can it be applied to subsets of Omega?
Should we replace the definition as D is a mapping from P(Omega) to [0, 1]?
Relevant answer
Answer
Thanks, Dr. Seiti. But I could not get either of the two papers you referred to, as the full texts are not available on RG.
I have put a request to the authors and hope to get copies of the papers soon to verify your suggestions.
regards,
B. K. Tripathy
  • asked a question related to Computational Science
Question
3 answers
How should researchers understand the importance of green computing in the current era?
Relevant answer
Answer
Two routes to an answer here.
The first is green technology in general. Sustainability initiatives require reducing the energy intensity of industrial and consumer applications, and computing is part of that. Greener computing reduces the energy intensity of economic activities, improving the energy intensity of economies (i.e. increasing the GDP per unit of energy used). This has the same general impact as environmental sustainability measures in general.
Second is in terms of operating cost. For a large data center or supercomputer, the energy intensity is a big concern. Let's take round order-of-magnitude numbers. A top supercomputer today costs on the order of $100M (or maybe a little less; I overestimate a bit to make THIS calculation conservative). The power required to run the machine is on the order of 10 MW. If the machine has a useful life of, say, 3 years, that is about 26,000 hours. Now, if you are lucky, you can buy energy for $0.1/kWh. That means your machine costs $1,000 per hour to run just for the electricity. Over the life of the machine you will spend $26M just on electricity! This has two BIG effects. First, the total cost of ownership of supercomputers (or data centers) is greatly affected by the energy use. Second, because of the high cost of energy, the lifetime of machines is shorter: machines still work after 3 years, but the cost to operate them is too high in comparison to their value once newer machines are available in three years and the old ones are too expensive to operate. Let me restate this in bold: supercomputers are turned off not because they end their useful life but because they use too much energy in relation to other options.
Green computing means less energy per computation which allows you to buy more computer per budget dollar on a lifetime cost basis and allows you to economically run the machine longer.
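The back-of-envelope arithmetic above can be written out directly; the figures are the discussion's own illustrative round numbers, not vendor data:

```python
# Back-of-envelope total-cost-of-ownership estimate for a supercomputer,
# using the illustrative round numbers from the discussion above.
purchase_cost_usd = 100e6      # ~$100M machine
power_mw = 10.0                # ~10 MW draw
lifetime_hours = 3 * 365 * 24  # ~3-year useful life (26,280 h)
price_per_kwh = 0.10           # optimistic utility rate, $/kWh

power_kw = power_mw * 1000.0
cost_per_hour = power_kw * price_per_kwh              # ~$1,000/hour
lifetime_energy_cost = cost_per_hour * lifetime_hours  # ~$26M

print(f"electricity per hour:  ${cost_per_hour:,.0f}")
print(f"lifetime electricity:  ${lifetime_energy_cost / 1e6:,.1f}M")
print(f"fraction of purchase:  {lifetime_energy_cost / purchase_cost_usd:.0%}")
```

With these numbers, electricity alone adds roughly a quarter of the purchase price over the machine's life, which is why halving energy per computation directly buys more machine per budget dollar.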
In fact it is so important that the logic runs in reverse. For example, early scoping studies for the US exascale effort REQUIRED the power of an exaflop machine to be less than 20 MW. Straight scaling from the petaflop era would have given a machine with a power well above 100 MW. So from the start, the exaflop project REQUIRED planners to plan for an increase in computing efficiency of an order of magnitude. To state it another way, green computing is a prerequisite to play the supercomputing game today: you must be this green to ride this ride.
Note: this discussion is not quite the same as the discussion of the "power wall" and the failing of Dennard scaling in HPC. There the issue is the inability to dissipate heat on a chip. However, the two problems scale similarly and the solution to one helps the solution to the other.
  • asked a question related to Computational Science
Question
8 answers
I’m trying to implement semantic similarity based on Normalized Google Distance, and I have many problems obtaining correct data. I tried simulating a browser and getting data via the Google API (PHP, Java, R, Google Docs), and every time I got different results. Is there any proper way to get accurate and current data?
Relevant answer
Answer
1) Their API does not provide the same results as Google Live.
2) The results are just an estimate of the total. If you navigate through the results pages, you will see that for some queries there are actually fewer results than the number shown.
3) Another point is that it is very difficult to compute NGD, you can check this thesis (chapter 4 (also 3)) as an alternative (method CBM): http://www.dbd.puc-rio.br/pergamum/tesesabertas/1012681_2014_completo.pdf
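Whatever the source of the counts, the NGD formula itself is straightforward once you have them. A minimal sketch; the hit counts and total-page figure below are made-up placeholders for illustration, not real Google numbers:

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from raw hit counts.

    fx, fy -- hit counts for each term alone
    fxy    -- hit count for both terms together
    n      -- total number of indexed pages (a rough, moving target)
    """
    lfx, lfy, lfxy, ln = (math.log(v) for v in (fx, fy, fxy, n))
    # NGD(x, y) = (max(log fx, log fy) - log fxy) / (log N - min(log fx, log fy))
    return (max(lfx, lfy) - lfxy) / (ln - min(lfx, lfy))

# Hypothetical counts for illustration only.
print(ngd(fx=8_000_000, fy=5_000_000, fxy=2_500_000, n=25_000_000_000))
```

Note that the instability you observed feeds directly into `fx`, `fy`, `fxy`: because the counts are estimates, NGD values drift between API calls, so it helps to snapshot all counts in one session and treat them as a consistent set.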
  • asked a question related to Computational Science
Question
3 answers
I want to do a master's in computational science, and for that I need to learn MATLAB, so I need help learning MATLAB from the basic level. Also, which specific portions of it are covered in computational science courses?
Relevant answer
You might use these ones;
there are many examples from beginner to advanced levels!
  • asked a question related to Computational Science
Question
6 answers
Is there any research on graphene being carried out through computational science?
Relevant answer
Your question is kind of vague...
If you mean Quantum chemical/Computational chemistry studies of carbon in its various allotropic forms, the answer is yes. Otherwise, you need to clarify your question.
For the QM part you may start here:
Basiuk VA, Rybak-Akimova EV, & Basiuk EV (2017) Graphene oxide and nanodiamond: same carboxylic groups, different complexation properties. RSC Adv. 7:17442-17450.
  • asked a question related to Computational Science
Question
3 answers
What behaviors of which animals/birds/insects show swarm intelligence? And what are the practical aspects of using them to solve different problems?
Relevant answer
Answer
Dear Muhammad Gulraj,
As Dr. Ramon López de Mántaras pointed out, foraging in bee or ant colonies is a behavior that is simulated to solve optimization problems. The Grey Wolf Optimizer (GWO) algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Particle Swarm Optimization (PSO) simulates the social behavior of bird flocking or fish schooling and is used to solve different optimization problems. For more information please see the following links:
Regards,
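As a concrete illustration of the PSO idea mentioned above, here is a minimal sketch with the standard inertia/cognitive/social coefficients; a didactic toy, not a tuned implementation:

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal Particle Swarm Optimization sketch (global-best topology)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49          # inertia, cognitive, social weights
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]            # each particle's best position
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm's best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f

# Minimize the sphere function; the swarm should approach the origin.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

Each velocity update blends the particle's momentum, its own memory, and the flock's best find, which is exactly the "social behavior" being simulated.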
  • asked a question related to Computational Science
Question
10 answers
Can we teach morality to machines using current AI & machine learning techniques? Examples?
Who will define morality?
Relevant answer
Answer
  • asked a question related to Computational Science
Question
13 answers
I need to write a research proposal in the area of high-performance computing for my PhD.
Relevant answer
Answer
Reading in general is good, but an important part of original research is, based on knowledge of a field, coming up yourself (or with a little help) with research topics to investigate. An important part of PhD research is finding the important, relevant, challenging subjects you can develop, solve, and advance.
If you get direct advice from others you can be a technician of some sort, but not a complete researcher!
  • asked a question related to Computational Science
Question
2 answers
Is there any Python module or submodule that can calculate the Kretschmann scalar given a specified metric? Or do I have to write the code myself?
Relevant answer
Answer
I think there is no such module available in Python; we need to write it ourselves.
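As a sketch of what "writing it ourselves" involves, here is a pure-Python finite-difference check on the round 2-sphere of radius a, where the Kretschmann scalar is known analytically to be K = 4/a^4. The metric, radius, and step size are illustrative choices; a symbolic version (e.g. with SymPy) would follow the same index contractions:

```python
import math

A = 2.0      # sphere radius (example value)
H = 1e-4     # finite-difference step

def g(x):
    # Metric of the round 2-sphere in (theta, phi): diag(a^2, a^2 sin^2 theta)
    return [[A * A, 0.0], [0.0, A * A * math.sin(x[0]) ** 2]]

def ginv(x):
    m = g(x)
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def d_metric(x, k):
    # Central-difference derivative of g_ij with respect to coordinate k
    xp, xm = list(x), list(x)
    xp[k] += H; xm[k] -= H
    gp, gm = g(xp), g(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * H) for j in range(2)] for i in range(2)]

def christoffel(x):
    # Gamma^a_bc = (1/2) g^ae (d_b g_ec + d_c g_eb - d_e g_bc)
    gi = ginv(x)
    d = [d_metric(x, k) for k in range(2)]   # d[k][i][j] = d_k g_ij
    return [[[0.5 * sum(gi[a][e] * (d[b][e][c] + d[c][e][b] - d[e][b][c])
                        for e in range(2))
              for c in range(2)] for b in range(2)] for a in range(2)]

def riemann(x):
    # R^a_bcd = d_c Gamma^a_bd - d_d Gamma^a_bc
    #           + Gamma^a_ce Gamma^e_bd - Gamma^a_de Gamma^e_bc
    G = christoffel(x)
    dG = []
    for k in range(2):
        xp, xm = list(x), list(x)
        xp[k] += H; xm[k] -= H
        Gp, Gm = christoffel(xp), christoffel(xm)
        dG.append([[[(Gp[a][b][c] - Gm[a][b][c]) / (2 * H)
                     for c in range(2)] for b in range(2)] for a in range(2)])
    return [[[[dG[c][a][b][d] - dG[d][a][b][c]
               + sum(G[a][c][e] * G[e][b][d] - G[a][d][e] * G[e][b][c]
                     for e in range(2))
               for d in range(2)] for c in range(2)] for b in range(2)]
            for a in range(2)]

def kretschmann(x):
    # K = R_abcd R^abcd = g_ae g^bf g^cp g^dq R^a_bcd R^e_fpq
    R, gm, gi = riemann(x), g(x), ginv(x)
    K = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                for d in range(2):
                    for e in range(2):
                        for f in range(2):
                            for p in range(2):
                                for q in range(2):
                                    K += (gm[a][e] * gi[b][f] * gi[c][p] * gi[d][q]
                                          * R[a][b][c][d] * R[e][f][p][q])
    return K

print(kretschmann([1.0, 0.3]))   # analytic value: 4 / A**4 = 0.25
```

For a 4D metric the loops are the same, only with `range(4)`; at that point a symbolic computer-algebra approach becomes more attractive than finite differences.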
  • asked a question related to Computational Science
Question
1 answer
I synthesized Leon3 with two different options:
  • flatten-all & auto-ungroup:
This produces a single top-level Verilog module with all the modules merged inside. During synthesis there are some uninitialized FFs, which cause 'x' propagation and result in incomplete annotation during power estimation. For that, I can initialize all the FFs in the Verilog file by finding them with Linux 'sed' before the ModelSim simulation starts, then produce the SAIF; power estimation is then done with 100% annotation.
But
  • no-auto-ungroup
preserves the original design hierarchy in terms of Verilog modules and saves the synthesized netlist in a Verilog file. Now that there are many modules, how do I initialize all the FFs in the different modules before the ModelSim simulation?
Please guide.
Best Regards, Sajjad
Relevant answer
Answer
I think if you use the original packaging from GRLIB, it works perfectly. Although there is 'x' propagation in early simulation, it is OK: if the designated program runs on it, it gives the desired results.
That is my experience using Leon3.
Cheers
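Regarding the original question of initializing FFs across many modules: one option is to script the sed-style substitution over every netlist file before simulation. A hedged sketch only; the regex handles simple `reg name;` and `reg [msb:lsb] name;` declarations (not multi-variable declarations like `reg a, b;`), and the added `= 0` initializer is a simulation-only aid, as in the flattened-netlist workaround:

```python
import re

# Append an initializer to simple reg declarations, e.g.
#   reg q;        ->  reg q = 0;
#   reg [3:0] c;  ->  reg [3:0] c = 0;
DECL = re.compile(r"^(\s*reg\s+(\[[^\]]+\]\s*)?[A-Za-z_][A-Za-z0-9_]*)\s*;",
                  re.MULTILINE)

def initialize_regs(verilog_source):
    """Return the source with '= 0' appended to matching reg declarations."""
    return DECL.sub(r"\1 = 0;", verilog_source)

example = "module m;\n  reg q;\n  reg [3:0] cnt;\n  wire w;\nendmodule\n"
print(initialize_regs(example))
```

Applied over all `*.v` files of the hierarchical netlist (e.g. with `pathlib.Path.glob`), this gives every module's FFs a defined starting value, so the 'x' propagation and the incomplete SAIF annotation should disappear just as in the flattened case.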
  • asked a question related to Computational Science
Question
4 answers
I am unable to find datasets with all positive values on UCI. Values can be fractions but not negative.
Thanks
Relevant answer
Answer
I require datasets with more than 10K instances (rows). Furthermore, by positive values I mean positive values of the attributes.
Thanks
  • asked a question related to Computational Science
Question
2 answers
Hello everyone;
I have a problem with parallel computing in Fluent when I activate the dynamic adaptive mesh, even though the code can load-balance automatically after each adaptation.
If you have any suggestion or solution for this problem, please share it with me.
thank you.
Relevant answer
Answer
No, it's a parallel licence.
  • asked a question related to Computational Science
Question
1 answer
For modelling light emission from semiconductors, I have often seen people use a dipole source, which is justified for an exciton. But if we consider a volume which may contain several excitons (enough to be emitting in all directions from that small volume), can we use just a point source (or point sources filling a circular region)? And another question: should it be a single pulse, or is a continuous wave justified (given that the total simulation time is just 40 fs)?
Relevant answer
Answer
Spontaneous emission (SE) rates from semiconductors are, interestingly, well approximated by dipole radiation using classical electrodynamics, even though SE strictly requires a quantum-electrodynamics explanation (this was shown as early as 1999 by Xu et al.). That is why I found many papers which use a dipole excitation. But the question is about the validity of monopole sources and of single-pulse versus continuous-wave excitation at femtosecond timescales.
  • asked a question related to Computational Science
Question
1 answer
Complexity of computation
Relevant answer
Answer
Dear Laouid,
You will find excellent information at the following links
With my best regards
Prof. Bachir ACHOUR
  • asked a question related to Computational Science
Question
3 answers
Regarding the problem of graph coloring, is there any good reference dealing with the VC dimension of the problem?
More generally, concerning NP-complete problems, is there any good reference dealing with the VC dimension of such problems?
Relevant answer
Answer
Dear Prof.  Andriy O. Borisyuk(Borysyuk) !
Many thanks for your suggestion !
  • asked a question related to Computational Science
Question
1 answer
In CUDA C, I found how to create a table, but I am not getting how I can do GROUP BY queries on that table, or which functions can define them.
Relevant answer
Answer
I don't fully understand your question, but:
1) The PDF file is from 2009! Those were the ancient times of GPGPU! Maybe this is more up to date: https://wiki.postgresql.org/wiki/PGStrom
2) You should decide at which level you will write your programs: at the CUDA level or at the SQL level?
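On the algorithmic side, a GROUP BY aggregation is commonly structured as sort-by-key followed by a segmented reduction; in CUDA, Thrust's `sort_by_key` and `reduce_by_key` follow exactly this pattern. The same two-phase structure, sketched on the CPU in Python for clarity:

```python
from itertools import groupby

def group_by_sum(keys, values):
    """Sort-by-key, then reduce each run of equal keys: the structure
    GPU SQL engines (and Thrust's sort_by_key / reduce_by_key) use
    to implement GROUP BY with a SUM aggregate."""
    pairs = sorted(zip(keys, values))             # phase 1: sort by key
    return {k: sum(v for _, v in grp)             # phase 2: segmented reduce
            for k, grp in groupby(pairs, key=lambda p: p[0])}

# Conceptually: SELECT dept, SUM(salary) FROM emp GROUP BY dept
depts = ["eng", "hr", "eng", "ops", "hr"]
salaries = [100, 60, 120, 80, 70]
print(group_by_sum(depts, salaries))  # {'eng': 220, 'hr': 130, 'ops': 80}
```

Both phases map well onto the GPU: the sort is a parallel radix sort, and the segmented reduction is a single pass over key boundaries, which is why this decomposition (rather than a hash table) is the usual starting point in CUDA.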
  • asked a question related to Computational Science
Question
3 answers
We have data on domestic workers: their wages, their choice to do domestic work rather than a commercial job, and its benefits and drawbacks relative to other jobs of the same level. How can we use fuzzy logic to measure the economic and social value of domestic work?
Relevant answer
Answer
The work by Osorio Saraz et al. (2012) about "Fuzzy modeling applied to the welfare of poultry farms workers" may be suitable to start with the idea.
Jairo Alexander Osorio Saraz; Leonardo Schiassi; Tadayuki Yanagi Junior; Flávio Alves Damasceno; Neiton Silva Machado, (2012). Fuzzy modeling applied to the welfare of poultry farms workers, DYNA. Revista de la Facultad de Minas, Vol 79, No 174; pp. 127-135.
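As a minimal sketch of the direction (the membership breakpoints and weights below are invented for illustration, not calibrated to any dataset): fuzzify a worker's wage into low/medium/high sets with triangular memberships, then combine the memberships into a crude value score.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_wage(wage):
    # Invented example breakpoints; monetary units are arbitrary.
    return {
        "low":    triangular(wage, -1, 0, 500),
        "medium": triangular(wage, 300, 750, 1200),
        "high":   triangular(wage, 1000, 1500, 10**9),  # crude right shoulder
    }

def economic_value_score(wage):
    # Crude weighted defuzzification: higher wage classes weigh more.
    m = fuzzify_wage(wage)
    weights = {"low": 0.2, "medium": 0.6, "high": 1.0}
    total = sum(m.values())
    return sum(weights[k] * v for k, v in m.items()) / total if total else 0.0

print(fuzzify_wage(400))
```

A full Mamdani-style treatment, as in the Osorio Saraz et al. paper, would add fuzzy sets for the non-monetary inputs (hours, autonomy, job security) and IF-THEN rules combining them, but the fuzzification step above is where it starts.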
  • asked a question related to Computational Science
Question
5 answers
Let us assume a connected graph with a large number of nodes and unweighted edges. Also assume that global information is not available (i.e., the adjacency matrix is not known); each node knows only its neighbors. Now I want to find a path to a destination. How do I find the path that is shortest among the various possible paths?
Relevant answer
Answer
In addition, consider searching from both sides: when two opposing vertices from different starting points meet, you can stop. For example, count from one side with positive numbers 1, 2, 3, 4, ... and from the other side with -1, -2, -3, -4, ... When an edge appears with a positive node on one side and a negative node on the other, you can construct the path by stepping back in descending and ascending order. You can also use two marks instead of a single positive/negative sign.
Regards,
Joachim
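The bidirectional idea described above can be sketched as two BFS frontiers that expand one layer at a time until they meet. The adjacency dict here is illustrative, and stopping at the first meeting is the sketch's simplification (a strictly shortest-path stop condition needs a little extra bookkeeping):

```python
from collections import deque

def bidirectional_shortest_path(graph, s, t):
    """Expand BFS frontiers from both ends; when they meet,
    stitch the two half-paths together via the parent maps."""
    if s == t:
        return [s]
    parents_s, parents_t = {s: None}, {t: None}
    q_s, q_t = deque([s]), deque([t])

    def expand(q, parents, other_parents):
        for _ in range(len(q)):            # one whole frontier layer
            u = q.popleft()
            for v in graph[u]:
                if v not in parents:
                    parents[v] = u
                    if v in other_parents:  # frontiers met at v
                        return v
                    q.append(v)
        return None

    while q_s and q_t:
        meet = expand(q_s, parents_s, parents_t)
        if meet is None:
            meet = expand(q_t, parents_t, parents_s)
        if meet is not None:
            path, u = [], meet
            while u is not None:            # walk back toward s
                path.append(u)
                u = parents_s[u]
            path.reverse()                  # s ... meet
            u = parents_t[meet]
            while u is not None:            # walk forward toward t
                path.append(u)
                u = parents_t[u]
            return path                     # s ... meet ... t
    return None                             # disconnected

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(bidirectional_shortest_path(g, 1, 5))  # [1, 2, 4, 5]
```

Each search only ever queries `graph[u]`, i.e. a node's own neighbor list, which matches the question's constraint that no global adjacency matrix is available.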
  • asked a question related to Computational Science
Question
5 answers
Currently, it seems that quantum computation is still in the very early stages of research, with no commercial solutions on the horizon (D-Wave is a questionable candidate). Nevertheless, some market researchers estimate that in a few years the quantum computing market will be worth over 25 billion USD: http://www.marketresearchmedia.com/?p=850
How justified are such claims, in your opinion? When will we see useful quantum computers? Will the field of quantum computing enter a sharp funding decline if a useful quantum computer does not appear soon?
Relevant answer
Answer
Your question is one of the most frequently asked at this time.
You see it from the holistic, commercial side. I mean that there is reason for hope in some cases (so-called quantum circuits), which are quite realistic for special problems.
But in general I suppose the quantum computer cannot be realized in the future "as one unified concept".
As things stand, we need special mathematical algorithms (Hadamard, Deutsch, Shor) to obtain parallel functioning and a reverse fixing of the states of a quantum circuit, to get that fascinating "parallel computing". In reality, many such special physical circuits have been built, and each claims to be the first "quantum computer".
So my current understanding is that a unified quantum computer cannot exist, because we cannot rebuild the physical world of quanta and steer it exactly as we need for our results. We have to create some very special circuits and surround them with our mathematically defined algorithms to obtain some of the effects that particles exhibit in quantum mechanics.
It is also certain that a quantum computer (quantum circuit) is not better than our classical computers in all cases. Its particular advantages are, for instance, factoring numbers and related cryptographic operations on data.
So there will be a lot to do in algorithmics and mathematics to make quantum circuits usable in more cases and for commercial purposes.
  • asked a question related to Computational Science
Question
8 answers
I have some values which are close to zero (such as e^(-800)), and when I run my code, MATLAB rounds these values off to zero. How do I prevent this from happening?
Relevant answer
Answer
OK, good. Then see arXiv:1504.01964 for a procedure which calculates y directly, without going through the intermediate calculation of exp(x). It is described in the arXiv note as the solution of y = log(W(exp(x))), but it is mathematically equivalent to y = x - W(exp(x)), and the algorithm does not require calculating exp(x) as an intermediate. Please let me know if you have trouble implementing it in MATLAB.
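More generally, when values like e^(-800) underflow (double precision bottoms out near 1e-308), the standard remedy is to keep quantities in log space and combine them with the log-sum-exp trick, never forming exp(x) at all. A sketch in Python; SciPy ships this as `scipy.special.logsumexp`, and the MATLAB version is the same few lines:

```python
import math

def logsumexp(log_vals):
    """log(sum(exp(v) for v in log_vals)), computed without underflow
    or overflow by factoring out the maximum before exponentiating."""
    m = max(log_vals)
    return m + math.log(sum(math.exp(v - m) for v in log_vals))

print(math.exp(-800))            # 0.0 -- underflows in double precision

# Relative weights of {e^-800, e^-801, e^-802}, computed in log space:
logs = [-800.0, -801.0, -802.0]
log_total = logsumexp(logs)
weights = [math.exp(v - log_total) for v in logs]
print(weights)                   # well-defined, sums to 1
```

The key point is that only *differences* of log-values are ever exponentiated, and those differences are of moderate size even when the raw values are astronomically small.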
  • asked a question related to Computational Science
Question
3 answers
Is there a way to translate arbitrary Boolean combinations of linear constraints into ILP? I'm particularly struggling with disjunctions of linear constraints. Is it possible with Simplex at all?
E.g. a OR b AND ((5x +6y <= 7z) OR (4x + 7z <= 9y))
Of course I can use SMT solvers for this, but I want to try ILP for performance reasons.
Does anyone have a suggestion?
Relevant answer
Answer
Maybe it is even sufficient to have a solver that only supports OR but not AND. The only prerequisite is to also have negation; then you can use De Morgan's law to remove all ANDs. Together with Frédéric's answer, this might help when using existing solvers.
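For the disjunction of linear constraints specifically, the standard ILP device is the big-M reformulation: add a binary variable b and relax whichever disjunct is "switched off" (propositional atoms like a and b in the example likewise become 0/1 variables). A brute-force sanity check of the encoding on the question's two disjuncts; M = 1000 is an assumed bound valid over the sampled domain, not a universal constant:

```python
M = 1000  # assumed upper bound on constraint violation over the test domain

def disjunction(x, y, z):
    # The question's disjunction: (5x + 6y <= 7z) OR (4x + 7z <= 9y)
    return (5*x + 6*y <= 7*z) or (4*x + 7*z <= 9*y)

def big_m_feasible(x, y, z):
    # Big-M encoding: b = 1 enforces the first disjunct, b = 0 the second;
    # the inactive disjunct is relaxed by M. An ILP solver would search
    # over b; here we just enumerate both values.
    return any(5*x + 6*y - 7*z <= M * (1 - b) and
               4*x + 7*z - 9*y <= M * b
               for b in (0, 1))

pts = [(x, y, z) for x in range(-5, 6) for y in range(-5, 6) for z in range(-5, 6)]
assert all(disjunction(*p) == big_m_feasible(*p) for p in pts)
print("big-M encoding agrees with the disjunction on", len(pts), "sample points")
```

In a real model, M must be a provable bound on how far each left-hand side can exceed its right-hand side over the feasible region; a needlessly large M weakens the LP relaxation, which is where the performance cost hides. Conjunctions need no trick at all: you simply add both constraints to the model.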
  • asked a question related to Computational Science
Question
1 answer
Synaptic weights
Relevant answer
Answer
The STDP idea deserves attention. However, the additive and multiplicative models of synapse-weight change are not perfect: they do not take into account system-wide factors specific to neural networks. Now to the question about the typical distribution of synapse weights. You should keep in mind that the typical distribution of synapse weights carries some genetic information. Without knowledge of the specific tasks assigned to the neural network, a typical distribution of synapse weights cannot be justified.
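To illustrate the additive/multiplicative distinction mentioned above: in the additive model the weight step is independent of the current weight and must be clipped at a hard bound, while in the multiplicative ("soft bound") model the step shrinks as the weight approaches the bound, which leads to different stationary weight distributions. A toy sketch with made-up parameter values (not taken from any specific paper):

```python
# Toy comparison of additive vs multiplicative STDP potentiation
# (illustrative parameter values, for shape of the dynamics only).
A_PLUS = 0.1      # potentiation amplitude
W_MAX = 1.0       # upper weight bound

def additive_ltp(w):
    # Additive: fixed step, then hard clipping at the bound.
    return min(w + A_PLUS, W_MAX)

def multiplicative_ltp(w):
    # Multiplicative ("soft bound"): step shrinks near W_MAX.
    return w + A_PLUS * (W_MAX - w)

w_add, w_mul = 0.5, 0.5
for _ in range(50):          # 50 potentiation events
    w_add = additive_ltp(w_add)
    w_mul = multiplicative_ltp(w_mul)

print(round(w_add, 3))  # 1.0 -- saturates exactly at the hard bound
print(round(w_mul, 3))  # 0.997 -- approaches the bound but never reaches it
```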
  • asked a question related to Computational Science
Question
3 answers
Given a set of minimal T-invariants of a Petri net, is it possible to reduce the state space of the net system?
Relevant answer
Answer
You might be interested in this paper (we used T-invariants as a memory optimization heuristics during state exploration):
R. Carvajal-Schiaffino, G. Delzanno and G. Chiola.
Combining Structural and Enumerative Techniques for the Validation of Bounded Petri Nets.
In T. Margaria and W. Yi, editors. Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2001.
Lecture Notes in Computer Science 2031, Springer. 2001.
  • asked a question related to Computational Science
Question
3 answers
Relating two vectors / variables / attributes is possible. Apart from time complexity, are there any methods to find relationships that may exist between algorithms?
Relevant answer
Answer
In general it is not possible to decide
whether two algorithms do the same,
see Rice's theorem.
Regards,
Joachim
  • asked a question related to Computational Science
Question
2 answers
When testing between the trapdoor and PEKS, what do A and B mean?
Relevant answer
Answer
S is a searchable encryption of a keyword, i.e. the output of PEKS. The encryption has two parts: the first part (g^r) is referred to as A, and the second part (H2(t)) is referred to as B in TEST.
  • asked a question related to Computational Science
Question
3 answers
I need to build a new molecular group containing more than 30 atoms. Can you suggest freely downloadable software to build a molecule from just its chemical formula as input?
Relevant answer
Answer
I am not sure if it is exactly what you need, but you can try MolGen (http://molgen.de/?src=documents/molgenonline).
  • asked a question related to Computational Science
Question
10 answers
I know the L1 cache is consulted first, before L2 and so forth, but why? Does anybody have a theoretical or practical reason?
Relevant answer
Answer
There is no theoretical reason. The most compelling practical reason for this is cost. Perhaps a minor practical reason is that it is hard to implement a fast search for a larger cache.
The thing is that fast memory is very expensive; otherwise we would use computers with a number of registers equivalent to several gigabytes of data. Because the processor logic would be quite complex for billions of registers (and thus expensive), we use an L1 cache. This cache is still quite fast, but a larger amount of this kind of fast memory would again cost a lot, so the trade-off is again a small size. In the early days of computing it took only about 2 CPU cycles to access RAM (which by today's standards was also really small). With every increase in CPU speed, another level of cache is needed to compensate for the speed difference between CPU registers and RAM. A large, slow memory costs about as much as a small, fast one: it is always a trade-off between size, speed, and money, and you cannot optimize for all three at the same time. The main reason to have a cache is to hide the latency to the RAM (or to the L3 cache, or to the L2 cache). Sure, this only works for programs that exhibit memory locality; otherwise, you would have roughly 200 CPU cycles without any work, just waiting for data from the RAM.
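The latency trade-off described above can be quantified with the usual average-memory-access-time recursion, AMAT = hit_time + miss_rate × miss_penalty, applied level by level. The latencies and miss rates below are illustrative assumptions, not measurements of any particular CPU:

```python
# Average memory access time (AMAT) for a 3-level cache hierarchy,
# computed recursively: AMAT = hit_time + miss_rate * miss_penalty.
# All latencies in CPU cycles; values are illustrative only.
l1_hit, l2_hit, l3_hit, ram = 4, 12, 40, 200
l1_miss, l2_miss, l3_miss = 0.10, 0.40, 0.50

amat_l3 = l3_hit + l3_miss * ram            # 40 + 0.5*200 = 140
amat_l2 = l2_hit + l2_miss * amat_l3        # 12 + 0.4*140 = 68
amat_l1 = l1_hit + l1_miss * amat_l2        # 4 + 0.1*68 = 10.8

print(amat_l1)  # ~10.8 cycles on average, vs ~200 with no caches at all
```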
  • asked a question related to Computational Science
Question
3 answers
Graph-based community detection methods are very effective at explaining the underlying structure of a graph, but I have not come across any method to find the optimal number of communities, similar to what exists for clustering methods.
Relevant answer
Answer
I am sorry, I should have made it a bit clearer.
Say I am trying to identify the communities in an unsupervised manner, and for that I am trying to maximize the modularity. Now, I get a different number of communities, with different nodes, even at a single resolution parameter. The question is which of these partitions is the best, i.e. is there any statistical criterion which can lead me to that number?
Moreover, the choice of the resolution parameter itself is a question mark.
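One common (imperfect, resolution-limited) criterion is simply to compare the modularity Q of candidate partitions with different numbers of communities, Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of intra-community edges and d_c the total degree of community c. A small self-contained sketch on a toy graph (not any particular detection algorithm):

```python
def modularity(edges, communities):
    """Newman modularity: Q = sum_c [ e_c/m - (d_c / 2m)^2 ]."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        inside = sum(1 for u, v in edges if u in comm and v in comm)
        deg_sum = sum(deg[n] for n in comm)
        q += inside / m - (deg_sum / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # ~0.357: the natural split
print(modularity(edges, [{0, 1, 2, 3, 4, 5}]))    # 0.0: everything in one community
```

The partition with the higher Q is preferred; the known caveat is modularity's resolution limit, which is exactly why the resolution parameter exists.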
  • asked a question related to Computational Science
Question
7 answers
I have huge data to process; to speed up the work I run multiple MATLAB instances on the same computer. Can this lead to erroneous results, as my programs contain variables with the same names?
Relevant answer
Answer
No, it will not lead to errors in the results, but first you must be sure to use a computer with enough memory to handle such "huge" datasets. Having several variables with the same name will only use more memory. I suggest trying another approach, such as parallelizing your calculation, storing your datasets in external tables and workspaces and loading them only when necessary, and optimizing the code.
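The reason identically named variables cannot clash is that each MATLAB instance is a separate operating-system process with its own address space. The same isolation can be demonstrated in a few lines of Python, used here only as a stand-in for independent MATLAB processes:

```python
from multiprocessing import Process, Queue

def worker(name, value, out):
    # 'result' is local to this process's own address space; a variable
    # with the same name in another process cannot interfere with it.
    result = value * 2
    out.put((name, result))

def run_both():
    out = Queue()
    ps = [Process(target=worker, args=("a", 10, out)),
          Process(target=worker, args=("b", 100, out))]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    return dict(out.get() for _ in ps)

if __name__ == "__main__":
    print(run_both())  # {'a': 20, 'b': 200} (order may vary)
```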
  • asked a question related to Computational Science
Question
1 answer
Is there any way faster than the direct method for computing the Smith Normal Form of polynomial matrices?
  • asked a question related to Computational Science
Question
1 answer
Since double bonds are rigid, we cannot rotate the C atom that is part of the double bond. I tried changing the dihedral angles of the methyl group atoms, but the results are unclear.
Relevant answer
Answer
I am not clear about the purpose of this rotation. But if you are interested in the potential energy surface for this torsional angle rotation you can do a relaxed PES scan.
  • asked a question related to Computational Science
Question
1 answer
I want to implement some algorithms for the basic load balancing problem (also known as multiprocessor scheduling problem), where the input is a set of n independent jobs J={j1,j2,j3...,jn} and a number m of identical machines. The goal is to find an assignment of jobs to machines that minimizes the execution time of the most loaded machine. I would like to use a standard set of instances in order to compute the experimental approximation factor delivered by some algorithms.
Relevant answer
Answer
Have you tried constructing a tight example, that is, an infinite family of instances that actually attains your approximation factor?
I am working on the makespan problem on unrelated parallel machines these days, so I can suggest some ideas, as your problem is closely related to it (though not the same). If experimental rather than formal results are wanted, one thing I can suggest is generating permutations of a schedule whose optimal load you already know (say m). A big issue in any scheduling problem is the order in which jobs are processed. For example, the classic list scheduling algorithm by Graham does some pretty nasty things if you force it to make bad choices and then try to squeeze a job onto a machine that is already balanced. That is how you get the tight example.
Another suggestion is to look at your algorithm and the properties it has, and prove it achieves a certain approximation factor based on those characteristics. Establishing the approximation factor typically means finding the flaws in the algorithm by comparing it against a provably good bound you know for the scheduling problem.
But yes, for experimental results, try permuting a set of jobs where about half the lengths are even and half are odd. Even and odd job lengths (with differences greater than 2) can do funky things to a schedule. I can't suggest any standard instances beyond what I said, as I usually approach this formally, since an approximation factor is something one proves.
Hope this helps!
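The tight example mentioned above can be reproduced in a few lines. Below is a sketch of Graham's list scheduling on identical machines (a min-heap of machine loads); with m = 2, the job order [1, 1, 2] forces makespan 3 while the optimum is 2, matching the 2 − 1/m bound:

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling: place each job, in the given order,
    on the currently least-loaded of m identical machines; return
    the makespan (load of the most loaded machine)."""
    loads = [0] * m          # a list of zeros is already a valid min-heap
    for job in jobs:
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + job)
    return max(loads)

# Tight example for m = 2 (approximation factor 2 - 1/m = 3/2):
# the bad order forces makespan 3, while the optimum is 2.
print(list_schedule([1, 1, 2], 2))   # 3 (bad order: the long job arrives last)
print(list_schedule([2, 1, 1], 2))   # 2 (LPT order happens to be optimal here)
```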
  • asked a question related to Computational Science
Question
18 answers
What is the best way to schedule/execute two separate codes on a dual-core processor? Assume the two codes have no dependency on each other and no shared resources. For example: one piece of code generates a Fibonacci series and the other square roots (both from 1 to a billion). Can these two codes run on two separate cores entirely independently of each other? If one of the codes (say the Fibonacci one) encounters an overflow error, the other must not be affected in any manner.
Relevant answer
Answer
It depends on what level of independence you expect from the two programs. Generally the first answer would be "yes", since the usual level of independence is good for most cases, i.e. if simply run on two cores, one program should not crash the other (provided the programs and the OS are free of bugs). The ways in which these programs may interact are only relevant if you need very strong guarantees, e.g. for real-time behavior or safety-related functions.
In case hard real-time behavior is required, for example, the question cannot be answered without knowing the exact hardware. You mention that the programs do not use shared resources, but this is close to impossible to realize on realistic hardware. Most of the time the cores of a multiprocessor share caches or the connection to main memory. So if one program accesses main memory, the other may have to wait for the first program's access to finish, which can change its execution time. Or the programs may share cache lines, which can also influence performance. It would even be possible for one program to drive up the heat so that the OS has to activate a power-saving mode for both cores, which again degrades performance for the other program.
The point is that the concept of two programs sharing no resources is an illusion provided by the OS and the hardware. Some resources will always be shared (even if only because the physical proximity of the cores makes them share a common heat sink). In most cases, however, this illusion is maintained so well that one can actually assume independence of the programs. The cases where one has to look beyond that illusion are rare and require very special development methods.
  • asked a question related to Computational Science
Question
18 answers
Students often complain that the analysis of algorithms is difficult and abstract in nature. Some of them ask about its significance to computer science, software engineering, computational tool development, and bioinformatics software. With this in view, what is the significance of algorithm analysis to the development of computational software and bioinformatics tools?
Relevant answer
Answer
As somebody who has actually taught a university course in Analysis of Algorithms, I felt it may be good for me to give a "stab" at answering this one.
To begin, we must understand that programs and algorithms are not the same thing. One is an implementation, the other is a mathematical description. One is associated with a machine, the other with a computational model. Why do we care about analysis of algorithms? This question is trivial to answer. We care about having efficient algorithms in terms of time and space. The computational models at the present time reflect quite well (and very rarely not) the behaviour of an algorithm when it is implemented. Remember, the interest is in algorithms typically. We prove theorems and analyze algorithms, not their implementations.
So why would we care about this in software tools? As I said, computational models reflect implemented performance quite accurately (and this correspondence is itself of research interest to some). For example, the RAM model is extremely good for most cases of analyzing an algorithm in the worst case. Typically we are interested in the behaviour of the algorithm in specific cases. Why? Because if we know the algorithm has a certain asymptotic behaviour, then we know that in the worst case it cannot do any worse than that. The other factor to consider is asymptotics: if we know the behaviour as the input gets large, we have a better idea of how the algorithm compares in complexity with others.
Not just in bioinformatics (which relies a lot on algorithms), but any area of computing needs to consider algorithms as formal descriptions and to ensure they are correct and terminate with a certain time complexity. These are not just "abstract ideas" that serve no purpose. They lay the groundwork of almost everything we do in theoretical computer science (the core of CS) when it comes to algorithms. Once we have time complexities, we can compare algorithms, categorize them, and ask questions about them. Many bioinformatics problems in particular can be NP-hard, so this becomes even more important when considering an application.
Hope this helps!
  • asked a question related to Computational Science
Question
8 answers
I'm working on simulation software that provides parallelization with either MPI or OpenMP. The goal is a hybrid implementation that uses MPI for communication between physical compute nodes in our cluster, but OpenMP within a node. Explicitly setting the correct number of computers and parameters works with our software.
The problem is the following: I can specifically request nodes on our cluster with a certain number of processors per node. This gives me multiple entries per node in the nodefile/machinefile. For certain reasons I sometimes want to start several MPI processes on the same node and use fewer OpenMP threads per MPI process. Hence, I cannot filter the nodefile to contain each node only once.
Currently, I set the number of OpenMP threads first and then start an MPI process for every entry in the nodefile. Only some of the MPI processes continue with the computation, and the others are put to sleep. I am not entirely happy with my solution, though.
I am using Intel's MPI library (version 3.2). Terminating the MPI processes not needed for computation kills all MPI processes, because communication then no longer works. Using an MPI barrier is not an option, since it is a busy wait and so the processor resources are not freed for other OpenMP threads. My current solution is for the MPI processes not taking part in the computation to sleep for one minute and then check for an MPI message telling them to terminate. Does anyone have a better idea? Is there a way to put processes to (real) sleep and wake them up based on an MPI message?
Relevant answer
Answer
@James
I didn't know about I_MPI_WAIT_MODE yet. From the manual I take it that it should be set to 'on'; 'off' is the default value and always does active polling. There is one restriction, though: it does not support InfiniBand devices.
I have been looking into a similar option to the one you describe, but using mostly portable Fortran for sleeping: we use the Intel compiler, and the software also has to run on Windows, not just Linux. Here is what I do:
do
   call SLEEPQQ(60000)                          ! sleep one minute (argument in milliseconds)
   call MPI_IPROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, &
                   MPI_COMM_WORLD, flag, tmp, ierror)
   if (flag) exit                               ! a message arrived: stop sleeping
end do
call MPI_Finalize(ierror)
Then the master process just needs to send any message and the sleeping thread will eventually finish. This works. But, I am curious if there is a more elegant solution to this problem that does not involve active polling.
@Cedrick
MPI can use shared memory only for communication, which makes communication faster. But because of the programming model, each process has to load the same data; there is no shared memory for data in MPI. This is the advantage of OpenMP, which decreases the memory footprint, and it is the only reason why we consider a hybrid approach.
We will always keep the pure MPI version and pure OpenMP version of our software as well.
BTW, I know about processor pinning, but this is not the problem I am concerned with; it is quite easy to figure out for Intel MPI and Intel OpenMP. The real question is how to make my application smart enough to configure itself for hybrid use. I have thought about this a lot, and I think it is definitely necessary to first start one MPI process per core. Then I need to figure out how many MPI processes should participate in the computation and which ones those are. These MPI processes will later split the work across OpenMP threads. The only question is what to do with the remaining MPI processes that are not necessary for the computation.
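The sleep-then-probe pattern above is language-agnostic. As a portable illustration (Python's multiprocessing standing in for MPI, with Connection.poll playing the role of MPI_IPROBE), an idle worker can block with a timeout instead of busy-waiting and wake up as soon as a termination message arrives; a real MPI solution would still need something like the Fortran loop shown earlier in the answer:

```python
import time
from multiprocessing import Process, Pipe

def idle_worker(conn):
    # Analogue of the SLEEPQQ + MPI_IPROBE loop: block for up to 0.1 s
    # waiting for a message, then loop again if nothing arrived.
    while True:
        if conn.poll(0.1):      # True if a message is waiting
            conn.recv()         # consume the "terminate" message
            break
        # no message yet: go back to waiting instead of busy-polling

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=idle_worker, args=(child,))
    p.start()
    time.sleep(0.3)             # worker waits meanwhile, using ~no CPU
    parent.send("terminate")
    p.join(timeout=5)
    print(p.exitcode)           # 0: worker woke up and exited cleanly
```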
  • asked a question related to Computational Science
Question
1 answer
I have a structured 2D triangular mesh, and I am going to put circles on every node of the mesh. I need some ideas on how to assign radii to the circles in a "homogeneous" way, avoiding circles with very small or very large radii.
Relevant answer
Answer
Did you try checking algorithms for microstructure creation in 2D (e.g. based on Voronoi tessellation/Delaunay triangulation)? In that case you get a distribution of radii (it is unavoidable; otherwise you would have to put at each point a particle with a radius equal to half the distance between the two closest points on the mesh).
Probably you can find some useful information checking at Surface Evolver as well (http://www.susqu.edu/brakke/evolver/evolver.html).
  • asked a question related to Computational Science
Question
11 answers
Given a bunch of particles in 2D, I know the radius and center of each particle. I need to implement an algorithm to find which particles are in contact, and with whom. I already know about kd-trees and quadtrees, but I would like to know about alternatives. I once heard about a method that consists of building a grid with cells sized by the maximum radius, somewhat like a hash table, but I can't find information about it, maybe because I don't know the name of the method.
I hope someone can help me.
Relevant answer
Answer
The name is Verlet list (cell list). A quick intro and some references to start are here: http://en.wikipedia.org/wiki/Cell_lists.
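For completeness, a cell list is short to implement: choose a cell size of 2·r_max, bin the particle centres into a grid, and test each particle only against members of its own and the eight neighbouring cells. A minimal sketch (plain Python, uniform grid keyed by integer cell coordinates):

```python
from collections import defaultdict
from itertools import product
import math

def contacts(particles, max_radius):
    """Cell-list contact search for circles given as (x, y, r).
    Cell size = 2*max_radius, so any touching pair always sits in the
    same or adjacent cells; expected O(n) for roughly uniform density."""
    cell = 2 * max_radius
    grid = defaultdict(list)
    for i, (x, y, r) in enumerate(particles):
        grid[(int(x // cell), int(y // cell))].append(i)

    found = set()
    for (cx, cy), members in grid.items():
        # gather candidates from this cell and its 8 neighbours
        candidates = []
        for dx, dy in product((-1, 0, 1), repeat=2):
            candidates.extend(grid.get((cx + dx, cy + dy), ()))
        for i in members:
            xi, yi, ri = particles[i]
            for j in candidates:
                if j <= i:          # count each pair only once
                    continue
                xj, yj, rj = particles[j]
                if math.hypot(xi - xj, yi - yj) <= ri + rj:
                    found.add((i, j))
    return found

# Three circles: 0 touches 1; 2 is far away from both.
parts = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (10.0, 10.0, 1.0)]
print(contacts(parts, max_radius=1.0))  # {(0, 1)}
```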
  • asked a question related to Computational Science
Question
8 answers
I need an example of how to use literate programming with the C++ language.
Relevant answer
Answer
You can write whole papers using Sweave, which integrates literate programming with the R language and statistics framework: http://users.stat.umn.edu/~geyer/Sweave/ There are several examples of the integration of R and LaTeX down the page (under "My Examples").