Science topic
Computational Science - Science topic
Explore the latest questions and answers in Computational Science, and find Computational Science experts.
Questions related to Computational Science
2024 5th International Conference on Computer Vision and Data Mining (ICCVDM 2024) will be held on July 19-21, 2024 in Changchun, China.
Conference Website: https://ais.cn/u/ai6bQr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Computer Science and Engineering
......
◕ Vision Science and Engineering
· Image/video analysis
· Feature extraction, grouping and division
· Scene analysis
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Robotics Science and Engineering
· Image/video analysis
· Feature extraction, grouping and division
· Scene analysis
......
All accepted papers will be published by SPIE - The International Society for Optical Engineering (ISSN: 0277-786X) and submitted to EI Compendex and Scopus for indexing.
Important Dates:
Full Paper Submission Date: June 19, 2024
Registration Deadline: June 30, 2024
Final Paper Submission Date: June 30, 2024
Conference Dates: July 19-21, 2024
For More Details please visit:
Amrita School of Engineering, Bengaluru campus, is currently accepting applications from highly motivated researchers who possess a strong background in mathematics, computational physics, applied physics, fluid dynamics, or a closely related field. Proficiency in programming languages such as C/C++, MATLAB, or Python is advantageous. Candidates should actively contribute to the team's research efforts.
For more details, you may contact:
Dr. K. V. Nagaraja - kv_nagaraja@blr.amrita.edu - +91- 98452 23844 ;
Dr. T. V. Smitha - tv_smitha@blr.amrita.edu - +91- 9611107480 ;
Dr. Naveen Kumar R - r_naveen@blr.amrita.edu - +91- 78296 70202
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and will also be submitted to EI Compendex and Scopus for indexing. Each conference proceedings paper must be at least 4 pages long.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system gives priority review and feedback.
Complex systems are becoming one of the most useful tools for describing observed natural phenomena across all scientific disciplines. You are welcome to share hot topics from your own area of research.
Nowadays, no one can encompass all scientific disciplines, so it would be useful for all of us to learn about hot topics from various scientific fields.
Discussions of the various methods and approaches used to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems are highly encouraged.
Finding optimal meta-heuristic parameters is one of the open problems in computational science today. The literature offers approaches based on Design of Experiments, as well as so-called hyper-heuristics, which are meta-heuristics specialized in optimizing other meta-heuristics. What methods do you know? Which do you think is best?
In experiments, the bond energy per atom rises quadratically with the number of bonds per atom; why, then, does it rise only linearly in simulations?
Dear Colleagues,
I have recently graduated with a BSc in Mechanical Engineering. During my BSc, I assisted with research and projects in a variety of fields, ranging from nanomechanics of advanced materials (experimental), predictive analysis of stochastic data input for control (MATLAB), human balance control (theoretical), and dynamical modeling of fluid/solid coupling problems with the corresponding CFD in OpenFOAM, to computational aerodynamics with HPC. Upon graduation, I joined a research team at ETH Zurich as a scientific assistant to work on vortex kinematics (theoretical and computational).
My main interest areas are:
- Nonlinear Dynamics and Chaos, Stochastic Systems, Machine Learning of Dynamical Systems and Fluid Dynamics, Prediction, Nonlinear Control
- Computational Finance, Financial Analytics
- Numerical Methods, Computing and Algorithm Development
Clearly, all of the fields mentioned above require a decent knowledge of mathematical modeling, analysis, and computation (mostly by parallel computing over HPCs). One can also argue that these areas are not really far from each other as they can be all classified into an umbrella field of Dynamical Systems Theory.
I will soon start my MSc in Computational Science and Engineering at ETH Zurich. However, I am struggling to decide which specialization area I should choose.
As a part of the program I have to enroll at least in two of the following CORE SUBJECTS:
- Advanced Numerical Methods for CSE
- Optimization for Data Science
- Computational Statistics
- Advanced Systems Lab (Fast Numerical Codes)
Of these, I am planning to take all four, as they are rich in content, relevant to my multidisciplinary taste, and beneficial for my future plans. They are also fairly complementary to one another.
I will also have to take two mandatory subjects as a part of the admission requirement:
- Numerical Methods for CSE
- High-Performance Computing Lab for CSE
*The program requires me to take 5 courses in my selected specialization area. The rest of the credits necessary to graduate can be chosen freely from any department.
ETH is a top-notch institute for education and research in all three of Control & Robotics, Fluid Dynamics, and Applied/Computational Mathematics. This at least ensures that whatever I choose I will still get a quality education and have a chance to do quality research.
As we all know, modern areas such as robotics, data science, software engineering, neuroscience, computational biology, etc., have rather well-defined career paths. People in those fields do not have as much trouble as someone in a multidisciplinary program (e.g. my MSc) deciding what subjects to take and what to focus on.
Now, I lost two years between high school and university, and I believe this has reduced some of my flexibility in this kind of decision, especially given that I am in a long-distance relationship that I also have to take care of. It is likely that I will prefer to stay at ETH for my Ph.D. or work here for some time before my Ph.D. I may also choose to do my Ph.D. at one of the other top schools.
I really appreciate your opinions and advice!
Thank you for your time and patience!
Kind Regards
Dear ResearchGate administrators,
In 2004 I was the leader of the CEPIMA research group (UPC), which I created that same year by merging the TQG group and LCMA, both of which I led. We had an intense research activity. Our work
Badell, M., Fernandez, E., Bautista, J., Puigjaner, L. “Empowering Financial Tradeoff in Joint Financial & Supply Chain Scheduling & Planning Modeling.” In International Conference of Computational Methods in Sciences and Engineering 2004 (Eds.George Maroulis and Theodore Simos), VPS, Attica, Greece, ISBN: 90-6764-418-8, pp.653-656 (2004).
where I was one of the authors (responsible author). I sent Dr. Mariana Badell to Attica, representing the work done by the four authors, to make the presentation (paid by CEPIMA). It was successful and was selected and invited for publication in the "Int. Journal of Production Economics":
Badell, M., Fernández, E., Bautista, J., Puigjaner, L. “Empowering Tradeoff in Joint Financial & Supply Chain Scheduling & Planning Modeling”, Lecture Series on Computer & Computational Sciences, 11, ISSN: 1573-4196, pp. 653-656 (2004).
With kind regards,
Luis Puigjaner
The definition of a D-number given by Y. Deng in his paper "D-numbers: Theory and Applications", Journal of Information & Computational Science 9:9 (2012), pp. 2421-2428, is perhaps not correct.
Let us look at the definition:
Let Omega be a finite nonempty set. A D-number is a mapping D from Omega to [0, 1] such that
the sum of D(B), over all subsets B of Omega, is less than or equal to 1.
By this definition, D has domain Omega, so D applies to elements of Omega. How, then, can it be applied to subsets of Omega?
Should we instead define D as a mapping from P(Omega), the power set of Omega, to [0, 1]?
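The corrected reading can be made concrete with a small sketch. This is an illustrative check, not code from Deng's paper: it treats a D-number as a dictionary from subsets of Omega to weights, and verifies the two conditions of the (repaired) definition.

```python
from itertools import chain, combinations

def powerset(omega):
    # all subsets of omega, as tuples
    s = list(omega)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_d_number(omega, d):
    # d: dict mapping frozenset subsets of Omega to weights.
    # Checks the corrected definition: D maps P(Omega) -> [0, 1]
    # with the weights summing to at most 1.
    subsets = {frozenset(b) for b in powerset(omega)}
    if not set(d) <= subsets:
        return False                      # keys must be subsets of Omega
    if any(not (0 <= v <= 1) for v in d.values()):
        return False                      # weights must lie in [0, 1]
    return sum(d.values()) <= 1           # total mass at most 1

omega = {'a', 'b', 'c'}
d = {frozenset({'a'}): 0.4, frozenset({'b', 'c'}): 0.5}   # valid: sum 0.9
```

Unlike a basic probability assignment in Dempster-Shafer theory, the weights need not sum to exactly 1, which is exactly what the "less than or equal to 1" clause allows.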
How should researchers assess the importance of green computing in the current era?
I'm trying to implement semantic similarity based on Normalized Google Distance, and I have many problems obtaining correct data. I tried simulating a browser and fetching data via the Google API (PHP, Java, R, Google Docs), and every time I got different results. Is there a proper way to get accurate and current data?
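Part of the inconsistency is likely on Google's side: the reported hit counts are rough estimates and can differ between queries and data centers, so no retrieval method will give perfectly stable numbers. The NGD formula itself (Cilibrasi and Vitanyi) is straightforward once you have counts; a minimal sketch, where `f_x`, `f_y`, `f_xy`, and `n` are whatever counts and index-size estimate you manage to obtain:

```python
import math

def ngd(f_x, f_y, f_xy, n):
    # Normalized Google Distance from raw hit counts:
    #   f_x, f_y : result counts for each term alone
    #   f_xy     : result count for both terms together
    #   n        : estimate of the total number of indexed pages
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))
```

Two terms that always co-occur give a distance near 0; rarer co-occurrence pushes the value up. If stability matters more than absolute accuracy, taking counts from a single consistent source (e.g. one API endpoint queried at one time) at least keeps the ratios comparable.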
I want to do a master's in computational science, and for that I need to learn MATLAB, so I need help learning MATLAB from the basic level. Also, which specific topics are covered in computational science courses?
Is any research on graphene being carried out through computational science?
What behavior of which animals/birds/insects shows swarm intelligence? And what are the practical aspects of using it to solve different problems?
Can we teach morality to machines using current AI & machine learning techniques? Examples?
Who will define morality?
I need to write a research proposal in the high-performance computing area for my PhD.
Is there any Python module or submodule that can calculate the Kretschmann scalar given a specified metric? Or do I have to write the code myself?
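I am not aware of a ready-made one-line function in the mainstream scientific stack (EinsteinPy's symbolic tensor utilities may be worth checking), but the computation is short to write from scratch with SymPy. A sketch, illustrated on a 2-sphere of radius a, where the known closed form is K = 4/a^4:

```python
import sympy as sp

# Metric of a 2-sphere of radius a: ds^2 = a^2 dtheta^2 + a^2 sin^2(theta) dphi^2
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = g.shape[0]

# Christoffel symbols Gamma^i_{jk} = (1/2) g^{il} (d_k g_{lj} + d_j g_{lk} - d_l g_{jk})
Gam = [[[sp.simplify(sum(ginv[i, l] * (sp.diff(g[l, j], x[k])
                                       + sp.diff(g[l, k], x[j])
                                       - sp.diff(g[j, k], x[l]))
                         for l in range(n)) / 2)
         for k in range(n)] for j in range(n)] for i in range(n)]

# Riemann tensor R^i_{jkl}
def R_up(i, j, k, l):
    return (sp.diff(Gam[i][j][l], x[k]) - sp.diff(Gam[i][j][k], x[l])
            + sum(Gam[i][k][m] * Gam[m][j][l]
                  - Gam[i][l][m] * Gam[m][j][k] for m in range(n)))

# Fully lowered R_{ijkl}
Rlow = [[[[sp.simplify(sum(g[i, m] * R_up(m, j, k, l) for m in range(n)))
           for l in range(n)] for k in range(n)]
         for j in range(n)] for i in range(n)]

# Kretschmann scalar K = R_{ijkl} R^{ijkl}: raise all four indices and contract
idx = range(n)
K = sp.simplify(sum(ginv[i, p] * ginv[j, q] * ginv[k, r] * ginv[l, s]
                    * Rlow[i][j][k][l] * Rlow[p][q][r][s]
                    for i in idx for j in idx for k in idx for l in idx
                    for p in idx for q in idx for r in idx for s in idx))
```

The same skeleton works in 4D (e.g. the Schwarzschild metric) at the cost of longer symbolic simplification times.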
I synthesized Leon3 with two different options:
- flatten-all & auto-ungroup:
It produces a single top-level Verilog module with all submodules merged inside. During synthesis there are some uninitialized flip-flops that cause 'x' propagation, resulting in incomplete annotation during power estimation. For that, I can initialize all the FFs in the Verilog file (finding them with 'sed' on Linux) before the ModelSim simulation starts, then produce the SAIF file, and power estimation completes with 100% annotation.
But
- no-auto-ungroup
It preserves the original design hierarchy in terms of Verilog modules and saves the synthesized netlist in a Verilog file. Now there are many modules; before the ModelSim simulation, how do I initialize all the FFs in the different modules?
Please guide.
Best Regards, Sajjad
I am unable to find datasets with all positive values on UCI. Values can be fractional but not negative.
Thanks
Hello everyone;
I have a problem with parallel computing in Fluent when I activate dynamic adaptive meshing, even though the code can load-balance automatically after each adaptation.
If you have any suggestion or solution for this problem, please share it with me.
thank you.
For modelling light emission from semiconductors, I have often seen people use a dipole source, which is justified for an exciton. But if we consider a volume that may contain several excitons (enough to emit in all directions from that small volume), can we still use just a point source (or point sources filling a circular region)? Another question: should it be a single pulse, or is a continuous wave justified (given that the total simulation time is just 40 fs)?
Regarding the problem of graph coloring, is there any good reference dealing with the VC dimension of the problem?
More generally: concerning NP-complete problems, is there any good reference dealing with the VC dimension of such problems?
In CUDA C, I have found how to make a table, but I do not understand how to run group-by queries on that table or how to define the functions for them.
We have data on domestic workers: their wages, their choice to do domestic work rather than a commercial job, and its benefits and drawbacks compared with other jobs of the same level. How can we use fuzzy logic to measure the economic and social value of domestic work?
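One common starting point is to map each survey attribute onto fuzzy membership functions and combine them with fuzzy operators. A minimal sketch, where the variable names and breakpoints are purely illustrative assumptions, not a calibrated model of domestic work:

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 at a, peaks (value 1) at b, back to 0 at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def economic_value(wage, security):
    # wage and security are assumed normalized to [0, 1].
    # "fair_wage" and "decent_security" are hypothetical linguistic terms.
    fair_wage = tri(wage, 0.2, 0.6, 1.0)
    decent_security = tri(security, 0.1, 0.5, 0.9)
    return min(fair_wage, decent_security)   # fuzzy AND (minimum t-norm)
```

A full Mamdani-style system would add more linguistic terms per variable, a rule base, and a defuzzification step; the membership shapes and breakpoints would need to be elicited from the data or from domain experts.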
Let us assume a connected, unweighted graph with a large number of nodes. Also assume that global information is not available (i.e. there is no adjacency matrix); each node knows only its neighbors. Now I want to find a path to a destination. How do I find the shortest among the various possible paths?
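This setting (local knowledge only) is what distance-vector routing protocols solve with a distributed Bellman-Ford: each node repeatedly updates its distance estimate from its neighbors' estimates until no estimate changes. A minimal centralized simulation of that decentralized process, with a hypothetical small graph:

```python
import math

def distributed_bellman_ford(neighbors, dest):
    # neighbors: dict node -> list of adjacent nodes (the only info each node has).
    # Each loop iteration models one round of neighbor-to-neighbor message exchange.
    dist = {v: (0 if v == dest else math.inf) for v in neighbors}
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            best = min((dist[u] + 1 for u in neighbors[v]), default=math.inf)
            if v != dest and best < dist[v]:
                dist[v] = best
                changed = True
    return dist   # hop counts to dest; follow any neighbor with dist one less

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
```

After convergence, the shortest path itself is recovered greedily: from any node, step to a neighbor whose distance is exactly one less, until the destination is reached.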
Currently, it seems that quantum computation is still in the very early stages of research, with no commercial solutions on the horizon (D-Wave is a questionable candidate). Nevertheless, some market researchers estimate that in a few years the quantum computing market will be worth over 25 billion USD: http://www.marketresearchmedia.com/?p=850
How justified are such claims, in your opinion? When will we see useful quantum computers? Will the field of quantum computing enter a sharp funding decline if a useful quantum computer does not appear soon?
I have some values which are close to zero (such as e^(-800)), and when I run my code, MATLAB rounds these values off to zero. How do I prevent this from happening?
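e^(-800) is below the smallest positive double (about 1e-308), so it underflows to exactly 0 in any double-precision environment, MATLAB included; this is not MATLAB "rounding" but the floating-point format itself. The usual fix is to carry the logarithms of the values and combine them with the log-sum-exp trick instead of ever exponentiating. Shown here in Python for illustration; the identical arithmetic works in MATLAB (and the Symbolic Math Toolbox's vpa is an alternative if arbitrary precision is genuinely needed):

```python
import math

# Represent e^-800 and e^-805 by their logarithms instead of their values.
log_vals = [-800.0, -805.0]

# log(e^-800 + e^-805) via log-sum-exp: factor out the largest term so the
# remaining exponentials are well within double range.
m = max(log_vals)
log_sum = m + math.log(sum(math.exp(v - m) for v in log_vals))
```

Sums, products, and ratios of such quantities all have log-domain equivalents (products become sums of logs, etc.), so entire likelihood computations can stay in log space.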
Is there a way to translate arbitrary Boolean functions into ILP? I'm particularly struggling with disjunctions of linear constraints. Is it possible with Simplex at all?
E.g. a OR b AND ((5x +6y <= 7z) OR (4x + 7z <= 9y))
Of course I can use SMT solvers for this, but I want to try ILP because of performance reasons.
Does anyone have a suggestion?
Given a set of minimal T-invariants of a Petri net, is it possible to reduce the state space of the net system?
Relating two vectors/variables/attributes is possible. Apart from time complexity, are there any methods to find relationships that may exist between algorithms?
When testing between the trapdoor and PEKS, what do A and B mean?
I need to build a new molecular group containing more than 30 atoms. Can you suggest free software that builds a molecule given only its chemical formula as input?
I know the L1 cache is accessed first, before L2 and so forth, but why? Does anybody have a theoretical and practical reason?
Graph-based community detection methods are very effective at explaining the underlying structure of a graph, but I have not come across any method that finds the optimal number of communities, similar to clustering methods.
I have huge data to process; to speed up the work, I run multiple MATLAB instances on the same computer. Can this lead to erroneous results, given that my programs contain variables with the same names?
Is there any faster way than the direct method of computing the Smith Normal Form of polynomial matrices?
Since double bonds are rigid, we cannot rotate the carbon atom associated with the double bond. I tried changing the dihedral angles of the methyl group atoms, but the results are unclear.
I want to implement some algorithms for the basic load balancing problem (also known as multiprocessor scheduling problem), where the input is a set of n independent jobs J={j1,j2,j3...,jn} and a number m of identical machines. The goal is to find an assignment of jobs to machines that minimizes the execution time of the most loaded machine. I would like to use a standard set of instances in order to compute the experimental approximation factor delivered by some algorithms.
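As a baseline to compare instance sets against, the classic LPT (Longest Processing Time) rule is easy to implement and carries a proven worst-case guarantee of 4/3 - 1/(3m) times the optimum (Graham). A short sketch using a min-heap over machine loads:

```python
import heapq

def lpt_makespan(jobs, m):
    # LPT heuristic for minimizing makespan on m identical machines:
    # sort jobs by decreasing processing time, always assign the next
    # job to the currently least-loaded machine.
    loads = [0] * m
    heapq.heapify(loads)
    for t in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + t)
    return max(loads)
```

For the experimental approximation factor, a common practice is to generate uniform random instances and compare against a lower bound such as max(sum(jobs)/m, max(jobs)), since the true optimum is NP-hard to compute for large instances.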
What is the best way to schedule/execute two separate codes on a dual-core processor? Assume the two codes have no dependency on each other and no shared resources. For example: one code generates a Fibonacci series and the other square roots (both from 1 to a billion). Can these two codes run on two separate cores entirely independently of each other? If one of them (say, the Fibonacci code) encounters an overflow error, the other must not be affected in any way.
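Yes: run them as separate OS processes. The operating system's scheduler places independent processes on separate cores, and because processes do not share an address space, a crash in one cannot corrupt the other. A sketch in Python (used here purely as a neutral illustration; the same isolation applies in any language):

```python
from multiprocessing import Process

def fibonacci_sum(n):
    # Sum of the first n Fibonacci numbers, computed iteratively.
    a, b, s = 0, 1, 0
    for _ in range(n):
        s += a
        a, b = b, a + b
    return s

def faulty_task():
    # Simulated failure: the exception is confined to this process.
    raise OverflowError("simulated overflow")

if __name__ == "__main__":
    good = Process(target=fibonacci_sum, args=(100,))
    bad = Process(target=faulty_task)
    good.start()
    bad.start()
    good.join()
    bad.join()
    # good finishes with exit code 0 regardless of bad's failure.
    print(good.exitcode, bad.exitcode)
```

Threads within one process would not give this guarantee as cleanly, since they share memory; full process isolation is what ensures the Fibonacci overflow cannot touch the square-root computation.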
Students often complain that the analysis of algorithms is difficult and abstract in nature. Some of them ask about the significance of algorithm analysis to computer science, software engineering, computational tool development, and bioinformatics software. With this in view, what is the significance of algorithm analysis to the development of computational software and bioinformatics tools?
I'm working on simulation software that supports parallelization with either MPI or OpenMP. The goal is a hybrid implementation that uses MPI for communication between physical compute nodes in our cluster and OpenMP within each node. Explicitly setting the correct number of computers and parameters works with our software.
The problem is the following: I can specifically request nodes on our cluster with a certain number of processors per node. This will give me multiple entries per node in the nodefile/machinefile. For certain reasons I sometimes want to start several MPI processes on the same node and use only fewer OpenMP threads per MPI process. Hence, I cannot filter the nodefile to contain each node only once.
Currently, I am setting the number of OpenMP threads first and then starting an MPI process for every entry in the nodefile. Only some of the MPI processes continue with the computation, and the others are put to sleep. I am not entirely happy with my solution, though.
I am using Intel's MPI library (version 3.2). Terminating MPI processes not necessary for computation kills all MPI processes because communication does not work. Using an MPI barrier is not an option since it is a busy wait and so the processor resources are not freed for other OpenMP threads. My current solution is for the MPI processes not taking part in the computation to sleep for one minute and then look for an MPI message if they should terminate. Does anyone have a better idea? Is there a way to put processes to (real) sleep and wake them up based on an MPI message?
I have a 2D structured triangular mesh, and I am going to put circles on every node of the mesh. I need ideas about how to assign radii to the circles in a "homogeneous" way, avoiding circles with tiny or overly large radii.
Given a bunch of particles in 2D, I know the radius and center of each particle. I need to implement an algorithm to find which particles are in contact, and with whom. I already know about kd-trees and quadtrees, but I would like to know about other alternatives. I once heard about a method that builds a grid using the maximum radius, sort of like a hash table, but I can't find information about it, maybe because I don't know the name of the method.
I hope someone can help me.
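The grid method alluded to is usually called "cell lists" (or spatial hashing), and it is the standard neighbor search in molecular dynamics and discrete element codes: bin particle centers into a uniform grid whose cell size equals the maximum particle diameter, then test each particle only against particles in its own and the 8 surrounding cells. A sketch:

```python
from collections import defaultdict
from itertools import product

def contacts(centers, radii):
    # Cell-list contact detection: two particles are in contact when the
    # distance between centers is at most the sum of their radii.
    cell = 2 * max(radii)                       # cell size = max diameter
    grid = defaultdict(list)
    for i, (x, y) in enumerate(centers):
        grid[(int(x // cell), int(y // cell))].append(i)

    pairs = set()
    for (cx, cy), members in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for i in members:
                for j in grid.get((cx + dx, cy + dy), ()):
                    if i < j:                   # count each pair once
                        x1, y1 = centers[i]
                        x2, y2 = centers[j]
                        if (x1 - x2)**2 + (y1 - y2)**2 <= (radii[i] + radii[j])**2:
                            pairs.add((i, j))
    return pairs
```

With roughly uniform particle sizes this runs in O(n) expected time per query pass. When radii vary by orders of magnitude the max-diameter cell becomes wasteful, and hierarchical grids or the tree structures already mentioned work better.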
I need an example of how to use literate programming with the C++ language.