Science topic

Supercomputing - Science topic

Explore the latest questions and answers in Supercomputing, and find Supercomputing experts.
Questions related to Supercomputing
  • asked a question related to Supercomputing
Question
3 answers
I am using a supercomputing cluster to test some quantum algorithms. At present, my program can implement the HHL algorithm with up to 30 qubits, but the calculation time grows too quickly with the number of qubits. What are some good cloud platforms or computing frameworks that I could use?
Relevant answer
Answer
This article may help move things a little bit forward.
  • asked a question related to Supercomputing
Question
2 answers
I am using QwikMD to prepare simulation files for MD. However, I cannot actually run the simulation on my computer as it would take a very long time. Are there any tutorials or videos demonstrating how to use QwikMD and then submit the actual simulation to a supercomputer/HPC? What are the steps to do this? Any help would be highly appreciated!
Relevant answer
Answer
This is a great question for the systems administrator of the HPC cluster you plan to use. They should be able to provide exactly what you need in terms of interfacing with the cluster to submit and monitor your job(s).
  • asked a question related to Supercomputing
Question
5 answers
Hello everyone,
Let me explain my situation. In the lab, we have enough computers for general use. Recently, we started DFT studies of our materials. My personal computer has an i5 and 20 GB of RAM, which is mostly enough for the calculations. However, I cannot use my computer while the calculations are running. Moreover, accessing the government supercomputer clusters is a bit of a pain and needs a lot of paperwork.
My goal is to create a simple cluster for the calculations and free up my personal computer. Speed is important, but not that much. So, is investing money in four Raspberry Pi 4 (8 GB) boards a good idea? I am also open to other solutions.
Relevant answer
Answer
Berkay-Sungur: I DID NOT say that a Beowulf cluster would be unsuitable. What I did say is that one built from small computers such as the Raspberry Pi or BeagleBone would not be suitable.
What you describe would require what I did suggest: a Beowulf cluster built from reasonable personal computers that nobody wants anymore, plus a switch and cables. The computers do not have to be homogeneous, though I do recommend sticking with all 32-bit or all 64-bit machines. Whether a computer is a desktop, floor-standing tower, or laptop does not matter, as long as it has at least one Ethernet port. It is possible to mix machines with different word sizes, but that requires additional work.
I've built several such clusters. The first one used machines pulled from a scrap pile; some were put together from parts from that same pile. That bunch was all 32-bit machines of varying CPU speeds; one node even ran at less than 1 GHz. Another was so old it eventually caught fire, but it was easily replaced. The software would allocate workload to a node according to its cores' processing speed, and node-level software allocated threads according to the number of cores. Thus, the advantages of both distributed and shared processing and memory were brought to bear.
Another cluster was built from scrap to mimic one used in a customer facility that could not be accessed remotely. We rented time on our cluster to a company that needed a remote testbed whose results could be installed manually at the customer's facility.
Beowulf clusters can be very handy as a laboratory's computational needs grow beyond what one computer can reasonably handle. All of the clusters I have built required NO capital outlay.
  • asked a question related to Supercomputing
Question
3 answers
Does anyone have experience with the review time, time to first response, or time to publication of "The Journal of Supercomputing" (Springer)? I need real experiences; I have already checked the average time to first response and time to publication listed on the journal's web page.
Relevant answer
Answer
Based on the https://journalsuggester.springer.com/ website, it takes up to 87 days on average to receive the first decision.
  • asked a question related to Supercomputing
Question
3 answers
I am a beginner and I found some tutorials on the GROMACS website. I was able to follow some of the steps by typing commands in an SSH window, but I could not run the energy minimization step because I don't know how to write the script file that specifies the number of processors to be used for the energy minimization. Your help is much appreciated.
Relevant answer
Answer
Here I hope you find this link helpful:
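In case a concrete starting point helps: below is a minimal batch-script sketch for a cluster that uses the Slurm scheduler. The partition name, module name, rank count, and file names are assumptions that must be adapted to your cluster; your HPC support pages will have the authoritative template.
#!/bin/bash
#SBATCH --job-name=em            # job name shown in the queue
#SBATCH --nodes=1                # run on one node
#SBATCH --ntasks-per-node=16     # number of MPI ranks (processors) to use
#SBATCH --time=02:00:00          # wall-clock limit
#SBATCH --partition=compute      # partition/queue name is site-specific
module load gromacs              # module name depends on your cluster
# em.tpr is assumed to have been created beforehand with something like:
#   gmx grompp -f minim.mdp -c solvated.gro -p topol.top -o em.tpr
srun gmx_mpi mdrun -v -deffnm em
You would save this as, say, em.sh, submit it with sbatch em.sh, and follow its progress with squeue. If your cluster uses PBS/Torque instead of Slurm, the directives change (#PBS -l nodes=...) but the idea is the same.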
  • asked a question related to Supercomputing
Question
3 answers
Hello. I ran my Abaqus simulation on the supercomputer, and once it was done, I copied the result (.odb file) to my local computer using WinSCP. But I faced a problem reading that file (see picture). How can I resolve this issue? How can I ensure that the file was copied using binary mode instead of ASCII mode? Thank you in advance.
Relevant answer
Answer
Rudraprasad Bhattacharyya rightly mentioned the answer. However, when we need visualization, we have to transfer the .odb files to our PC. When I tried to transfer larger .odb files from the HPC to my PC, I got the same error (I used the MobaXterm software).
The problem was solved by transferring the files from the command line. The command used is
scp -r username@ipaddress:/source_folder_address_in_HPC destination_folder_address
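If you want to confirm that the transfer really was bit-for-bit identical (i.e. no ASCII-mode conversion took place), a simple check, assuming standard tools on both machines and using job.odb as a placeholder name, is to compare checksums:
md5sum job.odb                      # on the HPC (Linux) side
CertUtil -hashfile job.odb MD5      # on the Windows side, in a command prompt
If the two hashes match, the file was copied verbatim. Inside WinSCP the transfer mode can also be forced to Binary (rather than Automatic or Text) in the transfer settings before copying.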
  • asked a question related to Supercomputing
Question
3 answers
Hello everyone,
I want to perform some Gaussian calculations and want to use Materials Studio for my research work. Can anyone guide me on how to get access to supercomputer resources in India for these jobs? I am ready to pay charges for the usage.
Thank You.
Relevant answer
Answer
Try the High-Performance Computing Laboratory at IUAC, Delhi. The facility is open to users from universities, colleges, and institutes across the country.
If you wish to use it, e-mail a request to sumit@iuac.res.in with a short (~1 page) description of the proposed work, the software you require, and the resources you need for a typical run (number of cores, amount of RAM, disk space, time).
  • asked a question related to Supercomputing
Question
1 answer
Hi
The system I am trying to simulate has 600,000 atoms, and I want to simulate it for 1 µs.
I am using a supercomputer to run the simulation with Tesla K80 GPUs. I am using 1 node and a total of 28 cores, but the performance is not good at all: only 10 ns/day.
               Core t (s)   Wall t (s)        (%)
       Time:   221835.265     7922.688     2800.0
                             2h12:02
                 (ns/day)    (hour/ns)
Performance:       10.905        2.201
I noticed this note in the log file
NOTE: GROMACS was configured without NVML support hence it can not exploit application clocks of the detected Tesla K80 GPU to improve performance. Recompile with the NVML library (compatible with the driver used) or set application clocks manually.
PME mesh takes 70% of the computation time according to the log file
What are the ways to optimize performance and speed up the simulation?
Thank you all
Relevant answer
Answer
Why did you choose 28 cores??
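A few generic things to try, offered as hedged suggestions rather than a recipe for your particular cluster: since PME takes 70% of the time, the rank/thread split and the number of dedicated PME ranks are the first knobs to turn, and the non-bonded kernels should be offloaded to the GPUs. On a single 28-core node a starting point could be
gmx mdrun -deffnm md -nb gpu -ntmpi 4 -ntomp 7 -v
where md is a placeholder for your run name, and the best -ntmpi/-ntomp/-npme combination has to be found by short trial runs (gmx tune_pme can automate part of that scan). The NVML note can be addressed either by recompiling GROMACS against NVML, as the log suggests, or by setting the K80 application clocks manually with nvidia-smi -ac (this needs administrative rights), although that usually buys only a modest gain. Also check whether a newer GROMACS release is available on the machine, since GPU performance has improved considerably across versions.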
  • asked a question related to Supercomputing
Question
4 answers
Greetings,
I am trying to run GROMACS 2020.3 on a supercomputer using this command to do the production run:
mpirun -np 24 gmx_mpi mdrun -v -s AB_md_0_1.tpr -deffnm AB_md_0_1 > AB_md_1.log
however, it produces this error for each core used:
Fatal error:
525 particles communicated to PME rank 3 are more than 2/3 times the cut-off
out of the domain decomposition cell of their charge group in dimension x.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
How can I solve this error? Should I equilibrate the system further?
Attached are the log file produced by the run and the Slurm file containing the errors.
Thanks
Relevant answer
Two suggestions:
1- If the previous suggestions don't work, check the version of the CUDA toolkit installed on the computer. I think Gromacs 2020.3 does not support CUDA 11. This problem has been fixed with Gromacs 2020.4. Please see the link ( http://manual.gromacs.org/current/release-notes/2020/2020.4.html )
2- Use -nt 24 (number of threads). The flag '-np' is not a valid option in gmx mdrun.
  • asked a question related to Supercomputing
Question
4 answers
We often use Quantum ESPRESSO for materials modeling, but when the number of atoms in a structure is a bit high, a very powerful and very fast PC is required to perform the calculation and return the requested results. This type of very high-performance PC is not accessible to many researchers.
Is there any free high-performance cloud-based computer or supercomputer available on the internet for these types of calculations and modeling?
Relevant answer
Answer
Dear Abdolazim,
Hi, I don't think there is anywhere that shares HPC resources for free online, and even if there were, you would need to wait in the queue for a long time. There are plenty of HPC centers with quite reasonable prices and with quantum chemistry software packages installed in your country, Iran, and you can access them easily from your laptop. So, I recommend hpclab.ir or the HPC center of Amirkabir University of Tehran. For the latter, as far as I know, you will need to install QE or other packages yourself, but you will have root access to your own virtual machine on their cluster and can install them through apt-get install.
I hope it helps.
  • asked a question related to Supercomputing
Question
1 answer
Does anyone know if there are providers (not universities or research institutes) of commercial supercomputers out there that one can subscribe to for running earth system models?
Thanks,
Ashehad
Relevant answer
Answer
Any cloud provider can be used in this manner. For my own purposes, I prefer RackSpace or DigitalOcean. All my prototypes start off on my own laboratory network. Once proven there, they get uploaded to larger systems.
  • asked a question related to Supercomputing
Question
2 answers
I am using NAMD to do an MD simulation run on a supercomputer, but I want to know how many cores to use to make it run faster.
Supercomputer nodes: 6158
Supercomputer CPUs per node: 32
I tried 32 nodes, and it did a 20 ns run of nearly 3000 amino acids in 19 hours.
Can it be faster than this?
thanks
Relevant answer
Answer
Wanted to ask some very basic and simple questions:
1. What is the percentage engagement of each CPU vs. the simulation's runtime?
2. Is the problem partitioned in such a way as to make use of 90% memory of each node (to include operating system overhead)?
3. As nodes are added, what is the change in communication data load?
4. Is there an estimate of computation time vs. process/node intercommunication?
5. Are any processes or nodes sitting idle at any point during the run?
Consideration of these issues could lead to adjustments that would enable faster end-to-end simulation time.
  • asked a question related to Supercomputing
Question
3 answers
SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 cores (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, which are useful for simulating the human brain.
Relevant answer
Answer
Of course, although the key is to choose relevant inputs of data that are uncorrelated. And the decision of what is relevant is, at the moment, human-based.
  • asked a question related to Supercomputing
Question
7 answers
What are the required values of cpus-per-task and ntasks in the .sh file for running NAMD on a supercomputer?
Relevant answer
Answer
If you have an MPI build of NAMD, then you need to start it via mpirun: write mpirun in front of the namd2 command. Then you need to select the number of CPUs for the simulation. If a node has 24 cores, then use multiples of 24 for --ntasks.
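As an illustration only (partition, module, node count, and file names are assumptions to adapt to your site), a Slurm script for an MPI build of NAMD could look like:
#!/bin/bash
#SBATCH --job-name=namd_run
#SBATCH --nodes=2                # number of nodes
#SBATCH --ntasks-per-node=24     # one MPI rank per core on a 24-core node
#SBATCH --cpus-per-task=1        # one thread per MPI rank
#SBATCH --time=24:00:00
module load namd                 # module name depends on the site
mpirun namd2 production.conf > production.log
With this layout, the total --ntasks (here 2 x 24 = 48) counts MPI ranks and --cpus-per-task stays at 1. For SMP/multicore builds of NAMD the usual pattern is instead a few ranks per node with several cpus-per-task and matching +ppn/+p arguments, so check which build your cluster provides.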
  • asked a question related to Supercomputing
Question
3 answers
A living robot named Xenobot has been developed using frog stem cells. These robots are designed by a supercomputer running algorithms, and they can self-repair after damage.
Could further progress turn them into a threat to human life, for example as "live bombs" comparable to nuclear or hydrogen bombs?
Relevant answer
Answer
I recently read about it. A real evolution in the robotics industry.
  • asked a question related to Supercomputing
Question
6 answers
Hi,
I am wondering which files should be prepared, how to prepare them, and which commands enable me to perform MD simulations using AMBER on a supercomputer.
  • asked a question related to Supercomputing
Question
3 answers
Dear All,
I got access to the C-DAC supercomputing facility in Pune, India, to run GROMACS. As I am a new user of this supercomputer, could anyone please help me access GROMACS and submit jobs? Please do the needful.
Relevant answer
Answer
Most HPC facilities provide template scripts, but not help with specific applications. Break the task into two or more parts and start by learning how to use the cluster; it is also relevant to know its architecture. Then, from users of your specific application, you can find out how to run the simulations. It may help to go to other departments and talk to advanced grad students/postdocs. Good luck.
  • asked a question related to Supercomputing
Question
3 answers
I am trying to run an analysis of a .trj file generated from a roundabout simulation in Vissim, and the SSAM software hangs before completing the task. The file is about 100 MB.
Does anybody know what it takes to perform such analysis?
PC configuration:
* Processor: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz, 3301 Mhz, 4 Core(s), 4 Logical Processor(s)
* Installed Physical Memory (RAM): 16.0 GB
Relevant answer
Answer
If the source is open, try to compile with the debug flags. Then run the version of your simulation that hangs and closes. This may or may not produce a core file that you can debug to find out where it is hanging and closing unexpectedly. One source of error is how you give the software its data, and how much error handling it can do.
If the same version of the program has been used by others and they found that such an analysis finishes after a few hours, then more investigation may be needed. Assuming the above suggestion worked, the first thing to find out is whether the program can use multiple processors or is co-processor enabled. Any local computer with more than one CPU will let you test that. An HPC service is useful only when your code/software can use multiple processors, can use GPUs, or better, is CPU-GPU enabled. Otherwise, there is no reason to need a supercomputer.
  • asked a question related to Supercomputing
Question
5 answers
I would like to know if there is a supercomputer or a server machine that is free to access and can run the STRUCTURE software (https://web.stanford.edu/group/pritchardlab/structure.html). I mean something like Compute Canada (https://www.computecanada.ca/) or similar, maybe in a Windows environment? Your comments and suggestions would be greatly appreciated.
  • asked a question related to Supercomputing
Question
2 answers
It is a trend to employ physics-based battery models to improve the performance and extend the lifetime of battery cells, modules, and packs, as they perform better at battery ageing prediction.
Due to the computational burden of such models, they are hard to apply to mobile applications such as EVs: a car cannot afford to carry a large, heavy supercomputer around, let alone the cost it would add to the price of an EV. Many researchers are trying all kinds of techniques to simplify such models.
However, stationary battery applications, like energy storage systems providing power system services, seem to be less sensitive to the computational burden and cost. So maybe stationary battery applications are more suitable for physics-based models?
Relevant answer
Answer
Mohammad Arshad Hi, thanks for the comments. For batteries placed in deserts, do you have any air conditioning to keep the temperature around the batteries low?
  • asked a question related to Supercomputing
Question
5 answers
UPDATE: SOLVED (See my final comment at the bottom of this page)
Apologies for the long post, but I feel that I need to give some context/details about the problem at hand before getting to the problem. Thank you for your time in advance!
I am minimizing the electronic structure and atomic positions (ISIF = 2) of a surface slab of CdSe with pseudo-hydrogen passivation in specific places (not important for the discussion below), and the desired surface is in the plane perpendicular to the z-axis. Whenever I reach the desired force cutoff for an ISIF = 2 calculation like this one, I always perform a couple of follow-up calculations where I read from the WAVECAR and keep the relaxation tags turned on (usually IBRION = 2, ISIF = 2, and NSW = 100). If the structure does not relax any further and if the force drift remains approximately the same, then I feel confident that my structure is at the desired position.
For this relaxation, I require the forces to be no greater than 1.8 meV/Angstrom, which I know is a strict convergence criterion for a large surface slab (this one is 140 atoms). However, with parameters such as PREC and ENAUG set in appropriate ways, I found this force convergence could be achieved with a force drift that adheres to VASP manual’s recommendation --- that is, the force drift is less than the desired force cutoff.
After relaxing the structure, I performed two further “relaxations” as mentioned in the second paragraph above. The output files corresponding to those jobs are ‘OUTCAR.1208664’ and ‘OUTCAR.1211575’. Both jobs give reasonable values in the drift, namely:
total drift: 0.00000045 0.00159302 0.00053071
and
total drift: 0.00000011 0.00080124 0.00101411
respectively. There are some differences between these, but nothing out of the ordinary for such a large slab in my experience. Plus, these drifts are acceptable with respect to my desired force cutoff. Both of these runs were performed on my university’s supercomputer on 6 nodes (24 cores-per-node), and I set NCORE = 24 to match this architecture.
Now, here is the issue: If I turn off the relaxation tags (IBRION, ISIF, and NSW) and change the calculation to run on a high-memory node (48 cores-per-node) with no NCORE set, I get the following drift (see ‘OUTCAR.1232305’):
total drift: -0.00000005 0.00045769 -0.01691489
The x- and y- components are still fine, but the z-component is much larger than I would like. It’s still “small” relative to most other calculations of this scale, but it bothers me that the drift is now about an order of magnitude larger than my desired EDIFFG value. I’m not sure if it’s due to the change to the high-memory node, or turning NCORE off, or both, or neither. But it seems very strange. I mention the high-memory/NCORE requirements because I perform the VASP2WANNIER conversion, otherwise I wouldn't mention those details.
Any help on this would be greatly appreciated. I have included the aforementioned OUTCAR* files, as well as the corresponding INCAR* files (appended with the same numbers as the OUTCAR* files), KPOINTS file, POSCAR file, and POTCAR file. Also, I have slightly modified the OUTCAR* files to give me more digits of precision in certain places, so there may be some areas that seem a little more cluttered with numbers than usual.
I know this issue is not that big of a deal in the grand scheme of things, but to see the drift jump from < 1.8 meV/Angstrom to 10 times this amount between different runs is troubling to me.
Relevant answer
Answer
UPDATE: I have an update for those who are reading this thread in the future.
I feel stupid for not realizing this sooner, but there is a workaround to the problem I presented in my original post. My issue was that I kept ALGO = Normal and re-minimized the electronic structure in the same step as the vasp2wannier conversion step. This is unnecessary, and potentially dangerous, since I had to switch over to a high-memory node to do this step, and also since I already had pre-converged my system to exactly the desired point (forces < 0.0018 eV/Ang). This is related to what Sinhué López said in the following comment to me: "On the other hand, something related to your case was observed when the same calculation was performed on different computers with different compilations."
Instead of what I did above, what you should do is set ALGO = None and NELM = 1 in your INCAR when you want to invoke the vasp2wannier conversion process. Make sure you converge everything beforehand and save that WAVECAR. But setting ALGO = None will set IALGO = 2. You could probably set ALGO = Eig instead, which sets IALGO = 3, but I think that's a bit unnecessary for most applications. Feel free to test on your own, though.
  • asked a question related to Supercomputing
Question
16 answers
I am thinking of starting active research in molecular simulation with the goal of producing high-impact papers. Do I necessarily need a supercomputer, or will a good Intel Xeon workstation with a high-end GPU suffice?
Relevant answer
Answer
The hardware and software needed for MD simulations depend on what you are trying to achieve. In general, fast hardware can produce long MD trajectories in less time than slow hardware. You can check specifications and standard benchmarks on various web sites to see how MD codes scale according to such parameters as number of CPU cores, number of GPU cores, and clock speeds of CPUs and/or GPUs. In my own case, I use YASARA-Structure for MD simulations, which combines CPU processing with GPU acceleration using OpenCL. With an Intel i7 CPU running at 4.4 GHz and an Nvidia GTX1080Ti GPU, I can get 350 ns/day in the DHFR benchmark in explicit solvent.
  • asked a question related to Supercomputing
Question
5 answers
Hey,
I have been trying to optimize the structure of what I guess I would call a lignin derivative. The system contains 63 atoms and has no charge. I have tried DFT and HF with a 6-31G(d) basis set, but each run I submit terminates without an error message. I am using the Ohio Supercomputer Center, so I am not attempting this on a PC. I also tried going back to an MM method for a preliminary optimization, and that has not helped. Any suggestions are greatly appreciated! I can provide more info if needed.
Relevant answer
Answer
Hi,
It is difficult to say what is going wrong with your calculations; however, if you kindly provide a sample Gaussian output file (.log/.out) and an input file (.gjf/.com), I think the ResearchGate community may be able to help you more.
Aside from that, a few initial possibilities come to mind:
1. Have you checked whether there is any issue with the Gaussian installation/configuration or with your job submission script (if any)? A quick way to check is to submit a very simple test calculation (e.g. optimization of an H2O molecule at the B3LYP/6-31G level) and see whether it finishes properly (a minimal example input is sketched after this list).
2. Please make sure the %MEM= and %NPROCSHARED= values are defined properly in the Gaussian input file and also in the job submission script (if needed). The values must also be large enough for the level of theory and the size of the molecule. Finally, please check whether enough memory and processors are available on the supercomputer; I doubt this will be an issue.
3. Make sure you have enough space left in your working directory and in the scratch/tmp directory (where large .rwf files are usually generated by default).
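Regarding the quick test mentioned in point 1, a minimal water input could look like the following (the %Mem/%NProcShared values are arbitrary examples, and the file must end with a blank line). If even this fails, the problem is almost certainly in the installation or the submission script rather than in your molecule.
%NProcShared=4
%Mem=4GB
%Chk=water.chk
# opt freq b3lyp/6-31g

water test optimization

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200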
  • asked a question related to Supercomputing
Question
5 answers
As per "Exploring Chemistry with Electronic Structure Methods" 3rd Edition book, R.E.D tool are used to calculate MM charges http://upjv.q4md-forcefieldtools.org/RED/
However the download link is not working.
1) Do you recommend other downloadable sotware compatible with windows?
2) Do you recommend other tutorials to follow to perform ONIOM job ?
BTW, I am using Gaussian 09 on supercomputer (Linux shell) and gaussiaview installed on my windows PC.
Relevant answer
Answer
Hello, do you have the installation tutorial for TAO on a Windows system? I am running into some problems.
  • asked a question related to Supercomputing
Question
3 answers
The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list’s inception 25 years ago.
Given the number of supercomputers implemented worldwide, would these powerful machines be able to connect and work together as one ultra-powerful supercomputer?
Relevant answer
Answer
The computers must be able to communicate in a meaningful way. There are benchmarks based on LINPACK, especially HPL. They rank supercomputers based on their ability to solve special linear algebra problems.
I guess that if the supercomputers of the world were connected over the internet in order to run LINPACK-type benchmarks across distributed processors, they would spend most of their time synchronizing their states and exchanging intermediate results.
Many supercomputing problems are different from those in the TOP500 ranking, for example decryption or finding a password. Each computer could be given a distinct part of the key space. No communication would be required, except when one computer succeeds and then sends its result to headquarters.
Regards,
Joachim
  • asked a question related to Supercomputing
Question
5 answers
Climate models are based on mathematical equations that represent the best understanding of the basic laws of physics, chemistry, and biology that govern the behaviour of the atmosphere, ocean, land surface, ice, and other parts of the climate system, as well as the interactions among them. The most comprehensive climate models, Earth-System Models, are designed to simulate Earth's climate system with as much detail as is permitted by our understanding and by available supercomputers.
Relevant answer
Answer
Dear Nabeel Hameed Al-Saati
Climate models use mathematical formulas run by computers to simulate the Earth's climate. Such tools allow scientists to manipulate and thus better understand the physical, chemical, and biological processes that influence climate.
The climate system is hugely complex, and to understand the climate and make projections about how it will respond to changes such as rising greenhouse gas levels, we need to synthesize a large amount of data, taking multiple factors into account, for simulations of the Earth's climate system at a global level, a regional level, or both. However, no mathematical model can reflect all of its intricate processes in perfect detail. Hence there is always some difference between a model and reality, and it is normal when presenting model results to estimate how big this difference is.
Nonetheless, scientists are confident that models can project big-picture changes such as global temperature rise. The IPCC gives three reasons for its confidence in large-scale climate modeling: the fundamentals of the models are based on well-established physical laws; the models have been successful at predicting or reproducing observed patterns and variability in our current and recent climate; and the models have been successful at reproducing past changes in our climate, including global temperature changes.
Comparing models developed independently by different centres around the world provides additional confidence where those models agree on the response (typically on global and continental scales). To minimise the impact of inaccuracy in any one model, scientists can simulate the same scenarios in multiple models and compare the outcomes.
  • asked a question related to Supercomputing
Question
2 answers
TCNG (The Cancer Network Galaxy) is a database of cancer gene networks estimated from the publicly available cancer gene expression data. The gene networks are estimated using the Japanese national flagship supercomputer "K computer."
TCNG builds networks with 8,000 nodes, and establishes a ranking within them. For example, the "Effects of tobacco smoke on gene expression and cellular pathways in a cellular model of oral leukoplakia " network (http://tcng.hgc.jp/index.html?t=network&id=2) is led by RPL10P15 and RPL36 hubs.
Best regards,
César.
Relevant answer
Answer
Hi Cesar,
This can mean multiple things actually.
The one solid conclusion is that when the data set is read, the gene appears many times and is well connected to interactions with other genes.
From there it is much less obvious, however. There are many reasons the gene can be found so often in the network: It might be inherently important in the pathogenesis of cancer -- but not uniquely so and could be important for other things. Add to this the possibility that the gene ended up being essentially a fad in research (although an important one).
An example is the tumor-suppressor gene p53 (aka TP53), which, although critical in cancer pathogenesis, is likewise researched very heavily. That does not mean it's a good target for a drug; it just means it's important and central in cell function.
You may wish to consult Hainaut & Weiman (2005) for more on the p53 example: https://books.google.com/books?id=tH4te5w-3BUC&dq
Best wishes and best of luck in your research.
  • asked a question related to Supercomputing
Question
4 answers
Hi Folks,
Are there any free supercomputers to run FE simulations on? If not, any suggestions?
Thanks in advance
Relevant answer
Answer
Bismillah.
Dear Dr. Kareem,
I suggest you use the San Diego Supercomputer Center (SDSC). Please visit:
Regards
Toto
  • asked a question related to Supercomputing
Question
17 answers
Technological developments are advancing at great speed in different directions. Do you think a totally man-less university may develop in the near future? Could a well-designed supercomputer perform all the administrative work as well as deliver educational input to the students and administer exams, all through online connections?
Relevant answer
Answer
Yes, I think so, because of the competition.
Regards
  • asked a question related to Supercomputing
Question
8 answers
I'm trying to run a LAMMPS script on my PC which gives the following error:
ERROR: Expected integer parameter in input script or data file (../dump_custom.cpp:1556)
But when I tried to run the same code on the Stampede supercomputer, it worked! I don't know why this is happening. Any ideas?
Thanks in advance.
Relevant answer
Answer
Thanks.
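For future readers, a guess at the likely cause: the error from dump_custom.cpp usually means that a value which must be a literal integer, most often the dump interval N in the dump command, received something else (e.g. an equal-style variable reference) that the older LAMMPS build on the PC cannot parse, while the newer build on Stampede can. A plain form that every version accepts is
dump 1 all custom 1000 dump.lammpstrj id type x y z
i.e. the 1000 must be an actual integer; substituting something like v_dumpfreq there only works in sufficiently recent LAMMPS releases. Comparing the LAMMPS versions on the two machines (running the executable with -h prints it) is the first thing to check.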
  • asked a question related to Supercomputing
Question
1 answer
Support to provide high-performance computing resources: GENCI aims to promote the use of supercomputing for the benefit of the French scientific communities.
Access to the services is free of charge.
Relevant answer
Answer
GENCI:
ALLOCATION PROCESS
An initial project call is issued during the last quarter of year n-1 for an allocation of time between 01 January and 31 December of year n. This includes applications for computer time for new projects or to renew current projects. This is followed by a second project call held during the second quarter of year n for an allocation between 01 July and 31 December of the same year. This allows applications to be submitted for computing time for a new project or a project approved during the first session. Additional hours, or supplementary resources, can be allocated in exceptional instances to ongoing projects.
EVALUATION OF APPLICATIONS
Each application for computing time must include details of the expected results. These are evaluated on the basis of the scientific quality of the research project, and there is an obligation to publish the results of the research. Applications are submitted through the www.edari.fr website.
PREPARATORY ACCESS CALL
As of the beginning of 2013, French researchers have been able to request, using a quick and simple procedure, access to a small number of hours for migrating or testing their codes on new architectures. The requests are assessed on the basis of technical criteria by the computing centres, which can seek scientific advice as may be required.
  • asked a question related to Supercomputing
Question
1 answer
Supercomputer architectures with thousands of processors working in parallel process large amounts of data at the same time. Given the complexity of supercomputers and the mesh nature of their interconnection networks, in conjunction with multi-threaded parallel processing, deadlocks may be quite costly.
Relevant answer
Answer
It's not very clear to me what you mean by "parallel mode" and which "deadlocks" you are talking about.
It's not really the hardware that is parallel in the first place, but rather the software. Also, it is evident that parallel scaling is generally limited by the algorithm or its implementation and by obvious hardware bottlenecks such as memory bandwidth and network latency.
If a considerable amount of time were lost to any kind of deadlock, the linear scaling that is widely observed would not work out the way it does.
Could you please state your question in more detail?
  • asked a question related to Supercomputing
Question
1 answer
I recently started working on supercomputer-based WRF modeling. During compilation, I failed many times.
The configure program of WRF didn't provide any useful configuration settings for sxf90/sxcc. I tried to use the options listed in the WRFV3/arch/configure_new.defaults file, but that didn't work well.
The compilers available on the supercomputers used by my research group are limited to sxf90/sxcc/sxmpif90 on the NEC SX and hfc/mpihfc on the HITACHI SR****** series. Has anyone successfully compiled the WRF model with sxf90/sxcc or hfc/mpihfc before?
Relevant answer
Answer
Actually, I succeeded in compiling WRF using gfortran/gcc, but failed when using sxf90/sxc++. So I think it's not due to the path of netCDF.
  • asked a question related to Supercomputing
Question
14 answers
I don't know how to prepare the input and script files for GROMACS. I would appreciate it if you could share an example for a small job. I learnt that I have to convert the .pdb file to .gro, but I don't know how.
Relevant answer
Answer
It is the same as submitting the jobs on a local machine, assuming that GROMACS is installed on the server. You need exactly the same files, and the command sequence is exactly the same. After logging into the server, you will probably need to ssh to a compute node and then cd to the directory where the pdb file is located (or create a directory with mkdir and scp the pdb file there). As for preparing the protein and the system, I would suggest following the excellent tutorial by Dr. Lemkul (http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/index.html).
All the .tpr, .top, .cpt, etc. files will be created by GROMACS commands. However, you will need three .mdp files, which the tutorial suggested above also provides.
Finally, in order to utilize the full potential of the supercomputer, you need to assign the number of CPUs for the mdrun job. So the last command, as suggested by Tanuj Sharma, needs an additional flag -np X, where X is the number of processors, which can be the maximum number of processors available on that compute node.
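For orientation, a minimal command sequence along the lines of that tutorial looks roughly as follows; the file names are placeholders and the .mdp settings must come from the tutorial or your own protocol:
gmx pdb2gmx -f protein.pdb -o protein.gro -water spce     # convert .pdb to .gro and generate topol.top
gmx editconf -f protein.gro -o boxed.gro -c -d 1.0 -bt cubic
gmx solvate -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top
gmx grompp -f minim.mdp -c solvated.gro -p topol.top -o em.tpr
gmx mdrun -v -deffnm em
On a supercomputer, the mdrun step is the one that goes into the batch script, typically as mpirun (or srun) gmx_mpi mdrun for an MPI build, with the number of ranks set by the scheduler options.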
  • asked a question related to Supercomputing
Question
6 answers
Hi all
I am trying to run MD simulations on a supercomputer with GROMACS 5.1.4, but I got this error: "Cannot rename checkpoint file; maybe you are out of disk space?"
Received the TERM signal, stopping at the next NS step.
Please advise.
Relevant answer
Answer
The error seems to state that you may be out of disk space. You will have to check where the checkpoint file is being created. If it is in your user area, then clearing out some large files in your work area will help. Commands like du and df will help you locate which directories are a good place to start looking, and ls -S will sort files in order of their size.
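For instance, the following quick checks (the paths are only examples) usually locate the culprit:
df -h .                  # free space on the filesystem holding the current directory
du -sh ~/* | sort -h     # size of each item in the home directory, sorted
ls -lhS                  # list files in the current directory, largest first
Many clusters also enforce per-user quotas, so a command such as quota -s (or a site-specific tool) may show that the quota, rather than the physical disk, is full; in that case old trajectories and GROMACS backup files (the #...# copies) are the usual candidates for deletion.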
Best wishes
  • asked a question related to Supercomputing
Question
43 answers
Consider an electron entering a grain of salt (NaCl). In a quantum mechanical description of the process, the wavefunction of the electron will see more and more ions in its range of influence, and so the relevant wave function will depend on more and more dynamical variables.
If one analyzes what this would mean in a computer simulation, only one conclusion is possible: the required storage space would grow exponentially with time, and there would be no way to cope with a single grain of salt even if we had as many supercomputers as there are Planck-length-sized cells in our Earth.
So, the verbatim application of quantum mechanics to the motion of a charged particle after entering an ionic crystal is only possible if it is complemented by a reset-and-continue strategy. The inevitably occurring breakdown would mark the emergence of a recordable experimental fact. In our case this would be a polaron (i.e. a localized electron surrounded by a distorted lattice) with a definite position. The reset would consist in extracting from the last valid multi-particle wave function the 'most plausible few-particle content' and giving all other particles the (non-entangled) wavefunctions which they would have without the visit of the electron. Replacing the pre-breakdown wave function by this extremely simplified wave function would free a huge amount of storage and would allow executing strict quantum mechanical time evolution until a potential breakdown asks again for a reset.
I hope to learn from the RG-community
1. whether this or similar concepts were discussed elsewhere
2. how this obviously incomplete picture can be brought closer to completion.
Relevant answer
Answer
Dear Ulrich!
I think that there is a possibility to explain the causes of the wave function collapse, which is not associated with any additional assumptions or creation of a new theory.
Collapse of the wave function is only the limiting case in which the measurement is performed by a classical apparatus. The solution of the problem of collapse is connected with the nonlinearity of quantum mechanics; this is a consequence of the creation (annihilation) of particles.
See, for example, Melkikh A.V. Nonlinearity of quantum mechanics and the solution of the problem of wave function collapse. Communications in Theoretical Physics. 2015. V.64, No. 1, 47-53.
You can download my paper here:
Alexey
  • asked a question related to Supercomputing
Question
12 answers
I am working on scheduling for parallel systems (multi-core processors) for my thesis. I was wondering if anyone else has worked in this area or if there is any thesis on the subject.
I would be very thankful for any help.
Relevant answer
Answer
Since you're naming GA, this sounds like classic static (offline) task-graph scheduling combined with task assignment. There is a large amount of work in this area, from GA (if one has little insight into the problem) to ILP, constraint programming, and all sorts of heuristics. You might want to look at list scheduling as a start.
  • asked a question related to Supercomputing
Question
1 answer
I am wondering: do I really need a supercomputer if I want to perform radiation damage calculations? What type of processor/system is needed for cascade simulations? In other words, to what extent can I perform LAMMPS radiation damage simulations on an ordinary system?
Relevant answer
Answer
The key issue in radiation damage / cascade simulations is the time you give your system to evolve. These kinds of simulations need a long time to evolve; therefore, supercomputers are desirable. You can still do them on a modest computer with a stable power supply, so that you can run for several days or even a few weeks.
  • asked a question related to Supercomputing
Question
6 answers
I want to assemble a personal supercomputer for CUDA calculations with the new generation of hardware.
Relevant answer
Answer
As you are talking about a couple of million atoms, I'd suggest looking at cards with rather more memory; in my experience, memory is scarcer than one might think. The Tesla series is very expensive but certainly a good option. I'd also look at the Titan series (Maxwell-based) if you can live without ECC-protected memory.
I also understand that you are affiliated with a university and that money is short in academia. I strongly suggest contacting Nvidia to ask for a donation of one (maybe two) GPU cards, which might get you back to Tesla cards.
  • asked a question related to Supercomputing
Question
6 answers
Is there any supercomputing facility freely available for research, especially for chemistry?
Relevant answer
Answer
You may look at PRACE and Grid'5000; in general, countries put their public facilities at researchers' disposal through proposal evaluations.
  • asked a question related to Supercomputing
Question
2 answers
The question focuses on the possibility of implementing DSP processing at the front end of radar receivers.
Relevant answer
Answer
Dear Mr. Danny, thanks. I'm interested in the performance figures and, kindly, in the references about that.
  • asked a question related to Supercomputing
Question
3 answers
Just gathering info on MATLAB on supercomputing cloud or supercomputing clouds available for postgraduate research.
Relevant answer
Answer
The real question is how much you're going to use it. Providers like Rescale are a wise choice if your usage is sparse and there's no obvious way for you to group with other users. Obviously, $4.80/hour doesn't sound like that much, until you realize that buying a 32-core server and using it for a year would cost about $1.25/hour. I'm ignoring the cost of floorspace (but not purchase and power/cooling), and from the looks of it, a lot of Rescale's price is actually licenses.
It's extremely cost-effective for a pool of users to cooperatively fund and license a set of machines.  But of course hard to do by your lonesome, and takes some effort, not to mention a certain amount of lead time...
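To make that back-of-the-envelope figure concrete (the purchase and operating numbers below are illustrative assumptions, not quotes): a 32-core server bought for roughly $8,000, plus about $3,000 of power and cooling over a year, comes to about $11,000; spread over the 8,760 hours in a year, that is $11,000 / 8,760 h, or about $1.26 per hour, provided the machine is kept busy most of the time. The comparison only holds at high utilization, which is exactly why pooling with other users matters.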
  • asked a question related to Supercomputing
Question
2 answers
Can anybody give me a PBS script for GROMACS 5.0.1? Thanks.
Relevant answer
Answer
thanks
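In case it still helps, here is a bare-bones PBS sketch; the queue name, module name, resource numbers, and file names are assumptions to adapt to your cluster:
#!/bin/bash
#PBS -N gromacs_md
#PBS -l nodes=1:ppn=16
#PBS -l walltime=24:00:00
#PBS -q batch
cd $PBS_O_WORKDIR
module load gromacs/5.0.1        # module name depends on the site
mpirun -np 16 gmx_mpi mdrun -v -deffnm md
Submit it with qsub script.sh and monitor it with qstat. Depending on how GROMACS 5.0.1 was built on your machine, the binary may be called gmx_mpi, gmx, or mdrun_mpi; your cluster documentation or administrators can confirm which.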
  • asked a question related to Supercomputing
Question
11 answers
I've developed a molecular dynamics simulation approach for laser-material interaction using the direct simulation Monte Carlo algorithm, and my code runs very slowly on my research group's computer. I am wondering: are there any free supercomputers one can connect to through the internet? Any other suggestions for removing this problem?
Relevant answer
Answer
Hi,
If you have a current collaboration with French researchers, you may ask for an account on the Grid'5000 network, which is hosted in several French cities. Even though this network is localized in France, support is provided in English. Registration is free but required for accessing the resources.
Here is the homepage of the project :
I recommend reading the user charter (see the section 'Get an account') before asking for an account.
Hope it helps...
Alexandre
  • asked a question related to Supercomputing
Question
10 answers
I am interested in any papers written on the loss of performance in HPC (High Performance Computing) applications due to thermal throttling. My current R&D project goal is to replace passive heat sinks with active liquid cooling that will match the keep out, life and cost while providing about 6 times the thermal performance. I need to translate the thermal performance improvement into an estimate for increased computer performance.
Relevant answer
Answer
Well, the problem is that HPC machines are normally designed not to throttle; if they did throttle, the only interpretation is that the heatsink, the airflow, or the incoming air temperature is inadequate. Further, current HPC chips (say, the E5-2690 v3) have a kind of built-in throttling, in that you can only achieve the maximum clock by using a subset of cores. To really take advantage of improved cooling, you'd have to somehow get Intel to let you exceed this programmed TDP-based capping. And as an HPC person, I'm not sure I'd go for a product like that, since although you might support higher clocks than normal, any chip's power-performance efficiency drops off as you really push it. (A pitch based on reducing cooling costs would be more successful, I think, and/or improved cluster density.)
I also note that the range of Intel's "turbo" modulation is about 10%. That's not a knock-your-socks-off kind of competitive advantage.
  • asked a question related to Supercomputing
Question
2 answers
Dear all,
I am trying to set up Amber 14 on a newly arrived Cray machine.
I don't know how to build the parallel version using Cray's default MPI (the cc/ftn compiler wrappers), i.e. how to run ./configure -mpi intel when there is no standalone MPI installation.
So I installed MPICH (with Intel compilers) in my local account and built Amber (pmemd.MPI).
But when I ran it in parallel, although it showed it was running on 64 processors, the performance was very poor (0.1 ns/day for a system of about 0.1 million atoms).
I built with the Intel compilers, as we don't have the PGI compilers.
I suspect that mpirun (built with MPICH 3.1.3) is not able to run in parallel properly.
Can anybody tell me how to build the parallel version on a Cray with its default MPI?
I am very new to Cray and may have put the problem in an absurd way, but any hint or clue will be highly appreciated.
Relevant answer
Answer
Did you run sander or pmemd with 
mpirun -np $number_of_cpus $AMBERHOME/bin/sander.MPI ?
HPC problems are always tricky for me; I tend to hand them over to the HPC administrators, who are the real experts on these.
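For reference, a complete parallel invocation usually needs the input, topology, and coordinate files spelled out (the names below are placeholders):
mpirun -np 64 $AMBERHOME/bin/pmemd.MPI -O -i md.in -o md.out -p system.prmtop -c system.inpcrd -r md.rst -x md.nc
On Cray machines the native launcher is typically aprun (or srun on newer systems) rather than mpirun, and building Amber against the system cray-mpich module through the cc/ftn wrappers is normally what gives good parallel performance; a hand-built MPICH that does not use the Cray interconnect (Gemini/Aries) drivers could plausibly explain 0.1 ns/day.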
  • asked a question related to Supercomputing
Question
18 answers
When use "opt freq" to calculate water, I only get three frc consts, but the literature had four data. Thanks very much.
Relevant answer
Answer
Given that the question was based on a project/homework, this thread is probably a little out of date, but I thought I would add that you can extract the Force Constant Hessian matrix from a formatted checkpoint file (fchk). First convert the .chk generated from the opt freq calculation using:
formchk file.chk outfile.fchk
Then put together a very small text file (I called mine FC.inp) containing the text:
outfile.fchk
Cartesian Force Constants
quit
Use this file on the command prompt as input to the demofc script that comes with Gaussian 09:
demofc < FC.inp > hessian.out &
Inside the hessian.out file will be the Cartesian force constants.
I am still quite new to Gaussian, so I haven't yet figured out how to generate specific bond force constants for Amber force fields. Slowly getting there.
Hope this helps someone.
  • asked a question related to Supercomputing
Question
3 answers
Hi all,
Does anyone have an idea of how to rent a slot on a supercomputer for a simulation (I need to run a heavy MATLAB simulation)?
Any recommended sites?
Thanks in advance
Relevant answer
Answer
How about
and ask about pricing on the Amazon Cloud.
In your title you have "high-end PC", but in the text you have "supercomputer".
Which do you need?
  • asked a question related to Supercomputing
Question
1 answer
I read the paper by Warren Smith on predicting application runtime from historical information, where a genetic algorithm is used to find the template set. Then a template is selected from the set and jobs with the same template are found. Finally, the mean or a linear regression is used to obtain the runtime of the target job.
Based on this, I took the Feitelson parallel workload traces for the CTC SP2 machine as an example and collected only the jobs with user_id 97.
There were 97 jobs submitted by this user_id, with varying numbers of requested processors: 306, 4, 300, 16, and 20.
A maximum of 49 of the 97 jobs requested 306 processors. The remaining jobs break down as follows:
32 jobs --> 4 processors
11 jobs --> 300 processors
3 jobs --> 16 processors
2 jobs --> 20 processors
When I tried to find a relation that would let me predict the runtime of the 11th occurrence of user_id 97 based on the first 10 occurrences, I couldn't find any. Does anyone have an idea of a simple strategy? It would be really helpful for me. I am also attaching the log with the traces of user_id 97, so please take a look at it and reply.
Relevant answer
Answer
Is it just a coincidence that the user id is 97 and the number of jobs is also 97? Also, I would have assumed that you would be interested in predicting the characteristics of the 98th job rather than the 11th.
People may not be familiar with the format of the log file that you have posted!
  • asked a question related to Supercomputing
Question
23 answers
I have a large but limited amount of RAM in my computer. Now I have to exceed this amount (for example, I need to define a very large array). What options exist? I use a Fortran compiler and have already tried setting the limits to unlimited and changing the -mcmodel parameter. What do you do when you have to work with an amount of data that exceeds your available RAM?
Relevant answer
Answer
Alexander - Are you saying that your declaration of the array exceeds the amount of available memory? If so, here are some options to consider:
1) Do you need/use all the entries within the array?
2) Can you use a sparse matrix to represent the large array?
3) Could you write the array values to a file and read them back in when necessary?