Questions related to Supercomputing
I am using a supercomputing cluster to test some quantum algorithms. At present, my program can implement the HHL algorithm for up to 30 qubits, but the calculation time becomes prohibitive as the number of qubits grows. What are some good cloud platforms or computing frameworks that could be used?
I am using QwikMD to prepare simulation files for MD. However, I cannot actually run the simulation on my computer as it would take a very long time. Are there any tutorials or videos demonstrating how to use QwikMD and then submit the actual simulation to a supercomputer/HPC? What are the steps to do this? Any help would be highly appreciated!
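A rough sketch of the usual workflow, assuming QwikMD (a VMD plugin) has produced a NAMD configuration file, here hypothetically named qwikmd_production.conf, and that the cluster uses the SLURM scheduler; module names, node counts, and the launch command all vary by site:

    #!/bin/bash
    #SBATCH --job-name=qwikmd-md
    #SBATCH --nodes=2              # illustrative; adjust to your allocation
    #SBATCH --ntasks-per-node=32
    #SBATCH --time=48:00:00
    module load namd               # assumption: the site provides a NAMD module
    srun namd2 qwikmd_production.conf > production.log

Copy the files QwikMD generated (PSF, PDB, parameter files, and the .conf) to the cluster with scp or WinSCP, submit with sbatch, and copy the trajectory back when the run finishes.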
Let me explain my situation. In the lab, we have enough computers for general use. Recently, we started DFT studies of our materials. My personal computer has an i5 and 20 GB of RAM, which is mostly enough for the calculations. However, I cannot use my computer while the calculations are running. Moreover, getting access to the government supercomputing clusters is a bit of a pain and requires a lot of paperwork.
My goal is to build a simple cluster for calculations and free up my personal computer. Speed is important, but not that much. So, is investing in four Raspberry Pi 4 (8 GB) boards a good idea? I am also open to other solutions.
Does anyone have experience with the review time, time to first response, or time to publication of "The Journal of Supercomputing" (Springer)? I need real experiences; I have already checked the average time to first response and time to publication on the journal's web page.
I am a beginner and I found some tutorials on the GROMACS website. I was able to follow some of the steps by typing commands in an SSH window, but I could not run the energy minimization step because I don't know how to write the script file specifying the number of processors to use for energy minimization. Your help is much appreciated.
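A minimal sketch of a SLURM batch script for the energy minimization step, assuming an MPI build of GROMACS and an em.tpr already produced by grompp; the module name and core count are placeholders to adapt to your cluster:

    #!/bin/bash
    #SBATCH --job-name=em
    #SBATCH --nodes=1
    #SBATCH --ntasks=16          # number of MPI ranks = processors used
    #SBATCH --time=02:00:00
    module load gromacs          # assumption: the cluster provides a GROMACS module
    mpirun -np 16 gmx_mpi mdrun -v -deffnm em

Submit it with "sbatch em.sh"; the --ntasks value (matched by -np) sets how many processors mdrun uses.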
Hello. I ran my Abaqus simulation on the supercomputer and, once it was done, I copied the result (the .odb file) to my local computer using WinSCP. But I cannot read that file (see picture). How can I resolve this issue? How can I ensure that the file is copied in binary mode instead of ASCII mode? Thank you in advance.
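Two hedged options: in WinSCP, set the transfer mode to Binary (Options > Preferences > Transfer, or the transfer settings dropdown on the toolbar) before copying; or use scp from a terminal, which always transfers files verbatim:

    scp username@cluster.address:/path/to/job.odb .    # hypothetical host and path

If the file was already corrupted by an ASCII-mode transfer, re-copy the original from the cluster; the damage cannot be undone locally.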
I want to perform some Gaussian calculations and want to use Materials Studio for my research work. Can anyone guide me on how to get access to supercomputer resources in India for these jobs? I am ready to pay the usage charges.
The system I am trying to simulate has 600,000 atoms, and I want to simulate it for 1 µs.
I am using a supercomputer to run the simulation with Tesla K80 GPUs. I am using 1 node with a total of 28 cores, but the performance is not good at all: only 10 ns/day.
                   Core t (s)   Wall t (s)        (%)
           Time:   221835.265     7922.688     2800.0
                             2h12:02
                     (ns/day)    (hour/ns)
    Performance:       10.905        2.201
I noticed this note in the log file:
NOTE: GROMACS was configured without NVML support hence it can not exploit application clocks of the detected Tesla K80 GPU to improve performance. Recompile with the NVML library (compatible with the driver used) or set application clocks manually.
According to the log file, the PME mesh takes 70% of the computation time.
What are the ways to optimize performance and speed up the simulation?
Thank you all
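Since PME dominates, a hedged first step is to move the PME mesh onto the GPU as well, which requires GROMACS 2018 or later and a single PME rank. The command below is illustrative for a single-node, thread-MPI build using all 28 cores (with an MPI build, the equivalent would be mpirun -np 4 gmx_mpi mdrun ... -npme 1); the -deffnm name is a placeholder:

    # nonbondeds and PME both offloaded to the GPU; 4 ranks x 7 threads = 28 cores
    gmx mdrun -deffnm md -nb gpu -pme gpu -npme 1 -ntmpi 4 -ntomp 7

Beyond that, recompiling against NVML (or setting the application clocks manually with nvidia-smi, as the log note suggests) and benchmarking a few rank/thread splits on short runs usually recovers a good fraction of performance. K80s are an older architecture, so expectations should stay modest for a 600,000-atom system.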
I am trying to run GROMACS 2020.3 on a supercomputer using this command to do the production run:
mpirun -np 24 gmx_mpi mdrun -v -s AB_md_0_1.tpr -deffnm AB_md_0_1 > AB_md_1.log
However, it produces this error for each core used:
525 particles communicated to PME rank 3 are more than 2/3 times the cut-off
out of the domain decomposition cell of their charge group in dimension x.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
How can I solve this error? Should I equilibrate the system further?
Attached are the log file produced by the run and the SLURM file containing the errors.
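A hedged suggestion: this error usually does mean insufficient equilibration (or too aggressive a timestep), so extending NPT equilibration with position restraints before production is the usual first fix. An illustrative .mdp fragment for an extra nanosecond of restrained NPT (the values are placeholders, not your actual settings, and -DPOSRES assumes your topology includes a position-restraint file):

    ; extended NPT equilibration - illustrative values
    integrator       = md
    dt               = 0.002      ; 2 fs
    nsteps           = 500000     ; 1 ns
    tcoupl           = V-rescale
    tc-grps          = System
    tau_t            = 0.1
    ref_t            = 300
    pcoupl           = Berendsen  ; gentle barostat for equilibration
    tau_p            = 2.0
    ref_p            = 1.0
    compressibility  = 4.5e-5
    define           = -DPOSRES   ; keep position restraints on the solute

If the crash persists after further equilibration, running on fewer ranks (e.g. -np 8) or adjusting the minimum cell size with mdrun's -rdd option can work around the domain decomposition limits.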
We often use Quantum ESPRESSO for materials modeling, but when the number of atoms in a structure is somewhat large, a very powerful, very fast PC is required to complete the calculations. This kind of high-performance machine is not accessible to many researchers.
Is there any free high-performance cloud computer or supercomputer available on the internet for these types of calculations and modeling?
I am using NAMD to do an MD run on a supercomputer, but I want to know what number of cores will make it run as fast as possible.
Supercomputer nodes: 6,158
Supercomputer CPUs per node: 32
I tried 32 nodes, and a 20 ns run of a system of nearly 3,000 amino acids took 19 hours.
Can it be faster than this?
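One hedged way to answer this empirically is a short scaling test: run the same few-nanosecond benchmark at several node counts and compare the timings NAMD prints, stopping where the speedup flattens. A sketch, assuming SLURM and a hypothetical bench.sh job script wrapping a short version of the production run:

    for nodes in 16 32 64 128; do
        sbatch --nodes=$nodes --output=bench-$nodes.log bench.sh
    done
    # afterwards, compare the timings NAMD prints in each log:
    grep "Benchmark time" bench-*.log

For a ~3,000-residue system, scaling typically flattens well before thousands of cores, so more nodes past that point just burn allocation.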
SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 (specifically ARM968) cores and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, which are useful for simulating the human brain.
What are the required values of cpus-per-task and ntasks in the .sh file for running NAMD on a supercomputer?
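A hedged sketch, assuming an MPI build of NAMD under SLURM: a common starting point is one task per physical core with cpus-per-task=1 (an SMP build would instead use fewer tasks and more cpus-per-task). File and module names are placeholders:

    #!/bin/bash
    #SBATCH --nodes=4              # illustrative
    #SBATCH --ntasks-per-node=32   # one MPI rank per core on a 32-core node
    #SBATCH --cpus-per-task=1
    module load namd               # assumption: the site provides a NAMD module
    srun namd2 run.namd > run.log  # run.namd is a placeholder config name

There is no single "required" number; the right split depends on the NAMD build and the system size, so a short benchmark at a few settings is the safest guide.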
Living robots called Xenobots have been developed from frog stem cells; their body designs were produced by evolutionary algorithms running on a supercomputer, and they can self-repair after damage.
With further progress, could they become a threat to human life, for example as "living bombs" comparable to nuclear or hydrogen bombs?
I am wondering what files should be prepared, how, and with what commands, to perform MD simulations using AMBER on a supercomputer.
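A rough sketch of the minimum set, assuming a system prepared with tLEaP: you need a topology file (.prmtop), a coordinate file (.inpcrd), one input (.in) file per stage, and a batch script. An illustrative minimization input and launch command (all file names are placeholders):

    Minimisation
     &cntrl
      imin=1, maxcyc=2000, ncyc=500,
      cut=8.0, ntb=1,
     /

    mpirun -np 28 pmemd.MPI -O -i min.in -o min.out \
           -p system.prmtop -c system.inpcrd -r min.rst

Heating, equilibration, and production follow the same pattern with imin=0 and the restart file from the previous stage passed via -c.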
I got access to the C-DAC supercomputing facility in Pune, India, to run GROMACS. As I am a new user of this supercomputer, could anyone please help me access GROMACS and submit jobs? Any guidance would be appreciated.
I am trying to run an analysis of a .trj file generated from a roundabout simulation in Vissim, and the SSAM software hangs before completing the task. The file is about 100 MB.
Does anybody know what it takes to perform such an analysis? My machine:
* Processor: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz, 4 cores, 4 logical processors
* Installed physical memory (RAM): 16.0 GB
I would like to know if there is a supercomputer or server machine that is free to access and can run the STRUCTURE software (https://web.stanford.edu/group/pritchardlab/structure.html), something like Compute Canada (https://www.computecanada.ca/) or similar. Maybe in a Windows environment? Your comments and suggestions would be greatly appreciated.
It is a trend to employ physics-based battery models to improve the performance and extend the lifetime of battery cells, modules, and packs, as these models perform better at predicting battery ageing.
Because of the computational burden that comes with such models, they are hard to apply in mobile applications such as EVs: a car cannot carry a heavy, bulky supercomputer around, let alone absorb the extra cost, which would drive up the price of the EV. Many researchers are trying all kinds of techniques to simplify these models.
Stationary battery applications, however, such as energy storage systems providing power system services, seem less sensitive to computational burden and cost. So perhaps stationary battery applications are better suited to physics-based models?
UPDATE: SOLVED (See my final comment at the bottom of this page)
Apologies for the long post, but I feel that I need to give some context/details about the problem at hand before getting to the problem. Thank you for your time in advance!
I am minimizing the electronic structure and atomic positions (ISIF = 2) of a surface slab of CdSe with pseudo-hydrogen passivation in specific places (not important for the discussion below), and the desired surface is in the plane perpendicular to the z-axis. Whenever I reach the desired force cutoff for an ISIF = 2 calculation like this one, I always perform a couple of follow-up calculations where I read from the WAVECAR and keep the relaxation tags turned on (usually IBRION = 2, ISIF = 2, and NSW = 100). If the structure does not relax any further and if the force drift remains approximately the same, then I feel confident that my structure is at the desired position.
For this relaxation, I require the forces to be no greater than 1.8 meV/Angstrom, which I know is a strict convergence criterion for a large surface slab (this one has 140 atoms). However, with parameters such as PREC and ENAUG set appropriately, I found this force convergence could be achieved with a force drift that adheres to the VASP manual's recommendation, namely that the force drift is less than the desired force cutoff.
After relaxing the structure, I performed two further “relaxations” as mentioned in the second paragraph above. The output files corresponding to those jobs are ‘OUTCAR.1208664’ and ‘OUTCAR.1211575’. Both jobs give reasonable values in the drift, namely:
total drift: 0.00000045 0.00159302 0.00053071
total drift: 0.00000011 0.00080124 0.00101411
respectively. There are some differences between these, but nothing out of the ordinary for such a large slab in my experience. Plus, these drifts are acceptable with respect to my desired force cutoff. Both of these runs were performed on my university’s supercomputer on 6 nodes (24 cores-per-node), and I set NCORE = 24 to match this architecture.
Now, here is the issue: If I turn off the relaxation tags (IBRION, ISIF, and NSW) and change the calculation to run on a high-memory node (48 cores-per-node) with no NCORE set, I get the following drift (see ‘OUTCAR.1232305’):
total drift: -0.00000005 0.00045769 -0.01691489
The x- and y-components are still fine, but the z-component is much larger than I would like. It is still "small" relative to most other calculations of this scale, but it bothers me that the drift is now about an order of magnitude larger than my desired EDIFFG value. I'm not sure whether it is due to the change to the high-memory node, or to turning NCORE off, or both, or neither, but it seems very strange. I mention the high-memory node and NCORE settings because I need them for the VASP2WANNIER conversion; otherwise these details would not matter.
Any help on this would be greatly appreciated. I have included the aforementioned OUTCAR* files, as well as the corresponding INCAR* files (appended with the same numbers as the OUTCAR* files), KPOINTS file, POSCAR file, and POTCAR file. Also, I have slightly modified the OUTCAR* files to give me more digits of precision in certain places, so there may be some areas that seem a little more cluttered with numbers than usual.
I know this issue is not that big of a deal in the grand scheme of things, but to see the drift jump from < 1.8 meV/Angstrom to 10 times this amount between different runs is troubling to me.
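For reference, a minimal INCAR fragment matching the relaxation settings described above (illustrative values, not the poster's actual file; EDIFFG is in eV/Angstrom, with the negative sign selecting a force criterion):

    IBRION = 2         ! ionic relaxation
    ISIF   = 2         ! relax ions only; cell fixed
    NSW    = 100       ! max ionic steps
    EDIFFG = -0.0018   ! stop when all forces < 1.8 meV/Angstrom
    NCORE  = 24        ! match the 24 cores-per-node layout described above

Because NCORE changes how bands and FFT work are split across ranks, the order of floating-point summations differs between runs, so small shifts in forces and drift between the NCORE = 24 runs and the unset-NCORE high-memory run are plausibly numerical noise rather than physics; keeping NCORE fixed across the comparison would isolate the node-type effect.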
I am thinking of starting active research in molecular simulation with the goal of producing high-impact papers. Do I necessarily need a supercomputer, or will a good Intel Xeon workstation with a high-end GPU suffice?
I have been trying to optimize the structure of what I would call a lignin derivative. The system contains 63 atoms and has no charge. I have tried DFT and HF with the 6-31G(d) basis set, but each run I submit terminates without an error message. I am using the Ohio Supercomputer Center, so I am not attempting this on a PC. I also tried falling back to an MM method for a preliminary optimization, but that has not helped. Any suggestions are greatly appreciated! I can provide more info if needed.
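A hedged guess at common causes: jobs that die without a Gaussian error message are often killed by the scheduler for exceeding wall time or memory, so the batch system's own log is the first place to look. It also helps to request resources explicitly in the input header; an illustrative Gaussian 09 header (file names and values are placeholders):

    %nprocshared=8
    %mem=16GB
    %chk=lignin.chk
    #P B3LYP/6-31G(d) Opt

Keep %mem comfortably below what the batch job actually requests, since Gaussian uses some memory beyond %mem for overhead.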
As per "Exploring Chemistry with Electronic Structure Methods" 3rd Edition book, R.E.D tool are used to calculate MM charges http://upjv.q4md-forcefieldtools.org/RED/
However the download link is not working.
1) Do you recommend other downloadable sotware compatible with windows?
2) Do you recommend other tutorials to follow to perform ONIOM job ?
BTW, I am using Gaussian 09 on supercomputer (Linux shell) and gaussiaview installed on my windows PC.
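For orientation, a two-layer ONIOM route section in Gaussian 09 looks like the line below (an illustrative QM/MM setup; the layer assignment of each atom is then specified in the molecule section of the input, which GaussView can set up graphically):

    #P ONIOM(B3LYP/6-31G(d):UFF) Opt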
The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list’s inception 25 years ago.
Given the number of supercomputers deployed worldwide, could these powerful machines be connected to work as one ultra-powerful supercomputer?
Climate models are based on mathematical equations that represent the best understanding of the basic laws of physics, chemistry, and biology that govern the behaviour of the atmosphere, ocean, land surface, ice, and other parts of the climate system, as well as the interactions among them. The most comprehensive climate models, Earth-System Models, are designed to simulate Earth's climate system with as much detail as is permitted by our understanding and by available supercomputers.
TCNG (The Cancer Network Galaxy) is a database of cancer gene networks estimated from publicly available cancer gene expression data. The gene networks are estimated using the Japanese national flagship supercomputer, the K computer.
TCNG builds networks with 8,000 nodes and establishes a ranking within them. For example, the "Effects of tobacco smoke on gene expression and cellular pathways in a cellular model of oral leukoplakia" network (http://tcng.hgc.jp/index.html?t=network&id=2) is led by the RPL10P15 and RPL36 hubs.
Technological developments are advancing at great speed in different directions. Do you think that in the near future a totally MAN-less university may develop, in which a well-designed supercomputer performs all the administrative work as well as delivering educational material to students and conducting exams, all through online connections?
I'm trying to run a LAMMPS script on my PC which gives the following error:
ERROR: Expected integer parameter in input script or data file (../dump_custom.cpp:1556)
But when I tried to run the same code on the Stampede supercomputer, it worked! I don't know why this is happening. Any ideas?
Thanks in advance.
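A hedged thought: this error means one argument of the dump command was not parsed as the integer LAMMPS expected, and since the same script works on Stampede, a likely culprit is a difference in LAMMPS versions (and hence dump custom syntax) between the two builds. For reference, a valid dump custom line in current syntax, with illustrative names:

    dump mydump all custom 1000 traj.lammpstrj id type x y z

Compare your local LAMMPS version (printed at the top of every run) with Stampede's, and check the dump custom documentation for the version that fails.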
Supercomputer architectures with thousands of processors working in parallel process large amounts of data at the same time. Given the complexity of supercomputers and the mesh nature of their interconnection networks, combined with multi-threaded parallel processing, deadlocks can be quite costly.
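To make the hazard concrete, here is a minimal sketch of a classic message-passing deadlock, written with mpi4py purely for illustration: both ranks issue a blocking send before posting a receive, so once the messages are too large for MPI's eager protocol, each send waits forever for the other side's receive:

    # deadlock_demo.py - run with: mpirun -np 2 python deadlock_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    peer = 1 - rank
    buf = np.zeros(10_000_000)   # large enough to force the rendezvous protocol

    comm.Send(buf, dest=peer)    # both ranks block here, waiting for a Recv
    comm.Recv(buf, source=peer)  # never reached

The standard fixes are to order the calls (one rank sends first, the other receives first), use Sendrecv, or use non-blocking Isend/Irecv; interconnect routing deadlocks are analogous but handled in hardware and network firmware.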
I recently started working on supercomputer-based WRF modeling. During compilation, I failed many times.
WRF's configure program does not provide any useful configuration settings for sxf90/sxcc. I tried the options listed in the WRFV3/arch/configure_new.defaults file, but they did not work well.
The compilers available on the supercomputers used by my research group are limited to sxf90/sxcc/sxmpif90 on the NEC SX and hfc/mpihfc on the HITACHI SR****** series. Has anyone successfully compiled the WRF model with sxf90/sxcc or hfc/mpihfc?
I don't know how to prepare input and script files for GROMACS. I would appreciate it if you could share an example for a small job. I learned that I have to convert the .pdb file to .gro, but I don't know how.
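A minimal sketch of the standard preparation steps for a small protein-in-water job, assuming a recent GROMACS and a cleaned protein.pdb; the file names and box parameters are illustrative, and minim.mdp is an energy-minimization parameter file you supply:

    gmx pdb2gmx -f protein.pdb -o protein.gro -p topol.top -water spce
    gmx editconf -f protein.gro -o boxed.gro -c -d 1.0 -bt cubic
    gmx solvate  -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top
    gmx grompp   -f minim.mdp -c solvated.gro -p topol.top -o em.tpr
    gmx mdrun    -v -deffnm em

pdb2gmx is the step that does the .pdb-to-.gro conversion and writes the topology; Justin Lemkul's lysozyme-in-water tutorial walks through exactly this sequence.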
I am trying to run MD simulations on a supercomputer with GROMACS 5.1.4, but I got this error: "Cannot rename checkpoint file; maybe you are out of disk space?"
Received the TERM signal, stopping at the next NS step
Any suggestions in this regard would be appreciated.
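The message usually means exactly what it says: the filesystem or your quota is full (the TERM signal line suggests the scheduler also stepped in). A few hedged commands to check, to be adapted to your cluster:

    df -h .          # free space on the filesystem holding the run directory
    quota -s         # your user quota, if the cluster enforces one
    du -sh ./*       # what is taking the space in the current directory

If you are over quota, delete or move old trajectories (or write output to a scratch filesystem) and restart from the last good checkpoint with gmx mdrun -cpi.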
Consider an electron entering a grain of salt (NaCl). In a quantum mechanical description of the process, the wavefunction of the electron will see more and more ions in its range of influence, and so the relevant wave function will depend on more and more dynamical variables.
If one analyzes what this would mean in a computer simulation, only one conclusion is possible: the required storage space would grow exponentially with time (a wavefunction of N particles, each discretized on M grid points, needs M^N amplitudes), and there would be no way to cope with even a single grain of salt, even if we had available as many supercomputers as there are Planck-length-sized cells in our Earth.
So the verbatim application of quantum mechanics to the motion of a charged particle entering an ionic crystal is only possible if it is complemented by a reset-and-continue strategy. The inevitably occurring breakdown would mark the emergence of a recordable experimental fact; in our case this would be a polaron (i.e. a localized electron surrounded by a distorted lattice) with a definite position. The reset would consist in extracting from the last valid multi-particle wave function the "most plausible few-particle content" and giving all other particles the (non-entangled) wavefunctions they would have without the visit of the electron. Replacing the pre-breakdown wave function by this extremely simplified one would free a huge amount of storage and would allow strict quantum mechanical time evolution to run until a potential breakdown again asks for a reset.
I hope to learn from the RG community
1. whether this or similar concepts have been discussed elsewhere, and
2. how this obviously incomplete picture can be brought closer to completion.
I am working on scheduling for parallel systems (multi-core processors) for my thesis. I was wondering if anyone else has worked in this area, or whether there are any theses on the subject.
I would be very thankful for any help.
I am wondering: do I really need a supercomputer to perform radiation damage calculations, or what type of processor/system is needed for cascade simulations? That is, to what extent can I run LAMMPS radiation damage simulations on an ordinary system?
I want to assemble a personal supercomputer for CUDA calculations using the new generation of hardware.
I am just gathering information on MATLAB on supercomputing clouds, or on supercomputing clouds available for postgraduate research.
I've developed a molecular dynamics simulation approach for laser-material interaction using the direct simulation Monte Carlo algorithm, and my code runs very slowly on my research group's computer. I am wondering: are there any free supercomputers one can connect to through the internet? Any other suggestions for removing this bottleneck?
I am interested in any papers on the loss of performance in HPC (High Performance Computing) applications due to thermal throttling. My current R&D project goal is to replace passive heat sinks with active liquid cooling that matches the keep-out volume, lifetime, and cost while providing about six times the thermal performance. I need to translate the thermal performance improvement into an estimate of the increase in computing performance.
I am trying to set up Amber 14 on a newly arrived Cray machine. I don't know how to build the parallel version using Cray's default MPI (the cc/ftn compiler wrappers); the usual "./configure -mpi intel" assumes a standalone MPI installation, but here the MPI lives inside the wrappers.
So I installed MPICH (with Intel compilers) in my local account and built Amber (pmemd.MPI) against it. But when I ran it in parallel, although it showed 64 processors running, the performance was very poor (0.1 ns/day for a system of about 0.1 million atoms). I built with Intel compilers, as we don't have PGI compilers. I suspect mpirun (built with MPICH 3.1.3) is not actually running in parallel.
Can anybody tell me how to build the parallel version on a Cray with its default MPI? I am very new to Cray and may have put the problem in an awkward way, but any hint or clue will be highly appreciated.
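A heavily hedged sketch of the usual Cray approach: load a programming environment, point the build at the cc/ftn wrappers (which already contain Cray's MPI), and launch with aprun rather than mpirun. Whether Amber 14's configure honors the environment variables below is an assumption to verify against the Amber manual for your version:

    module load PrgEnv-intel        # Cray programming environment with Intel
    export MPICC=cc MPIF90=ftn      # assumption: point Amber's MPI build at the wrappers
    ./configure -mpi intel
    make install
    # on Cray systems, parallel jobs are launched with aprun, not mpirun:
    aprun -n 64 pmemd.MPI -O -i md.in -p sys.prmtop -c sys.inpcrd -o md.out

One symptom check for your current build: if mpirun launches 64 copies that each report themselves as rank 0 of 1, the binary and the launcher come from mismatched MPI stacks, which would explain the poor performance.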
Does anyone have an idea about how to rent a slot on a supercomputer for a simulation? I need to run a heavy MATLAB simulation.
Any recommended sites?
Thanks in advance.
I read the paper by Warren Smith on predicting application runtime from historical information, where a genetic algorithm is used to find the template set. One then selects a template from the set, finds jobs with the same template, and finally uses the mean or linear regression to obtain the runtime of the target job.
Based on this, I took as an example the Feitelson parallel workload traces for the CTC SP2 machine, from which I collected only jobs with user_id 97.
There were 97 jobs submitted by this user_id, with varying numbers of requested processors: 306, 4, 300, 16, 20.
A maximum of 49 of the 97 jobs requested 306 processors.
The remaining 32 jobs requested 4 processors.
So when I tried to find a relation that would let me predict the runtime of the 11th occurrence of user_id 97 from the first 10 occurrences, I couldn't find any. Does anyone have an idea for a simple strategy? It would be really helpful. I am attaching the log with the traces of user_id 97; please take a look and reply.
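A minimal sketch of the simplest template-based predictor in the spirit of Smith's approach, for illustration: define the template as (user_id, requested_processors), and predict the next runtime as the mean of earlier runtimes sharing that template, falling back to the user's overall mean when the template is new. The field names are placeholders for however you parse the SWF trace:

    from collections import defaultdict

    def predict_runtimes(jobs):
        """jobs: list of dicts with 'user', 'procs', 'runtime', in submission order.
        Returns one prediction per job, using only earlier jobs (no peeking)."""
        by_template = defaultdict(list)   # (user, procs) -> past runtimes
        by_user = defaultdict(list)       # user -> past runtimes
        predictions = []
        for job in jobs:
            key = (job['user'], job['procs'])
            if by_template[key]:                      # template seen before
                pred = sum(by_template[key]) / len(by_template[key])
            elif by_user[job['user']]:                # fall back to user mean
                pred = sum(by_user[job['user']]) / len(by_user[job['user']])
            else:
                pred = None                           # no history at all
            predictions.append(pred)
            by_template[key].append(job['runtime'])
            by_user[job['user']].append(job['runtime'])
        return predictions

With only two processor counts dominating user 97's jobs, a per-(user, procs) mean is often already a reasonable baseline; linear regression over job attributes only helps once the history within a template is long enough.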
I have a large but limited amount of RAM in my computer, and now I have to exceed this amount (for example, I need to define a very large array). What options exist? I use a Fortran compiler; I have already tried raising the limits to unlimited and changing the -mcmodel option. What do you do when you have to work with an amount of data that exceeds your available RAM?
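Two common routes: let the OS page for you via memory-mapped files (out-of-core arrays), or restructure the algorithm to process the data in chunks. A short illustration of the first idea using numpy.memmap in Python (the analogous Fortran construct is a direct-access file read and written in blocks; the names and sizes below are made up):

    import numpy as np

    # A 100 GB array of float64 backed by a file on disk, not by RAM;
    # the OS pages pieces in and out as they are touched.
    n = 12_500_000_000
    a = np.memmap('big_array.dat', dtype=np.float64, mode='w+', shape=(n,))

    # work on it in chunks so only a small window is resident at a time
    chunk = 10_000_000
    for i in range(0, n, chunk):
        a[i:i+chunk] = np.sqrt(np.arange(i, min(i + chunk, n)))
    a.flush()   # make sure everything is written back to the file

Performance then depends on disk speed and access pattern, so sequential, chunked access matters far more than it does for in-RAM arrays.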