Discussion
Started 12 February 2023

Fight disease and help advance science by joining distributed supercomputers!

Please spread the word: Folding at Home (https://foldingathome.org/) is an extremely powerful supercomputer composed of thousands of home computers around the world. It simulates protein folding to fight diseases. We can increase its power even further simply by running its small program on our computers and donating their spare (already unused and wasted) capacity to its computations.
After all, much of our everyday work (surfing the web, writing texts, communicating, etc.) never needs more than a tiny fraction of the huge capacity of our modern CPUs and GPUs. So it would be very helpful if we could donate the rest of that capacity, which currently goes to waste, to such "distributed supercomputer" projects and help find cures for diseases.
The program runs at a very low priority in the background and uses some of the capacity of our computers. By default, it is set to use the least amount of EXCESS (already wasted) computational power. It is very easy to use, but anyone interested in tweaking it can configure it through both simple and advanced modes. For example, the program can be set to run only when the computer is idle (the default) or even while you are working. It can be configured to work intensively or very mildly (the default). The CPU and GPU can each be disabled, or set to work only when the operating system is idle, independently of the other.
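To make the "only when idle, at low priority" behavior more concrete, here is a minimal Python sketch of that general idea. It is not the Folding at Home client or its configuration format; the threshold, the function names, and the use of the third-party psutil package are illustrative assumptions only.

# Toy illustration (NOT the actual F@H client): a background worker that
# lowers its own priority and only crunches when overall CPU usage is low,
# mimicking the "work only when the computer is idle" setting described above.
import time
import psutil  # third-party package: pip install psutil

IDLE_THRESHOLD = 20.0  # percent CPU use below which the machine is treated as idle (assumed value)

def do_small_chunk_of_work():
    # Stand-in for one small slice of scientific computation.
    sum(i * i for i in range(100_000))

def run_when_idle():
    try:
        psutil.Process().nice(19)  # best effort: lowest priority on Unix-like systems
    except Exception:
        pass                       # e.g., Windows uses priority classes instead
    while True:
        if psutil.cpu_percent(interval=1.0) < IDLE_THRESHOLD:
            do_small_chunk_of_work()   # the machine looks idle: crunch a bit
        else:
            time.sleep(5)              # the user is busy: back off

if __name__ == "__main__":
    run_when_idle()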
Please spread the word; for example, start by sharing this very post with your contacts.
You can also give the developers feedback and suggestions to improve their software, or contribute directly to their project.
Folding at Home's Forum: https://foldingforum.org/index.php
Folding at Home's GitHub: https://github.com/FoldingAtHome
Additionally, see other distributed supercomputers used for fighting disease:

All replies (5)

Satyendra Singh
Central University of Rajasthan
Yes, you can join distributed supercomputer networks such as Folding at Home, GPUGRID, Rosetta, etc. to contribute your computer's processing power to aid in scientific research and fight diseases. These projects are aimed at using the combined processing power of thousands of individual computers to perform complex simulations and calculations that would otherwise be impractical with traditional supercomputing methods.
Folding at Home is a popular project that focuses on protein folding and the development of new treatments for diseases such as cancer, Alzheimer's, and Huntington's. To participate, you can download the Folding at Home software to your computer and select the project you want to support. Your computer will then receive a small portion of the simulation work and process it when it is idle.
GPUGRID is another project that uses the power of graphics processing units (GPUs) to perform simulations in the fields of molecular dynamics, quantum chemistry, and bioinformatics. Participants can download the GPUGRID client and donate their computer's GPU processing power to the project.
Rosetta is a project focused on the study of proteins and the design of new drugs for the treatment of various diseases. By joining the Rosetta network, participants can contribute their computer's processing power to help researchers gain a deeper understanding of how proteins function.
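To make the workflow these three projects share a little more concrete, here is a rough conceptual sketch of what such a volunteer-computing client does. It is purely illustrative: the function names, the URL, and the numbers are assumptions and do not correspond to the real Folding at Home, GPUGRID, or Rosetta code.

# Conceptual sketch of a volunteer-computing client loop (illustrative only).
import time

def fetch_work_unit(server_url):
    # A real client would download a small simulation task (a "work unit") here.
    return {"id": 42, "steps": 1_000_000}

def compute(work_unit):
    # The heavy part: run the assigned slice of the simulation locally.
    result = 0
    for step in range(work_unit["steps"]):
        result += step % 7  # stand-in for real number crunching
    return {"id": work_unit["id"], "result": result}

def upload_result(server_url, result):
    # A real client would send the finished result back to the project server.
    print(f"uploaded result for work unit {result['id']}")

def client_loop(server_url="https://example.org/project", max_units=3):
    for _ in range(max_units):          # a real client would loop indefinitely
        wu = fetch_work_unit(server_url)
        upload_result(server_url, compute(wu))
        time.sleep(1)                   # then ask for the next work unit

if __name__ == "__main__":
    client_loop()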
In conclusion, joining distributed supercomputer networks is a great way to contribute to scientific research and help fight diseases. By donating your computer's idle processing power, you can make a difference in the fight against some of the world's most challenging diseases.
Satyendra Singh Thanks for your input. Please spread the word so that more and more computers join this grid and help fight nasty diseases like Alzheimer's, AIDS, COVID, etc. The point is that it is the excess and already wasted computer power that is being contributed. So the donor actually loses nothing.
Satyendra Singh
Central University of Rajasthan
Vahid Rakhshan I will definitely spread the word about this amazing initiative. It's great to know that we can contribute to such a noble cause by simply utilizing our excess computer power. Thank you for bringing this opportunity to my attention. Let's join hands in making a difference in the fight against diseases.
1 Recommendation
I tested "Folding at Home" (F@H). For those with moderate computers, it no longer uses only the EXCESS capacity of the user's computer (most of the time). Details below:
Many of the projects F@H distributes to users are quite computation-heavy. Besides this, F@H works on an "all-or-nothing" basis, meaning that if a user cannot finish a particular project (called a "Work Unit") by the deadline, the whole Work Unit is simply DISCARDED by F@H, wasting a great deal of computation, time, and electricity. Even if a Work Unit is 98% complete when it expires, F@H will throw out the whole 98%.
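A quick back-of-the-envelope calculation shows why this matters for a computer that contributes only its idle time. All of the numbers below are hypothetical, chosen only to illustrate the arithmetic; they are not taken from any actual F@H project.

# Back-of-the-envelope illustration of the deadline problem (hypothetical numbers).
wu_cpu_hours   = 40   # assumed total compute time one Work Unit needs on this machine
deadline_days  = 3    # assumed deadline before the Work Unit expires and is reassigned
idle_hours_day = 6    # hours per day a typical home PC is on but otherwise idle

hours_available = deadline_days * idle_hours_day
print(f"needed: {wu_cpu_hours} h, available: {hours_available} h")
# needed: 40 h, available: 18 h -> the unit expires roughly 45% done, and under
# the current all-or-nothing rule all of that finished work is discarded.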
So it is no longer about the excess capacity. In other words, to finish many projects on time (i.e., before the whole effort goes to waste), the user needs to keep her computer on and plugged in 24/7 and also set it to work at full capacity. This is no longer excess computing, except on very powerful computers. Not to mention that this can damage laptop batteries.
Of course, there are still some Work Units small enough to genuinely "use the excess capacity" the way it is advertised (even for moderate computers). But this is not the case for many Work Units, if not most of them.
I hope newer versions of F@H consider this limitation and add FLEXIBILITY to their distributed programming: demanding for those who want to dedicate a lot of resources to it, while at the same time less demanding and more friendly to those who can't or won't.
I think F@H should go easier on donors by making the computations more user-friendly. I have the following suggestions (recapping my previous suggestions):
1. More flexible algorithms that allow a processed Work Unit to be used even if it is not 100% finished by its deadline. For example, if my CPU couldn't finish the task at 100% but reached 86%, the whole amount of calculation should not be discarded and the next person should not start crunching from scratch; instead, she should continue from 86% (see the sketch after this list). This seems very useful for preventing the waste of any unfinished computations and for allowing ANY CPUs and GPUs (even weak ones) to contribute. Currently, only sufficiently powerful CPUs/GPUs can be used, and even with them, many users would need to keep their computers always on. This makes the whole process no longer user-friendly. Many donors would prefer to live their routine daily lives and help science in parallel, not disrupt those routines by babysitting the program to make sure it finishes its assigned Work Unit on time.
2. Besides that, I think it is technically possible to break the current Work Units down into smaller pieces, so please implement this and help weaker computers join the grid. By smaller, I don't mean smaller files to be sent to F@H clients over the internet; I mean smaller amounts of computation needed to finish a task (Work Unit).
3. Extending the deadlines would also help a lot. It would again allow users to relax a little more and live their routine lives, without worrying about finishing the task on time.
4. It would be very good if both the CPU and GPU (or all the processor units within one computer or within one computer network) could process the very same Work Unit together.
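As a concrete illustration of suggestion 1, here is a minimal Python sketch of the checkpoint-and-resume idea. It is hypothetical: the data structures and function names are my own assumptions, not part of how F@H currently works.

# Minimal sketch of checkpoint-and-resume (hypothetical; NOT current F@H behavior).
# A donor who only reaches, say, 86% saves a checkpoint that the next donor can
# pick up, instead of the whole Work Unit being discarded and restarted from zero.
import json

def run_work_unit(total_steps, budget_steps, checkpoint=None):
    """Advance a Work Unit by at most budget_steps, resuming from a checkpoint if given."""
    state = checkpoint or {"step": 0, "partial_result": 0}
    stop = min(total_steps, state["step"] + budget_steps)
    for step in range(state["step"], stop):
        state["partial_result"] += step % 7  # stand-in for real simulation work
    state["step"] = stop
    state["done"] = (stop == total_steps)
    return state

# Donor A's slower machine only gets 86% of the way before the deadline...
ckpt = run_work_unit(total_steps=100, budget_steps=86)
print(json.dumps(ckpt))   # {"step": 86, "partial_result": ..., "done": false}

# ...so donor B resumes from 86% instead of starting from scratch.
final = run_work_unit(total_steps=100, budget_steps=100, checkpoint=ckpt)
print(final["done"])      # True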

Similar questions and discussions

If long-term memory formation needs LTP, why & how do people memorize many things instantly & permanently without needing any apparent reinforcement?
Question
37 answers
  • Vahid Rakhshan
Why and how is this kind of long-term potentiation (LTP) possible?
Is LTP even needed for all sorts of synaptic plasticity and long-term memory formation?
------------
Longer version:
Long-term potentiation (LTP, which is necessary for synaptic plasticity and long-term memory formation) needs repeats and reinforcement of the engrams to be triggered.
However, apparently everybody automatically "absorbs" a lot of information immediately and permanently, without needing the extra (or at least conscious) effort that seems to be required for LTP to happen. Everyone seems to have this ability, although it is even stronger in those with better memories.
People simply "learn" many things once; and many of those learned items remain there for a pretty long duration, and in many cases even for the rest of their lives. This seems to happen without any repeats, at least without any apparent or conscious efforts to remember or re-remember those memories. This is the case for a lot of semantic information (especially the information of interest or importance to the person) as well as a large portion of the contents of episodic memory.
Why and how is this kind of LTP possible?
Perhaps attention plays a major role here; for example, material that is interesting and important may automatically trigger LTP without a further need for repeats.
But such effortless long-term memorization also happens with a lot of semantic information and autobiographical events that are not inherently interesting or significant to the person.
Is LTP even needed for all sorts of synaptic plasticity and long-term memory formation?
What is this curious form of non-updatable mega memory?
Question
8 answers
  • Vahid Rakhshan
What is this curious non-updatable mega memory? Does it have any scientific terms?
What are its causes and mechanisms?
--------------
Explanation:
I have had the honor of witnessing very rare people who have a strange form of mega memory: they effortlessly, automatically, and immediately memorize many difficult things, such as phone numbers or their difficult, comprehensive books. And they retain those easily captured memories for a very, very long time (a couple of decades at least), without the smallest effort or reinforcement. Not to mention that they record and remember almost everything else (semantic or episodic) quite easily, and in great detail. Furthermore, they are very accurate in recalling those items. For example, they can serve as fairly reliable living phone books, or they are exceptionally good at medicine, etc.
But when I am talking about "strange", I don't mean their super-human ability to easily capture such vast amounts of information for such long durations and recall them accurately.
Their super-human ability is of course strange. But the even stranger part of their memory is that once it is captured, it cannot be updated or revised easily. For example, if they misunderstand something the first time, it will take perhaps 10 or 20 attempts over days or weeks for their colleagues to remind them of the mistake and ask them to correct their misunderstanding.
It is as if, once their memory is formed the very first time, it is set in stone. It is absorbed very efficiently and strongly, and at the same time not very prone to future updates.
What is this curious non-updatable mega memory? Does it have any scientific terms?
What are its causes and mechanisms?
COVID-19: How do hospital staff who fight COVID-19 prevent transmission of the virus to their families?
Question
11 answers
  • Siku Biology
COVID-19
Respiratory infections can be transmitted through droplets of different sizes: when the droplet particles are >5-10 μm in diameter they are referred to as respiratory droplets, and when they are <5 μm in diameter, they are referred to as droplet nuclei. According to current evidence, the COVID-19 virus is primarily transmitted between people through respiratory droplets and contact routes [2-7]. In an analysis of 75,465 COVID-19 cases in China, airborne transmission was not reported.
Droplet transmission occurs when a person is in close contact (within 1 m) with someone who has respiratory symptoms (e.g., coughing or sneezing) and is therefore at risk of having his/her mucosae (mouth and nose) or conjunctiva (eyes) exposed to potentially infective respiratory droplets. Transmission may also occur through fomites in the immediate environment around the infected person. Therefore, transmission of the COVID-19 virus can occur by direct contact with infected people and indirect contact with surfaces in the immediate environment or with objects used on the infected person (e.g., a stethoscope or thermometer).
Airborne transmission is different from droplet transmission, as it refers to the presence of microbes within droplet nuclei, generally considered to be particles <5 μm in diameter, which can remain in the air for long periods of time and be transmitted to others over distances greater than 1 m.
In the context of COVID-19, airborne transmission may be possible in specific circumstances and settings in which procedures or support treatments that generate aerosols are performed; i.e., endotracheal intubation, bronchoscopy, open suctioning, administration of nebulized treatment, manual ventilation before intubation, turning the patient to the prone position, disconnecting the patient from the ventilator, non-invasive positive-pressure ventilation, tracheostomy, and cardiopulmonary resuscitation. 
There is some evidence that COVID-19 infection may lead to intestinal infection and be present in faeces. However, to date only one study has cultured the COVID-19 virus from a single stool specimen.  There have been no reports of faecal−oral transmission of the COVID-19 virus to date.
