After literally months of trial and error, I finally managed to run a large analysis using our program Jaatha on my new local supercomputer, superMUC. Jaatha is written in R (with the performance-critical parts implemented in C and C++) and normally does not require a supercomputer at all. However, we wanted to conduct a huge likelihood ratio test using a computationally demanding finite sites model, so it came in handy when Europe's fastest computer opened just a few kilometers away. As there is very little written about running R on a supercomputer, I want to share my solution here, hoping it makes it a bit easier for others to do something similar.
The Parallelization Model
SuperMUC consists of over 18,000 nodes, each of which I think of as a separate little computer with 16 CPU cores and its own RAM. The nodes are connected through a fast network and share a (network) hard disk. As far as I know, most supercomputers are built in a similar way.
Within a node, different processes can communicate quickly through shared memory, while processes running on different nodes have to communicate over the network, which is much slower. Hence, it is quite nice if you can do a "two step" parallelization: first, a "grand master" process distributes big tasks to a "node master" process on each node. These tasks should be relatively autonomous, so that only very little communication between the grand master and the node masters is needed. Each node master then creates 16 workers and distributes its big task among them. Heavy communication between the node master and its workers is not a performance problem here.
Implementation in R
Luckily, doing an LRT with Jaatha fit quite well into this model, so I could implement it without big changes to Jaatha's algorithm. An easy way to parallelize side-effect-free loops in R is the awesome foreach package. Using it, you basically only have to replace the existing loops with a foreach loop (as explained in foreach's vignette) and choose one of several parallelization backends. I use two different backends: doRedis for the grand-master-to-node-master connection and doMC for the node-master-to-worker one.
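To give an idea of the pattern, here is a minimal sketch of the rewrite (simulate_one is a made-up placeholder, not a Jaatha function):

```r
library(foreach)

## Sequential version:
## results <- list()
## for (i in 1:100) results[[i]] <- simulate_one(i)

## foreach version: %dopar% runs the iterations in parallel once a
## backend (doMC, doRedis, ...) is registered; without a backend it
## falls back to sequential execution with a warning.
results <- foreach(i = 1:100) %dopar% {
  simulate_one(i)  # placeholder for a side-effect-free task
}
```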
doMC is the simpler of the two (simpler to use, at least). It uses shared memory for the communication between the (node) master and its workers. Hence it is amazingly fast, but it only works within a node. For me, it always worked right out of the box, and I can really recommend using it whenever possible.
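Registering doMC is essentially a one-liner; a minimal sketch (16 cores matching one superMUC node):

```r
library(doMC)       # loads foreach as well
registerDoMC(cores = 16)  # fork 16 workers on the local node

## Any following %dopar% loop now runs on this node's 16 cores:
squares <- foreach(i = 1:16, .combine = c) %dopar% i^2
```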
doRedis is a bit more complex. It uses the redis database for the inter-process communication, which is quite a nice idea because redis is a database designed (a) to reside in RAM rather than on slow hard disks and (b) to be accessed over the network. However, you have to set up a redis server first. On superMUC, you can load redis with "module load redis". In addition to setting the two options mentioned in its vignette, I also changed all the file paths in the config file to point to ~/.redis/, because /var is not user-writable on superMUC.
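For illustration, the changed part of the redis config file looks roughly like this (redis does not expand ~, so the absolute home path below is only a placeholder for your own):

```conf
# redis.conf fragment -- paths are placeholders
dir /home/myuser/.redis/
pidfile /home/myuser/.redis/redis.pid
logfile /home/myuser/.redis/redis.log
```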
With all that, my main R script looked like this:
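The full script is too long to reproduce, but a minimal sketch of its structure follows; the queue name "jobs", the number of replicates, and the function fit_model are placeholders, not the actual Jaatha code:

```r
## LRT_4par.R (sketch) -- the grand master process
library(doRedis)

## Outer level: distribute the big tasks to the node masters through
## the redis queue that the doRedis_addNode.R processes listen on.
registerDoRedis(queue = "jobs")   # assumption: queue name "jobs"

results <- foreach(rep = 1:200, .packages = "doMC") %dopar% {
  ## Inner level: this block runs on a node master, which forks
  ## 16 workers that communicate through shared memory.
  registerDoMC(cores = 16)
  foreach(i = 1:16, .combine = c) %dopar% {
    fit_model(rep, i)             # placeholder for the actual work
  }
}

removeQueue("jobs")               # clean up the redis queue afterwards
```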
Now running it on superMUC...
The really tricky part was writing a "job command file", which contains the instructions for superMUC's LoadLeveler on how the job should be run. For us, this means it should reserve a certain number of nodes, start a doRedis node master on every node, and afterwards execute the main script. After lots of trial and error, this file works quite well for me:
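A sketch of the shape of such a job command file; the class name, wall-clock limit, output paths, and the exact way the redis server and autonomous_master.ksh are launched are assumptions you will need to adapt to your site:

```shell
#!/bin/bash
#@ job_type = parallel
#@ class = general            # assumption: queue class
#@ node = 200                 # reserve 200 nodes
#@ tasks_per_node = 1         # one node master per node
#@ wall_clock_limit = 48:00:00
#@ output = lrt_$(jobid).out
#@ error  = lrt_$(jobid).err
#@ queue

module load redis

# Start the redis server on the master node (config points into ~/.redis/)
redis-server ~/.redis/redis.conf &

# Start a doRedis node master on every reserved node ...
poe ./autonomous_master.ksh doRedis_addNode.R &

# ... and run the main script on this node.
R --vanilla -f LRT_4par.R
```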
where LRT_4par.R is the main script above and autonomous_master.ksh is an
example script from IBM that I use to execute the doRedis_addNode.R script on
every node. The latter looks like this:
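A sketch of what doRedis_addNode.R does; how the master node's hostname reaches the script is an assumption (here via an environment variable), so adapt it to your setup:

```r
## doRedis_addNode.R (sketch): executed once on every node by
## autonomous_master.ksh; starts the node master process that
## listens on the grand master's redis queue.
library(doRedis)

## Assumption: the redis server's hostname is passed in via the
## environment; "jobs" must match the queue name in the main script.
host <- Sys.getenv("REDIS_MASTER", "localhost")

threads.per.node <- 1  # one node master; it forks its 16 doMC workers itself
startLocalWorkers(n = threads.per.node, queue = "jobs", host = host)
```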
This will run our script on 200 nodes. It can also easily be adapted to
other situations: for example, if you have a one-step parallelization with the
grand master controlling many workers, you can set threads.per.node in the
addNode script to 16 and skip the doMC part in the main script.
I hope the rest of the script is pretty self-explanatory. If you have any
questions or comments about my approach, please post them on