# Workload Manager

On HIMster 2 / Mogon 2, load the following module first:

```bash
module load lang/Python/3.6.6-foss-2018b
```

On a single node, run with

```bash
mpirun -n 4 ./wkmgr.py -v date
```

or, for multiple nodes:

```bash
salloc -p parallel --reservation=himkurs -A m2_himkurs -N 1 -t 1:00:00
module load math/SUNDIALS/2.7.0-intel-2018.03
module load lang/Python/3.6.6-foss-2018b
srun -n 20 ~/workload-manager/wkmgr.py -v ~/workload-manager/examples/LGS/PulsedLGS ~/workload-manager/examples/LGS/Run27_LaPalma_Profile_I50
```

To use the loader (untested on the cluster so far), replace `wkmgr.py` with `wkloader.py`.

## Hints

### When to Use

- Single, fast analysis steps (e.g. your analysis file runs for only a minute)
- Thousands or more of single analysis steps
- Usage of all cores in node-exclusive partitions (Mogon 2; not on HIMster 2)

### Comparison

- Queue-based work distribution with equal distribution of work (in contrast to [staskfarm](https://github.com/cmeesters/staskfarm)); see the sketch below this list
- Usage of MPI:
  - large connected jobs (>200 cores) are preferred by the job manager
  - efficiently supports both node-local and multi-node usage
  - keeps the environment, also in multi-node situations (with GNU parallel this works only node-locally)
- Usage of Python makes changes simple for users
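To illustrate the queue-based distribution pattern mentioned above, here is a minimal master/worker sketch using `mpi4py`. It is *not* the actual `wkmgr.py` implementation, only an assumed illustration of the idea: rank 0 hands out tasks one at a time, so fast and slow tasks balance out across workers. The task list, tags, and commands are placeholders.

```python
#!/usr/bin/env python3
# Minimal master/worker sketch of queue-based work distribution with mpi4py.
# Illustrative only -- not the wkmgr.py implementation.
from mpi4py import MPI
import subprocess

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

TAG_TASK, TAG_DONE, TAG_STOP = 1, 2, 3

if rank == 0:
    # Placeholder task list; wkmgr.py would build this from its input.
    tasks = [f"echo task {i}" for i in range(100)]
    next_task = 0
    active = size - 1

    # Seed every worker with one task (or stop it if there is nothing left).
    for worker in range(1, size):
        if next_task < len(tasks):
            comm.send(tasks[next_task], dest=worker, tag=TAG_TASK)
            next_task += 1
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
            active -= 1

    # Refill workers as they report back, until the queue is drained.
    status = MPI.Status()
    while active > 0:
        comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
        worker = status.Get_source()
        if next_task < len(tasks):
            comm.send(tasks[next_task], dest=worker, tag=TAG_TASK)
            next_task += 1
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
            active -= 1
else:
    # Worker: receive a command, run it, report completion, repeat.
    while True:
        status = MPI.Status()
        cmd = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        subprocess.run(cmd, shell=True)
        comm.send(rank, dest=0, tag=TAG_DONE)
```

Because workers pull the next task only after finishing the previous one, the load stays balanced even when individual analysis steps take very different amounts of time, which is the point of a queue-based scheme compared with splitting the task list up front.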