Commit 83deb81b authored by Peter-Bernd Otte's avatar Peter-Bernd Otte

Merge branch 'master' of

parents 5728586c a17edf3b
```
mpirun -n 4 ./ -v date
```

or for multiple nodes:

```
srun -n 4 ./ -v date
```

A full example with allocation and modules:

```
salloc -p parallel --reservation=himkurs -A m2_himkurs -N 1 -t 1:00:00
module load math/SUNDIALS/2.7.0-intel-2018.03
module load lang/Python/3.6.6-foss-2018b
srun -n 20 ~/workload-manager/ -v ~/workload-manager/examples/LGS/PulsedLGS ~/workload-manager/examples/LGS/Run27_LaPalma_Profile_I50
```
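Conceptually, the manager hands tasks from a central queue to whichever worker is idle. A minimal sketch of that master-worker queue scheme, with Python threads standing in for MPI ranks (task names and the `run_tasks` helper are made up for illustration):

```python
# Sketch of queue-based work distribution: workers pull tasks from a
# shared queue until it is empty, so fast workers simply take more tasks.
import queue
import threading

def run_tasks(tasks, n_workers=4):
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    done = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()   # pull the next pending task
            except queue.Empty:
                return               # queue drained: this worker is finished
            result = f"processed {t}"  # stand-in for one analysis step
            with lock:
                done.append(result)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done

print(sorted(run_tasks([f"run{i:02d}" for i in range(8)])))
```

In the real tool the workers are MPI ranks spawned by `srun`/`mpirun` and each task is one invocation of the user's analysis command.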
With the loader (untested on the cluster so far), replace `` with ``.
## Hints
### When to Use
- Single fast analysis step (e.g. your analysis file runs for only a minute)
- Thousands of single analysis steps or more
- Using all cores in node-exclusive partitions (Mogon 2; not on HIMster 2)
### Comparison
- Queue-based work distribution with equal load balancing (in contrast to SLURM multiprog or [staskfarm]() from the [MogonWiki Node local scheduling]() page)
- Usage of MPI:
  - large connected jobs (>200 cores) are preferred by the job manager
  - efficiently supports both node-local and multi-node usage
  - keeps the environment, also in multi-node situations (GNU parallel does this only node-locally)
- Usage of Python makes changes simple for users
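The load-balancing point can be made concrete: with one long task among many short ones, a static round-robin split (as SLURM multiprog does) leaves one worker holding the long task plus extra short ones, while a queue lets finished workers take over the remainder. A toy comparison of the resulting makespans, with invented durations:

```python
# Compare makespan (finish time of the slowest worker) for a static
# round-robin split vs. dynamic pull-from-queue assignment.
import heapq

durations = [9, 1, 1, 1, 1, 1, 1, 1]  # one long job among short ones (made up)
n_workers = 4

# Static round-robin: worker i gets tasks i, i+n, i+2n, ...
static = max(sum(durations[i::n_workers]) for i in range(n_workers))

# Dynamic queue: the earliest-free worker always takes the next task.
finish = [0] * n_workers          # per-worker finish times
heapq.heapify(finish)
for d in durations:
    t = heapq.heappop(finish)     # earliest-free worker pulls the task
    heapq.heappush(finish, t + d)
dynamic = max(finish)

print(static, dynamic)  # → 10 9: the queue shaves the imbalance off
```

The gap grows with the spread of task runtimes, which is why the queue scheme pays off for heterogeneous analysis steps.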