PREP, TRACE, and POST

Assumptions:

  • Job scheduling on the cluster is managed by PBS (Portable Batch System).

Prerequisites:

  • the environment variable PREP_PATH must point to the directory containing the PREP executable

  • your PYTHONPATH must contain the directory containing trace.py (see the example below)
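
For example (the paths below are placeholders for your actual install
locations):

$ export PREP_PATH=/path/to/prep/bin
$ export PYTHONPATH=$PYTHONPATH:/path/to/directory/containing/trace.py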

Prerequisites for use on a cluster:

  • for batch mode: /etc/CLUSTERNAME is an ASCII file containing the name of the cluster

  • for interactive mode: the environment variable CLUSTERNAME has to be set (see the example below)
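
For example, for a hypothetical cluster named "mycluster" (the name is a
placeholder):

$ cat /etc/CLUSTERNAME    # batch mode: the file holds the cluster name
mycluster
$ export CLUSTERNAME=mycluster    # interactive mode: set the variable yourself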

Start PREP on clusters and workstations

$ prep.py --pyhelp

Usage:  /home/mors_ch/venv/mojo-develop/bin/prep.py  OPTIONS

 -np    --numberOfProcesses   Total number of processes [mandatory]
 -q     --queue               PBS queue to which the job will be submitted
                              (only on clusters).
 -l     --logFile             Filename to which the processes' stdout and
                              stderr will be written.
 -e     --executable          Executable to start. If not given, the
                              corresponding project path will be searched.
 -j     --jobname             Name for the job that will appear in the PBS
                              queue. This also determines the names of the
                              output and script files.
 -n     --nodetype            Select a certain node type: either 'w'/'westmere',
                              'ib'/'ivybridge' or 'sb'/'sandybridge'. (default:
                              'ivybridge')
 -w     --waitUntilFinished   Waits until the job is finished before returning.
        --version             Prints the version info.
 -h     --help                Prints help messages of executable and python
                              script.
 -?     --pyhelp              Prints python command line options only.
 -ppn   --numberOfProcessesPerNode
                              Number of processes per node (e.g. on a
                              cluster). If not specified, a default will be
                              calculated.
 -wt    --wallTime            Wall time required for this job. Must be smaller
                              than the maximum wall time of the queue. Format:
                              [[[days:]hours:]minutes:]seconds[.milliseconds].
 -di    --dependJobID         The job starts only after the job with this ID
                              has finished.
 -A     --account             Account for cluster costs.
 -m     --mailAddress         Email address for status information (via local
                              mailing agent or SLURM).
 -sw    --slurmSwitches       <count>[@minutes] requests a node allocation
                              with at most <count> network switches from
                              SLURM, waiting up to <minutes> minutes.
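
For example, a sketch of a typical submission (queue name, job name, account,
wall time, and log file below are placeholders; only -np is mandatory):

$ prep.py -np 96 -ppn 24 -q workq -j my_prep -wt 2:00:00 -A my_account -l prep.log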

Start TRACE on clusters and workstations

$ ./trace.py --pyhelp
bash: line 1: ./trace.py: No such file or directory
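
The ./ invocation fails because trace.py is not in the current working
directory. With trace.py reachable via your PATH (see the prerequisites
above), the call presumably mirrors prep.py and post.py:

$ trace.py --pyhelp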

Start POST on clusters and workstations

$ post.py --pyhelp

Usage:  /home/mors_ch/venv/mojo-develop/bin/post.py  OPTIONS

 -np    --numberOfProcesses   Total number of processes [mandatory]
 -q     --queue               PBS queue to which the job will be submitted
                              (only on clusters).
 -l     --logFile             Filename to which the processes' stdout and
                              stderr will be written.
 -e     --executable          Executable to start. If not given, the
                              corresponding project path will be searched.
 -j     --jobname             Name for the job that will appear in the PBS
                              queue. This also determines the names of the
                              output and script files.
 -n     --nodetype            Select a certain node type: either 'w'/'westmere',
                              'ib'/'ivybridge' or 'sb'/'sandybridge'. (default:
                              'ivybridge')
 -w     --waitUntilFinished   Waits until the job is finished before returning.
        --version             Prints the version info.
 -h     --help                Prints help messages of executable and python
                              script.
 -?     --pyhelp              Prints python command line options only.
 -ppn   --numberOfProcessesPerNode
                              Number of processes per node (e.g. on a
                              cluster). If not specified, a default will be
                              calculated.
 -wt    --wallTime            Wall time required for this job. Must be smaller
                              than the maximum wall time of the queue. Format:
                              [[[days:]hours:]minutes:]seconds[.milliseconds].
 -di    --dependJobID         The job starts only after the job with this ID
                              has finished.
 -A     --account             Account for cluster costs.
 -m     --mailAddress         Email address for status information (via local
                              mailing agent or SLURM).
 -sw    --slurmSwitches       <count>[@minutes] requests a node allocation
                              with at most <count> network switches from
                              SLURM, waiting up to <minutes> minutes.
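
For example, a sketch that chains POST behind an earlier job and blocks until
it finishes (the job ID and mail address are placeholders):

$ post.py -np 24 -di 123456 -m user@example.com -w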