Brief Introduction to HPC Computing
A brief introduction to the usage of the HPC facilities, targeted at new and inexperienced HPC users, is given below. The introduction is based on various minimal examples that illustrate how to compile serial and parallel programs as well as how to submit and monitor actual jobs using SGE.
A simple serial program
write/compile a simple non-parallel program (a minimal sketch is given below, after this list)
where and how to compile
how to include libraries
submit/monitor jobs
specifying resources
single jobs
job arrays
basic error-tracking
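To give these items some substance before turning to the parallel example below, a minimal serial sketch follows. The file names, module version and resource values used here are placeholders chosen for illustration only; adapt them to your own setup. A trivial non-parallel program, say myHelloWorld_serial.c, might read

#include <stdio.h>

int main(int argc, char *argv[]) {
  /* minimal serial example program */
  printf("Hello from a serial job\n");
  return 0;
}

which, after loading a gcc module (as done for the parallel example later on), compiles via

module load gcc/4.7.1
gcc myHelloWorld_serial.c -o myHelloWorld_serial

A corresponding job submission script, say myProg_serial.sge, could look like

#!/bin/bash
####### which shell to use
#$ -S /bin/bash
####### change to directory where job was submitted from
#$ -cwd
####### maximum walltime of the job (hh:mm:ss)
#$ -l h_rt=0:10:0
####### memory per job slot
#$ -l h_vmem=100M
####### name of the job
#$ -N serial_test

./myHelloWorld_serial

and is submitted via qsub myProg_serial.sge and monitored via qstat. To turn this single job into a job array of, e.g., 10 independent tasks, add the directive #$ -t 1-10 to the script and distinguish the individual tasks inside the script by means of the environment variable $SGE_TASK_ID. Basic error-tracking then amounts to inspecting the <jobName>.e<jobId> and <jobName>.o<jobId> files written by SGE, and to querying the accounting records via qacct -j <jobId> once the job has finished.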
A simple parallel program
compile simple parallel program
example using openMpi
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
  int numprocs, rank, namelen;
  char processor_name[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(processor_name, &namelen);

  printf("Process %d (out of %d) on host %s\n", rank, numprocs, processor_name);

  MPI_Finalize();
}
Load the proper openMpi environment and the respective compiler:
module unload gcc
module load gcc/4.7.1
module load openmpi/1.6.2/gcc/64/4.7.1
compile via:
mpicc myHelloWorld_openMpi.c -o myHelloWorld_openMpi
(You might first check whether mpicc indeed refers to the desired compiler by typing which mpicc, which in this case yields
/cm/shared/apps/openmpi/1.6.2/gcc/64/4.7.1/bin/mpicc
so everything is fine and the stage is properly set!)
In order to submit the job via SGE, specifying a parallel environment (PE) that fits your choice (here: openMpi), you might use the following job submission script, called myProg_openMpi.sge:
#!/bin/bash

####### which shell to use
#$ -S /bin/bash

####### change to directory where job was submitted from
#$ -cwd

####### maximum walltime of the job (hh:mm:ss)
#$ -l h_rt=0:10:0

####### memory per job slot
#$ -l h_vmem=1000M

####### disk space
#$ -l h_fsize=1G

####### which parallel environment to use, and number of slots
#$ -pe openmpi 13

####### enable resource reservation (to prevent starving of parallel jobs)
#$ -R y

####### name of the job
#$ -N openMpi_test

module unload gcc
module load gcc/4.7.1
module load openmpi/1.6.2/gcc/64/4.7.1

mpirun --mca btl ^openib,ofud -machinefile $TMPDIR/machines -n $NSLOTS ./myHelloWorld_openMpi
Most of the resource allocation statements should look familiar to you. However, note that a few of them are required to ensure a proper submission of parallel jobs. E.g., you need to take care to use the proper PE: in the job submission script this is done by means of the statement
#$ -pe <parallel_environment> <num_slots>
wherein <parallel_environment> refers to the type of PE that fits your application and <num_slots> specifies the number of slots desired for the parallel job. Here we decided to use openMpi, hence the proper PE reads openmpi. Further, in the above example 13 slots are requested.
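If you are unsure which PEs are actually configured on the system at hand, the standard SGE tools can list and display them; note that PE names such as openmpi are site specific, so the output will differ between clusters:

qconf -spl           # list the names of all configured parallel environments
qconf -sp openmpi    # show the configuration (e.g. the allocation rule) of a particular PE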
Now, typing
qsub myProg_openMpi.sge
submits the job, which in my case was assigned the jobId 704398. Once the job starts to run, it is possible to infer from which hosts the 13 requested slots are accumulated by typing qstat -g t, which in my case yields
job-ID  prior    name        user      state  submit/start at      queue                   master  ja-task-ID
----------------------------------------------------------------------------------------------------------
 704398 0.50735  openMpi_te  alxo9476  r      05/15/2013 09:54:23  mpc_std_shrt.q@mpcs002  MASTER
                                                                   mpc_std_shrt.q@mpcs002  SLAVE
                                                                   mpc_std_shrt.q@mpcs002  SLAVE
 704398 0.50735  openMpi_te  alxo9476  r      05/15/2013 09:54:23  mpc_std_shrt.q@mpcs004  SLAVE
                                                                   mpc_std_shrt.q@mpcs004  SLAVE
                                                                   mpc_std_shrt.q@mpcs004  SLAVE
                                                                   mpc_std_shrt.q@mpcs004  SLAVE
                                                                   mpc_std_shrt.q@mpcs004  SLAVE
 704398 0.50735  openMpi_te  alxo9476  r      05/15/2013 09:54:23  mpc_std_shrt.q@mpcs006  SLAVE
                                                                   mpc_std_shrt.q@mpcs006  SLAVE
 704398 0.50735  openMpi_te  alxo9476  r      05/15/2013 09:54:23  mpc_std_shrt.q@mpcs008  SLAVE
                                                                   mpc_std_shrt.q@mpcs008  SLAVE
                                                                   mpc_std_shrt.q@mpcs008  SLAVE
                                                                   mpc_std_shrt.q@mpcs008  SLAVE
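Apart from qstat -g t, the full scheduling and resource details of a single job can be displayed by passing its jobId, which here would read

qstat -j 704398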
Meanwhile the job has terminated successfully, and four files were created:
openMpi_test.e704398 openMpi_test.o704398 openMpi_test.pe704398 openMpi_test.po704398
In detail they contain:
- openMpi_test.po704398: the hostfile for the job, which can be found in the spool directory for the MASTER process (which in this case is mpcs002), reading
-catch_rsh /cm/shared/apps/sge/current/default/spool/mpcs002/active_jobs/704398.1/pe_hostfile
mpcs002.mpinet.cluster
mpcs002.mpinet.cluster
mpcs004.mpinet.cluster
mpcs004.mpinet.cluster
mpcs004.mpinet.cluster
mpcs004.mpinet.cluster
mpcs004.mpinet.cluster
mpcs006.mpinet.cluster
mpcs006.mpinet.cluster
mpcs008.mpinet.cluster
mpcs008.mpinet.cluster
mpcs008.mpinet.cluster
mpcs008.mpinet.cluster
- openMpi_test.pe704398: nothing (which is good!)
- openMpi_test.o704398: the (expected) program output, reading
Process 7 (out of 13) on host mpcs006
Process 8 (out of 13) on host mpcs006
Process 6 (out of 13) on host mpcs004
Process 3 (out of 13) on host mpcs004
Process 5 (out of 13) on host mpcs004
Process 2 (out of 13) on host mpcs004
Process 4 (out of 13) on host mpcs004
Process 10 (out of 13) on host mpcs008
Process 12 (out of 13) on host mpcs008
Process 9 (out of 13) on host mpcs008
Process 11 (out of 13) on host mpcs008
Process 1 (out of 13) on host mpcs002
Process 0 (out of 13) on host mpcs002
- openMpi_test.e704398: if there are N hosts involved in running your application (here: N=4), there should be N-1 harmless error messages of the form
bash: module: line 1: syntax error: unexpected end of file
bash: error importing function definition for `module'
bash: module: line 1: syntax error: unexpected end of file
bash: error importing function definition for `module'
bash: module: line 1: syntax error: unexpected end of file
bash: error importing function definition for `module'
This is a harmless, well-known and documented error of the SGE version (6.2u5) used on the local HPC facilities, which you may safely ignore.
example using Intel MPI (a minimal sketch is given after this list)
submit/monitor jobs
specify resources
basic error-tracking
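For the sake of completeness, a rough sketch of the Intel MPI case is given below. The module name and the PE name used here are purely hypothetical placeholders; check the actual names on your system via module avail and qconf -spl. Compilation would use the Intel MPI compiler wrapper, e.g.

module load intel/impi                              # placeholder module name
mpiicc myHelloWorld_intelMpi.c -o myHelloWorld_intelMpi

and in the job submission script the PE request and the mpirun call would change accordingly, e.g.

#$ -pe intelmpi 13                                  # placeholder PE name
mpirun -n $NSLOTS ./myHelloWorld_intelMpi

Submission, monitoring and basic error-tracking then proceed exactly as in the openMpi example above (qsub, qstat -g t, and the .o/.e/.po/.pe output files).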
Misc
Importance of specifying reasonable resources
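One way to arrive at reasonable resource requests (h_rt, h_vmem, h_fsize) is to check what a comparable, already finished job actually consumed. Using the jobId of the openMpi example above merely as a placeholder, the SGE accounting records can be queried via

qacct -j 704398

which reports, among other things, the wallclock time, the maximum virtual memory (maxvmem) and the exit status of the job; these values can then guide the resource requests of subsequent, similar jobs.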
how to use local storage for I/O-intensive serial jobs (or parallel jobs that run on a single host)
Consider a situation where your particular application is rather I/O-intensive, so that the speed of your program suffers from the amount of I/O operations that strain the global file system. Examples might be irregular I/O patterns at a fast pace, or an application that has to create, open, close and delete many files. As a remedy, you might benefit from using a local scratch disk of the execution host on which your program actually runs. This reduces the amount of network traffic and hence the strain on the global file system. The subsequent example illustrates how to access and use the local storage on a given host for the purpose of storing data during the runtime of the program. In the example, after the program terminates, the output data is copied to the working directory from which the job was submitted and the local file system on the host is cleaned out. For this matter, consider the exemplary C program myExample_tempdir.c
#include <stdio.h>

int main(int argc, char *argv[]) {
  FILE *myFile;

  myFile = fopen("my_data/myData.out", "w");
  fprintf(myFile, "Test output to local scratch directory\n");
  fclose(myFile);
}
which, just for the sake of argument (and to fully explain the job submission script below), is contained in the working directory
$HOME/wmwr/my_examples/tempdir_example/
The program assumes that there is a directory my_data in the current working directory to which the file myData.out with a certain content (here the sequence of characters Test output to local scratch directory) will be written.
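As an aside (and not required for the remainder of the example), a slightly more defensive variant of the program might create the directory my_data itself and check whether the output file could actually be opened; the following is merely a sketch of what such error handling could look like:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(int argc, char *argv[]) {
  FILE *myFile;

  /* create the output directory if it does not yet exist (mode 0755) */
  mkdir("my_data", 0755);

  myFile = fopen("my_data/myData.out", "w");
  if (myFile == NULL) {
    fprintf(stderr, "Error: could not open my_data/myData.out for writing\n");
    return 1;
  }
  fprintf(myFile, "Test output to local scratch directory\n");
  fclose(myFile);
  return 0;
}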
In order to compile the program via the current gcc compiler, you could first set the stage by loading the proper modules, e.g.,
module clear
module load sge
module load gcc/4.7.1
and then compile via
gcc myExample_tempdir.c -o myExample_tempdir
to yield the binary myExample_tempdir.
At this point, bear in mind that we do not want to execute the binary by hand right away! Instead, we would like to leave it to SGE to determine a proper queue instance (guided by the resources we subsequently specify for the job) on a host with at least one free slot, where the job will be executed. A proper job submission script, here called myProg_tempdir.sge, that takes care of creating the folder my_data needed by the program myExample_tempdir in order to store its output in a temporary directory on the executing host, reads
#!/bin/bash

####### which shell to use
#$ -S /bin/bash

####### change to directory where job was submitted from
#$ -cwd

####### maximum walltime of the job (hh:mm:ss)
#$ -l h_rt=0:10:0

####### memory per job slot
#$ -l h_vmem=100M

####### since working with local storage, no need to request disk space

####### name of the job
#$ -N tmpdir_test

####### change current working directory to the local /scratch/<jobId>.<x>.<qInst>
####### directory, available as TMPDIR on the executing host with HOSTNAME
cd $TMPDIR

####### write details to <jobName>.o<jobId> output file
echo "HOSTNAME = " $HOSTNAME
echo "TMPDIR = " $TMPDIR

####### create output directory on executing host (parent folder is TMPDIR)
mkdir my_data

####### run program
$HOME/wmwr/my_examples/tempdir_example/myExample_tempdir

####### copy the output to the directory the job was submitted from
cp -a ./my_data $HOME/wmwr/my_examples/tempdir_example/
Note that in the above job submission script there is no need to request disk space by setting the resource h_fsize since we are working with local storage provided by the execution host. Submitting the script via
qsub myProg_tempdir.sge
enqueues the respective job, here having jobId 703914. After successful termination of the job, the folder my_data is copied to the working directory from which the job was originally submitted. Also, the two job status files tmpdir_test.e703914 and tmpdir_test.o703914 were created, which might contain further details associated with the job. The latter file should contain the name of the host on which the job actually ran and the name of the temporary directory. And indeed, cat tmpdir_test.o703914 reveals the file content
HOSTNAME =  mpcs001
TMPDIR =  /scratch/703914.1.mpc_std_shrt.q
Further, the file my_data/myData.out contains the line
Test output to local scratch directory
as expected. Note that the temporary directory $TMPDIR (here: /scratch/703914.1.mpc_std_shrt.q) on the execution host (here: mpcs001) is cleaned out automatically. Finally, note that since $TMPDIR is created on a single host, the procedure outlined above works well only if your application runs on a single host. That is, it is feasible for jobs that either request only a single slot (i.e. non-parallel jobs) or for parallel jobs for which all requested slots fit onto the same host (however, due to the "fill up" allocation rule obeyed by SGE, this cannot be guaranteed in general).
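If you nevertheless want to use $TMPDIR for a parallel job, you need a PE whose allocation rule places all slots on a single host (in SGE terms, a PE configured with allocation_rule $pe_slots). Whether such a PE exists and what it is called depends on the local configuration; assuming, purely as a placeholder, that it is named smp, the corresponding request in the job submission script would read

#$ -pe smp 4

so that all 4 slots, and hence the single $TMPDIR, reside on the same execution host.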