Singularity Examples
Revision as of 10:42, 26 September 2019
Introduction
Here we show some quick examples of how Singularity can be used. If you are unsure about the details, please read the general description of Singularity or the Singularity documentation.
Building and Running a Container with GPU support
The following recipe has been written to work with the CUDA-Toolkit and is based on an Ubuntu distribution environment:
Bootstrap: docker
From: ubuntu:18.04

%post
    apt-get -y update
    apt-get -y install python3-numpy
    apt-get -y install python3-sympy
    apt-get -y install python3-pip
    pip3 install sympy==1.3
    apt-get -y install python3-matplotlib
    apt-get -y install ffmpeg
    apt-get -y install --no-install-recommends --no-install-suggests nvidia-cuda-toolkit
    mkdir /metacode
    mkdir /scripts
    mkdir /staticcode
    mkdir /working
    mkdir /output

%environment

%runscript
    umask go-rwx
To make use of it, you have to build it on your own local Linux system and then transfer it to your cluster directory:
- Save a copy of the recipe shown above to a file:
vim cuda9python3.def # insert the content and save&quit the file
- Build a singularity image from the .def file (You will need ~ 1.3 GB free disk space for this image):
sudo singularity build cuda9python3.simg cuda9python3.def
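Before transferring the image, it can be worth a quick local sanity check. The following sketch assumes the image name from the step above and that the Python packages from the recipe (numpy, sympy) installed correctly; it skips gracefully if Singularity is not available on the current machine.

```shell
# Optional sanity check of the freshly built image on your local machine.
# The import line is an assumption based on the packages in the recipe above.
if command -v singularity >/dev/null 2>&1; then
    singularity exec cuda9python3.simg \
        python3 -c "import numpy, sympy; print('image OK')" | tee singularity_check.log
else
    echo "singularity not installed here; skipping local check" | tee singularity_check.log
fi
```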
- Copy the image file to the cluster
rsync cuda9python3.simg abcd1234@carl.hpc.uni-oldenburg.de:~/<desired_path>
- After logging in, load the modules for Singularity and CUDA:
module load hpc-env/6.4
module load Singularity
module load CUDA-Toolkit
- Navigate to the image folder and start using it:
cd ~/path/to # the folder that contains cuda9python3.simg
singularity exec --nv cuda9python3.simg <your_commands_here>
Please note that using the CUDA-Toolkit requires being logged in to a GPU node. This means that the cluster commands shown above should be written to a batch script and submitted to a GPU partition (e.g. mpcg.p).
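A minimal sketch of such a batch script is shown below. The partition name mpcg.p and the module names come from the text above; the `--gres` and `--time` values and the `my_script.py` placeholder are assumptions you should adapt to your own job.

```shell
# Write a sketch of a batch script for running the container on a GPU node.
cat > cuda9python3_job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=mpcg.p   # GPU partition mentioned above
#SBATCH --gres=gpu:1         # request one GPU (assumed value, adapt as needed)
#SBATCH --time=01:00:00      # adjust to your workload

module load hpc-env/6.4
module load Singularity
module load CUDA-Toolkit

cd ~/path/to                 # the folder that contains the image
# --nv exposes the cluster's GPU driver libraries inside the container;
# "my_script.py" is a placeholder for your own command.
singularity exec --nv cuda9python3.simg python3 my_script.py
EOF
echo "wrote cuda9python3_job.sh (submit with: sbatch cuda9python3_job.sh)"
```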
Also keep in mind that the Singularity option --nv is mandatory if you want to use the GPU functionality: with it, the container can make use of the cluster's GPU driver libraries.