Revision as of 10:23, 26 September 2019
Introduction
Here we show some quick examples of how Singularity can be used. If you are unsure about the details, please read the general description of Singularity or the Singularity documentation.
Building and Running a Container with GPU support
The following recipe has been written to work with the CUDA-Toolkit and is based on an Ubuntu environment:
<pre>
Bootstrap: docker
From: ubuntu:18.04

%post
    apt-get -y update
    apt-get -y install python3-numpy
    apt-get -y install python3-sympy
    apt-get -y install python3-pip
    pip3 install sympy==1.3
    apt-get -y install python3-matplotlib
    apt-get -y install ffmpeg
    apt-get -y install --no-install-recommends --no-install-suggests nvidia-cuda-toolkit
    mkdir /metacode
    mkdir /scripts
    mkdir /staticcode
    mkdir /working
    mkdir /output

%environment

%runscript
    umask go-rwx
</pre>
To make use of it, you have to build the image on your own local Linux system and then transfer it to your cluster directory:
- Copy the recipe shown above into a file:
vim cuda9python3.def # insert the content, then save and quit the file
- Build a Singularity image from the .def file:
sudo singularity build cuda9python3.simg cuda9python3.def # in this case, you will need about 1.3 GB of free disk space
- Copy the image file to the cluster:
rsync cuda9python3.simg abcd1234@carl.hpc.uni-oldenburg.de:~/<desired_path>
- After logging in, load the modules for Singularity and CUDA:
module load Shpc-env/6.4
module load Singularity
module load CUDA-Toolkit
- Navigate to the folder containing the image and start using it:
cd ~/<desired_path>
singularity exec --nv cuda9python3.simg <your_commands_here>
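Once the image is on the cluster, a quick sanity check of the container's Python stack could look like the following sketch (a hypothetical example; the recipe above pins sympy to 1.3 via pip in the %post section):

```shell
# Hypothetical sanity check: confirm the Python packages from the recipe import cleanly
# inside the container and report the pinned sympy version.
singularity exec cuda9python3.simg python3 -c 'import numpy, sympy; print(sympy.__version__)'
```

This should report the sympy version installed by pip (1.3); if the import fails, the image was not built from the recipe above.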
Please note that using the CUDA-Toolkit requires running on a GPU node. The commands shown above should therefore be written into a batch script and submitted to a GPU partition (e.g. mpcg.p).
Also keep in mind that the Singularity option --nv is mandatory for GPU functionality. With it, the container can make use of the cluster's GPU driver libraries.
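The batch-script workflow mentioned above could be sketched as follows; the partition name and image path are taken from this page, while the GPU request syntax and resource limits are assumptions that may need adjusting for your cluster:

```shell
#!/bin/bash
#SBATCH --partition=mpcg.p        # GPU partition named on this page
#SBATCH --gres=gpu:1              # request one GPU (assumed SLURM syntax; check your cluster docs)
#SBATCH --time=0:30:00            # example walltime (assumption)
#SBATCH --job-name=singularity-gpu

# Load the modules as described above
module load Shpc-env/6.4
module load Singularity
module load CUDA-Toolkit

# --nv makes the host GPU driver libraries available inside the container
singularity exec --nv ~/<desired_path>/cuda9python3.simg <your_commands_here>
```

Saved as, e.g., gpu_job.sh, such a script would be submitted with sbatch gpu_job.sh.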