Caffe
Introduction
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
Installed version
The currently installed version is 1.0. The full module name is Caffe/1.0-intel-2016b-CUDA-8.0.61-Python-2.7.12.
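If you are not sure which Caffe versions are available, you can let the module system list them. This is a minimal sketch using the same module command as the rest of this page; the exact output format depends on the local module setup:

 module avail Caffe                                           # list installed Caffe modules
 module load Caffe/1.0-intel-2016b-CUDA-8.0.61-Python-2.7.12  # load a specific version by its full name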
Using Caffe with the HPC cluster
If you want to use Caffe, you will have to load its corresponding module first. You can do that with the following command:
module load Caffe
This takes a few seconds because Caffe has quite a few dependencies. In total, 72 modules will be loaded, including Caffe itself. Python 2.7.12 is loaded as one of these dependencies, so it is not necessary to load it separately. It is currently unknown whether other, newer versions of Python would also work.
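To verify that the environment is set up correctly, you can check the loaded modules and try importing the Python bindings. This is a minimal sketch, assuming the caffe Python package provided by the module exposes __version__ as in upstream Caffe 1.0:

 module load Caffe
 module list                                          # Caffe/1.0-... and Python/2.7.12 should appear in the list
 python -c "import caffe; print(caffe.__version__)"   # should print the Caffe version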
Important Note: If you want to work with Caffe you have to request a GPU in your jobscript. Otherwise you will end up with an error like this:
(38 vs 0) No CUDA capable device found
Requesting a GPU is easy and can be done by adding the following line to your jobscript:
#SBATCH --gres=gpu:1
This will request one GPU for your job. Since we have nodes with multiple GPUs, you can request up to 2 GPUs for your job. The request is per node, so it is not possible to request several nodes with different numbers of GPUs.
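Putting this together, a jobscript for a Caffe job could look like the following minimal sketch. The job name, walltime, and memory values are placeholders; only the GPU request and the module load line are specific to Caffe. The caffe device_query call simply checks that the assigned GPU is visible and will fail with the CUDA error shown above if no GPU was granted:

 #!/bin/bash
 #SBATCH --job-name=caffe-test   # placeholder job name
 #SBATCH --gres=gpu:1            # request one GPU (up to 2 per node are possible)
 #SBATCH --time=00:10:00         # placeholder walltime, adjust as needed
 #SBATCH --mem=4G                # placeholder memory request, adjust as needed
 
 module load Caffe               # loads Caffe 1.0 and its dependencies, including Python 2.7.12
 
 # query the first GPU assigned to this job
 caffe device_query -gpu 0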
Documentation
Several guides, tutorials, and examples can be found on the official homepage of Caffe.