Revision as of 10:59, 8 February 2018
Introduction
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
Why Caffe?
- Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine then deploy to commodity clusters or mobile devices.
- Extensible code fosters active development. In Caffe’s first year, it has been forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state-of-the-art in both code and models.
- Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That's 1 ms/image for inference and 4 ms/image for learning; more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.
- Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and GitHub.
Installed version
The currently installed version is 1.0. The full module name is Caffe/1.0-intel-2016b-CUDA-8.0.61-Python-2.7.12.
Using Caffe with the HPC cluster
If you want to use Caffe, you will have to load its corresponding module first. You can do that with the following command:
module load Caffe
This takes a few seconds because Caffe has quite a few dependencies: in total, 72 modules will be loaded, including Caffe itself. Python 2.7.12 is loaded as part of this, so it is not necessary to load it separately. It is currently unknown whether other, newer versions of Python would also work.
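A minimal interactive session might look like this (a sketch: the full module name is taken from the installed version above, and it assumes the module sets up the pycaffe Python bindings, as such modules typically do):

```shell
# Load Caffe together with all of its dependencies,
# including Python 2.7.12
module load Caffe/1.0-intel-2016b-CUDA-8.0.61-Python-2.7.12

# Check what was loaded
module list

# Verify that the Python bindings are visible
python -c "import caffe; print(caffe.__version__)"
```

Using the full module name instead of plain `module load Caffe` pins the exact toolchain, which makes jobs reproducible if additional Caffe versions are installed later.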
Important Note: If you want to work with Caffe, you have to request a GPU in your jobscript. Otherwise you will end up with an error like this:
(38 vs 0) No CUDA capable device found
Requesting a GPU is easy and can be done by adding the following line to your jobscript:
#SBATCH --gres=gpu:1
This will request one GPU for your job. Since we have nodes with multiple GPUs, you can request up to 2 GPUs per job. The request is per node, so it is not possible to request several nodes with different numbers of GPUs.
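Putting the pieces together, a complete jobscript might look like the sketch below. The job name, walltime, and solver file (`solver.prototxt`) are hypothetical placeholders; only the `--gres=gpu:1` line and the module name come from this page.

```shell
#!/bin/bash
#SBATCH --job-name=caffe-train    # hypothetical job name
#SBATCH --gres=gpu:1              # request one GPU (up to gpu:2 per node)
#SBATCH --time=01:00:00           # hypothetical walltime; adjust as needed

# Load Caffe and its dependencies (includes Python 2.7.12)
module load Caffe/1.0-intel-2016b-CUDA-8.0.61-Python-2.7.12

# Train a model defined entirely by configuration files,
# using the first of the allocated GPUs
caffe train --solver=solver.prototxt --gpu 0
```

Submit it with `sbatch jobscript.sh`; without the `--gres` line, the `caffe` binary would fail with the CUDA error shown above.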
Documentation
Several different sets of documentation can be found on the official homepage of Caffe.