Caffe
Latest revision as of 11:01, 8 February 2018
Introduction
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
Why Caffe?
- Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag, so you can train on a GPU machine and then deploy to commodity clusters or mobile devices.
- Extensible code fosters active development. In Caffe’s first year, it was forked by over 1,000 developers, who contributed many significant changes back. Thanks to these contributors, the framework tracks the state of the art in both code and models.
- Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That’s 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.
- Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and GitHub.
Installed version
The currently installed version is 1.0. The full module name is Caffe/1.0-intel-2016b-CUDA-8.0.61-Python-2.7.12.
Using Caffe with the HPC cluster
If you want to use Caffe, you will have to load its corresponding module first. You can do that with the following command:
module load Caffe
This takes a few seconds because Caffe has quite a few dependencies. In total, 72 modules will be loaded, including Caffe itself. Python 2.7.12 will be loaded as well, so it is not necessary to load it separately. It is currently unknown whether other, newer versions of Python would also work.
Important Note: If you want to work with Caffe, you have to request a GPU in your jobscript. Otherwise you will end up with an error like this:
(38 vs 0) No CUDA capable device found
Requesting a GPU is easy and can be done by adding the following line to your jobscript:
#SBATCH --gres=gpu:1
This will request one GPU for your job. Since we have nodes with multiple GPUs, you can request up to 2 GPUs per job. The request is per node, so it is not possible to request several nodes with different numbers of GPUs.
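Putting the pieces together, a complete jobscript might look like the sketch below. The job name, time limit, and solver file name are placeholder assumptions; adapt them to your own job. The script is written to a file here so its contents can be checked before submission.

```shell
# Write a minimal example jobscript to caffe_job.sh.
# The job name, time limit, and solver.prototxt are placeholder assumptions.
cat > caffe_job.sh <<'EOF'
#!/bin/bash
# Name shown in the queue (assumption; pick your own)
#SBATCH --job-name=caffe-train
# Request one GPU (use gpu:2 for two GPUs on one node)
#SBATCH --gres=gpu:1
# Assumed time limit; adjust to your job
#SBATCH --time=01:00:00

# Load Caffe and its dependency modules (including Python 2.7.12)
module load Caffe

# Train a model; solver.prototxt is a hypothetical solver definition
caffe train -solver solver.prototxt
EOF
cat caffe_job.sh
```

The script would then be submitted with sbatch caffe_job.sh.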
Documentation
Several guides, notebook examples and command-line examples can be found on the official homepage of Caffe.
Citing Caffe
Please cite Caffe in your publications if it helps your research:
@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}
If you do publish a paper where Caffe helped your research, we encourage you to cite the framework for tracking by Google Scholar.