Gurobi 2016

Revision as of 13:24, 30 September 2022 by Schwietzer


Gurobi Optimizer is designed to be a fast and powerful solver for LP, QP, QCP, and MIP problems. More information can be found on the Gurobi webpage. Read the Quickstart Guide for examples of how to use Gurobi.

Please note that Gurobi Optimizer is commercial software. The academic license available at the University of Oldenburg may only be used for research and teaching purposes. If you have used Gurobi in your research, you can acknowledge this by citing

@misc{gurobi,
  author = "Gurobi Optimization, Inc.",
  title = "Gurobi Optimizer Reference Manual",
  year = 2015,
  url = ""
}

Installed Version

The currently installed version is 9.5.2 (older versions may still be visible, but they will probably no longer work because their licenses have expired).

Using Gurobi

In order to use Gurobi Optimizer you need to load the module with the command

module load Gurobi

This sets up the environment to run e.g. the command-line tool. To get some help type

gurobi_cl --help

The general format for the command-line tool is

gurobi_cl [--command]* [param=value]* filename

where filename contains a model for optimization. Using one of the examples described in the Quickstart Guide, a working command could look like this:

gurobi_cl Threads=1 $GUROBI_HOME/examples/data/coins.lp

Here, $GUROBI_HOME is set by the module environment and coins.lp is an example model. The option Threads=1 sets the parameter Threads to 1. This is an important option when you run Gurobi on the cluster: if it is not used, Gurobi may use all available cores on a compute node, which could interfere with other jobs (and should therefore be avoided!). The command above runs very quickly and produces a few lines of output. In addition, a file gurobi.log is created (or appended to if it already exists).
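If you also want the solution values written to a file rather than just summarized in the log, the ResultFile parameter can be added to the command above. This is only a sketch: the output name coins.sol is an example, and the .sol suffix selects Gurobi's solution file format.

```shell
# Solve the example model single-threaded and write the solution to coins.sol
# (coins.sol is a hypothetical output name; the .sol suffix selects the solution format)
gurobi_cl Threads=1 ResultFile=coins.sol $GUROBI_HOME/examples/data/coins.lp
```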

Using Gurobi with the HPC Cluster

Since many people work with the HPC cluster, it is important that everyone has an equal chance to do so. Therefore, every job should be processed by SLURM. A job script could look like this:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=2G                  
#SBATCH --time=0-2:00  
#SBATCH --job-name GUROBI-TEST              
#SBATCH --output=gurobi-test.%j.out        
#SBATCH --error=gurobi-test.%j.err          
module load Gurobi
gurobi_cl Threads=1 $GUROBI_HOME/examples/data/coins.lp
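
Assuming the script above is saved as gurobi-test.job (the file name is an assumption, any name works), it can be submitted and monitored with the usual SLURM commands:

```shell
# Submit the job script to SLURM; sbatch prints the assigned job ID
sbatch gurobi-test.job
# Check the state of your queued and running jobs
squeue -u $USER
```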

This will simply process an example file provided by the developers of Gurobi. You will find output like this in your gurobi-test.JOBID.out file:

Set parameter Threads to value 1

Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)
Copyright (c) 2022, Gurobi Optimization, LLC

Read LP format model from file /cm/shared/uniol/software/Gurobi/9.5.2/examples/data/coins.lp
Reading time = 0.01 seconds
: 4 rows, 9 columns, 16 nonzeros
Optimize a model with 4 rows, 9 columns and 16 nonzeros
Variable types: 4 continuous, 5 integer (0 binary)
Coefficient statistics:
  Matrix range     [6e-02, 7e+00]
  Objective range  [1e-02, 1e+00]
  Bounds range     [5e+01, 1e+03]
  RHS range        [0e+00, 0e+00]
Found heuristic solution: objective -0
Presolve removed 1 rows and 5 columns
Presolve time: 0.00s
Presolved: 3 rows, 4 columns, 9 nonzeros
Variable types: 0 continuous, 4 integer (0 binary)

Root relaxation: objective 1.134615e+02, 3 iterations, 0.00 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0  113.46154    0    1   -0.00000  113.46154      -     -    0s
H    0     0                     113.4500000  113.46154  0.01%     -    0s

Explored 0 nodes (3 simplex iterations) in 0.00 seconds
Thread count was 1 (of 24 available processors)

Solution count 2: 113.45 -0 
Pool objective bound 113.45

Optimal solution found (tolerance 1.00e-04)
Best objective 1.134500000000e+02, best bound 1.134500000000e+02, gap 0.0000%

Running Gurobi in Parallel

Gurobi can use multiple cores. This is possible by adding the line

#SBATCH --cpus-per-task=24

to the job script example above. The number of CPUs (meaning cores) per task can be set to any number up to the number of available cores on a single node (which is 24 for most nodes).

The command to run Gurobi should then be modified to use the matching number of threads:

... Threads=$SLURM_CPUS_PER_TASK ...
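
Note that SLURM only sets SLURM_CPUS_PER_TASK when --cpus-per-task was requested; outside a job (or without that option) the variable is empty and Threads would be left unset. A defensive sketch that falls back to a single thread in that case:

```shell
#!/bin/bash
# SLURM_CPUS_PER_TASK is only set when --cpus-per-task was requested;
# fall back to 1 thread so Gurobi never grabs all cores by accident.
THREADS=${SLURM_CPUS_PER_TASK:-1}
gurobi_cl Threads=$THREADS $GUROBI_HOME/examples/data/coins.lp
```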

Using Gurobi on your local workstation

If you want to use Gurobi Optimizer on your own computer (within the university network), you can do so. Follow the instructions in the Quickstart Guide to install Gurobi 9.5.2 (you need to create an account to download the software). Instead of retrieving your own named license, you can install a license file named gurobi.lic in the location described in the guide. The license file must contain the following two lines:


The full documentation can be found here.