Gurobi 2016
Latest revision as of 12:24, 30 September 2022

Introduction

Gurobi Optimizer is designed to be a fast and powerful solver for LP, QP, QCP, and MIP problems. More information can be found on the Gurobi webpage. Read the Quickstart Guide for examples of how to use Gurobi.

Please note that Gurobi Optimizer is commercial software. The academic license available at the University of Oldenburg may only be used for research and teaching purposes. If you have used Gurobi in your research, you can acknowledge this by citing

@misc{gurobi,
  author = "Gurobi Optimization, Inc.",
  title = "Gurobi Optimizer Reference Manual",
  year = 2015,
  url = "http://www.gurobi.com"
}

Installed Version

The currently installed version is 9.5.2 (older versions you may still see listed will probably no longer work, since their licenses have expired).

Using Gurobi

In order to use Gurobi Optimizer you need to load the module with the command

module load Gurobi

This sets up the environment to run, for example, the command-line tool. To get some help, type

gurobi_cl --help

The general format for the command-line tool is

gurobi_cl [--command]* [param=value]* filename

where filename contains a model for optimization. Using one of the examples described in the Quickstart Guide, a working command could look like this:

gurobi_cl Threads=1 $GUROBI_HOME/examples/data/coins.lp

Here, $GUROBI_HOME is set by the module environment and coins.lp is an example model. The option Threads=1 sets the parameter Threads to 1. This is an important option when you run Gurobi on the cluster: if it is not used, Gurobi may use all available cores on a compute node, which could interfere with other jobs (and therefore should be avoided!). The command above runs very quickly and produces a few lines of output. In addition, a file gurobi.log is created (or appended to if it already exists).
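The files in the examples directory use the plain-text LP format, so you can also write small models of your own and pass them to gurobi_cl. A minimal sketch, with a made-up model and the hypothetical filename mymodel.lp (the gurobi_cl call is shown as a comment, since it only works once the module is loaded):

```shell
#!/bin/bash
# Write a tiny illustrative model in LP format:
# maximize x + 2y subject to x + y <= 4, with simple bounds.
cat > mymodel.lp <<'EOF'
Maximize
  x + 2 y
Subject To
  c0: x + y <= 4
Bounds
  0 <= x <= 3
  0 <= y <= 3
End
EOF
# With the Gurobi module loaded, solve it on a single thread:
# gurobi_cl Threads=1 mymodel.lp
```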

Using Gurobi with the HPC Cluster

Since there are many people working with the HPC cluster, it's important that everyone has a fair chance to do so. Therefore, every job should be processed by SLURM. A job script could look like this:

#!/bin/bash
               
#SBATCH --ntasks=1                  
#SBATCH --mem=2G                  
#SBATCH --time=0-2:00  
#SBATCH --job-name GUROBI-TEST              
#SBATCH --output=gurobi-test.%j.out        
#SBATCH --error=gurobi-test.%j.err          
 
module load Gurobi
gurobi_cl Threads=1 $GUROBI_HOME/examples/data/coins.lp

This will simply process an example file provided by the Gurobi developers. You will find output like this in your gurobi-test.JOBID.out file:

Set parameter Threads to value 1

Gurobi Optimizer version 7.0.2 build v7.0.2rc1 (linux64)
Copyright (c) 2017, Gurobi Optimization, Inc.

Read LP format model from file /cm/shared/uniol/software/Gurobi/9.5.2/examples/data/coins.lp
Reading time = 0.01 seconds
: 4 rows, 9 columns, 16 nonzeros
Optimize a model with 4 rows, 9 columns and 16 nonzeros
Variable types: 4 continuous, 5 integer (0 binary)
Coefficient statistics:
  Matrix range     [6e-02, 7e+00]
  Objective range  [1e-02, 1e+00]
  Bounds range     [5e+01, 1e+03]
  RHS range        [0e+00, 0e+00]
Found heuristic solution: objective -0
Presolve removed 1 rows and 5 columns
Presolve time: 0.00s
Presolved: 3 rows, 4 columns, 9 nonzeros
Variable types: 0 continuous, 4 integer (0 binary)

Root relaxation: objective 1.134615e+02, 3 iterations, 0.00 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0  113.46154    0    1   -0.00000  113.46154      -     -    0s
H    0     0                     113.4500000  113.46154  0.01%     -    0s

Explored 0 nodes (3 simplex iterations) in 0.00 seconds
Thread count was 1 (of 24 available processors)

Solution count 2: 113.45 -0 
Pool objective bound 113.45

Optimal solution found (tolerance 1.00e-04)
Best objective 1.134500000000e+02, best bound 1.134500000000e+02, gap 0.0000%
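To run this example yourself, save the job script to a file and hand it to SLURM. A minimal sketch (the filename gurobi-test.sh is arbitrary; the sbatch and squeue calls are commented out because they only work on a cluster login node):

```shell
#!/bin/bash
# Save the job script shown above under a name of your choice.
# The heredoc is quoted so $GUROBI_HOME stays literal in the file.
cat > gurobi-test.sh <<'EOF'
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=2G
#SBATCH --time=0-2:00
#SBATCH --job-name GUROBI-TEST
#SBATCH --output=gurobi-test.%j.out
#SBATCH --error=gurobi-test.%j.err

module load Gurobi
gurobi_cl Threads=1 $GUROBI_HOME/examples/data/coins.lp
EOF
# On a login node you would then submit and monitor the job:
# sbatch gurobi-test.sh
# squeue -u $USER
```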

Running Gurobi in Parallel

This is possible by adding the line

#SBATCH --cpus-per-task=24

to the job script example above. The number of CPUs (meaning cores) per task can be set to any number up to the number of available cores on a single node (which is 24 for most nodes).

The command to run Gurobi can be modified in this way:

... Threads=$SLURM_CPUS_PER_TASK ...
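If --cpus-per-task is omitted, SLURM_CPUS_PER_TASK may be unset, so a small fallback in the job script keeps Gurobi at one thread by default. A sketch (the variable name THREADS is illustrative, and the gurobi_cl command is only echoed here rather than executed):

```shell
#!/bin/bash
# Fall back to 1 thread when SLURM did not set SLURM_CPUS_PER_TASK.
THREADS=${SLURM_CPUS_PER_TASK:-1}
# In a real job script you would run gurobi_cl directly; shown via echo:
echo "gurobi_cl Threads=$THREADS \$GUROBI_HOME/examples/data/coins.lp"
```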

Using Gurobi on your local workstation

If you want to use Gurobi Optimizer on your own computer (within the University network), you can do so. Follow the instructions in the Quickstart Guide (http://www.gurobi.com/documentation/) to install Gurobi 9.5.2 (you need to create an account to download the software). Instead of retrieving your own named license, you can install a license file named gurobi.lic in the location described in the guide. The license file must contain the following two lines:

TOKENSERVER=gurobi.license.uni-oldenburg.de
PORT=27001
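As a sketch, the license file could be created like this, assuming your home directory is one of the locations Gurobi searches (check the Quickstart Guide for the exact search order on your platform):

```shell
#!/bin/bash
# Create the token-server license file in the home directory.
# (Assumption: ~/gurobi.lic is searched by default; adjust the path
# if the guide specifies a different location for your installation.)
cat > "$HOME/gurobi.lic" <<'EOF'
TOKENSERVER=gurobi.license.uni-oldenburg.de
PORT=27001
EOF
```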

Documentation

The full documentation can be found at https://www.gurobi.com/documentation/9.5/quickstart_windows/index.html.