Details about the purpose of FLUKA can be found on the official web page which states:
FLUKA is a fully integrated particle physics MonteCarlo simulation package. It has many applications in high energy experimental physics and engineering, shielding, detector and telescope design, cosmic ray studies, dosimetry, medical physics and radio-biology.
Manuals on the usage of FLUKA can be found there as well; there is also an active mailing list for asking questions.
Please note that the FLUKA user license allows the free usage of FLUKA for non-commercial scientific research. By using FLUKA on the cluster, you accept the user license.
fluka is installed as a so-called generic software module. This means it is available on most of our environments: every fluka module on each of our different environments actually links to the same software path.
These versions are installed and currently available ...
... on environment hpc-uniol-env:
... on environment hpc-env/6.4:
... on environment hpc-env/8.3:
The most current version of FLUKA, fluka/2021.2-Singularity, is installed as a Singularity container. If you are already accustomed to working with containers, e.g. Docker, you should not have any problems getting started. But even if you are not familiar with containerized software, we made it as easy as possible to get started with fluka.
Usually, to call an executable from a Singularity container, you have to go through Singularity like this: singularity exec <container_name.sif> <command_inside_container>.
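For illustration, such a direct call could look like this (the image file name here is only a placeholder, not the actual container image on the cluster):

```shell
# placeholder image name; the real .sif file depends on the installed module version
singularity exec fluka_2021.2.sif rfluka -N 0 -M 5 example
```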
But on our cluster we set up some aliases for the most important functions:
- rfluka <arguments> <input_file>
- actually calls rfluka like this: singularity run <container_image> <your_arguments+file>
- flukaexec <executable> <arguments> <input_file>
- executes any command in case just calling rfluka is not enough.
- e.g.: flukaexec ls -la /usr/local/fluka ## list the fluka install directory and exit the container session
- Only recommended for more experienced Linux users.
- opens a bash shell within the container. Here you can find the fluka directory at /usr/local/fluka
- exit with Ctrl + D or exit
- opens FLAIR (FLUKA Advanced Interface). Since this uses X11 forwarding, your ssh connection must be established with -X or -Y
Naturally, these aliases only work while fluka/***-Singularity is loaded.
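As a rough sketch of what such an alias presumably does under the hood, one could wrap singularity exec in a shell function. The image path is a made-up placeholder, and echo turns this into a dry run so no container is actually started:

```shell
# FLUKA_IMAGE is a placeholder path, not the real container location on the cluster
FLUKA_IMAGE=/path/to/fluka_container.sif

# dry-run sketch: print the command that a flukaexec-style wrapper would run
flukaexec() { echo singularity exec "$FLUKA_IMAGE" "$@"; }

flukaexec ls -la /usr/local/fluka
# prints: singularity exec /path/to/fluka_container.sif ls -la /usr/local/fluka
```

Removing the echo would make the function actually call the container, which is essentially what the preinstalled aliases spare you from typing.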
Using FLUKA with the HPC cluster
Since there are many people working with the HPC cluster, it is important that everyone has an equal chance to do so. Therefore, every job should be processed by SLURM.
For this reason, you have to create a jobscript for your tasks.
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=2G
#SBATCH --partition=carl.p
#SBATCH --time=0-2:00
#SBATCH --job-name FLUKA-TEST
#SBATCH --output=fluka-test.%j.out
#SBATCH --error=fluka-test.%j.err

# load fluka
module load fluka

# change these settings to fit your files and needs
INPUTFILE=example     # without .inp
FIRSTCYCLE=1
LASTCYCLE=5

# run fluka code
rfluka -N $(expr $FIRSTCYCLE - 1) -M $LASTCYCLE $INPUTFILE
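Note how the jobscript derives the rfluka arguments from the cycle settings: it passes FIRSTCYCLE - 1 to -N and LASTCYCLE to -M. A quick check of that arithmetic with the values from the script:

```shell
# same settings as in the jobscript above
FIRSTCYCLE=1
LASTCYCLE=5

# show the rfluka command the jobscript would build from these values
echo "rfluka -N $(expr $FIRSTCYCLE - 1) -M $LASTCYCLE example"
# prints: rfluka -N 0 -M 5 example
```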
Save the jobscript as e.g. "flukatest.job". The example input file has to be in the same folder as the jobscript. If you haven't copied the file yet, you can do so with the following command:
cp $FLUPRO/example.inp .
If your jobscript and the example file are ready, you can submit your job with the command
sbatch -p carl.p flukatest.job (if you named your jobscript differently, use that name here)
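On submission, sbatch prints the job ID. While the job is queued or running, you can watch it with the standard SLURM tools (generic commands, nothing cluster-specific):

```shell
squeue -u $USER            # list your own pending and running jobs
scontrol show job <jobid>  # detailed info on one job; replace <jobid> with the printed ID
```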
After the job has finished, you will find a number of new files in the directory where you submitted your job.
The manual is provided as a PDF file. You can find this file here.