OpenACC Workshop

== Slides ==
[[media:oldenburg_openacc.pdf|Agenda]]<br>
[[media:OpenACC_Workshop.pdf|Welcome and Introduction]]<br>
Day 1 Morning Lecture: [[media:intro1.pdf|Introduction OpenACC I]]<br>
Day 1 Afternoon Lecture: [[media:intro2.pdf|Introduction OpenACC II]]<br>
Day 2 Morning Lecture: [[media:advanced.pdf|Advanced OpenACC]]<br>
Day 2 Afternoon Lecture: [[media:libraries.pdf|GPU-enabled Numerical Library]]<br>


== Quick Guide OpenACC ==
Modules to load:
module load PGI CUDA-Toolkit
Command to compile:
pgcc -acc -ta=tesla:cc60 -o executable code.c
Command to run:
srun -p mpcg.p --gres=gpu:1 ./executable
Alternatively, use partition cfdg.p.
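The compile command above expects a C source file with OpenACC directives. A minimal code.c might look like the following sketch (a hypothetical example, not part of the workshop material): the parallel loop directive offloads the loop to the GPU and the copy clauses manage the data transfers.
/* code.c - minimal OpenACC sketch (hypothetical example, not workshop material) */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void)
{
    int i;

    /* initialize input on the host */
    for (i = 0; i < N; i++)
        x[i] = (float)i / N;

    /* offload the loop to the GPU: copy x in, copy y back out */
    #pragma acc parallel loop copyin(x) copyout(y)
    for (i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + 1.0f;

    printf("y[N-1] = %f\n", y[N - 1]);
    return 0;
}
Such a file is built and run with the commands shown above, i.e. pgcc -acc -ta=tesla:cc60 -o executable code.c followed by srun -p mpcg.p --gres=gpu:1 ./executable.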


More info in the [[OpenACC|OpenACC Introduction]].
== Workshop ==
To copy data from the course directory:
cp -r /user/gilu2568/<dir> .
For profiling, first get an allocation:
salloc -p mpcg.p --gres=gpu:1
Create a script that runs an application, e.g. sincos.sh for the sincos example:
#!/bin/bash
# Load the required environment, then forward all arguments to the application.
module load hpc-uniol-env
module load PGI
module load CUDA-Toolkit
./sincos "$@"
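The script then needs to be made executable; inside the allocation it can be test-run before the profiler is attached (a sketch; any arguments after the script name are simply passed through to sincos):
chmod +x sincos.sh        # make the wrapper script executable
srun ./sincos.sh          # test run on the allocated GPU node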
Start the visual profiler in the background:
nvvp &
Find out the hostname of the allocation:
srun hostname
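If the graphical profiler is not needed, a command-line profile can also be collected directly inside the allocation (a sketch; it assumes the nvprof profiler provided by the CUDA-Toolkit module):
module load hpc-uniol-env PGI CUDA-Toolkit   # same environment as in the wrapper script
srun nvprof ./sincos                         # profile the application on the allocated GPU node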
