FDTD Solutions / Lumerical 2016

== Introduction ==

Lumerical FDTD Solutions is a software package for solving 3D Maxwell's equations using the Finite-Difference Time-Domain (FDTD) method.

== Installed Versions ==

The currently installed versions are:

On environment <tt>hpc-uniol-env</tt>:
 FDTD_Solutions/8.20.1634
On environment <tt>hpc-env/6.4</tt>:
 FDTD_Solutions/8.20.1731
 FDTD_Solutions/8.21.1933 (D)
where (D) marks the default version.
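
To load a specific version instead of the default, give the full module name (the module names below are taken from the list above):

 module load hpc-env/6.4
 module load FDTD_Solutions/8.20.1731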

== Using the FDTD Solutions GUI ==

If you want to work with the GUI (graphical user interface), you must log in with X-forwarding enabled: add the option <tt>-X</tt> to your SSH command so that the program's GUI is forwarded to your device.

 ssh abcd1234@carl.hpc.uni-oldenburg.de -X
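
To quickly check that the forwarding is active, you can inspect the <tt>DISPLAY</tt> variable after logging in; with working X-forwarding, SSH sets it to something like <tt>localhost:10.0</tt>:

 echo $DISPLAY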

Of course, this means that your device must be able to display graphical elements such as browsers, office programs, or the like. Once you are logged in correctly, you just have to load the module and start the program's GUI:

 module load hpc-env/6.4   # for the newest version
 module load FDTD_Solutions
 fdtd-solutions &

FDTD Solutions should now appear on your display.

Hint: If you have trouble with X-forwarding (e.g. the GUI renders only partially), you can try logging in with Remote Desktop instead.

Note: The GUI requires a Design license, and if you see a license error message, the most likely cause is that someone else is currently using the GUI. The GUI should mainly be used to prepare an .fsp file which is then processed in batch mode (see below). Any calculation started from the GUI will run on the login node, so this should only be done for small test cases or to determine the time and memory requirements of a job.

If you need help getting started, the developer's guide may be useful.

== Using FDTD Solutions in parallel batch mode ==

The recommended way of using FDTD Solutions is in batch mode on the compute nodes. This can be achieved in several ways (none of which uses the GUI).

=== The easy way ===

After you have loaded the module for FDTD Solutions, you can use the command

 fdtd-run-slurm.sh -n <n> your_model.fsp

where <n> is the number of parallel tasks (the default is 8). The file <tt>your_model.fsp</tt> describes your model, and you can add more .fsp files to the command. For each .fsp file, the command creates a job script and submits it to the cluster, estimating the required resources (time and memory) for you. The job then runs as soon as enough resources are available.

For example, if after loading the module you run the commands

 cp $EBROOTFDTD_SOLUTIONS/examples/paralleltest.fsp .
 fdtd-run-slurm.sh -n 24 paralleltest.fsp

the test case is executed with 24 parallel tasks (distributed freely across the compute nodes as needed). The results of the simulation appear to be written back into the input .fsp file, so it is probably a good idea to make a copy of that file first. In addition, a log file and a slurm-<jobid>.out file are created.
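
Because the results overwrite the input file, a simple precaution is to keep a backup copy of the model before submitting (the file names here are only illustrative):

 cp your_model.fsp your_model_backup.fsp
 fdtd-run-slurm.sh -n 24 your_model.fsp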

The script <tt>fdtd-run-slurm.sh</tt> comes with a number of options, which can be seen from

<pre>
$ fdtd-run-slurm.sh -h
The calling convention for fdtd-run-slurm.sh is:

fdtd-run-slurm.sh [<options>] fsp1 [fsp2 ... [fspN]]

The arguments are as follows:

 fsp*      An FDTD Solutions project file. One is required, but
           multiple can be specified on one command line

 -n        The number of processes to use for the job(s).
           If no argument is given a default value of 8 is used

 -N        The number of nodes to use for the job(s).
           If no argument is given SLURM will distribute the processes
           as resources are available (may not be optimal).

 -m        The number of processes (tasks) per node to use.
           Exclusive with -n option, if not used the number of processes
           is determined by the value given with -n.

 -p        The partition to use for the job(s).
           If no argument is given, the default partition carl.p is used.

 -h        Print this help. No job is started
</pre>
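
For example, based on the options above, a job using two nodes with 12 tasks per node on the default partition could be submitted like this (an untested sketch derived from the help text):

 fdtd-run-slurm.sh -N 2 -m 12 -p carl.p paralleltest.fsp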

=== The expert way ===

Alternatively, you can write your own job script instead of using the automatically generated one. This gives you better control over how the job is run on the cluster and lets you pass additional options to FDTD Solutions.

Details will follow soon.
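
In the meantime, the sketch below shows what such a job script could look like. Note that the engine binary name (fdtd-engine-ompi-lcl) and the resource values are assumptions and should be verified against the actual installation before use:

<pre>
#!/bin/bash
## Resource requests (values are placeholders; adjust to your model)
#SBATCH --ntasks=24
#SBATCH --mem-per-cpu=2G
#SBATCH --time=02:00:00
#SBATCH --partition=carl.p

module load hpc-env/6.4
module load FDTD_Solutions

## The engine binary name below is an assumption; check the installation
## directory of the module ($EBROOTFDTD_SOLUTIONS) for the exact name.
mpirun fdtd-engine-ompi-lcl paralleltest.fsp
</pre>

The script can then be submitted with <tt>sbatch</tt> as usual (the file name is only illustrative):

 sbatch my_fdtd_job.sh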