Queues and resource allocation
Revision as of 16:09, 26 August 2013
The thing about queues is that, in general, you don't have to worry about them. Ideally you only specify resources for the job you are about to submit. In doing so you provide enough information to the scheduler to decide which queue the job belongs in. Hence, you explicitly allocate resources and implicitly choose a queue. However, in some cases, e.g. when it comes to running a job on particular hardware components of the cluster, it is beneficial to know which resources need to be allocated in order to access the proper queue running on that component.
Although you (as a user) should be concerned more with specifying resources than with targeting queues, it is useful to disentangle the relationship between certain queues implemented on the HPC system and the resources that need to be specified in order for the scheduler to address those queues. Also, some of you might be familiar with the concept of queues and prefer to think in terms of them.
Listing all possible queues
Now, thinking in terms of queues, you might be interested to see which queues there are on the HPC system. Logged in to your HPC account, you obtain a full list of all possible queues a job might be placed in by typing the command <code>qconf -sql</code>. <code>qconf</code> is a grid engine configuration tool which, among other things, allows you to list existing queues and queue configurations. In casual terms, the sequence of options <code>-sql</code> demands: show (<code>s</code>) queue (<code>q</code>) list (<code>l</code>).
As a result you might find the following list of queues:
<nowiki>
cfd_him_long.q    cfd_him_shrt.q    cfd_lom_long.q    cfd_lom_serl.q
cfd_lom_shrt.q    cfd_xtr_expr.q    cfd_xtr_iact.q    glm_dlc_long.q
glm_dlc_shrt.q    glm_qdc_long.q    glm_qdc_shrt.q    mpc_big_long.q
mpc_big_shrt.q    mpc_std_long.q    mpc_std_shrt.q    mpc_xtr_ctrl.q
mpc_xtr_iact.q    mpc_xtr_subq.q    uv100_smp_long.q  uv100_smp_shrt.q
</nowiki>
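Since the queue names follow a fixed naming scheme, the list can be narrowed with standard tools; on the cluster, <code>qconf -sql | grep shrt</code> would list only the short queues. A minimal sketch of the same filter, run here on a saved copy of a few of the names above rather than on the live command:

```shell
# Filter a saved copy of the queue list for short-runtime queues;
# on the cluster you would pipe the live command: qconf -sql | grep shrt
printf '%s\n' mpc_std_shrt.q mpc_std_long.q mpc_big_shrt.q mpc_big_long.q \
  | grep shrt    # -> mpc_std_shrt.q and mpc_big_shrt.q
```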
Obtaining elaborate information for a particular queue
So as to obtain more details about the configuration of a particular queue you just need to specify that queue. E.g., to get elaborate information on the queue <code>mpc_std_shrt.q</code>, just type <code>qconf -sq mpc_std_shrt.q</code>, which yields
<nowiki>
qname                 mpc_std_shrt.q
hostlist              @mpcs
seq_no                10000,[mpcs001.mpinet.cluster=10001], \
                      [mpcs002.mpinet.cluster=10002], \
                      ...
                      [mpcs123.mpinet.cluster=10123], \
                      [mpcs124.mpinet.cluster=10124]
load_thresholds       np_load_avg=1.75,slots=0
suspend_thresholds    NONE
nsuspend              1
suspend_interval      00:05:00
priority              0
min_cpu_interval      00:05:00
processors            UNDEFINED
qtype                 BATCH
ckpt_list             NONE
pe_list               impi impi41 linda molcas mpich mpich2_mpd mpich2_smpd \
                      openmpi smp mdcs
rerun                 FALSE
slots                 12
tmpdir                /scratch
shell                 /bin/bash
prolog                root@/cm/shared/apps/sge/scripts/prolog_mpc.sh
epilog                root@/cm/shared/apps/sge/scripts/epilog_mpc.sh
shell_start_mode      posix_compliant
starter_method        NONE
suspend_method        NONE
resume_method         NONE
terminate_method      NONE
notify                00:00:60
owner_list            NONE
user_lists            herousers
xuser_lists           NONE
subordinate_list      NONE
complex_values        h_vmem=23G,h_fsize=800G,cluster=hero
projects              NONE
xprojects             NONE
calendar              NONE
initial_state         default
s_rt                  INFINITY
h_rt                  192:0:0
s_cpu                 INFINITY
h_cpu                 INFINITY
s_fsize               INFINITY
h_fsize               INFINITY
s_data                INFINITY
h_data                INFINITY
s_stack               INFINITY
h_stack               INFINITY
s_core                INFINITY
h_core                INFINITY
s_rss                 INFINITY
h_rss                 INFINITY
s_vmem                INFINITY
h_vmem                INFINITY
</nowiki>
Among the listed resource attributes some stand out:

* <code>pe_list</code>: specifies the list of parallel environments available for the queue.
* <code>hostlist</code>: specifies the list of hosts on which the respective queue is implemented. Here, the name of the hostlist is <code>@mpcs</code>. You can view the full list of hosts in that group by means of the command <code>qconf -shgrp @mpcs</code>, where <code>-shgrp</code> stands for show (<code>s</code>) host group (<code>hgrp</code>).
* <code>complex_values</code>: a list of complex resource attributes a user might allocate for his jobs using the <code>qsub -l</code> option. E.g., the queue configuration value <code>h_vmem</code> is used for the virtual memory size, limiting the amount of total memory a job might consume. An entry in the <code>complex_values</code> list of the queue configuration defines the total available amount of virtual memory on a host or a queue.
* <code>slots</code>: number of slots available on the host. They might be shared among all the queues that run on the host.
* <code>h_rt</code>: specifies a requestable resource of type time. A submitted job is only eligible to run in this queue if the specified maximal value of <code>h_rt=192:0:0</code> (i.e. 192h) is not exceeded.
* <code>user_lists</code>: list of users that are eligible to place jobs in the queue.
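The complex values listed above are exactly what <code>qsub -l</code> requests refer to. A minimal, hypothetical submission script (the name <code>myjob.sh</code>, the job name, and the job body are assumptions for illustration, not part of the cluster setup) might allocate resources as follows:

```shell
#!/bin/bash
# Hypothetical submission script; lines starting with "#$" are Grid
# Engine directives read by qsub, not executed by the shell itself.
#$ -N example_job        # job name (assumed for illustration)
#$ -l h_vmem=4G          # request 4G of virtual memory per slot
#$ -l h_rt=24:0:0        # request at most 24h of running time (<= 192h)
#$ -pe smp 4             # a parallel environment from the queue's pe_list
echo "job body would run here"
```

Submitted via <code>qsub myjob.sh</code>, the <code>h_rt</code> request of 24h keeps the job within the 192h limit and hence eligible for <code>mpc_std_shrt.q</code>.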
Requestable resources
The type and amount of requestable resources differ from queue to queue. To facilitate intuition, compare, e.g., the resources for <code>mpc_std_shrt.q</code> and <code>mpc_std_long.q</code>:
<nowiki>
$ qconf -sq mpc_std_shrt.q | grep "qname\|hostlist\|complex_values\|h_rt"
qname                 mpc_std_shrt.q
hostlist              @mpcs
complex_values        h_vmem=23G,h_fsize=800G,cluster=hero
h_rt                  192:0:0
$ qconf -sq mpc_std_long.q | grep "qname\|hostlist\|complex_values\|h_rt"
qname                 mpc_std_long.q
hostlist              @mpcs
complex_values        h_vmem=23G,h_fsize=800G,cluster=hero,longrun=true
h_rt                  INFINITY
</nowiki>
Note that both queues run on the same hosts, i.e. both have identical hostlists. However, the requestable resource <code>h_rt</code> and the list of complex values associated with the two queues differ. At this point, details on the resource <code>h_rt</code> can once more be obtained using the <code>qconf</code> command:
<nowiki>
$ qconf -sc | grep "h_rt\|#"
#name               shortcut   type      relop requestable consumable default  urgency
#----------------------------------------------------------------------------------------------------
h_rt                h_rt       TIME      <=    YES         NO         0:0:0    0
</nowiki>
As can be seen, the relational operator associated with <code>h_rt</code> reads lower or equal. I.e., to be eligible to be placed in the short queue, a job is not allowed to request more than 192h of running time. Regarding the long queue, there is no upper bound on the running time, and a job with properly allocated resources might be put in this queue.
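For scripting purposes, the columns of such a complex definition can be picked apart with standard tools. A small sketch, operating on a saved copy of the <code>h_rt</code> line above (on the cluster you would use the live <code>qconf -sc</code> output instead):

```shell
# Extract the relational operator (4th column) from a saved copy of the
# complex definition; live: qconf -sc | grep '^h_rt' | awk '{print $4}'
line='h_rt                h_rt       TIME      <=    YES         NO         0:0:0    0'
relop=$(echo "$line" | awk '{print $4}')
echo "$relop"   # -> <=
```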
Further, note that the long queue features one complex value more than the short queue, namely <code>longrun</code>. Details about this resource are:
<nowiki>
$ qconf -sc | grep "longrun\|#"
#name               shortcut   type      relop requestable consumable default  urgency
#----------------------------------------------------------------------------------------------------
longrun             lr         BOOL      ==    FORCED      NO         FALSE    0
</nowiki>
So, <code>longrun</code> is of type BOOL, has the default value FALSE, and is FORCED, i.e. it is never granted implicitly. In order to place a job in the long queue one has to explicitly request <code>longrun=true</code>.
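In practice the request is simply added to the submission command; a sketch (with <code>myjob.sh</code> as an assumed script name):

```shell
# The forced complex must be requested explicitly; without it the
# scheduler will not consider mpc_std_long.q for the job.
qsub -l longrun=true myjob.sh
```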
As a final detail, consider the requestable resource <code>h_vmem</code>:
<nowiki>
$ qconf -sc | grep "h_vmem\|#"
#name               shortcut   type      relop requestable consumable default  urgency
#----------------------------------------------------------------------------------------------------
h_vmem              h_vmem     MEMORY    <=    YES         YES        1200M    0
</nowiki>
I.e., it is specified as a consumable resource. Say you submit a single-slot job to the short queue (which, by default, offers 23G per host), requesting <code>h_vmem=4G</code>. Then this amount of memory is consumed, leaving 19G for further use.
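The consumable bookkeeping is plain subtraction; a tiny sketch of the numbers from the example above:

```shell
# Consumable accounting for h_vmem on one short-queue host:
# 23G available in total, a single-slot job requests 4G of it.
total_gb=23
request_gb=4
remaining_gb=$((total_gb - request_gb))
echo "${remaining_gb}G remain for further jobs on the host"   # -> 19G remain ...
```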