Non-public templates: Coordination of Scientific Computing

== Ranking of accumulated running times ==
Data accumulated from full accounting file at <tt>/cm/shared/apps/sge/6.2u5p2/default/common/</tt>:
  <nowiki>
# accumulated running time by user:
# (rank)(userName)(nJobs)(accRunTime sec)(accRunTime Y:D:H:M:S) // (realName)
1  : geri2579 90754 8762221362 277:309:14:22:42    // (Christoph Norrenbrock)
2  : zatu0050   986 8059695042 255:208:12:10:42    // (Gerald Steinfeld)
3  : dumu7717 221971 6018064781 190:303:12:39:41    // (Jan Mitschker)
4  : mexi8700   1378 3973014405 125:358:23:06:45    // (Robert Guenther)
5  : lewo8864   530 3756698540 119:045:07:22:20    // (Hannes Hochstein)
6  : raga4343   4121 3185617277 101:005:13:41:17    // (Ivan Herraez Hernandez)
7  : noje7983   2017 3032096167   96:053:16:56:07    // (Hamid Rahimi)
8  : pali6150   6795 2561650013   81:083:17:26:53    // (Timo Dewenter)
9  : lick1639   1593 2507677765   79:189:01:09:25    // (none)
10 : kuxo7281 387622 2469088171   78:107:09:49:31    // (Christoph Echtermeyer)
11 : mujo4069 28887 2416553044   76:229:08:44:04    // (Dennis Hinrichs)
12 : zugo5243   2194 2349003067   74:177:12:51:07    // (Bjoern Witha)
13 : sobe5479   277 1940540600   61:194:23:03:20    // (Mohamed Cherif)
14 : talu7946   1848 1731319160   54:328:09:59:20    // (Elia Daniele)
15 : mapa5066   881 1649263082   52:108:16:38:02    // (Muhammad Mumtaz Ahmad)
16 : juro9204 52786 1598644949   50:252:20:02:29    // (Wilke Dononelli)
17 : paxu8307   201 1508284764   47:301:23:59:24    // (Martin Doerenkaemper)
18 : haft0127   6143 1502848389   47:239:01:53:09    // (Alexander Hartmann)
19 : wane3073 47826 1358193190   43:024:19:53:10    // (Pascal Fieth)
20 : ruxi6902 24936 1328856767   42:050:06:52:47    // (Niko Moritz)
21 : xuzu2182   208 1273593751   40:140:16:02:31    // (Habib Faraz)
22 : jopp2853   2250 1251690772   39:252:03:52:52    // (Matthias Schramm)
23 : axto3490   1482 1203500764   38:059:09:46:04    // (Lena Albers)
24 : mama5378   813 1007058034   31:340:18:20:34    // (Erik Asplund)
25 : zaer0019   1647 932450249   29:207:05:57:29    // (none)
26 : tugt2159   698 848673355   26:332:14:35:55    // (none)
27 : axar4346   1048 821531669   26:018:11:14:29    // (Wided Medjroubi)
28 : nero7893 238888 781741923   24:287:22:32:03    // (Karsten Looschen)
29 : tefi0368   522 749369848   23:278:06:17:28    // (Thomas Uwe Mueller)
30 : rexi0814   7053 682259243   21:231:12:27:23    // (Hendrik Spieker)
31 : hoeb8124   549 674763740   21:144:18:22:20    // (Matti Reissmann)
32 : fupa5629   2644 660537352   20:345:02:35:52    // (Carlos Peralta)
33 : joji0479 19113 618288962   19:221:02:56:02    // (Bjoern Ahrens)
34 : zett7687 11998 600740860   19:018:00:27:40    // (Gunnar Claussen)
35 : alxo9476 246372 562359994   17:303:19:06:34    // (Oliver Melchert)
36 : norg2515   159 482991772   15:115:04:22:52    // (Francisco Santos Alamillos)
37 : mohu0982 42701 454587831   14:151:10:23:51    // (Hendrik Schawe)
38 : xuer9386     72 422703008   13:147:09:30:08    // (Hugues Ambroise)
39 : dihl9738   5007 420689897   13:124:02:18:17    // (Stefan Albensoeder)
40 : rumu5289   331 417312644   13:085:00:10:44    // (Dennis Lutters)
41 : pord4261     46 408761150   12:351:00:45:50    // (Nils Kirrkamm)
42 : seho2708   258 371352453   11:283:01:27:33    // (Bastian Dose)
43 : hael3199   298 364805754   11:207:06:55:54    // (none)
44 : juad6850     70 355027290   11:094:02:41:30    // (Bernhard Stoevesandt)
45 : mazo4669 176603 324777634   10:109:00:00:34    // (Joerg-Hendrik Bach)
46 : guha6809   421 323568625   10:095:00:10:25    // (Nils Burchardt)
47 : axex3705 145689 316742167   10:015:23:56:07    // (Bjoern Wolff)
48 : dege2737   316 315657278   10:003:10:34:38    // (Andre Schaefer)
49 : beex6806   305 312700988   9:334:05:23:08    // (Sebastian Grashorn)
50 : rehi3280   598 287544412   9:043:01:26:52    // (Patrick Zark)
51 : elbi1717   163 271589443   8:223:09:30:43    // (Chai Heng Lim)
52 : arbu5607   375 224697248   7:045:15:54:08    // (none)
53 : repe1429   2066 223569730   7:032:14:42:10    // (Gabriele Tomaschun)
54 : peft9847   4548 219441763   6:349:20:02:43    // (Stephan Spaeth)
55 : auwo0040   978 207991853   6:217:07:30:53    // (Nicole Stoffels)
56 : pedo3100   301 198306391   6:105:05:06:31    // (Nils Ayral)
57 : bofe5314   137 186738201   5:336:07:43:21    // (Terno Ohsawa)
58 : gusu8312   2709 166969970   5:107:12:32:50    // (Elisabeth Stuetz)
59 : muck2227   5513 165158455   5:086:13:20:55    // (Reinhard Leidl)
60 : ergi3581   247 163879188   5:071:17:59:48    // (Jan Warfsmann)
61 : kefi1701   2548 154479289   4:327:22:54:49    // (Markus Manssen)
62 : doal7591   275 150498526   4:281:21:08:46    // (Henrik Beil)
63 : fewu9781     32 146667522   4:237:12:58:42    // (Jakob Raphael Spiegelberg)
64 : dael3266     44 138395157   4:141:19:05:57    // (none)
65 : penu5836   118 130948417   4:055:14:33:37    // (none)
66 : eddo9274   231 127215033   4:012:09:30:33    // (Karsten Lettmann)
67 : adje6680   136 126437656   4:003:09:34:16    // (Olaf Bininda-Emonds)
68 : zehu1974   276 115601883   3:242:23:38:03    // (Robert Roehse)
69 : asto4412   154 110407032   3:182:20:37:12    // (Marcel David Fabian)
70 : wole2741   1404 109671551   3:174:08:19:11    // (Enno Gent)
71 : xuhe4555   143 104670156   3:116:11:02:36    // (Florian Habecker)
72 : febu8581     38 102032534   3:085:22:22:14    // (Shreya Sah)
73 : gira3297 49356   92688463   2:342:18:47:43    // (Marc Rene Schaedler)
74 : xaed4158     60   89232564   2:302:18:49:24    // (Lukas Halekotte)
75 : axfa8508   712   80076221   2:196:19:23:41    // (Daniel Ritterskamp)
76 : alpe3589   2190   73408223   2:119:15:10:23    // (Ralf Buschermoehle)
77 : bamo9780   9758   69839925   2:078:07:58:45    // (Robert Rehr)
78 : alpo8118   360   68073708   2:057:21:21:48    // (Janek Greskowiak)
79 : gare6232     45   66389844   2:038:09:37:24    // (Michael Schwarz)
80 : woza3934   131   66234272   2:036:14:24:32    // (Stefanie Ruehlicke)
81 : kasu8272   815   61053268   1:341:15:14:28    // (Benjamin Wahl)
82 : adto6352     54   54360744   1:264:04:12:24    // (Christian Lasar)
83 : xift8589   1329   53751418   1:257:02:56:58    // (Bintoro Anang Subagyo)
84 : zuka9781     25   53362136   1:252:14:48:56    // (none)
85 : edfi4106   206   50693100   1:221:17:25:00    // (Wai-Leung Yim)
86 : asju8096   7772   50006862   1:213:18:47:42    // (Hendrik Kayser)
87 : fewo4259     83   49631568   1:209:10:32:48    // (Julia Schloen)
88 : foxu9815     35   49561836   1:208:15:10:36    // (Sonja Drueke)
89 : esgi3777 29098   49357992   1:206:06:33:12    // (Arne-Freerk Meyer)
90 : liro0805 60351   49192498   1:204:08:34:58    // (Christian Hinrichs)
91 : hoke3495   644   46876265   1:177:13:11:05    // (Constantin Junk)
92 : diab3109   647   44309313   1:147:20:08:33    // (Bettina Gertjerenken)
93 : weaf4518   117   43860592   1:142:15:29:52    // (Daniel Ahlers)
94 : beau4118     57   38689804   1:082:19:10:04    // (Crispin Reinhold)
95 : jolo0127     82   38508653   1:080:16:50:53    // (Angela Josupeit)
96 : teaf1672   1899   36772835   1:060:14:40:35    // (Markus Niemann)
97 : sawo0024   3610   32917355   1:015:23:42:35    // (Fabian Gieseke)
98 : abgu0243     42   31588128   1:000:14:28:48    // (Thorsten Kluener)
99 : tasi6754   601   30416694   0:352:01:04:54    // (Thorsten Kolling)
100: nufa8270   212   28109178   0:325:08:06:18    // (Feifei Xiong)
101: tode0315   705   28012306   0:324:05:11:46    // (none)
102: rael0338     25   26086608   0:301:22:16:48    // (Florian Loose)
103: moge1512   259   25676148   0:297:04:15:48    // (Alexander Buss)
104: rawa6912   132   23203817   0:268:13:30:17    // (Rajat Karnatak)
105: meex7858   679   22044487   0:255:03:28:07    // (Nils Andre Treiber)
106: sino6087   330   20640056   0:238:21:20:56    // (Valeria Angelino)
107: dofo5522   9426   20060762   0:232:04:26:02    // (Hendrike Klein-Hennig)
108: ralo6199   107   18768812   0:217:05:33:32    // (Hugues Ambroise)
109: pedi6862     37   17999921   0:208:07:58:41    // (Jose-Placido Parra Viol)
110: lozi7895   4173   16955613   0:196:05:53:33    // (Felix Thole)
111: leck7200     13   15078720   0:174:12:32:00    // (Francisco Toja-Silva)
112: reeb1775 195462   14259016   0:165:00:50:16    // (Alexey Ryabov)
113: xojo9092     23   12855242   0:148:18:54:02    // (Kajari Bera)
114: tezi2895     73   11734024   0:135:19:27:04    // (none)
115: auzu2321   130   10635434   0:123:02:17:14    // (Liudmila Moskaleva)
116: wuge3108     46   9794544   0:113:08:42:24    // (none)
117: naji9738     17   9635228   0:111:12:27:08    // (Henning Grossekappenberg)
118: fesi4140   232   8495898   0:098:07:58:18    // (Lueder von Bremen)
119: muxu6688   1222   7733649   0:089:12:14:09    // (Robert Schadek)
120: kano8824   449   7594554   0:087:21:35:54    // (none)
121: hupe3583   200   6816092   0:078:21:21:32    // (Zacharais Njam Mokom)
122: gepp0026     63   6463839   0:074:19:30:39    // (Jan Vogelsang)
123: nixi9106     18   6266786   0:072:12:46:26    // (Henning Schepker)
124: lulo2927   102   4829755   0:055:21:35:55    // (Lukas Vollmer)
125: kuli5479     97   4799076   0:055:13:04:36    // (Vincent Hess)
126: buza0896   482   4094419   0:047:09:20:19    // (Iko Pieper)
127: jurf9330   133   3944954   0:045:15:49:14    // (Frederik Haack)
128: sidu8566   221   3320357   0:038:10:19:17    // (none)
129: zana6011   459   3219100   0:037:06:11:40    // (Davide Trabucchi)
130: esas0656   2720   2375967   0:027:11:59:27    // (Hauke Beck)
131: fupu4553     4   1843637   0:021:08:07:17    // (Vasken Ketchedjian)
132: nine4710     12   1211383   0:014:00:29:43    // (Rainer Koch)
133: kisa9270     15     830556   0:009:14:42:36    // (Murali Sukumaran)
134: bogo2286     40     798252   0:009:05:44:12    // (Timo Gerkmann)
135: sona3432     4     762062   0:008:19:41:02    // (Marie Arndt)
136: boch5350   4447     475622   0:005:12:07:02    // (hpc-Clonebusters)
137: zurn7015     21     361603   0:004:04:26:43    // (Heidelinde Roeder)
138: medi4340     2     272832   0:003:03:47:12    // (Vanessa Schakau)
139: jihu5122   488     231738   0:002:16:22:18    // (Marc Bromm)
140: gise4802     25     217621   0:002:12:27:01    // (Maria Tschikin)
141: lodu8387     14     187264   0:002:04:01:04    // (Anna Vanselow)
142: daes8547   151     133827   0:001:13:10:27    // (Stefan Rach)
143: gaha8290     3     126961   0:001:11:16:01    // (Ksenia Guseva)
144: limo1478 53766     94097   0:001:02:08:17    // (Maxim Klimenko)
145: guxa1456     38     88294   0:001:00:31:34    // (Derya Dalga)
146: kode4290     8     38620   0:000:10:43:40    // (Martin Klein-Hennig)
147: joho0429     16     23596   0:000:06:33:16    // (Ante Jukic)
148: feze2916     21     21620   0:000:06:00:20    // (Martin Reiche)
149: fiwi0088     20     15467   0:000:04:17:47    // (Dorothee Hodapp)
150: auko1937     6     10623   0:000:02:57:03    // (Eike Mayland-Quellhorst)
151: sott5485     42       8872   0:000:02:27:52    // (none)
152: kako0048     2       6984   0:000:01:56:24    // (Thomas Greve)
153: fime4215     8       6360   0:000:01:46:00    // (Hauke Wurps)
154: merd1369     3       4352   0:000:01:12:32    // (hpc-guest012)
155: zeas7445     52       4097   0:000:01:08:17    // (Ina Kodrasi)
156: giku5867     2       3636   0:000:01:00:36    // (Philipp Kraemer)
157: lega0306     1       2893   0:000:00:48:13    // (Thomas Breckel)
158: root     29       2565   0:000:00:42:45    // (root)
159: teer6901     1       1176   0:000:00:19:36    // (Rainer Beutelmann)
160: argu7102     6       1089   0:000:00:18:09    // (hpc-guest001)
161: tund5075     6       1084   0:000:00:18:04    // (Juergen Weiss)
162: gaje2471     2       501   0:000:00:08:21    // (Jochem Rieger)
163: mimo4729     8       284   0:000:00:04:44    // (Benjamin Cauchi)
164: garu0840     4       198   0:000:00:03:18    // (Angelina Paulmann)
165: beba2086     1       111   0:000:00:01:51    // (Maria Wieckhusen)
166: esfu4434     4         26   0:000:00:00:26    // (Reemda Jaeschke)
167: nusi9376     1         11   0:000:00:00:11    // (Thomas Kaspereit)
168: xonu1606     5         3   0:000:00:00:03    // (Christoph Gerken)
169: pebu4515     8         2   0:000:00:00:02    // (Nikolaos Fytas)
170: nime9670     3         0   0:000:00:00:00    // (Chandan Kumar)
171: fide6340     1         0   0:000:00:00:00    // (Carsten Engelberts)
# accumulated running time by ag:
# (rank)(agName)(accRunTime sec)(accRunTime Y:D:H:M:S)(nUsers)
0  : fw                  33026502409 1047:096:04:26:49  36
1  : agcompphys          17388083088 551:135:23:04:48  14
2  : agtheochem          11095713332 351:307:14:35:32  23
3  : iwes                5502029821 174:170:21:37:01  6
4  : agmolchem          3762707990 119:114:20:39:50  9
5  : agmodelling        2563438875   81:104:10:21:15  4
6  : agcondmat          2461079978   78:014:17:19:38  3
7  : agmediphys          1884234991   59:273:06:36:31  7
8  : agphysocean        850829692   26:357:13:34:52  7
9  : arv                422703008   13:147:09:30:08  1
10  : agcompint          371704009   11:287:03:06:49  3
11  : hrz                182115152   5:282:19:32:32  3
12  : agsystematics      126437656   4:003:09:34:16  1
13  : agsigproc          105130412   3:121:18:53:32  8
14  : agses                73408223   2:119:15:10:23  1
15  : aghydrogeo            72872784   2:113:10:26:24  2
16  : agfieldtheo          53751418   1:257:02:56:58  1
17  : agenvinf              49192498   1:204:08:34:58  1
18  : agstatphys            36772835   1:060:14:40:35  1
19  : agcompchem            27718087   0:320:19:28:07  3
20  : agcoordchem          26086608   0:301:22:16:48  1
21  : agcomplexsys          23692381   0:274:05:13:01  3
22  : agdistsys            7733649   0:089:12:14:09  1
23  : agnanoopt            6463839   0:074:19:30:39  1
24  : agsofteng              569722   0:006:14:15:22  3
25  : aggeneralpsych          133827   0:001:13:10:27  1
26  : agpsychdh              21620   0:000:06:00:20  1
27  : agplantbiodiv          10623   0:000:02:57:03  1
28  : hpcguest                  5441   0:000:01:30:41  2
29  : aganimalbiodiv            3636   0:000:01:00:36  1
30  : agbiopsych                2893   0:000:00:48:13  2
31  :                          2565   0:000:00:42:45  1
32  : agzoophys                1176   0:000:00:19:36  1
33  : agancp                    501   0:000:00:08:21  1
34  : agaccounting                37   0:000:00:00:37  2
35  : agmediainf                  0   0:000:00:00:00  1
  </nowiki>
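The per-user totals above were aggregated from the SGE accounting file mentioned at the top of this section. The original aggregation script is not reproduced here; the following is only a minimal sketch, assuming the classic colon-separated accounting(5) layout (job owner in field 4, ru_wallclock in field 14) and omitting the group and real-name lookups used for the listing.
  <nowiki>
#!/usr/bin/env python
# Minimal sketch (not the original script): accumulate per-user running times
# from the SGE accounting file.  Assumes the classic colon-separated
# accounting(5) layout, where field 4 is the job owner and field 14 the
# ru_wallclock time in seconds.
from collections import defaultdict

ACCT = "/cm/shared/apps/sge/6.2u5p2/default/common/accounting"

def fmt(sec):
    # format seconds as Y:D:H:M:S, as in the listing above (365-day years)
    m, s = divmod(int(sec), 60)
    h, m = divmod(m, 60)
    d, h = divmod(h, 24)
    y, d = divmod(d, 365)
    return "%d:%03d:%02d:%02d:%02d" % (y, d, h, m, s)

nJobs = defaultdict(int)
accRunTime = defaultdict(int)

with open(ACCT) as acct:
    for line in acct:
        if line.startswith("#"):               # skip the comment header
            continue
        fields = line.rstrip("\n").split(":")
        if len(fields) < 14:
            continue
        owner = fields[3]
        nJobs[owner] += 1
        accRunTime[owner] += int(float(fields[13]))   # ru_wallclock [s]

ranking = sorted(accRunTime.items(), key=lambda kv: kv[1], reverse=True)
for rank, (user, acc) in enumerate(ranking, start=1):
    print("%-3d: %-9s %7d %12d %s" % (rank, user, nJobs[user], acc, fmt(acc)))
  </nowiki>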
== Usage of the different parallel environments ==
=== PE usage HERO since 2013-05-01 ===
  <nowiki>
# (PE Name)(nJobs)(accumRuntime sec)(accumRunTime Y:D:H:M:S)(nUsers)
NONE            241384  1957862608   62:030:10:43:28  38
impi            282    100606056   3:069:10:07:36  4
impi41          719    598668582   18:359:00:49:42  1
linda          2          116920   0:001:08:28:40  1
mdcs            7762      93353674   2:350:11:34:34  20
molcas          205769  4795307831 152:021:05:57:11  4
mpich2_mpd      1              4   0:000:00:00:04  1
openmpi        218    201576523   6:143:01:28:43  9
openmpi_ib      39        78408448   2:177:12:07:28  3
smp            80326  3350843949 106:092:21:59:09  28
smp_long        4        47440724   1:184:01:58:44  2
  </nowiki>
=== PE usage FLOW since 2013-05-01 ===
  <nowiki>
# (PE Name)(nJobs)(accumRuntime sec)(accumRunTime Y:D:H:M:S)(nUsers)
NONE            6363      15067552   0:174:09:25:52  26
impi            706    1730302019   54:316:15:26:59  11
impi41          1673    1532440078   48:216:13:47:58  15
impi41_long    46      676586304   21:165:20:38:24  1
mdcs            36          38949   0:000:10:49:09  2
mpich2_mpd      1              4   0:000:00:00:04  1
openmpi        6          49016   0:000:13:36:56  2
openmpi_ib      3249    3283513937 104:043:15:12:17  20
openmpi_ib_long 144    2625107532   83:088:04:32:12  3
smp            2791    196156830   6:080:08:00:30  4
starccmp        106      43781208   1:141:17:26:48  1
  </nowiki>


Notes for maintaining the list of HPC publications (see the corresponding section below):
* Wiki Ref-generator: http://reftag.appspot.com/doiweb.py?doi=10.1103%2FPhysRevE.87.022107
* style example: http://en.wikipedia.org/wiki/Wikipedia:Citing_sources
* example citation template: <ref name="MelchertHartmann2013">{{cite journal|last1=Melchert|first1=O.|last2=Hartmann|first2=A. K.|title=Information-theoretic approach to ground-state phase transitions for two- and three-dimensional frustrated spin systems|journal=Physical Review E|volume=87|issue=2|year=2013|issn=1539-3755|doi=10.1103/PhysRevE.87.022107}}</ref>




Here, for documentation, completeness and availability, I list some templates of e-mails and other material that I used on a regular basis.

Application for a new user account

To apply for a new user account, an eligible user needs to specify three things:

  • their anonymous user name in the form abcd1234,
  • the working group (or, ideally, the unix group) they will be associated with, and
  • an approximate date until which the user account will be needed.

No university user account, yet

If the user does not yet have a university-wide anonymous user account, they first need to apply for one. An example e-mail with advice on how to obtain such a (guest) user account is listed below:

 
Sehr geehrter Herr NAME,

um einen Nutzeraccount für das HPC System erhalten zu können, müssen Sie bereits
über einen universitätsweiten, anonymen Nutzeraccount verfügen. Als Gast einer
Arbeitsgruppe können Sie einen entsprechenden Guest-Account bei den IT-Diensten
beantragen. Besuchen Sie dazu bitte die Seite

http://www.uni-oldenburg.de/itdienste/services/nutzerkonto/gaeste-der-universitaet/

und wählen Sie die Option "Gastkonto einrichten". Starten Sie den Workflow für
das Anlegen eines Gastkontos. Tragen Sie als Verantwortlichen den Leiter der
universitären Organisationseinheit ein, der Ihr Vorhaben unterstützt. Bitten
Sie diesen, die E-Mail die er erhält zu öffnen, den darin enthaltenen Link
aufzurufen und den Antrag zu genehmigen. Das Konto wird dann automatisch
erstellt. Ihr anonymer Nutzeraccount wird die Form "abcd1234" haben.

Um nun Ihren Nutzeraccount für das HPC System freischalten zu können, senden Sie
mir bitte folgende Details:

1) den anonymen Nutzernamen für den der HPC account erstellt werden soll,
2) den Namen der Arbeitsgruppe der Sie zugeordnet werden sollen,
3) einen voraussichtlichen Gültigkeitszeitraum für den benötigten HPC account.

Sobald Ihr HPC account aktiviert ist werde ich mich mit weiteren Informationen
bei Ihnen melden.

Mit freundlichen Grüßen
Oliver Melchert
  



User account HPC system: Mail to IT-Services

Once the user has supplied the above information, you can apply for an HPC user account at the IT-Services using an e-mail similar to:

 
Mail to: felix.thole@uni-oldenburg.de; juergen.weiss@uni-oldenburg.de
Betreff: [HPC-HERO] Einrichtung eines Nutzeraccounts

Sehr geehrter Herr Thole,
sehr geehrter Herr Weiss,

Hiermit bitte ich um die Einrichtung eines HPC Accounts für 
Herrn NAME

abcd1234; UNIX-GROUP

der Account wird voraussichtlich bis DATUM benötigt.

Mit freundlichen Grüßen
Oliver Melchert
   

If no suitable unix group exists yet, send an email similar to the following instead:

 
Mail to: felix.thole@uni-oldenburg.de; juergen.weiss@uni-oldenburg.de
Betreff: [HPC-HERO] Einrichtung eines Nutzeraccounts

Hallo Felix,
hallo Jürgen,

Hiermit bitte ich um die Einrichtung eines HPC Accounts für Herrn NAME

abcd1234

der Account wird voraussichtlich bis DATUM benötigt.

Herr NAME ist Mitarbeiter der AG "AG-NAME" (AG-URL) von Herrn Prof. NAME AG-LEITER. 
Die entsprechende AG hat noch keine eigene Unix Group! Kann daher eine neue Unix Group 
für die AG angelegt und in die bestehende Gruppenhierarchie eingebunden werden?

Ich schlage hier den Namen 

agUNIX-GROUP-NAME

für die Unix Gruppe vor. Die AG gehört zur Fak. FAKULTAET.

Mit freundlichen Grüßen
Oliver Melchert
  

User account HPC system: Mail back to user

As soon as you get feedback from the IT-Services that the account was created, send an email to the user similar to the following:

 
Betreff: [HPC-HERO] HPC user account

Sehr geehrter Herr NAME,

die IT-Dienste haben Ihren HPC Account bereits freigeschaltet. Ihr Loginname
ist

abcd1234

und Sie sind der Unix-gruppe

UNIX-GROUP-NAME

zugeordnet. 

Sie verfügen über 100GB Plattenspeicher auf dem lokalen Filesystem (mit
vollem Backup). Wenn Sie über einen begrenzten Zeitraum mehr Speicherplatz
benötigen können Sie mich gerne diesbezüglich anschreiben. Ihren aktuellen
Speicherverbrauch auf dem HPC System können Sie mittels "iquota" einsehen. An
jedem Sonntag werden Sie eine Email mit dem Betreff "Your weekly HPC Quota
Report" erhalten, die Ihren aktuellen Speicherverbrauch zusammenfasst.

Anbei sende ich Ihnen einen Link zu unserem HPC user wiki, auf dem Sie weitere
Details über das lokale HPC System erhalten 
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Main_Page

Der Beitrag "Brief Introduction to HPC Computing" unter
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Brief_Introduction_to_HPC_Computing
illustriert einige einfache Beispiele zur Nutzung der verschiedenen
(hauptsächlich parallelen) Anwendungsumgebungen die auf HERO zur Verfügung
stehen und ist daher besonders zu empfehlen. Er diskutiert außerdem einige
andere Themen, wie z.B. geeignetes Alloziieren von Ressourcen und Debugging.

Wenn Sie planen die parallelen Ressourcen von MATLAB auf HERO zu nutzen kann
ich Ihnen die Beiträge "MATLAB Distributed Computing Server" (MDCS) unter 
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=MATLAB_Distributing_Computing_Server 
und "MATLAB Examples using MDCS" unter
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Matlab_Examples_using_MDCS
empfehlen. Der erste Beitrag zeigt, wie man das lokale Nutzerprofil für die
Nutzung von MATLAB auf HERO konfigurieren kann und der Zweite beinhaltet einige
Beispiele und diskutiert gelegentlich auftretende Probleme im Umgang mit MDCS.

Viele Grüße
Oliver Melchert
  

English variant of the above email:

 
Betreff: [HPC-HERO] HPC user account

Dear NAME,

the IT-Services were now able to activate your HPC account. Your login name to
the HPC system is 

abcd1234

and you are integrated in the group

UNIX-GROUP-NAME

Per default you have 100GB of storage on the local filesystem, which is fully
backed up. If you need more storage for a limited period of time, you can
contact me. Note that you can check your storage consumption on the HPC system
via the command "iquota". In addition, each Sunday you will receive an
email, titled "Your weekly HPC Quota Report", summarizing your current storage
usage. 

Below I send you a link to the HPC user wiki where you can find further 
details on the HPC system
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Main_Page

In particular I recommend the "Brief Introduction to HPC Computing" at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Brief_Introduction_to_HPC_Computing
which illustrates several basic examples related to different (mostly parallel)
environments the HPC system HERO offers and discusses a variety of other
topics, such as proper resource allocation and debugging. 

Further, if you plan to use the parallel capabilities of MATLAB on HERO, I
recommend the "MATLAB Distributed Computing Server" (MDCS) page at 
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=MATLAB_Distributing_Computing_Server 
and the "MATLAB Examples using MDCS" wiki page at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Matlab_Examples_using_MDCS
These pages summarize how to properly set up your profile for using MATLAB on HERO
and discuss some of the frequently appearing problems.

With kind regards
Oliver
  

User account HPC system: Mail back to user; Fak 2 (STATA users)

New users from Fak 2 most likely want to use the STATA software. An adapted version of the above email reads

 
Dear MY_NAME,

the IT-Services activated your HPC account already. Your login name to
the HPC system is 

LOGIN_NAME

and you are associated to the unix group

UNIX_GROUP

This is also reflected by the structure of the filesystem on the HPC system.

Per default you have 100GB of storage on the local filesystem, which is fully
backed up. If you need more storage for a limited period of time, you can
contact me. Note that you can check your storage consumption on the HPC system
via the command "iquota". In addition, each Sunday you will receive an
email, titled "Your weekly HPC Quota Report", summarizing your current storage
usage. 

Below I send you a link to the HPC user wiki where you can find further details
on the HPC system: 

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Main_Page

If you plan to use the parallel capabilities of STATA on HERO, I recommend the
"STATA" entry at

Main Page > Application Software and Libraries > Mathematics/Scripting > STATA

see: http://wiki.hpcuser.uni-oldenburg.de/index.php?title=STATA
The above page summarizes how to access the HPC System and how to successfully 
submit a STATA job. 

With kind regards
Dr. Oliver Melchert
  

Temporary extension of disk quota

Sometimes a user from the theoretical chemistry group needs a temporary extension of the available backed-up disk space. Ask them to provide

  • the total amount of disk space needed (they can check their current limit by means of the unix command iquota), and
  • an estimated date until which the extension is required.

Mail to IT-Services

Then send an email similar to the one listed below to the IT-Services:

 
Mail to: felix.thole@uni-oldenburg.de; juergen.weiss@uni-oldenburg.de
Betreff: [HPC-HERO] Erhöhung des verfügbaren Festplattenspeichers eines Nutzers 

Hallo Felix,
hallo Jürgen,

der HPC User NAME

abcd1234; UNIX-GROUP

hat darum gebeten seinen Disk Quota vorübergehend zu erhöhen. Er bittet 
um eine Erhöhung auf ein Gesamtvolumen von

500GB

die bis Ende Dezember 2013 benötigt wird. Danach kann er die 
Daten entsprechend archivieren und der Disk Quota kann wieder
zurückgesetzt werden.

Viele Grüße,
Oliver
  

List of users with nonstandard quota

Users that currently enjoy an extended disk quota:

 
NAME                              ID       QUOTA  VALID UNTIL
jan.mitschker@uni-oldenburg.de    dumu7717 1TB   no limit given
hendrik.spieker@uni-oldenburg.de  rexi0814 300GB Ende September 2013 
wilke.dononelli@uni-oldenburg.de  juro9204 700GB Ende Dezember 2013
eike.mayland.quellhorst@uni-oldenburg.de  auko1937  500GB Ende März 2014
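To keep track of when the extensions listed above expire, a small bookkeeping helper can be useful. The sketch below is hypothetical: the entries are transcribed by hand from the table above (not parsed from the wiki), and the German month limits are interpreted as the last day of the given month.

# Hypothetical helper: report which temporary quota extensions have expired.
# Entries mirror the table above; None means "no limit given".
from datetime import date

extensions = [
    ("dumu7717", "1TB",   None),               # no limit given
    ("rexi0814", "300GB", date(2013, 9, 30)),  # Ende September 2013
    ("juro9204", "700GB", date(2013, 12, 31)), # Ende Dezember 2013
    ("auko1937", "500GB", date(2014, 3, 31)),  # Ende März 2014
]

today = date.today()
for user, quota, until in extensions:
    if until is None:
        status = "no limit given"
    elif until < today:
        status = "EXPIRED on %s -> ask user to archive, reset quota" % until
    else:
        status = "valid until %s" % until
    print("%-9s %-6s %s" % (user, quota, status))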

Cluster downtime

If a maintenance downtime of the cluster is required, send an email similar to the following to the HPC users' mailing list:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Betreff: [HPC-HERO] Maintenance downtime 11-13 June 2013 (announcement)

Dear Users of the HPC facilities,

this is to inform you about an overdue THREE-DAY MAINTENANCE DOWNTIME

FROM: Tuesday 11th June 2013, 7 am 
TO: Thursday 13th June 2013, 4 pm

This downtime window is required for essential maintenance work regarding
particular hardware components of HERO. Ultimately, the scheduled downtime will
fix longstanding issues caused by malfunctioning network switches.  Please note
that all running jobs will be killed if they have not finished by 11th June,
7 am. During the scheduled downtime, all queues and filesystems will be
unavailable.  We expect the HPC facilities to resume on Thursday afternoon. 

I will remind you about the upcoming three-day maintenance downtime at 
irregular intervals.

Please accept my apologies for any inconvenience caused
Oliver Melchert
  

In case the downtime needs to be extended, send an email similar to:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Betreff: [HPC-HERO] Delay returning the HPC system HERO to production status

Dear Users of the HPC Facilities,

we currently experience a DELAY RETURNING THE HPC SYSTEM TO PRODUCTION STATUS
since the necessary change of the hardware components took longer than
originally expected. The HPC facilities are expected to finally resume service
by

Friday 14th June 2013, 15:00 

We will notify you as soon as everything is back online. 

With kind regards
Oliver Melchert
  

At this point you do not need to supply many details yet. However, if another extension becomes necessary, you should provide some details; otherwise, prepare for complaints from the users. Such an email could look similar to:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Betreff: [HPC-HERO] Further delay returning the HPC system HERO to production status

Dear Users of the HPC Facilities,

as already communicated yesterday, we currently experience a DELAY RETURNING 
THE HPC SYSTEM TO PRODUCTION STATUS. The delay results from difficulties related to 
the maintenance work on the hardware components of HERO.

The original schedule for the maintenance work could not be kept. Some details
of the maintenance process are listed below:

According to the IT-services, the replacement of the old (malfunctioning)
network switches by IBM engineers worked out well (with no delay). However, the
configuration of the components by pro-com engineers took longer than the
previously estimated single day, causing the current delay.  Once the
configuration process is completed, the IT-service staff needs to perform
several tests, firmware updates and application tests, which will take
approximately one day.  After the last step is completed, the HPC facilities
will finally return to production status.

In view of the above difficulties, we ask for your understanding that the HPC
facilities will not be up until today 15:00. We hope that the HPC facilities
resume service by 

Monday 17th June 2013, 16:00 

We will notify you as soon as everything is back online and apologize for the 
inconvenience.
 
With kind regards
Oliver Melchert
  

Once the HPC system is up and ready, send an email similar to:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Betreff: [HPC-HERO] HPC systems have returned to production

Dear Users of the HPC Facilities,

this is to inform you that the maintenance work on the HPC systems has been
completed and the HPC component HERO has returned to production: HERO accepts
logins and has already started to process jobs.

Thank you for your patience and please accept my apologies for the extension of
the maintenance downtime and any inconvenience this might have caused
Oliver Melchert 
  


MOLCAS academic license

My question to the MOLCAS contact

 
Dear Dr. Veryazov,

my name is Oliver Melchert and currently I am in charge of the coordination of
scientific computing at the University of Oldenburg. Previously this
position was occupied by Reinhard Leidl, who corresponded with you.

I am writing to you since I have a question regarding a licensed software product
that was purchased earlier for our local HPC facilities. 

The Software product I'm referring to is the Quantum Chemistry Software MOLCAS,
for which we own an academic group license which will expire on 18.10.2013.

Now, my question is: in order to extend the license validity, what steps do I
have to follow, and can you guide me through them?

With kind regards
Dr. Oliver Melchert  

And their response

 
Dear Dr. Melchert,
In order to update the academic license for Molcas you should place a 
new order http://www.molcas.org/order.html
Please, print and sign the forms generated during the ordering.
These forms should be sent to me (e-mail attachment is OK).
After receiving the forms I will send you the updated license file.

There are two possibilities for the payment. By default - we will send 
you an invoice to be paid via bank transfer.
It is also possible to pay by a credit card.

     Best Regards,
                 Valera.

-- 
=================================================================
Valera Veryazov         * Tel:   +46-46-222 3110
Theoretical Chemistry   * Fax:   +46-46-222 8648
Chemical Center,        *
P.O.B. 124              * Valera.Veryazov@teokem.lu.se
S-221 00 Lund, Sweden   * http://www.teokem.lu.se/~valera

About MOLCAS: http://www.molcas.org/
-----------------------------------------------------------------
  


Large Matlab Jobs

Some Matlab users submit jobs with the maximum allowed number of workers (Matlab jargon for slots), i.e. 36. Usually these jobs get distributed over many hosts, e.g.:

 
job-ID  prior   name       user         state submit/start at     queue                  master ja-task-ID 
----------------------------------------------------------------------------------------------------------
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs004 MASTER        
                                                                  mpc_std_shrt.q@mpcs004 SLAVE         
                                                                  mpc_std_shrt.q@mpcs004 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs008 SLAVE         
                                                                  mpc_std_shrt.q@mpcs008 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs032 SLAVE         
                                                                  mpc_std_shrt.q@mpcs032 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs034 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs036 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs038 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs043 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs045 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs052 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs066 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs070 SLAVE         
                                                                  mpc_std_shrt.q@mpcs070 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs076 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs080 SLAVE         
                                                                  mpc_std_shrt.q@mpcs080 SLAVE         
                                                                  mpc_std_shrt.q@mpcs080 SLAVE         
                                                                  mpc_std_shrt.q@mpcs080 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs087 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs089 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs090 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs091 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs099 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs107 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs110 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs111 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs112 SLAVE         
                                                                  mpc_std_shrt.q@mpcs112 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE    
  

If such jobs perform a lot of I/O, this puts a big strain on the filesystem. For these large jobs the "parallel job memory issue" also becomes a problem: the master process has to account (in terms of memory) for all the connections to the other hosts, and if the master process runs out of memory the job gets killed. More common are 8-slot jobs, and even more common are jobs with fewer slots.
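The "parallel job memory issue" can be made concrete with a back-of-the-envelope estimate. The numbers in the sketch below (base memory of the master and per-connection overhead) are purely hypothetical placeholders; the point is only that the master's memory request has to grow with the number of workers.

# Back-of-the-envelope sketch of the "parallel job memory issue" for large
# MDCS jobs: the master keeps a connection to every worker, so its memory
# footprint grows with the number of slots.  All numbers are hypothetical.
BASE_MASTER_GB = 1.0   # memory the master needs on its own (assumed)
PER_WORKER_GB = 0.1    # overhead per worker connection (assumed)

def master_memory_gb(n_workers):
    """Rough estimate of the master's memory footprint."""
    return BASE_MASTER_GB + PER_WORKER_GB * n_workers

for n in (8, 16, 36):
    print("workers = %2d -> request at least ~%.1f GB for the master"
          % (n, master_memory_gb(n)))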


Login problems

Every now and then, external guests or regular users try to log in to the HPC system from outside the university, and the straightforward attempt via

 ssh abcd1234@hero.hpc.uni-oldenburg.de

fails, of course. Then, the user might report

 
Dear Oliver,

My name is Pavel Paulau, 
I tried today for the first time to log in in cluster: 

ssh exwi4008@hero.hpc.uni-oldenburg.de 

and got message: 
"Permission denied, please try again."

Could You say what is the reason? What should I do to get access?

Thanks.
Kind wishes,
Pavel
  

A possible response then might read

 
Dear Pavel,

at first sight your command line statement looks right, provided that you
try to log in to the HPC system from a terminal within the University of
Oldenburg. I also checked that your HPC account indeed exists (and it does :)).

As pointed out in the HPC user wiki, it makes a difference whether you attempt
to log in from a terminal within the University of Oldenburg or from outside the
university:
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Logging_in_to_the_system#From_within_the_University_.28intranet.29 

In case you want to log in to the HPC system from outside the university, I
recommend setting up a VPN connection via the gateway vpn2.uni-oldenburg.de as
pointed out in the above user-wiki entry. However, sometimes, even though the
VPN tunnel is correctly set up, the login procedure might fail due to problems
resolving the hostname. Then you might try to access the cluster via the IP
address of the master node. Just establish the VPN tunnel and try to access
HERO via

ssh exwi4008@10.140.1.61 

this should resolve the name issues.

With kind regards
Oliver
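
The name-resolution fallback described in the reply above can be checked before telling a user to switch to the IP address. The sketch below is only a small diagnostic and not part of any official tooling; the hostname and the master node IP are taken from the mail above, and the login name is a placeholder.

# Small diagnostic sketch for the login problem described above: if the
# hostname of the login node cannot be resolved (e.g. over the VPN tunnel),
# suggest falling back to the IP address of the master node.
import socket

HOSTNAME = "hero.hpc.uni-oldenburg.de"
FALLBACK_IP = "10.140.1.61"   # master node IP quoted in the mail above
USER = "abcd1234"             # placeholder login name

try:
    ip = socket.gethostbyname(HOSTNAME)
    print("hostname resolves to %s" % ip)
    print("try: ssh %s@%s" % (USER, HOSTNAME))
except socket.gaierror:
    print("hostname does not resolve; use the IP address instead:")
    print("try: ssh %s@%s" % (USER, FALLBACK_IP))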
  

HPC tutorial

Requesting seminar rooms

 
Hallo Herr Melchert,
die Buchungen habe ich eingetragen!

Gruß
Silke Harms


Raum- und Veranstaltungsbüro
Dezernat 4 / Gebäudemanagement
Carl von Ossietzky Universität Oldenburg

Telefon: 0441 / 798-2483


-----Ursprüngliche Nachricht-----
Von: Oliver Melchert 
Gesendet: Donnerstag, 24. Oktober 2013 10:28
An: Silke Ulrike Harms
Betreff: RE: Anfrage Raum für Einzel-/Blockveranstaltung

Hallo Frau Harms,

vielen Dank für die Liste der freien Termine. Nach Rücksprache mit meinem Kollegen würden wir gerne folgende Räume/Zeiten buchen: 

W04 1-162: 
Di: 19.11.13 - 14-16 Uhr
Mi: 20.11.13 - 16-18 Uhr

W01 0-008:
Do: 21.11.13 - 09-12 Uhr

der Einzelveranstaltung/Blockveranstaltung ist keine Nr. zugeordnet, es ist ein Pilotprojekt das, wenn erfolgreich, in den kommenden Semestern regulär (dann mit Veranstaltungsnummer) angeboten werden soll. Der Name der Veranstaltung lautet "A brief HPC Tutorial" und wird von Dr. Oliver Melchert und Dr. Stefan Albensoeder angeboten.

Mit herzlichen Grüßen
Oliver Melchert

________________________________________
From: Silke Ulrike Harms
Sent: Wednesday, October 23, 2013 10:09 AM
To: Oliver Melchert
Subject: AW: Anfrage Raum für Einzel-/Blockveranstaltung

Hallo Herr Melchert,
Sie können die freien Zeiten über Stud.IP in den jeweiligen Räumen einsehen (unter Raumbelegungen).
Ich habe Ihnen jetzt die derzeitigen Lücken rausgesucht:
W01 0-008
Mo 11.11.13 - 10-12 Uhr, Di 12.11.13 - 12-14 Uhr, Mi 13.11.13 - 08-10 + 16-18 Uhr, Do 14.11.13 - 08-10 + 14-16 Uhr, Fr 15.11.13 - 12-14 Uhr Mo 18.11.13 - ab 16 Uhr, Di 19.11.13 - 12-14 Uhr, Mi 20.11.13 - 08-10 Uhr, Do 21.11.13 - 08-12 + 14-16 Uhr, Fr 22.11.13 - ab 12 Uhr Mo 25.11.13 - ab 16 Uhr, Di 26.11.13 - 12-14 Uhr,  Mi 27.11.13 - 08-10 Uhr, Do 28.11.13 - 08-12 + 14-16 Uhr

W04 1-162
Di 12./19./26.11.13 - jeweils 14-16 Uhr
Mi 13./20./27.11.13 - jeweils ab 16 Uhr
Fr 15./22./29.11.13 - jeweils ab 14 Uhr

Bitte entscheiden Sie sich schnell, weil es zurzeit noch vielen Anfragen/Buchungen gibt.

Gruß
Silke Harms



Raum- und Veranstaltungsbüro
Dezernat 4 / Gebäudemanagement
Carl von Ossietzky Universität Oldenburg

Telefon: 0441 / 798-2483


-----Ursprüngliche Nachricht-----
Von: Oliver Melchert
Gesendet: Dienstag, 22. Oktober 2013 11:13
An: Silke Ulrike Harms
Betreff: RE: Anfrage Raum für Einzel-/Blockveranstaltung

Sehr geehrte Frau Harms,

vielen Dank für Ihre Antwort. Aufgrund von Urlaub/Krankheit konnten mein Kollege und ich uns auf keinen der vorgeschlagenen Termine festlegen.

Hiermit möchte ich erneut eine Anfrage für dieselben Räume (siehe unten angehängte E-Mail) im Zeitraum

11.11.2013 - 29.11.2013

stellen.

Mit herzlichen Grüßen
Oliver Melchert

________________________________________
From: Silke Ulrike Harms
Sent: Monday, September 23, 2013 1:42 PM
To: Oliver Melchert
Subject: AW: Anfrage Raum für Einzel-/Blockveranstaltung

Guten Tag Herr Melchert,
ich kann Ihnen folgende Raumangebote machen:
Montag 14. Oktober 2013
14-20 Uhr - W04 1-162 + 16-20 Uhr - W01 0-008 Dienstag 15. Oktober 2013
14-16 Uhr - W04 1-162 + 18-20 Uhr - W01 0-008 Mittwoch 16. Oktober 2013
16-20 Uhr - W04 1-162 + 08-12 Uhr - W01 0-008

Bitte teilen Sie mir mit, welche Buchungen ich vornehmen soll und unter welcher Nr. ich Ihre Veranstaltung finde (und die Räume buchen soll).

Gruß
Silke Harms



Raum- und Veranstaltungsbüro
Dezernat 4 / Gebäudemanagement
Carl von Ossietzky Universität Oldenburg

Telefon: 0441 / 798-2483


-----Ursprüngliche Nachricht-----
Von: Oliver Melchert
Gesendet: Freitag, 20. September 2013 10:00
An: Silke Ulrike Harms
Betreff: Anfrage Raum für Einzel-/Blockveranstaltung

Sehr geehrte Frau Harms,

mein Name ist Oliver Melchert und ich bekleide derzeit die Stelle des "Koordinators für das wissenschaftliche Rechnen". Für die Nutzer des Oldenburger Großrechners möchte ich, zusammen mit einem Kollegen, die Einzelveranstaltung/Blockveranstaltung "A brief HPC tutorial" anbieten. Für die Veranstaltung planen wir die Dauer von 4 x 1.5 Stunden ein.  Optimal wäre es, wenn wir an zwei aufeinanderfolgenden Tagen jeweils 2 x 1.5h anbieten könnten.

Wir rechnen mit max. 30 Teilnehmern und suchen einen geeigneten Raum für unser Vorhaben. Wir würden, sofern das möglich ist, die Veranstaltung gerne an zwei aufeinanderfolgenden Tagen im Zeitraum Oktober/November in den Wochen

14.10. - 19.10.
oder
28.10. - Ende November

anbieten. Die Veranstaltung soll neben Vorträgen auch praktische Übungen bieten.  Daher wäre es optimal, wenn wir für den letzten 1.5h Block in einen Rechnerraum ausweichen könnten. Da die meisten Nutzer am Standort Wechloy sitzen, wäre es super, wenn wir dort einen Seminarraum finden könnten. Geeignete Räume wären z.B.

W2-1-143
W2-1-148
W3-1-156
W4-1-162

Ein geeigneter Rechnerraum wäre z.B.

W1-0-008

Bezüglich der Uhrzeit sind wir flexibel. Wäre denn überhaupt noch Raum, unsere geplante Veranstaltung unterzubringen?

Mit freundlichen Grüßen
Dr. Oliver Melchert
  

Mail to users

 
Betr.: [HPC-HERO] Tutorial on High Performance Computing (19.-21. Nov)

Dear User of the HPC System,

this is to announce the first tutorial on "High Performance Computing" which
will take place from 19.11.2013 to 21.11.2013. More precisely, the tutorial
will be split into three sessions. The first two sessions cover parts
0.-IV. (listed below) and are held on the following dates:

Seminar-Room: W04 1-162:
Tue, 19.11.13 - 14-16 Uhr
Wed, 20.11.13 - 16-18 Uhr

The third session (part V.) comprises practical exercises which are meant to
illustrate some of the content presented in the earlier parts and is held at:

Computer-Lab: W01 0-008:
Thu, 21.11.13 - 09-12 Uhr

The target audience of this 1st HPC tutorial are new users of the local HPC
system; to benefit from the tutorial, they should be able to read and write
C programs. However, we are optimistic that we will be able to announce a
quite similar tutorial for all Matlab-focused users soon. If you would like
to attend the HPC tutorial, please send a brief response to this email.

The planned programme of this 1st HPC tutorial is

0. Introduction to HPC
   1. Motivation
   2. Architectures
   3. Overview over parallel models 

I. Cluster Overview:
   1. System Overview
   2. Modification of user environments via "module"
   3. Available compiler
   4. Available parallel environments
   5. Available Libraries
   6. Performance hints

II. Introduction to the usage of SGE:
    1. Introduction
    2. General Job submission 
    3. Single Slot jobs 
    4. Parallel Jobs 
    5. Monitoring and Controlling jobs 
      
III. Debugging and Profiling:
    1. Compiling programs for debugging
    2. Tracking memory issues
    3. Profiling

IV. Misc:
    1. Logging in from outside the university
    2. Mounting the HPC home directory
    3. Parallel environment memory issue
    4. Importance of allocating proper resources
   
V. Exercises (Computer-Lab):
    1. Try out the examples given in part II
    2. Estimate pi using Monte Carlo simulation
       (code provided serial+parallel using mpi;
       compile, submit and monitor jobs for different
       parameter settings)

With kind regards 
Oliver Melchert and Stefan Albensoeder
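
Exercise V.2 in the programme above refers to the standard Monte Carlo estimate of pi. The tutorial's actual handout code (serial and MPI variants) is not reproduced here; the sketch below is only a minimal serial illustration of the underlying idea: sample points uniformly in the unit square and count how many fall inside the quarter circle.

# Minimal serial sketch of exercise V.2: estimate pi by Monte Carlo sampling.
# (The tutorial hands out serial and MPI variants; this is only the basic idea.)
import random

def estimate_pi(n_samples, seed=0):
    """4 times the fraction of random points in the unit square that fall
    inside the quarter circle of radius 1 approximates pi."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    for n in (10**3, 10**5, 10**7):
        print("N = %8d  pi ~ %.5f" % (n, estimate_pi(n)))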
  

Confirmation for users

 
Dear USER,

this is to confirm your registration for the first tutorial on 
"High Performance Computing" which will be held at the following 
dates:

Seminar-Room W04 1-162:
Tue, 19.11.13 - 14-16 Uhr
Wed, 20.11.13 - 16-18 Uhr

Computer-Lab W01 0-008:
Thu, 21.11.13 - 09-12 Uhr

Thank you for signing up
Oliver Melchert and Stefan Albensoeder
  

Mail to IT Services

Contact the IT services and ask them to make sure that the participants of the HPC tutorial can log on to the HPC system from the computer lab.

 
Hallo Oliver,

wir haben das Subnetz freigeschaltet.
Kannst du mal probieren ob alles funktioniert.

Heute kann ich nicht an deiner Veranstaltung teilnehmen, da ich schon um 17 Uhr einen Termin habe.

Viele Grüße 
Felix

-----Ursprüngliche Nachricht-----
Von: Oliver Melchert 
Gesendet: Mittwoch, 20. November 2013 10:14
An: Jürgen Weiß; Felix Thole
Betreff: IP Adressen in Raum W01-0-008

Hallo Jürgen,
hallo Felix,

ich habe die IP Adressen der Rechner im Raum W01-0-008
nachgeschaut. Die ersten 3 Oktetts lauten auf:

134.106.45.XXX

Die Übungen sollen morgen von 9-12 Uhr in diesem 
Raum stattfinden.

Ist die obige Information ausreichend oder soll ich 
eine genaue Liste der vollständigen IP Adressen 
senden?

Viele Grüße
Oliver  

User-Wiki entry

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1

Corresponding mail to all users


 
Betr.: [HPC-HERO] Accompanying documents for HPC Tutorial

Dear User of the HPC System,

this is to inform you that the User Wiki page that collects the material 
related to the first tutorial on "High Performance Computing", which
took place from 19.11.2013 to 21.11.2013, is available at

Main Page > Basic Information > Examples > HPC Tutorial No1

under the link

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1

We would like to thank all users of the HPC components FLOW/HERO who 
attended this first HPC tutorial, and we are looking forward to hosting a 
further educational workshop tailored to all "MATLAB distributed
computing server" (MDCS) users at the end of January 2014 (more 
information will follow in due time).

Best regards
Oliver Melchert and Stefan Albensoeder
  

List of HPC publications

This page is intended to list publications that were supported by simulations on the HPC components FLOW/HERO. If you want to contribute to this list, please send an e-mail with subject:

[HPC-HERO or HPC-FLOW] Contribution to the list of HPC publications

to the coordinator of scientific computing (position currently substituted by: oliver.melchert@uni-oldenburg.de). It would be highly appreciated if you could provide, within that mail, the digital object identifier (DOI) that refers to your article. If the journal you published your article(s) in offers to export citations, you may alternatively send one of the formats supported by the journal (preferably: BibTeX).

NOTE: We kindly ask you to acknowledge the HPC components FLOW/HERO within research articles that were supported by simulations on the HPC facilities.

2012

  1. Claussen, G. and Apolo, L. and Melchert, O. and Hartmann, A. K.,
    Analysis of the loop length distribution for the negative-weight percolation problem in dimensions d=2 through d=6,
    Physical Review E 86, 5 (2012), 10.1103/PhysRevE.86.056708.

2013

  1. Melchert, O.,
    Percolation thresholds on planar Euclidean relative-neighborhood graphs,
    Physical Review E 87, 4 (2013), 10.1103/PhysRevE.87.042106.
  2. Melchert, O. and Hartmann, A. K.,
    Information-theoretic approach to ground-state phase transitions for two- and three-dimensional frustrated spin systems,
    Physical Review E 87, 2 (2013), 10.1103/PhysRevE.87.022107.
  3. Melchert, O.,
    Universality class of the two-dimensional randomly distributed growing-cluster percolation model,
    Physical Review E 87, 2 (2013), 10.1103/PhysRevE.87.022115.
  4. Norrenbrock, C. and Melchert, O. and Hartmann, A. K.,
    Paths in the minimally weighted path model are incompatible with Schramm-Loewner evolution,
    Physical Review E 87, 3 (2013), 10.1103/PhysRevE.87.032142.
  5. Melchert, O. and Hartmann, A. K.,
    Typical and large-deviation properties of minimum-energy paths on disordered hierarchical lattices,
    The European Physical Journal B 86, 7 (2013), 10.1140/epjb/e2013-40230-1.


List of user wiki pages

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Brief_Introduction_to_HPC_Computing

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Matlab_Examples_using_MDCS

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Queues_and_resource_allocation

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Unix_groups

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Mounting_Directories_of_FLOW_and_HERO#OSX

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=File_system (Snapshot functionality)

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=STATA

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Memory_Overestimation

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Debugging

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Profiling_using_gprof

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1

MISC

Limited resource quota sets

Slot limits for different user groups are set using resource quota sets (rqs)

 
alxo9476@hero01:~$ qconf -srqs
{
   name         max_slots_for_express_queue_FLOW
   description  "limits number of slots for express queue on FLOW"
   enabled      TRUE
   limit        users {@flowusers} queues {cfd_xtr_expr.q} to slots=40
}
{
   name         max_slots_for_pe_mdcs
   description  "limits number of slots for PE mdcs"
   enabled      TRUE
   limit        users {*} pes {mdcs} to slots=36
}
{
   name         max_slots_for_user_groups_HERO
   description  "limits number of slots of users on HERO"
   enabled      TRUE
   limit        users {@herousers} to slots=360
}