Non-public templates: Coordination of Scientific Computing

== Ranking of accumulated running times ==

Data accumulated from the full accounting file at <tt>/cm/shared/apps/sge/6.2u5p2/default/common/</tt>:
   <nowiki>
# accumulated running time by user:
# (rank)(userName)(nJobs)(accRunTime sec)(accRunTime Y:D:H:M:S) // (realName)
1  : geri2579    90754  8762221362      277:309:14:22:42    // (Christoph Norrenbrock)
2  : dumu7717  221971  5999577984      190:089:13:26:24    // (Jan Mitschker)
3  : pali6150    6795  2561650013        81:083:17:26:53    // (Timo Dewenter)
4  : kuxo7281  387622  2465015468        78:060:06:31:08    // (Christoph Echtermeyer)
5  : mujo4069    28887  2416553044        76:229:08:44:04    // (Dennis Hinrichs)
6  : juro9204    52786  1586555718        50:112:21:55:18    // (Wilke Dononelli)
7  : haft0127    6143  1502848389        47:239:01:53:09    // (Alexander Hartmann)
8  : ruxi6902    24936  1221936621        38:272:18:50:21    // (Niko Moritz)
9  : nero7893  238888  781741169        24:287:22:19:29    // (Karsten Looschen)
10 : rexi0814    7053  627322585        19:325:16:16:25    // (Hendrik Spieker)
11 : joji0479    19113  618288962        19:221:02:56:02    // (Bjoern Ahrens)
12 : zett7687    11998  600740860        19:018:00:27:40    // (Gunnar Claussen)
13 : alxo9476  246372  561958009        17:299:03:26:49    // (Oliver Melchert)
14 : mohu0982    42701  454324289        14:148:09:11:29    // (Hendrik Schawe)
15 : wane3073    47826  419461422        13:109:21:03:42    // (Pascal Fieth)
16 : mazo4669  176603  324777634        10:109:00:00:34    // (Joerg-Hendrik Bach)
17 : axex3705  145689  316742167        10:015:23:56:07    // (Bjoern Wolff)
18 : repe1429    2066  213681179        6:283:03:52:59    // (Gabriele Tomaschun)
19 : kefi1701    2548  154479289        4:327:22:54:49    // (Markus Manssen)
20 : wole2741    1404  109671463        3:174:08:17:43    // (Enno Gent)
21 : axto3490    1482  101515111        3:079:22:38:31    // (Lena Albers)
22 : gira3297    49356    92688463        2:342:18:47:43    // (Marc Rene Schaedler)
23 : axfa8508      712    80076221        2:196:19:23:41    // (Daniel Ritterskamp)
24 : mama5378      813    76950830        2:160:15:13:50    // (Erik Asplund)
25 : alpe3589    2190    73408223        2:119:15:10:23    // (Ralf Buschermoehle)
26 : alpo8118      360    68031701        2:057:09:41:41    // (Janek Greskowiak)
27 : tefi0368      522    64070458        2:011:13:20:58    // (Thomas Uwe Mueller)
28 : mapa5066      881    61950919        1:352:00:35:19    // (Muhammad Mumtaz Ahmad)
29 : hoeb8124      549    56335561        1:287:00:46:01    // (Matti Reissmann)
30 : noje7983    2017    55047182        1:272:02:53:02    // (Hamid Rahimi)
31 : bamo9780    9758    52383188        1:241:06:53:08    // (Robert Rehr)
32 : jopp2853    2250    52360926        1:241:00:42:06    // (Matthias Schramm)
33 : liro0805    60351    49192498        1:204:08:34:58    // (Christian Hinrichs)
34 : esgi3777    29098    48937263        1:201:09:41:03    // (Arne-Freerk Meyer)
35 : asju8096    7772    47230815        1:181:15:40:15    // (Hendrik Kayser)
36 : mexi8700    1378    44485489        1:149:21:04:49    // (Robert Guenther)
37 : diab3109      647    44309313        1:147:20:08:33    // (Bettina Gertjerenken)
38 : rumu5289      331    38484335        1:080:10:05:35    // (Dennis Lutters)
39 : lick1639    1593    38325087        1:078:13:51:27    // (none)
40 : teaf1672    1899    36772835        1:060:14:40:35    // (Markus Niemann)
41 : dihl9738    5007    35053168        1:040:16:59:28    // (Stefan Albensoeder)
42 : sawo0024    3610    32917355        1:015:23:42:35    // (Fabian Gieseke)
43 : tasi6754      601    30416694        0:352:01:04:54    // (Thorsten Kolling)
44 : zatu0050      986    29077086        0:336:12:58:06    // (Gerald Steinfeld)
45 : guha6809      421    27004428        0:312:13:13:48    // (Nils Burchardt)
46 : dege2737      316    26305116        0:304:10:58:36    // (Andre Schaefer)
47 : auwo0040      978    24376479        0:282:03:14:39    // (Nicole Stoffels)
48 : rehi3280      598    24030935        0:278:03:15:35    // (Patrick Zark)
49 : rawa6912      132    23203817        0:268:13:30:17    // (Rajat Karnatak)
50 : peft9847    4548    23156304        0:268:00:18:24    // (Stephan Spaeth)
51 : axar4346    1048    23057256        0:266:20:47:36    // (Wided Medjroubi)
52 : meex7858      679    22044487        0:255:03:28:07    // (Nils Andre Treiber)
53 : asto4412      154    20590474        0:238:07:34:34    // (Marcel David Fabian)
54 : dofo5522    9426    20060762        0:232:04:26:02    // (Hendrike Klein-Hennig)
55 : fupa5629    2644    20016755        0:231:16:12:35    // (Carlos Peralta)
56 : norg2515      159    19599825        0:226:20:23:45    // (Francisco Santos Alamillos)
57 : pedo3100      301    19443490        0:225:00:58:10    // (Nils Ayral)
58 : raga4343    4121    17908760        0:207:06:39:20    // (Ivan Herraez Hernandez)
59 : talu7946    1848    16004957        0:185:05:49:17    // (Elia Daniele)
60 : ergi3581      247    14169686        0:164:00:01:26    // (Jan Warfsmann)
61 : febu8581      38    13468370        0:155:21:12:50    // (Shreya Sah)
62 : hael3199      298    12721996        0:147:05:53:16    // (none)
63 : doal7591      275    12545249        0:145:04:47:29    // (Henrik Beil)
64 : adje6680      136    11221361        0:129:21:02:41    // (Olaf Bininda-Emonds)
65 : hoke3495      644    9400935        0:108:19:22:15    // (Constantin Junk)
66 : reeb1775  195462    8877633        0:102:18:00:33    // (Alexey Ryabov)
67 : muck2227    5513    8760822        0:101:09:33:42    // (Reinhard Leidl)
68 : xuhe4555      143    8722513        0:100:22:55:13    // (Florian Habecker)
69 : sobe5479      277    8073869        0:093:10:44:29    // (Mohamed Cherif)
70 : lewo8864      530    8016602        0:092:18:50:02    // (Hannes Hochstein)
71 : zaer0019    1647    7986934        0:092:10:35:34    // (none)
72 : hupe3583      200    6816092        0:078:21:21:32    // (Zacharais Njam Mokom)
73 : zugo5243    2194    6728438        0:077:21:00:38    // (Bjoern Witha)
74 : woza3934      131    6460674        0:074:18:37:54    // (Stefanie Ruehlicke)
75 : zehu1974      276    6297073        0:072:21:11:13    // (Robert Roehse)
76 : xuzu2182      208    6028628        0:069:18:37:08    // (Habib Faraz)
77 : xift8589    1329    5978239        0:069:04:37:19    // (Bintoro Anang Subagyo)
78 : moge1512      259    5958491        0:068:23:08:11    // (Alexander Buss)
79 : fewu9781      32    5556223        0:064:07:23:43    // (Jakob Raphael Spiegelberg)
80 : edfi4106      206    5123299        0:059:07:08:19    // (Wai-Leung Yim)
81 : nufa8270      212    4938852        0:057:03:54:12    // (Feifei Xiong)
82 : kuli5479      97    4799076        0:055:13:04:36    // (Vincent Hess)
83 : paxu8307      201    4731926        0:054:18:25:26    // (Martin Doerenkaemper)
84 : adto6352      54    4530062        0:052:10:21:02    // (Christian Lasar)
85 : jolo0127      82    4340981        0:050:05:49:41    // (Angela Josupeit)
86 : lozi7895    4173    4268724        0:049:09:45:24    // (Felix Thole)
87 : buza0896      482    4094419        0:047:09:20:19    // (Iko Pieper)
88 : beex6806      305    3980809        0:046:01:46:49    // (Sebastian Grashorn)
89 : jurf9330      133    3944954        0:045:15:49:14    // (Frederik Haack)
90 : juad6850      70    3632885        0:042:01:08:05    // (Bernhard Stoevesandt)
91 : tugt2159      698    3393637        0:039:06:40:37    // (none)
92 : fesi4140      232    3317465        0:038:09:31:05    // (Lueder von Bremen)
93 : arbu5607      375    3313414        0:038:08:23:34    // (none)
94 : beau4118      57    3263987        0:037:18:39:47    // (Crispin Reinhold)
95 : zana6011      459    3219100        0:037:06:11:40    // (Davide Trabucchi)
96 : seho2708      258    3209773        0:037:03:36:13    // (Bastian Dose)
97 : pord4261      46    3186136        0:036:21:02:16    // (Nils Kirrkamm)
98 : elbi1717      163    2873124        0:033:06:05:24    // (Chai Heng Lim)
99 : sidu8566      221    2828467        0:032:17:41:07    // (none)
100: abgu0243      42    2536063        0:029:08:27:43    // (Thorsten Kluener)
101: sino6087      330    2483522        0:028:17:52:02    // (Valeria Angelino)
102: esas0656    2720    2375967        0:027:11:59:27    // (Hauke Beck)
103: penu5836      118    2252853        0:026:01:47:33    // (none)
104: rael0338      25    2173884        0:025:03:51:24    // (Florian Loose)
105: eddo9274      231    2031643        0:023:12:20:43    // (Karsten Lettmann)
106: gusu8312    2709    2000390        0:023:03:39:50    // (Elisabeth Stuetz)
107: weaf4518      117    1960521        0:022:16:35:21    // (Daniel Ahlers)
108: xaed4158      60    1877839        0:021:17:37:19    // (Lukas Halekotte)
109: tode0315      705    1708668        0:019:18:37:48    // (none)
110: xuer9386      72    1654003        0:019:03:26:43    // (Hugues Ambroise)
111: xojo9092      23    1566894        0:018:03:14:54    // (Kajari Bera)
112: muxu6688    1222    1405291        0:016:06:21:31    // (Robert Schadek)
113: wuge3108      46    1224412        0:014:04:06:52    // (none)
114: gare6232      45    1177848        0:013:15:10:48    // (Michael Schwarz)
115: zuka9781      25    1126226        0:013:00:50:26    // (none)
116: auzu2321      130      887319        0:010:06:28:39    // (Liudmila Moskaleva)
117: fewo4259      83      812040        0:009:09:34:00    // (Julia Schloen)
118: naji9738      17      803827        0:009:07:17:07    // (Henning Grossekappenberg)
119: kasu8272      815      787089        0:009:02:38:09    // (Benjamin Wahl)
120: sona3432        4      762062        0:008:19:41:02    // (Marie Arndt)
121: bofe5314      137      747591        0:008:15:39:51    // (Terno Ohsawa)
122: dael3266      44      628808        0:007:06:40:08    // (none)
123: boch5350    4447      475622        0:005:12:07:02    // (hpc-Clonebusters)
124: foxu9815      35      447902        0:005:04:25:02    // (Sonja Drueke)
125: pedi6862      37      447610        0:005:04:20:10    // (Jose-Placido Parra Viol)
126: kano8824      449      395237        0:004:13:47:17    // (none)
127: leck7200      13      235605        0:002:17:26:45    // (Francisco Toja-Silva)
128: gise4802      25      217621        0:002:12:27:01    // (Maria Tschikin)
129: gepp0026      63      206307        0:002:09:18:27    // (Jan Vogelsang)
130: tezi2895      73      177902        0:002:01:25:02    // (none)
131: nixi9106      18      175225        0:002:00:40:25    // (Henning Schepker)
132: lulo2927      102      152011        0:001:18:13:31    // (Lukas Vollmer)
133: gaha8290        3      126961        0:001:11:16:01    // (Ksenia Guseva)
134: ralo6199      107      110025        0:001:06:33:45    // (Hugues Ambroise)
135: nine4710      12      101010        0:001:04:03:30    // (Rainer Koch)
136: limo1478    53766      94097        0:001:02:08:17    // (Maxim Klimenko)
137: kisa9270      15      69213        0:000:19:13:33    // (Murali Sukumaran)
138: fupu4553        4      51252        0:000:14:14:12    // (Vasken Ketchedjian)
139: jihu5122      488      30033        0:000:08:20:33    // (Marc Bromm)
140: bogo2286      40      29448        0:000:08:10:48    // (Timo Gerkmann)
141: zurn7015      21      23579        0:000:06:32:59    // (Heidelinde Roeder)
142: daes8547      151      21065        0:000:05:51:05    // (Stefan Rach)
143: lodu8387      14      11941        0:000:03:19:01    // (Anna Vanselow)
144: auko1937        6      10623        0:000:02:57:03    // (Eike Mayland-Quellhorst)
145: medi4340        2        8543        0:000:02:22:23    // (Vanessa Schakau)
146: guxa1456      38        7023        0:000:01:57:03    // (Derya Dalga)
147: kode4290        8        3842        0:000:01:04:02    // (Martin Klein-Hennig)
148: lega0306        1        2893        0:000:00:48:13    // (Thomas Breckel)
149: feze2916      21        2705        0:000:00:45:05    // (Martin Reiche)
150: zeas7445      52        2301        0:000:00:38:21    // (Ina Kodrasi)
151: joho0429      16        2033        0:000:00:33:53    // (Ante Jukic)
152: fiwi0088      20        1702        0:000:00:28:22    // (Dorothee Hodapp)
153: sott5485      42        1354        0:000:00:22:34    // (none)
154: argu7102        6        1089        0:000:00:18:09    // (hpc-guest001)
155: fime4215        8        530        0:000:00:08:50    // (Hauke Wurps)
156: gaje2471        2        501        0:000:00:08:21    // (Jochem Rieger)
157: giku5867        2        303        0:000:00:05:03    // (Philipp Kraemer)
158: kako0048        2        291        0:000:00:04:51    // (Thomas Greve)
159: mimo4729        8        284        0:000:00:04:44    // (Benjamin Cauchi)
160: garu0840        4        198        0:000:00:03:18    // (Angelina Paulmann)
161: root          29        165        0:000:00:02:45    // (root)
162: teer6901        1        147        0:000:00:02:27    // (Rainer Beutelmann)
163: tund5075        6        127        0:000:00:02:07    // (Juergen Weiss)
164: beba2086        1        111        0:000:00:01:51    // (Maria Wieckhusen)
165: merd1369        3          74        0:000:00:01:14    // (hpc-guest012)
166: esfu4434        4          26        0:000:00:00:26    // (Reemda Jaeschke)
167: nusi9376        1          11        0:000:00:00:11    // (Thomas Kaspereit)
168: xonu1606        5          3        0:000:00:00:03    // (Christoph Gerken)
169: pebu4515        8          2        0:000:00:00:02    // (Nikolaos Fytas)
170: nime9670        3          0        0:000:00:00:00    // (Chandan Kumar)
171: fide6340        1          0        0:000:00:00:00    // (Carsten Engelberts)


# accumulated running time by ag:
# (rank)(agName)(accRunTime sec)(accRunTime Y:D:H:M:S)(nUsers)
0  : agcompphys                16448685039      521:213:07:10:39      14
1  : agtheochem                8774834718      278:090:14:05:18      23
2  : agmodelling              2553971024        80:359:20:23:44      4
3  : agcondmat                2461079978        78:014:17:19:38      3
4  : agmediphys                1739915619        55:062:21:53:39      7
5  : fw                        418542668        13:099:05:51:08      36
6  : agcompint                  371704009        11:287:03:06:49      3
7  : agmolchem                  321270004        10:068:09:40:04      9
8  : iwes                        92596585        2:341:17:16:25      6
9  : agses                      73408223        2:119:15:10:23      1
10  : aghydrogeo                  72830777        2:112:22:46:17      2
11  : agsigproc                  57538354        1:300:22:52:34      8
12  : agenvinf                    49192498        1:204:08:34:58      1
13  : agstatphys                  36772835        1:060:14:40:35      1
14  : agcomplexsys                23354357        0:270:07:19:17      3
15  : hrz                        13029673        0:150:19:21:13      3
16  : agphysocean                11595939        0:134:05:05:39      7
17  : agsystematics              11221361        0:129:21:02:41      1
18  : agcompchem                  6128714        0:070:22:25:14      3
19  : agfieldtheo                  5978239        0:069:04:37:19      1
20  : agcoordchem                  2173884        0:025:03:51:24      1
21  : arv                          1654003        0:019:03:26:43      1
22  : agdistsys                    1405291        0:016:06:21:31      1
23  : agsofteng                    569722        0:006:14:15:22      3
24  : agnanoopt                    206307        0:002:09:18:27      1
25  : aggeneralpsych                21065        0:000:05:51:05      1
26  : agplantbiodiv                  10623        0:000:02:57:03      1
27  : agbiopsych                      2893        0:000:00:48:13      2
28  : agpsychdh                      2705        0:000:00:45:05      1
29  : hpcguest                        1163        0:000:00:19:23      2
30  : agancp                          501        0:000:00:08:21      1
31  : aganimalbiodiv                  303        0:000:00:05:03      1
32  :                                  165        0:000:00:02:45      1
33  : agzoophys                        147        0:000:00:02:27      1
34  : agaccounting                      37        0:000:00:00:37      2
35  : agmediainf                        0        0:000:00:00:00      1


   </nowiki>
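
The tables above (and the per-PE summary in the following section) can be regenerated from the accounting file. Below is a minimal sketch, assuming the colon-separated record format described in the SGE accounting(5) man page and a file named <tt>accounting</tt> in the directory quoted above; the field positions are an assumption and should be verified on the system. Keying on the group field instead of the owner field yields the per-ag table; whether the wallclock time should additionally be weighted by the slots field depends on the accounting convention used.
   <nowiki>
#!/usr/bin/env python
# Minimal sketch: accumulate per-user running times from an SGE accounting
# file. Field positions follow the accounting(5) man page (assumption:
# group = 2, owner = 3, ru_wallclock = 13); verify on your installation.
from collections import defaultdict

ACCT = "/cm/shared/apps/sge/6.2u5p2/default/common/accounting"
OWNER, RU_WALLCLOCK = 3, 13

def fmt(sec):
    """Format seconds as Y:D:H:M:S, using 365-day years as in the tables."""
    y, rem = divmod(sec, 365 * 86400)
    d, rem = divmod(rem, 86400)
    h, rem = divmod(rem, 3600)
    m, s = divmod(rem, 60)
    return "%d:%03d:%02d:%02d:%02d" % (y, d, h, m, s)

njobs, runtime = defaultdict(int), defaultdict(int)
with open(ACCT) as fh:
    for line in fh:
        if line.startswith("#"):                  # skip comment header
            continue
        fields = line.split(":")
        njobs[fields[OWNER]] += 1
        runtime[fields[OWNER]] += int(float(fields[RU_WALLCLOCK]))

for rank, user in enumerate(sorted(runtime, key=runtime.get, reverse=True), 1):
    print("%-4d: %-10s %8d %14d %22s" % (rank, user, njobs[user],
                                         runtime[user], fmt(runtime[user])))
   </nowiki>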


== Usage of the different parallel environments ==

   <nowiki>
# (PE Name)(nJobs)(accumRuntime sec)(accumRunTime Y:D:H:M:S)(nUsers)
NONE                    1889850        23765160753      753:214:17:32:33      113
ansys                   160                1728858        0:020:00:14:18      3
impi                    15624            163695023        5:069:14:50:23      29
impi41                  2597              77974431        2:172:11:33:51      16
impi41_long             46                 1184370        0:013:16:59:30      1
impi_test               2                        5        0:000:00:00:05      1
java                    355                   7618        0:000:02:06:58      1
java_job_array          89                    1373        0:000:00:22:53      2
linda                   72                 4605247        0:053:07:14:07      3
linda_long              6                 11243499        0:130:03:11:39      2
make                    1                        2        0:000:00:00:02      1
mdce                    162                  11037        0:000:03:03:57      1
mdcs                    7802              39937469        1:097:05:44:29      20
molcas                  224502          6460534736      204:314:16:58:56      6
molcas_cluster          586                1049253        0:012:03:27:33      1
molcas_cluster_long     1                     5946        0:000:01:39:06      1
molcas_long             3                   762060        0:008:19:41:00      1
molcas_smp              26                  613101        0:007:02:18:21      1
mpich                   13                  133836        0:001:13:10:36      3
mpich2_mpd              927                4210039        0:048:17:27:19      3
mvapich                 2                       10        0:000:00:00:10      1
openmpi                 2009              25887849        0:299:15:04:09      20
openmpi_ib              16871            294649501        9:125:07:05:01      40
openmpi_ib_long         713               66437180        2:038:22:46:20      8
smp                     99320           2689147891       85:099:09:31:31      52
smp_long                44                14779112        0:171:01:18:32      12
starccmp                174                2024219        0:023:10:16:59      4
   </nowiki>

== HPC tutorial ==

=== Requesting seminar rooms ===

   <nowiki>
Hello Mr Melchert,
I have entered the bookings!

Regards
Silke Harms

Room and Events Office (Raum- und Veranstaltungsbüro)
Division 4 / Facility Management
Carl von Ossietzky Universität Oldenburg

Phone: 0441 / 798-2483

-----Original Message-----
From: Oliver Melchert
Sent: Thursday, 24 October 2013 10:28
To: Silke Ulrike Harms
Subject: RE: Room request for a single/block course

Hello Ms Harms,

thank you very much for the list of available slots. After consulting my
colleague, we would like to book the following rooms/times:

W04 1-162:
Tue: 19.11.13 - 14-16 Uhr
Wed: 20.11.13 - 16-18 Uhr

W01 0-008:
Thu: 21.11.13 - 09-12 Uhr

No course number is assigned to the single/block course; it is a pilot project
which, if successful, is to be offered regularly (then with a course number) in
the coming semesters. The course is called "A brief HPC Tutorial" and will be
given by Dr. Oliver Melchert and Dr. Stefan Albensoeder.

With kind regards
Oliver Melchert

________________________________________
From: Silke Ulrike Harms
Sent: Wednesday, October 23, 2013 10:09 AM
To: Oliver Melchert
Subject: RE: Room request for a single/block course

Hello Mr Melchert,
you can view the free times of the respective rooms via Stud.IP (under room
occupancies). I have picked out the current gaps for you:

W01 0-008
Mon 11.11.13 - 10-12 Uhr, Tue 12.11.13 - 12-14 Uhr, Wed 13.11.13 - 08-10 + 16-18 Uhr,
Thu 14.11.13 - 08-10 + 14-16 Uhr, Fri 15.11.13 - 12-14 Uhr, Mon 18.11.13 - from 16 Uhr,
Tue 19.11.13 - 12-14 Uhr, Wed 20.11.13 - 08-10 Uhr, Thu 21.11.13 - 08-12 + 14-16 Uhr,
Fri 22.11.13 - from 12 Uhr, Mon 25.11.13 - from 16 Uhr, Tue 26.11.13 - 12-14 Uhr,
Wed 27.11.13 - 08-10 Uhr, Thu 28.11.13 - 08-12 + 14-16 Uhr

W04 1-162
Tue 12./19./26.11.13 - 14-16 Uhr each
Wed 13./20./27.11.13 - from 16 Uhr each
Fri 15./22./29.11.13 - from 14 Uhr each

Please decide quickly, since there are currently still many requests/bookings.

Regards
Silke Harms

Room and Events Office (Raum- und Veranstaltungsbüro)
Division 4 / Facility Management
Carl von Ossietzky Universität Oldenburg

Phone: 0441 / 798-2483

-----Original Message-----
From: Oliver Melchert
Sent: Tuesday, 22 October 2013 11:13
To: Silke Ulrike Harms
Subject: RE: Room request for a single/block course

Dear Ms Harms,

thank you very much for your reply. Due to vacation/illness, my colleague and
I could not commit to any of the proposed dates.

I would hereby like to submit a new request for the same rooms (see the e-mail
attached below) for the period

11.11.2013 - 29.11.2013

With kind regards
Oliver Melchert

________________________________________
From: Silke Ulrike Harms
Sent: Monday, September 23, 2013 1:42 PM
To: Oliver Melchert
Subject: RE: Room request for a single/block course

Good day Mr Melchert,
I can offer you the following rooms:

Monday 14 October 2013: 14-20 Uhr - W04 1-162 + 16-20 Uhr - W01 0-008
Tuesday 15 October 2013: 14-16 Uhr - W04 1-162 + 18-20 Uhr - W01 0-008
Wednesday 16 October 2013: 16-20 Uhr - W04 1-162 + 08-12 Uhr - W01 0-008

Please let me know which bookings I should make and under which number I can
find your course (and should book the rooms).

Regards
Silke Harms

Room and Events Office (Raum- und Veranstaltungsbüro)
Division 4 / Facility Management
Carl von Ossietzky Universität Oldenburg

Phone: 0441 / 798-2483

-----Original Message-----
From: Oliver Melchert
Sent: Friday, 20 September 2013 10:00
To: Silke Ulrike Harms
Subject: Room request for a single/block course

Dear Ms Harms,

my name is Oliver Melchert and I currently hold the position of "Coordinator
of Scientific Computing". Together with a colleague, I would like to offer the
single/block course "A brief HPC tutorial" for the users of the Oldenburg HPC
cluster. We plan a total duration of 4 x 1.5 hours; ideally, we would offer
2 x 1.5 h on each of two consecutive days.

We expect at most 30 participants and are looking for a suitable room for this
purpose. If possible, we would like to offer the course on two consecutive
days in October/November, in the weeks

14.10. - 19.10.
or
28.10. - end of November

Besides lectures, the course is also to feature practical exercises. It would
therefore be ideal if we could move to a computer lab for the final 1.5 h
block. Since most users are located at the Wechloy campus, it would be great
if we could find a seminar room there. Suitable rooms would be, e.g.,

W2-1-143
W2-1-148
W3-1-156
W4-1-162

A suitable computer lab would be, e.g.,

W1-0-008

We are flexible regarding the time of day. Would there still be room at all to
accommodate our planned course?

With kind regards
Dr. Oliver Melchert
   </nowiki>

=== Mail to users ===

   <nowiki>
Subject: [HPC-HERO] Tutorial on High Performance Computing (19.-21. Nov)

Dear User of the HPC System,

this is to announce the first tutorial on "High Performance Computing", which
will take place from 19.11.2013 to 21.11.2013. More precisely, the tutorial
will be split into three sessions. The first two sessions feature the parts
0.-IV. (listed below) and are held at the following dates:

Seminar-Room: W04 1-162:
Tue, 19.11.13 - 14-16 Uhr
Wed, 20.11.13 - 16-18 Uhr

The third session (part V.) comprises practical exercises which are meant to
illustrate some of the content presented in the earlier parts and is held at:

Computer-Lab: W01 0-008:
Thu, 21.11.13 - 09-12 Uhr

The target audience of this 1st HPC tutorial are new users of the local HPC
system, for whom basic skills in reading and writing C programs will be of
benefit. However, we are optimistic that we will
be able to announce a quite similar tutorial for all Matlab-focused users
soon. If you would like to attend the HPC tutorial, please send a brief
response to this email.

The planned programme of this 1st HPC tutorial is
 
0. Introduction to HPC
  1. Motivation
  2. Architectures
  3. Overview over parallel models
 
I. Cluster Overview:
  1. System Overview
  2. Modification of user environments via "module"
  3. Available compilers
  4. Available parallel environments
  5. Available Libraries
  6. Performance hints
 
II. Introduction to the usage of SGE:
    1. Introduction
    2. General Job submission
    3. Single Slot jobs
    4. Parallel Jobs
    5. Monitoring and Controlling jobs
        
III. Debugging and Profiling:
    1. Compiling programs for debugging
    2. Tracking memory issues
    3. Profiling
 
IV. Misc:
    1. Logging in from outside the university
    2. Mounting the HPC home directory
    3. Parallel environment memory issue
    4. Importance of allocating proper resources
 
V. Exercises (Computer-Lab):
    1. Try out the examples given in part II
    2. Estimate pi using Monte Carlo simulation
      (code provided serial+parallel using mpi;
      compile, submit and monitor jobs for different
      parameter settings)
 
With kind regards
Oliver Melchert and Stefan Albensoeder
  </nowiki>
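
Exercise V.2 above was accompanied by serial and MPI-parallel C code handed out during the tutorial. For illustration only, a minimal serial sketch of the same estimator (hit-or-miss sampling of the unit quarter circle) might look as follows:
   <nowiki>
#!/usr/bin/env python
# Minimal serial sketch of exercise V.2: estimate pi by sampling random
# points in the unit square and counting hits inside the quarter circle.
# (The tutorial itself handed out serial and MPI-parallel C code.)
import random

def estimate_pi(n_samples=10**6, seed=42):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    print("pi ~= %f" % estimate_pi())
   </nowiki>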
 
=== Confirmation for users ===
 
  <nowiki>
Dear USER,
 
this is to confirm your registration for the first tutorial on
"High Performance Computing" which will be held at the following
dates:
 
Seminar-Room W04 1-162:
Tue, 19.11.13 - 14-16 Uhr
Wed, 20.11.13 - 16-18 Uhr
 
Computer-Lab W01 0-008:
Thu, 21.11.13 - 09-12 Uhr
 
Thank you for signing up
Oliver Melchert and Stefan Albensoeder
  </nowiki>
 
=== Mail to IT Services ===
 
Contact the IT services and ask them to make sure that the participants of the HPC tutorial can log on to the HPC system from the
computer lab.
 
  <nowiki>
Hello Oliver,
 
we have enabled the subnet.
Could you check whether everything works?
 
I cannot attend your course today, since I already have an appointment at 17:00.
 
Best regards
Felix
 
-----Original Message-----
From: Oliver Melchert
Sent: Wednesday, 20 November 2013 10:14
To: Jürgen Weiß; Felix Thole
Subject: IP addresses in room W01-0-008
 
Hello Jürgen,
hello Felix,
 
I have looked up the IP addresses of the machines in room W01-0-008.
The first three octets read:
 
134.106.45.XXX
 
The exercises are scheduled to take place in this room tomorrow from
9-12 Uhr.
 
Is the above information sufficient, or should I send
an exact list of the complete IP addresses?
 
Best regards
Oliver  </nowiki>
 
=== User-Wiki entry ===
 
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1
 
Corresponding mail to all users
 
 
  <nowiki>
Subject: [HPC-HERO] Accompanying documents for HPC Tutorial
 
Dear User of the HPC System,
 
this is to inform you that the User Wiki page that collects the material
related to the first tutorial on "High Performance Computing", which
took place from 19.11.2013 to 21.11.2013, is available at
 
Main Page > Basic Information > Examples > HPC Tutorial No1
 
under the link
 
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1
 
We would like to thank all users of the HPC components FLOW/HERO who
attended this first HPC tutorial, and we are looking forward to hosting a
further educational workshop tailored to suit all "MATLAB distributed
computing server" (MDCS) users at the end of January 2014 (more
information will follow in due time).
 
Best regards
Oliver Melchert and Stefan Albensoeder
   </nowiki>
== List of HPC publications ==
This page is intended to list publications that were supported by simulations on the HPC components FLOW/HERO. If you want to
contribute to this list, please send an e-mail with subject:
[HPC-HERO or HPC-FLOW] Contribution to the list of HPC publications
to the coordinator of scientific computing (position currently substituted by: oliver.melchert@uni-oldenburg.de). It would be highly appreciated if
you could provide the [http://en.wikipedia.org/wiki/Digital_object_identifier  digital object identifier] (DOI) that refers to your
article within that mail. If the journal you published your article(s) in offers citation export,
you might alternatively send one of the formats supported by the journal (preferably: BibTeX).
'''NOTE:''' We kindly ask you
to [[Acknowledging_the_HPC_facilities | acknowledge the HPC components FLOW/HERO]] within research articles that
were supported by simulations on the HPC facilities.
<h3><span class="mw-headline">2012</span></h3>
<ol start="1">
<li> Claussen, G. and Apolo, L. and Melchert, O. and Hartmann, A. K., <br /> Analysis of the loop length distribution for the negative-weight percolation problem in dimensions d=2 through d=6, <br /> ''Physical Review E'' '''86''', 5 (2012), 10.1103/PhysRevE.86.056708. </li>
</ol>
<h3><span class="mw-headline">2013</span></h3>
<ol start="2">
<li> Melchert, O., <br /> Percolation thresholds on planar Euclidean relative-neighborhood graphs, <br /> ''Physical Review E'' '''87''', 4 (2013), 10.1103/PhysRevE.87.042106. </li>
<li> Melchert, O. and Hartmann, A. K., <br /> Information-theoretic approach to ground-state phase transitions for two- and three-dimensional frustrated spin systems, <br /> ''Physical Review E'' '''87''', 2 (2013), 10.1103/PhysRevE.87.022107. </li>
<li> Melchert, O., <br /> Universality class of the two-dimensional randomly distributed growing-cluster percolation model, <br /> ''Physical Review E'' '''87''', 2 (2013), 10.1103/PhysRevE.87.022115. </li>
<li> Norrenbrock, C. and Melchert, O. and Hartmann, A. K., <br /> Paths in the minimally weighted path model are incompatible with Schramm-Loewner evolution, <br /> ''Physical Review E'' '''87''', 3 (2013), 10.1103/PhysRevE.87.032142. </li>
<li> Melchert, O. and Hartmann, A. K., <br /> Typical and large-deviation properties of minimum-energy paths on disordered hierarchical lattices, <br /> ''The European Physical Journal B'' '''86''', 7 (2013), 10.1140/epjb/e2013-40230-1. </li>
</ol>
* Wiki Ref-generator: http://reftag.appspot.com/doiweb.py?doi=10.1103%2FPhysRevE.87.022107
* style example: http://en.wikipedia.org/wiki/Wikipedia:Citing_sources
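
If a user supplies only a DOI, a BibTeX record can be retrieved through the DOI content-negotiation service; the sketch below assumes network access and the third-party "requests" package:
   <nowiki>
#!/usr/bin/env python
# Minimal sketch: fetch the BibTeX record registered for a DOI via the
# content-negotiation interface of doi.org.
import requests

def doi_to_bibtex(doi):
    resp = requests.get("https://doi.org/" + doi,
                        headers={"Accept": "application/x-bibtex"})
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(doi_to_bibtex("10.1103/PhysRevE.87.022107"))
   </nowiki>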


== List of user wiki pages ==

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Profiling_using_gprof
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1
== MISC ==
=== Limited resource quota sets ===
Slot limits for different user groups are set using resource quota sets (rqs):
  <nowiki>
alxo9476@hero01:~$ qconf -srqs
{
  name        max_slots_for_express_queue_FLOW
  description  "limits number of slots for express queue on FLOW"
  enabled      TRUE
  limit        users {@flowusers} queues {cfd_xtr_expr.q} to slots=40
}
{
  name        max_slots_for_pe_mdcs
  description  "limits number of slots for PE mdcs"
  enabled      TRUE
  limit        users {*} pes {mdcs} to slots=36
}
{
  name        max_slots_for_user_groups_HERO
  description  "limits number of slots of users on HERO"
  enabled      TRUE
  limit        users {@herousers} to slots=360
}
  </nowiki>
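
The active limits can also be listed programmatically; the sketch below parses the <tt>qconf -srqs</tt> output format shown above (name/limit lines) and prints one line per limit rule. Note that <tt>qquota</tt> reports the quotas currently in effect for a given user.
   <nowiki>
#!/usr/bin/env python
# Minimal sketch: print the name and limit rule of each resource quota
# set, assuming the "qconf -srqs" output format shown above.
import subprocess

out = subprocess.check_output(["qconf", "-srqs"]).decode()

name = None
for line in out.splitlines():
    fields = line.split(None, 1)       # keyword and remainder
    if len(fields) != 2:
        continue
    key, value = fields
    if key == "name":
        name = value
    elif key == "limit":
        print("%-40s %s" % (name, value))
   </nowiki>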


Here, for documentation, completeness and availability, I list some e-mail templates and further material that I used on a regular basis.

Application for a new user account

To apply for a new user account, an eligible user needs to specify three things:

  • his/her anonymous user name, of the form abcd1234,
  • the working group (or, ideally, the unix group) he/she will be associated with, and
  • an approximate date until when the user account will be needed.

No university user account yet

If the user does not have a university-wide anonymous user account yet, he/she first needs to apply for one. An exemplary e-mail with advice on how to obtain such a (guest) user account is listed below:

 
Dear Mr NAME,

in order to obtain a user account for the HPC system, you must already have a
university-wide, anonymous user account. As the guest of a working group, you
can apply for a corresponding guest account with the IT services. To do so,
please visit the page

http://www.uni-oldenburg.de/itdienste/services/nutzerkonto/gaeste-der-universitaet/

and choose the option "Gastkonto einrichten" (set up a guest account). Start
the workflow for creating a guest account. As the responsible person, enter
the head of the university organizational unit that supports your project. Ask
him to open the e-mail he receives, follow the link contained therein, and
approve the request. The account will then be created automatically. Your
anonymous user account will have the form "abcd1234".

To have your user account enabled for the HPC system, please then send me the
following details:

1) the anonymous user name for which the HPC account is to be created,
2) the name of the working group you are to be assigned to,
3) an estimated period of validity for the required HPC account.

As soon as your HPC account is activated, I will contact you with further
information.

With kind regards
Oliver Melchert
  



User account HPC system: Mail to IT-Services

Once the user has supplied the above information, you can apply for an HPC user account at the IT-Services using an e-mail similar to:

 
Mail to: felix.thole@uni-oldenburg.de; juergen.weiss@uni-oldenburg.de
Subject: [HPC-HERO] Setting up a user account

Dear Mr Thole,
dear Mr Weiss,

I hereby request that an HPC account be set up for
Mr NAME

abcd1234; UNIX-GROUP

The account will presumably be needed until DATE.

With kind regards
Oliver Melchert
   

If no proper unix group exists yet, instead send an email similar to the following:

 
Mail to: felix.thole@uni-oldenburg.de; juergen.weiss@uni-oldenburg.de
Subject: [HPC-HERO] Setting up a user account

Hello Felix,
hello Jürgen,

I hereby request that an HPC account be set up for Mr NAME

abcd1234

The account will presumably be needed until DATE.

Mr NAME is a member of the working group "AG-NAME" (AG-URL) headed by
Prof. NAME AG-LEITER. The working group does not have its own unix group yet!
Could a new unix group therefore be created for this working group and
integrated into the existing group hierarchy?

I propose the name

agUNIX-GROUP-NAME

for the unix group. The working group belongs to faculty FAKULTAET.

With kind regards
Oliver Melchert
  

User account HPC system: Mail back to user

As soon as you get feedback from the IT-Services that the account was created, send an email to the user similar to the following:

 
Subject: [HPC-HERO] HPC user account

Dear Mr NAME,

the IT-Services have already activated your HPC account. Your login name
is

abcd1234

and you are assigned to the unix group

UNIX-GROUP-NAME

You have 100GB of disk space on the local filesystem (with full backup). If
you need more storage for a limited period of time, feel free to contact me
about it. You can check your current storage consumption on the HPC system by
means of "iquota". Every Sunday you will receive an email with the subject
"Your weekly HPC Quota Report", summarizing your current storage consumption.

Attached I send you a link to our HPC user wiki, where you can find further
details about the local HPC system:
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Main_Page

The article "Brief Introduction to HPC Computing" at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Brief_Introduction_to_HPC_Computing
illustrates several simple examples for using the various (mainly parallel)
application environments available on HERO and is therefore particularly
recommended. It also discusses several other topics, such as proper resource
allocation and debugging.

If you plan to use the parallel capabilities of MATLAB on HERO, I can
recommend the articles "MATLAB Distributed Computing Server" (MDCS) at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=MATLAB_Distributing_Computing_Server
and "MATLAB Examples using MDCS" at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Matlab_Examples_using_MDCS
The first article shows how to configure the local user profile for using
MATLAB on HERO; the second contains several examples and discusses
occasionally occurring problems when working with MDCS.

Best regards
Oliver Melchert
  

English variant of the above email:

 
Subject: [HPC-HERO] HPC user account

Dear NAME,

the IT-Services were now able to activate your HPC account. Your login name to
the HPC system is 

abcd1234

and you are integrated in the group

UNIX-GROUP-NAME

Per default you have 100GB of storage on the local filesystem, which is fully
backed up. If you need more storage for a limited period of time, you can
contact me. Note that you can check your storage consumption on the HPC system
via the command "iquota". In addition, each Sunday you will receive an
email, titled "Your weekly HPC Quota Report", summarizing your current storage
usage. 

Below I send you a link to the HPC user wiki where you can find further 
details on the HPC system
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Main_Page

In particular I recommend the "Brief Introduction to HPC Computing" at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Brief_Introduction_to_HPC_Computing
which illustrates several basic examples related to different (mostly parallel)
environments the HPC system HERO offers and discusses a variety of other
topics, as, e.g., proper resource allocation and debugging. 

Further, if you plan to use the parallel capabilities of MATLAB on HERO, I
recommend the "MATLAB Distributed Computing Server" (MDCS) page at 
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=MATLAB_Distributing_Computing_Server 
and the "MATLAB Examples using MDCS" wiki page at
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Matlab_Examples_using_MDCS
These pages summarize how to properly set up your profile for using MATLAB on HERO
and discuss some of the frequently appearing problems.

With kind regards
Oliver
  

User account HPC system: Mail back to user; Fak 2 (STATA users)

New users from Fak 2 most likely want to use the STATA software. An adapted version of the above email reads:

 
Dear MY_NAME,

the IT-Services have already activated your HPC account. Your login name to
the HPC system is 

LOGIN_NAME

and you are associated to the unix group

UNIX_GROUP

This is also reflected by the structure of the filesystem on the HPC system.

Per default you have 100GB of storage on the local filesystem, which is fully
backed up. If you need more storage for a limited period of time, you can
contact me. Note that you can check your storage consumption on the HPC system
via the command "iquota". In addition, each Sunday you will receive an
email, titled "Your weekly HPC Quota Report", summarizing your current storage
usage. 

Below I send you a link to the HPC user wiki where you can find further details
on the HPC system: 

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Main_Page

If you plan to use the parallel capabilities of STATA on HERO, I recommend the
"STATA" entry at

Main Page > Application Software and Libraries > Mathematics/Scripting > STATA

see: http://wiki.hpcuser.uni-oldenburg.de/index.php?title=STATA
The above page summarizes how to access the HPC System and how to successfully 
submit a STATA job. 

With kind regards
Dr. Oliver Melchert
  

Temporary extension of disk quota

Sometimes a user from the theoretical chemistry group needs a temporary extension of the available backed-up disk space. Ask him to provide

  • the total amount of disk space needed (he might check his current limit by means of the unix command iquota)
  • an estimated date until which the extension is required

Mail to IT-Services

Then send an email similar to the one listed below to the IT-Services:

 
Mail to: felix.thole@uni-oldenburg.de; juergen.weiss@uni-oldenburg.de
Subject: [HPC-HERO] Increasing a user's available disk space

Hello Felix,
hello Jürgen,

the HPC user NAME

abcd1234; UNIX-GROUP

has asked for a temporary increase of his disk quota. He requests
an increase to a total volume of

500GB

which is needed until the end of December 2013. Afterwards he can
archive his data accordingly, and the disk quota can be reset again.

Best regards,
Oliver
  

List of users with nonstandard quota

Users that currently enjoy an extended disk quota:

 
NAME                                      ID        MEM    LIMIT
jan.mitschker@uni-oldenburg.de            dumu7717  1TB    no limit given
hendrik.spieker@uni-oldenburg.de          rexi0814  300GB  end of September 2013
wilke.dononelli@uni-oldenburg.de          juro9204  700GB  end of December 2013
eike.mayland.quellhorst@uni-oldenburg.de  auko1937  500GB  end of March 2014

Cluster downtime

In case there needs to be a maintenance downtime for the cluster, send an email similar to the following to the mailing list of the HPC users:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Subject: [HPC-HERO] Maintenance downtime 11-13 June 2013 (announcement)

Dear Users of the HPC facilities,

this is to inform you about an overdue THREE-DAY MAINTENANCE DOWNTIME

FROM: Tuesday 11th June 2013, 7 am
TO: Thursday 13th June 2013, 16:00

This downtime window is required for essential maintenance work regarding
particular hardware components of HERO. Ultimately, the scheduled downtime will
fix longstanding issues caused by malfunctioning network switches. Please note
that all running jobs will be killed if they have not finished by 11th June,
7 am. During the scheduled downtime, all queues and filesystems will be
unavailable. We expect the HPC facilities to resume on Thursday afternoon.

I will remind you about the upcoming three-day maintenance downtime at
irregular intervals.

Please accept my apologies for any inconvenience caused
Oliver Melchert
  

In case the downtime needs to be extended, send an email similar to:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Subject: [HPC-HERO] Delay returning the HPC system HERO to production status

Dear Users of the HPC Facilities,

we currently experience a DELAY RETURNING THE HPC SYSTEM TO PRODUCTION STATUS,
since the necessary replacement of the hardware components took longer than
originally expected. The HPC facilities are expected to finally resume service
by

Friday 14th June 2013, 15:00 

We will notify you as soon as everything is back online. 

With kind regards
Oliver Melchert
  

At this point you do not need to supply many details yet. However, if another extension is necessary, you should provide some details, otherwise prepare for complaints from the users. So, your email could look similar to:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Subject: [HPC-HERO] Further delay returning the HPC system HERO to production status

Dear Users of the HPC Facilities,

as already communicated yesterday, we currently experience a DELAY RETURNING 
THE HPC SYSTEM TO PRODUCTION STATUS. The delay results from difficulties related to 
the maintenance work on the hardware components of HERO.

The original schedule for the maintenance work could not be kept. Some details
of the maintenance process are listed below:

According to the IT-services, the replacement of the old (malfunctioning)
network switches by IBM engineers worked out well (with no delay). However, the
configuration of the components by pro-com engineers took longer than the
previously estimated single day, causing the current delay.  Once the
configuration process is completed, the IT-service staff needs to perform
several tests, firmware updates and application tests, which will take
approximately one day.  After the last step is completed, the HPC facilities
will finally return to production status.

In view of the above difficulties, we ask for your understanding that the HPC
facilities will not be back up today by 15:00. We hope that the HPC facilities
resume service by 

Monday 17th June 2013, 16:00 

We will notify you as soon as everything is back online and apologize for the 
inconvenience.
 
With kind regards
Oliver Melchert
  

Once the HPC system is up and ready, send an email similar to:

 
Mail to: hpc-hero@listserv.uni-oldenburg.de
Subject: [HPC-HERO] HPC systems have returned to production

Dear Users of the HPC Facilities,

this is to inform you that the maintenance work on the HPC systems has been
completed and the HPC component HERO has returned to production: HERO accepts
logins and has already started to process jobs.

Thank you for your patience and please accept my apologies for the extension of
the maintenance downtime and any inconvenience this might have caused
Oliver Melchert 
  


MOLCAS academic license

My question to the MOLCAS contact

 
Dear Dr. Veryazov,

my name is Oliver Melchert and currently I'm in charge of the coordination of
scientific computing at the University of Oldenburg. Previously this
position was held by Reinhard Leidl, who corresponded with you.

I am writing to you since I have a question regarding a licensed software product
which was purchased earlier for our local HPC facilities. 

The software product I'm referring to is the Quantum Chemistry Software MOLCAS,
for which we own an academic group license which will expire on 18.10.2013.

Now, my question is: in order to extend the license validity, what steps do I
have to follow, and can you guide me through these steps?

With kind regards
Dr. Oliver Melchert  

And their response

 
Dear Dr. Melchert,
In order to update the academic license for Molcas you should place a 
new order at http://www.molcas.org/order.html
Please, print and sign the forms generated during the ordering.
These forms should be sent to me (e-mail attachment is OK).
After receiving the forms I will send you the updated license file.

There are two possibilities for the payment. By default - we will send 
you an invoice to be paid via bank transfer.
It is also possible to pay by a credit card.

     Best Regards,
                 Valera.

-- 
=================================================================
Valera Veryazov         * Tel:   +46-46-222 3110
Theoretical Chemistry   * Fax:   +46-46-222 8648
Chemical Center,        *
P.O.B. 124              * Valera.Veryazov@teokem.lu.se
S-221 00 Lund, Sweden   * http://www.teokem.lu.se/~valera

About MOLCAS: http://www.molcas.org/
-----------------------------------------------------------------
  


Large Matlab Jobs

Some Matlab users send jobs with the maximum allowed number of workers (i.e., slots in SGE jargon), namely 36. Usually these jobs get distributed over lots of hosts, e.g.:

 
job-ID  prior   name       user         state submit/start at     queue                  master ja-task-ID 
----------------------------------------------------------------------------------------------------------
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs004 MASTER        
                                                                  mpc_std_shrt.q@mpcs004 SLAVE         
                                                                  mpc_std_shrt.q@mpcs004 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs008 SLAVE         
                                                                  mpc_std_shrt.q@mpcs008 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs032 SLAVE         
                                                                  mpc_std_shrt.q@mpcs032 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs034 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs036 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs038 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs043 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs045 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs052 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs066 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs070 SLAVE         
                                                                  mpc_std_shrt.q@mpcs070 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs076 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs080 SLAVE         
                                                                  mpc_std_shrt.q@mpcs080 SLAVE         
                                                                  mpc_std_shrt.q@mpcs080 SLAVE         
                                                                  mpc_std_shrt.q@mpcs080 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs087 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs089 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs090 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs091 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs099 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs107 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs110 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs111 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs112 SLAVE         
                                                                  mpc_std_shrt.q@mpcs112 SLAVE         
1040328 0.51109 Job16      nixi9106     r     10/07/2013 18:19:48 mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE         
                                                                  mpc_std_shrt.q@mpcs117 SLAVE    
  

If such jobs do a lot of I/O, this puts a big strain on the filesystem. For these large jobs the "parallel job memory issue" becomes a problem: the master process has to account (in terms of memory) for all the connections to the other hosts, so if the master process runs out of memory, the job gets killed. More common are 8-slot jobs, and even more common are jobs with fewer slots.
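
To see how such a job spreads over the hosts, the qstat listing above can be tallied per host. A minimal sketch, assuming "qstat -g t"-style output as shown above, fed in on stdin:

#!/usr/bin/env python
# Minimal sketch: tally MASTER/SLAVE task counts per execution host from
# "qstat -g t"-style output (format as shown above) read from stdin.
import sys
from collections import Counter

slots_per_host = Counter()
for line in sys.stdin:
    tokens = line.split()
    if tokens and tokens[-1] in ("MASTER", "SLAVE"):
        host = tokens[-2].split("@")[-1]      # queue@host -> host
        slots_per_host[host] += 1

for host, n in slots_per_host.most_common():
    print("%-10s %d" % (host, n))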


Login problems

Every now and then, external guests or regular users try to log in to the HPC system from outside the university, and the straightforward attempt via

 ssh abcd1234@hero.hpc.uni-oldenburg.de

fails, of course. Then the user might report:

 
Dear Oliver,

My name is Pavel Paulau,
I tried today for the first time to log in to the cluster:

ssh exwi4008@hero.hpc.uni-oldenburg.de 

and got message: 
"Permission denied, please try again."

Could You say what is the reason? What should I do to get access?

Thanks.
Kind wishes,
Pavel
  

A possible response then might read:

 
Dear Pavel,

on first sight your command line statement looks right, provided that you
try to log in to the HPC system from a terminal within the University of
Oldenburg. I also checked that your HPC account indeed exists (and it does :)).

As pointed out in the HPC user wiki, it makes a difference whether you attempt
to log in from a terminal within the University of Oldenburg or from outside
the university:
http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Logging_in_to_the_system#From_within_the_University_.28intranet.29 

In case you want to log in to the HPC system from outside the university, I
recommend setting up a VPN connection via the gateway vpn2.uni-oldenburg.de, as
pointed out in the above user-wiki entry. However, sometimes, even though the
VPN tunnel is correctly set up, the login procedure might fail due to problems
resolving the hostname. Then you might try to access the cluster via the IP
address of the master node. Just establish the VPN tunnel and try to access
HERO via

ssh exwi4008@10.140.1.61 

This should resolve the name issues.

With kind regards
Oliver
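
The diagnosis suggested in the reply (name-resolution failure vs. missing VPN route) can be automated. A minimal sketch, using the hostname and the fallback IP of the master node quoted above and assuming only a working Python installation on the user's machine:

#!/usr/bin/env python
# Minimal sketch: distinguish a name-resolution failure from a missing
# VPN/network route when the login to HERO fails. The hostname and the
# fallback IP of the master node are the ones quoted in the e-mail above.
import socket

HOST, FALLBACK_IP, SSH_PORT = "hero.hpc.uni-oldenburg.de", "10.140.1.61", 22

try:
    ip = socket.gethostbyname(HOST)
    print("name resolution OK: %s -> %s" % (HOST, ip))
except socket.gaierror:
    print("cannot resolve %s; trying fallback IP %s" % (HOST, FALLBACK_IP))
    ip = FALLBACK_IP

try:
    socket.create_connection((ip, SSH_PORT), timeout=5).close()
    print("port %d on %s is reachable; ssh should work" % (SSH_PORT, ip))
except OSError:
    print("no route to %s; is the VPN tunnel up?" % ip)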
  

HPC tutorial

Requesting seminar rooms

 
Hallo Herr Melchert,
die Buchungen habe ich eingetragen!

Gruß
Silke Harms


Raum- und Veranstaltungsbüro
Dezernat 4 / Gebäudemanagement
Carl von Ossietzky Universität Oldenburg

Telefon: 0441 / 798-2483


-----Ursprüngliche Nachricht-----
Von: Oliver Melchert 
Gesendet: Donnerstag, 24. Oktober 2013 10:28
An: Silke Ulrike Harms
Betreff: RE: Anfrage Raum für Einzel-/Blockveranstaltung

Hallo Frau Harms,

vielen Dank für die Liste der freien Termine. Nach Rücksprache mit meinem Kollegen würden wir gerne folgende Räume/Zeiten buchen: 

W04 1-162: 
Di: 19.11.13 - 14-16 Uhr
Mi: 20.11.13 - 16-18 Uhr

W01 0-008:
Do: 21.11.13 - 09-12 Uhr

der Einzelveranstaltung/Blockveranstaltung ist keine Nr. zugeordnet, es ist ein Pilotprojekt das, wenn erfolgreich, in den kommenden Semestern regulär (dann mit Veranstaltungsnummer) angeboten werden soll. Der Name der Veranstaltung lautet "A brief HPC Tutorial" und wird von Dr. Oliver Melchert und Dr. Stefan Albensoeder angeboten.

Mit herzlichen Grüßen
Oliver Melchert

________________________________________
From: Silke Ulrike Harms
Sent: Wednesday, October 23, 2013 10:09 AM
To: Oliver Melchert
Subject: AW: Anfrage Raum für Einzel-/Blockveranstaltung

Hallo Herr Melchert,
Sie können die freien Zeiten über Stud.IP in den jeweiligen Räumen einsehen (unter Raumbelegungen).
Ich habe Ihnen jetzt die derzeitigen Lücken rausgesucht:
W01 0-008
Mo 11.11.13 - 10-12 Uhr, Di 12.11.13 - 12-14 Uhr, Mi 13.11.13 - 08-10 + 16-18 Uhr, Do 14.11.13 - 08-10 + 14-16 Uhr, Fr 15.11.13 - 12-14 Uhr Mo 18.11.13 - ab 16 Uhr, Di 19.11.13 - 12-14 Uhr, Mi 20.11.13 - 08-10 Uhr, Do 21.11.13 - 08-12 + 14-16 Uhr, Fr 22.11.13 - ab 12 Uhr Mo 25.11.13 - ab 16 Uhr, Di 26.11.13 - 12-14 Uhr,  Mi 27.11.13 - 08-10 Uhr, Do 28.11.13 - 08-12 + 14-16 Uhr

W04 1-162
Di 12./19./26.11.13 - jeweils 14-16 Uhr
Mi 13./20./27.11.13 - jeweils ab 16 Uhr
Fr 15./22./29.11.13 - jeweils ab 14 Uhr

Bitte entscheiden Sie sich schnell, weil es zurzeit noch vielen Anfragen/Buchungen gibt.

Gruß
Silke Harms



Raum- und Veranstaltungsbüro
Dezernat 4 / Gebäudemanagement
Carl von Ossietzky Universität Oldenburg

Telefon: 0441 / 798-2483


-----Original Message-----
From: Oliver Melchert
Sent: Tuesday, October 22, 2013 11:13 AM
To: Silke Ulrike Harms
Subject: RE: Room request for a single/block course

Dear Ms Harms,

thank you for your reply. Due to vacation/illness, my colleague and I were
unable to commit to any of the proposed dates.

I would hereby like to submit a new request for the same rooms (see the
e-mail attached below) for the period

11.11.2013 - 29.11.2013

With kind regards
Oliver Melchert

________________________________________
From: Silke Ulrike Harms
Sent: Monday, September 23, 2013 1:42 PM
To: Oliver Melchert
Subject: RE: Room request for a single/block course

Good day Mr Melchert,
I can offer you the following rooms:

Monday, October 14, 2013: 14-20 - W04 1-162 + 16-20 - W01 0-008
Tuesday, October 15, 2013: 14-16 - W04 1-162 + 18-20 - W01 0-008
Wednesday, October 16, 2013: 16-20 - W04 1-162 + 08-12 - W01 0-008

Please let me know which bookings I should make and under which course number
I can find your event (and should book the rooms).

Regards
Silke Harms



Room and Event Office
Dezernat 4 / Facility Management
Carl von Ossietzky Universität Oldenburg

Phone: 0441 / 798-2483


-----Original Message-----
From: Oliver Melchert
Sent: Friday, September 20, 2013 10:00 AM
To: Silke Ulrike Harms
Subject: Room request for a single/block course

Dear Ms Harms,

my name is Oliver Melchert and I currently hold the position of "Coordinator
of Scientific Computing". Together with a colleague, I would like to offer the
single/block course "A brief HPC tutorial" for the users of the Oldenburg
high-performance computer. We plan a total duration of 4 x 1.5 hours for the
course. It would be ideal if we could offer 2 x 1.5 h on each of two
consecutive days.

We expect at most 30 participants and are looking for a suitable room for our
plans. If possible, we would like to hold the course on two consecutive days
in October/November, in the weeks

14.10. - 19.10.
or
28.10. - end of November

Besides lectures, the course is also meant to include practical exercises. It
would therefore be ideal if we could move to a computer lab for the final
1.5 h block. Since most users are located at the Wechloy campus, it would be
great if we could find a seminar room there. Suitable rooms would be, e.g.,

W2-1-143
W2-1-148
W3-1-156
W4-1-162

A suitable computer lab would be, e.g.,

W1-0-008

We are flexible regarding the time of day. Would there still be room at all
to accommodate our planned course?

Best regards
Dr. Oliver Melchert
  

Mail to users

 
Subject: [HPC-HERO] Tutorial on High Performance Computing (19.-21. Nov)

Dear User of the HPC System,

this is to announce the first tutorial on "High Performance Computing", which
will take place from 19.11.2013 to 21.11.2013. More precisely, the tutorial
will be split into three sessions. The first two sessions cover parts
0-IV (listed below) and are held on the following dates:

Seminar room W04 1-162:
Tue, 19.11.13, 14:00-16:00
Wed, 20.11.13, 16:00-18:00

The third session (part V) comprises practical exercises meant to illustrate
some of the content presented in the earlier parts and is held at:

Computer lab W01 0-008:
Thu, 21.11.13, 09:00-12:00

The target audience of this 1st HPC tutorial are new users of the local HPC
system; to benefit from the tutorial, participants should be able to read and
write C programs. However, we are optimistic that we will soon be able to
announce a quite similar tutorial for all MATLAB-focused users. If you would
like to attend the HPC tutorial, please send a brief response to this email.

The planned programme of this 1st HPC tutorial is:

0. Introduction to HPC
   1. Motivation
   2. Architectures
   3. Overview of parallel models 

I. Cluster Overview:
   1. System Overview
   2. Modification of user environments via "module"
   3. Available compilers
   4. Available parallel environments
   5. Available libraries
   6. Performance hints

II. Introduction to the usage of SGE:
    1. Introduction
    2. General job submission 
    3. Single-slot jobs 
    4. Parallel jobs 
    5. Monitoring and controlling jobs 
      
III. Debugging and Profiling:
    1. Compiling programs for debugging
    2. Tracking memory issues
    3. Profiling

IV. Misc:
    1. Logging in from outside the university
    2. Mounting the HPC home directory
    3. Parallel environment memory issue
    4. Importance of allocating proper resources
   
V. Exercises (computer lab):
    1. Try out the examples given in part II
    2. Estimate pi using Monte Carlo simulation
       (serial and MPI-parallel code provided; compile, submit,
       and monitor jobs for different parameter settings; see
       the sketch below)

With kind regards 
Oliver Melchert and Stefan Albensoeder
  
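To give an impression of the job-submission workflow behind parts II and V, a minimal SGE script for the MPI-parallel pi estimator might look as follows. This is a sketch only, not the actual tutorial material: the job name, the parallel environment name impi, the module name, the resource values, and the executable name pi_mc are assumptions.

#!/bin/bash
#$ -N pi_mc             # job name (assumption)
#$ -cwd                 # run the job from the current working directory
#$ -l h_rt=0:30:0       # requested wall-clock time (assumed value)
#$ -l h_vmem=1G         # requested memory per slot (assumed value)
#$ -pe impi 12          # parallel environment and slot count (PE name assumed)

module load impi        # load an MPI module (module name assumed)

# run the MPI-parallel pi estimator, here with 10^6 samples per process
mpirun -np $NSLOTS ./pi_mc 1000000

Saved as, e.g., pi_mc.sge, such a script would be submitted via "qsub pi_mc.sge", monitored via "qstat", and, if necessary, removed via "qdel <jobID>", as covered in part II.5.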

Confirmation for users

 
Dear USER,

this is to confirm your registration for the first tutorial on 
"High Performance Computing" which will be held at the following 
dates:

Seminar room W04 1-162:
Tue, 19.11.13, 14:00-16:00
Wed, 20.11.13, 16:00-18:00

Computer lab W01 0-008:
Thu, 21.11.13, 09:00-12:00

Thank you for signing up
Oliver Melchert and Stefan Albensoeder
  

Mail to IT Services

Contact the IT services and ask them to make sure that the participants of the HPC tutorial can log on to the HPC system from the computer lab.

 
Hello Oliver,

we have enabled the subnet.
Could you check whether everything works?

I cannot attend your course today, as I already have an appointment at 17:00.

Best regards 
Felix

-----Original Message-----
From: Oliver Melchert 
Sent: Wednesday, November 20, 2013 10:14 AM
To: Jürgen Weiß; Felix Thole
Subject: IP addresses in room W01-0-008

Hello Jürgen,
hello Felix,

I have looked up the IP addresses of the computers in room W01-0-008.
The first three octets are:

134.106.45.XXX

The exercises are to take place in this room tomorrow from 09:00 to 12:00.

Is the above information sufficient, or should I send an exact list of the
complete IP addresses?

Best regards
Oliver  
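A quick way to verify that the opened subnet actually permits logins is to try a connection from one of the lab machines; a minimal sketch (abcd1234 is a placeholder account name, and the IP address is that of the HERO master node mentioned earlier):

# from a computer in room W01-0-008, check that the master node accepts
# SSH connections; the timeout keeps the test short
ssh -o ConnectTimeout=5 abcd1234@10.140.1.61 exit && echo "login works"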

User-Wiki entry

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1

Corresponding mail to all users


 
Subject: [HPC-HERO] Accompanying documents for HPC Tutorial

Dear User of the HPC System,

this is to inform you that the User Wiki page that collects the material 
related to the first tutorial on "High Performance Computing", which
took place from 19.11.2013 to 21.11.2013, is available at

Main Page > Basic Information > Examples > HPC Tutorial No1

under the link

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1

We would like to thank all users of the HPC components FLOW/HERO who 
attended this first HPC tutorial, and we look forward to hosting a 
further educational workshop tailored to all "MATLAB Distributed
Computing Server" (MDCS) users at the end of January 2014 (more 
information will follow in due course).

Best regards
Oliver Melchert and Stefan Albensoeder
  

List of HPC publications

This page is intended to list publications that were supported by simulations on the HPC components FLOW/HERO. If you want to contribute to this list, please send an e-mail with subject:

[HPC-HERO or HPC-FLOW] Contribution to the list of HPC publications

to the coordinator of scientific computing (position currently filled on an interim basis by: oliver.melchert@uni-oldenburg.de). It would be highly appreciated if you could include the digital object identifier (DOI) of your article in that mail. If the journal in which you published your article(s) offers citation export, you may alternatively send the citation in one of the formats supported by the journal (preferably: BibTeX).

NOTE: We kindly ask you to acknowledge the HPC components FLOW/HERO within research articles that were supported by simulations on the HPC facilities.

2012

  1. Claussen, G. and Apolo, L. and Melchert, O. and Hartmann, A. K.,
    Analysis of the loop length distribution for the negative-weight percolation problem in dimensions d=2 through d=6,
    Physical Review E 86, 5 (2012), 10.1103/PhysRevE.86.056708.

2013

  1. Melchert, O.,
    Percolation thresholds on planar Euclidean relative-neighborhood graphs,
    Physical Review E 87, 4 (2013), 10.1103/PhysRevE.87.042106.
  2. Melchert, O. and Hartmann, A. K.,
    Information-theoretic approach to ground-state phase transitions for two- and three-dimensional frustrated spin systems,
    Physical Review E 87, 2 (2013), 10.1103/PhysRevE.87.022107.
  3. Melchert, O.,
    Universality class of the two-dimensional randomly distributed growing-cluster percolation model,
    Physical Review E 87, 2 (2013), 10.1103/PhysRevE.87.022115.
  4. Norrenbrock, C. and Melchert, O. and Hartmann, A. K.,
    Paths in the minimally weighted path model are incompatible with Schramm-Loewner evolution,
    Physical Review E 87, 3 (2013), 10.1103/PhysRevE.87.032142.
  5. Melchert, O. and Hartmann, A. K.,
    Typical and large-deviation properties of minimum-energy paths on disordered hierarchical lattices,
    The European Physical Journal B 86, 7 (2013), 10.1140/epjb/e2013-40230-1.


List of user wiki pages

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Brief_Introduction_to_HPC_Computing

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Matlab_Examples_using_MDCS

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Queues_and_resource_allocation

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Unix_groups

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Mounting_Directories_of_FLOW_and_HERO#OSX

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=File_system (Snapshot functionality)

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=STATA

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Memory_Overestimation

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Debugging

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=Profiling_using_gprof

http://wiki.hpcuser.uni-oldenburg.de/index.php?title=HPC_Tutorial_No1

MISC

Limited resource quota sets

Slot limits for different user groups are set using resource quota sets (rqs); the currently defined sets can be displayed as follows:

 
alxo9476@hero01:~$ qconf -srqs
{
   name         max_slots_for_express_queue_FLOW
   description  "limits number of slots for express queue on FLOW"
   enabled      TRUE
   limit        users {@flowusers} queues {cfd_xtr_expr.q} to slots=40
}
{
   name         max_slots_for_pe_mdcs
   description  "limits number of slots for PE mdcs"
   enabled      TRUE
   limit        users {*} pes {mdcs} to slots=36
}
{
   name         max_slots_for_user_groups_HERO
   description  "limits number of slots of users on HERO"
   enabled      TRUE
   limit        users {@herousers} to slots=360
}
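To see which of these quotas currently limits a particular user, SGE's qquota utility can be consulted; for instance (the user name is simply the one from the prompt above):

# list the resource quota rules that currently apply to the given user,
# together with the resources already consumed under each rule
qquota -u alxo9476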