File system and Data Management
The HPC cluster offers access to two important file systems, namely a parallel file system connected to the communication network and an NFS-mounted file system connected via the campus network. In addition, the compute nodes of the CARL cluster have local storage devices for high I/O demands. The different file systems available are documented below. Please follow the guidelines for best practice given below.
Storage Hardware
A GPFS Storage Server (GSS) serves as a parallel file system for the HPC cluster. The total (net) capacity of this file system is about 900 TB and the read/write performance is up to 17/12 GB/s over FDR Infiniband. It is possible to mount the GPFS on your local machine using SMB/NFS (via the 10GE campus network). The GSS should be used as the primary storage device for HPC, in particular for data that is read/written by the compute nodes. The GSS offers no backup functionality (i.e. deleted data cannot be recovered).
The central storage system of the IT services is used to provide the NFS-mounted $HOME directories and other directories, namely $DATA, $GROUP, and $OFFSITE. The central storage system is provided by an IBM Elastic Storage Server (ESS). It offers very high availability, snapshots, and backup, and should be used for long-term storage, in particular for everything that cannot be recovered easily (program codes, initial conditions, final results, ...).
File Systems
The following file systems are available on the cluster:
File System | Environment Variable | Path | Device | Data Transfer | Backup | Used for | Comments |
---|---|---|---|---|---|---|---|
Home | $HOME | /user/abcd1234 | ESS | NFS over 10GE | yes | critical data that cannot easily be reproduced (program codes, initial conditions, results from data analysis) | high-availability file-system, snapshot functionality, can be mounted on local workstation |
Data | $DATA | /nfs/data/abcd1234 | ESS | NFS over 10GE | yes | important data from simulations for long term (project duration) storage | access from the compute nodes is slow but possible, can be mounted on local workstation |
Group | $GROUP | /nfs/group/<groupname> | ESS | NFS over 10GE | yes | similar to $DATA but for data shared within a group | available upon request to Scientific Computing |
Work | $WORK | /gss/work/abcd1234 | GSS | FDR Infiniband | no | data storage for simulations at runtime, for pre- and post-processing, short term (weeks) storage | parallel file-system for fast read/write access from compute nodes, temporarily store larger amounts of data |
Scratch | $TMPDIR | /scratch/<job-specific-dir> | local disk or SSD | local | no | temporary data storage during job runtime | directory is created at job startup and deleted at completion, job script can copy data to other file systems if needed |
Offsite | $OFFSITE | /nfs/offsite/user/abcd1234 | ESS | NFS over 10GE | yes | long-term data storage, use for data currently not actively needed (project finished) | only available on the login nodes and as SMB share, access may become slower in the future |
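The environment variables from the table can be used directly on the command line and in job scripts, which keeps your scripts working even if the underlying paths change. A short illustration (the project directory name is a placeholder):

# print the locations of your personal directories
echo $HOME $DATA $WORK
# create a per-project directory on the fast parallel file system
mkdir -p $WORK/project_xyz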
Snapshots on the ESS
Snapshots are enabled in some of the directories on the ESS, namely $DATA, $GROUP, and $HOME. You can find these snapshots by changing to the directory .snapshots with the command
cd .snapshots
If you look at what is in this directory, you will find folders named like this: ess-data-2017-XX-XX. Each of these folders contains a snapshot of every $DATA and $HOME directory from every user.
The following applies to $HOME: the snapshot creation intervals change over time. Snapshots of the current day are taken hourly, so you can quickly correct recent errors. However, the hourly snapshots are deleted the following day, so the hourly interval only applies to the current day. One snapshot per day is kept for one month, however, so that files can be recovered even after a longer period of time.
For $DATA a simpler principle applies: Here a snapshot is created once per day for 30 days.
After one month, the oldest snapshots are deleted. This means that snapshots are not a long-term backup solution.
Example
On the 17th of May, our IT assistant tried to optimize a script but ended up making it unusable. Unfortunately, he had not made any backup and saved his 'progress'. Now he could either try to fix the code, or he could make use of our storage systems' snapshot feature. Wisely, he chose the latter and restored a copy from the 16th of May.
[erle1100@hpcl004 JobScriptExamples]$ cd .snapshots
[erle1100@hpcl004 .snapshots]$ ll
total 43
[... shortened for the sake of clarity ...]
drwxr-xr-x 6 erle1100 hrz 4096 May 9 14:10 @GMT-2019.05.15-21.00.00-hpc_user-daily
drwxr-xr-x 6 erle1100 hrz 4096 May 9 14:10 @GMT-2019.05.16-21.00.00-hpc_user-daily
drwxr-xr-x 6 erle1100 hrz 4096 May 9 14:10 @GMT-2019.05.17-08.00.00-hpc_user-hourly
drwxr-xr-x 6 erle1100 hrz 4096 May 9 14:10 @GMT-2019.05.17-09.00.00-hpc_user-hourly
drwxr-xr-x 6 erle1100 hrz 4096 May 9 14:10 @GMT-2019.05.17-10.00.00-hpc_user-hourly
[erle1100@hpcl004 .snapshots]$ cd @GMT-2019.05.16-21.00.00-hpc_user-daily/
[erle1100@hpcl004 @GMT-2019.05.16-21.00.00-hpc_user-daily]$ ll
total 4
drwxr-xr-x 4 erle1100 hrz 4096 May 2 04:58 CHEM
drwxr-xr-x 2 erle1100 hrz 4096 May 2 04:58 Perl
drwxr-xr-x 3 erle1100 hrz 4096 Apr 24 14:47 PHYS
drwxr-xr-x 2 erle1100 hrz 4096 May 9 14:37 SLURM
[erle1100@hpcl004 @GMT-2019.05.16-21.00.00-hpc_user-daily]$ cp CHEM/useful_script.sh ../../../CHEM/ && cd ../../../CHEM/
Quotas
Quotas are used to limit the storage capacity for each user on each file system (except $TMPDIR, which is not persistent after job completion). The following table gives an overview of the default quotas. Users with a particularly high demand for storage can contact Scientific Computing to have their quotas increased (within reason and depending on available space).
File System | Hard Quota | Soft Quota | Grace Period |
---|---|---|---|
$HOME | 10 TB | 1 TB | 30 days |
$DATA | 25 TB | 20 TB | 30 days |
$WORK | 50 TB | 25 TB | 30 days |
$OFFSITE | 30 TB | 25 TB | 30 days |
Formerly, $WORK was the only file system with soft and hard quotas and therefore a grace period. Since 11/2020, however, a higher temporary storage capacity (hard quota) has been added to $HOME, $DATA and $OFFSITE, so every file system now has a grace period of 30 days.
A hard limit means that you cannot write more data to the file system than your personal quota allows. Once the quota has been reached, any write operation will fail (including those from running simulations).
A soft limit, on the other hand, only triggers a grace period during which you can still write data to the file system (up to the hard limit). Only when the grace period is over can you no longer write data, again including writes from running simulations (also note that at this point you cannot write data even if you are below the hard limit but above the soft limit).
That means you can store data for as long as you want while you are below the soft limit. When you get above the soft limit (e.g. during a long, high-I/O simulation on $WORK), the grace period starts. You can still produce more data within the grace period as long as you stay below the hard limit. Once you delete enough of your data on $WORK to get below the soft limit, the grace period is reset. This system forces you to clean up data you no longer need on a regular basis and helps to keep the GPFS storage system usable for everyone.
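To keep an eye on your usage relative to these limits, the file systems can usually be queried directly. A minimal sketch, assuming the GPFS client tool mmlsquota is available on the login nodes and that the GPFS device is called gss (both are assumptions; the actual names on the cluster may differ):

# show your current usage, soft/hard quota, and grace period (device name is an assumption)
mmlsquota -u $USER gss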
Scratch space / TempDir
In addition to the storage in $HOME, $WORK and $DATA, the nodes also have local storage space called $TMPDIR. The amount of space differs between the node types:
- mpcs-nodes: approx. 800 GB (HDD)
- mpcl-nodes: approx. 800 GB (HDD)
- mpcp-nodes: approx. 1.7 TB (SSD)
- mpcb-nodes: approx. 1.7 TB (SSD)
- mpcg-nodes: approx. 800 GB (HDD)
Important note: this storage space is only intended for temporary files created during the execution of a job. After the job has finished, these files will be deleted. If you need these files, you have to add a copy command to your job script, e.g.
cp $TMPDIR/54321_abcd1234/important_file.txt $DATA/54321_abcd1234/
If you need local storage for your job, please add the following line to your job script:
#SBATCH --gres=tmpdir:100G
This will reserve 100 GB of local, temporary storage for your job.
Keep in mind that you can add only one gres (generic resource) line per job script; multiple lines will not be accepted. If you also need a GPU for your job, the --gres option should look like this:
#SBATCH --gres=tmpdir:100G,gpu:1
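Putting these pieces together, here is a minimal sketch of a job script that stages data through $TMPDIR; the time limit, directory and program names are placeholders, not actual cluster settings:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=02:00:00
#SBATCH --gres=tmpdir:100G

# stage the input data to the fast local scratch directory
cp $DATA/project_xyz/input.dat $TMPDIR/

# run with all I/O on the local disk (program name is a placeholder)
cd $TMPDIR
./my_simulation input.dat > output.dat

# copy the results back before the job ends; $TMPDIR is deleted at completion
cp $TMPDIR/output.dat $DATA/project_xyz/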
rsync: File transfer between different user accounts
Sometimes you may want to exchange files from one account to another. This often applies to users who had to change their account, for example because they switched from their student account to their new employee account.
In this case, you may want to use rsync when logged in to your old account:
rsync -avz $HOME/source_directory abcd1234@carl.hpc.uni-oldenburg.de:/user/abcd1234/target_directory
Here, abcd1234 is the new account. You will have to type in the password of the target account to proceed with the command.
- -a mandatory. Archive mode: it preserves the access rights (along with timestamps and other file attributes) during the transfer.
- -v optional. You will see every action for every file that rsync copies. This can spam your whole terminal session, so you may want to redirect the output into a file if you want to keep track of every copied file afterwards:
- rsync -avz $HOME/source_directory abcd1234@carl.hpc.uni-oldenburg.de:/user/abcd1234/target_directory > copy-log.txt 2>&1
- -z optional. Compresses the data during the transfer, which reduces the amount of data sent over the network.
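Before starting a large transfer, it can be useful to preview what rsync would do. Adding the optional -n (--dry-run) flag to the command from above only lists the files that would be transferred without copying anything:

rsync -avzn $HOME/source_directory abcd1234@carl.hpc.uni-oldenburg.de:/user/abcd1234/target_directory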
Managing access rights of your folders
In contrast to the directory structure on HERO and FLOW, the folder hierarchy on CARL and EDDY is flat and less nested; we no longer have multiple directories stacked inside each other. This leads to an inevitable change in the access rights management. If you don't change the access rights of your directory on the cluster, the command "ls -l /user" will show you something like this:
Take the following example line; it contains the information explained below:
drwx--S--- 3 abgu0243 agtheochem 226 2. Feb 11:55 abgu0243
drwx--S---
- the first letter marks the file type; in this case it is a "d", which means we are looking at a directory.
- the following characters are the access rights: "rwx--S---" means that only the owner can read, write and execute the folder and everything that is in it.
- the "S" stands for "Set Group ID" (setgid). On a directory, this bit makes new files and subdirectories inherit the group of the directory instead of the primary group of the user who creates them. This was a temporary option to ensure safety while we were opening the cluster to the public. It is possible that the "S" is not set when you are looking at this guide; that is okay and intended.
abgu0243
- current owner of the file/directory
agtheochem
- current group of the file/directory
- this is your primary group. You can check your secondary groups with the command "groups $(whoami)". It will output something like this: "abcd1234: group1 group2".
abgu0243
- current name of the file/directory
Basically we will need three commands to modify our access rights on the cluster:
- chmod
- chgrp
- chown
You will most likely need just one or maybe two of these commands; nevertheless, we have listed and described all three for the sake of completeness.
chmod
The command chmod directly modifies the access rights of your desired file/directory. chmod has two different modes, symbolic and octal mode, but the syntax is pretty much the same:
- symbolic mode
chmod [OPTION]... MODE[,MODE]... FILE...
- octal mode
chmod [OPTION]... OCTAL-MODE FILE...
The following table shows the three different user categories for each of the two modes.
Usertype | Symbolic mode | Octal mode |
---|---|---|
Owner of the file | u | 1st digit |
Group of the file | g | 2nd digit |
Other users | o | 3rd digit |
Owner, group and others | a | all three digits |
Possible access rights are: r = read (octal value 4), w = write (2), x = execute (1).
Examples for the symbolic mode
- allow your group to read, but not to write or execute your file
chmod g=r yourfile.txt
- "=" will always clear every access right and set the ones you want. For example, if the file mentioned above was readable, writable and executable, the command "chmod g=r yourfile.txt" will make the file only readable. Beside that you can use "+" to add specific rights and "-" to remove them.
- allow other users to read and write, but not to execute your file
chmod o=rw yourfile.txt or chmod o+rw yourfile.txt
Examples for the octal mode
For comparison, we will be using the same examples as in the symbolic mode shown above:
- allow your group to read, but not to write or execute your file (the owner keeps read, write and execute; note that octal mode always sets all three digits at once, so other users are set to read-only here as well)
chmod 744 yourfile.txt
- allow other users to read and write, but not to execute your file (the group is set to read and write as well)
chmod 766 yourfile.txt
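Each octal digit is simply the sum of the values r = 4, w = 2 and x = 1, so the two examples above decompose as follows:

chmod 744 yourfile.txt   # owner: 7 = 4+2+1 (rwx), group: 4 (r--), others: 4 (r--)
chmod 766 yourfile.txt   # owner: 7 = 4+2+1 (rwx), group: 6 = 4+2 (rw-), others: 6 = 4+2 (rw-)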
An easy way to calculate these numbers is to use the following tool: chmod-calculator.com
chgrp
The command chgrp (short for "change group") changes the group of your file/directory. The syntax of the command looks like this:
chgrp [OPTION] GROUP FILE
Example
Change the group of your file/directory:
chgrp yourgrp /user/abcd1234/randomdirectory
If you are reading this part of the wiki, you might be looking for a way to change the group of your files after your unix group changed. A short example on how to do that can be found here.
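As a rough sketch of that procedure (old_group and new_group are placeholder names, not actual groups on the cluster), you could re-assign all files that still belong to your old group with find:

# recursively change the group of all files in $HOME still owned by the old group
find $HOME -group old_group -exec chgrp new_group {} +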
chown
The command chown (short for "change owner") does what you think it does: it changes the owner of a file or directory. The syntax of the command looks like this:
chown [OPTION]... [OWNER][:[GROUP]] FILE...
Note: in most cases you will not need to use this command!
Examples
- How can I prevent anyone else from reading, modifying, or even seeing files/directories in my directories?
chmod go-rwx $HOME/your_folder
- This removes all rights from your group and from other users; your own rights as the owner remain unchanged.
- How can I grant access to a file/folder for another member of my primary group?
chmod g+x $HOME/your_folder
- Members of your group will only be able to see and enter the folder you mentioned; other folders are not visible.
- Adding "rx" grants the same rights, but everything else becomes visible as well (not accessible, though). Please note that the +rx privileges allow group members to copy the corresponding files into their own directories.
chmod g+rx $HOME/your_folder
- How can I grant access to a file/folder for another member of one of my secondary groups?
chgrp -R your_secondary_group $HOME/your_folder
- This assigns the group "your_secondary_group" to the folder "$HOME/your_folder" (recursively), so the group rights you have set now apply to the members of that group.
- How can I grant access to a file/folder for anyone without anyone being able to access files/folders stored in the same directory?
chmod o+x $HOME/your_folder
NOTE: we used "+" in every example. Remember that using "+" (instead of "=", which overwrites rights) keeps all other rights that are applied to the file/folder and only changes the ones you added!
Storage Systems Best Practice
Here are some general remarks on using the storage systems efficiently:
- Store all files and data that cannot be reproduced (easily) on $HOME for highest protection.
- Carefully select files and data for storage on $HOME and compress them if possible (see the short sketch after this list), as storage on $HOME is the most expensive.
- Store large data files that can be reproduced with some effort (re-running a simulation) on $DATA.
- Data created or used during the run-time of a simulation should be stored in $WORK; avoid reading from or writing to $HOME, in particular if you are dealing with large data files.
- If possible, avoid writing many small files to any file system.
- Clean up your data regularly.
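For the compression tip mentioned in the list above, a minimal sketch (directory and archive names are placeholders):

# pack and compress a finished results directory into a single archive on $HOME
tar czvf $HOME/project_results.tar.gz project_results/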
Moving Data to $OFFSITE
The central storage system now offers a limited amount of storage space for data files which are no longer actively used but cannot be deleted yet. This data can be moved to the $OFFSITE directory to free space in other file systems/directories. Currently, $OFFSITE is located on the ESS, but a different (slower and less expensive) storage system may be used in the future. The default quota for $OFFSITE is 25 TB.
The following steps are recommended to move data files to $OFFSITE:
- Collect all files/directories in a separate directory, e.g. project_xyz (create a new one if needed). Make sure to collect only the data files that you absolutely must keep, and make sure that a description of the data is available, too.
- Consider packing the directory project_xyz into a single archive, in particular if you have many individual files less than 1 MB in size, e.g. using the command
tar cvf $OFFSITE/project_xyz.tar project_xyz/ > $OFFSITE/project_xyz.file.lst
With this command, the archive is created directly in your $OFFSITE directory and you can skip the rsync step below. In addition, a text file project_xyz.file.lst is created next to the archive, listing all the files and directories in the archive.
- Consider compressing your data files or the archive from the previous step, e.g. with
gzip $OFFSITE/project_xyz.tar
or alternatively use compression when creating the archive. If you have already stored your data files in a compressed format, this step can be skipped.
- Use rsync to copy your (archived and compressed) directory to $OFFSITE
rsync -av project_xyz.tar.gz $OFFSITE
as you can restart with the same command in case the process gets interrupted. Avoid copying data during the usual office hours.
- Create a README file or similar that briefly explains what can be found in project_xyz.tar.gz and the reason for keeping it stored.
- Optional: after the rsync has completed you can make a sanity check using e.g.
md5sum project_xyz.tar.gz
on both the original and the copied file(s). The resulting checksums should be identical; if not, the rsync process was most likely aborted and should be restarted. If you created a tar file directly in $OFFSITE, you can check its integrity with the command
tar tf project_xyz.tar > /dev/null && echo "no error"
which should print "no error" if everything is ok.
- If you are sure the files have been copied correctly, you can delete the original ones to free space on the file system.
- Optional: remove the write permissions from all the files and directories in $OFFSITE with
chmod a-w project_xyz.tar.gz
for a single file, or
chmod -R a-w project_xyz/
for a full directory.
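Putting the optional verification and locking steps together, a short sketch (file names are placeholders):

# the checksums of the original and the copy must match
md5sum project_xyz.tar.gz $OFFSITE/project_xyz.tar.gz
# afterwards, remove the write permissions so the archive stays unchanged
chmod a-w $OFFSITE/project_xyz.tar.gz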
The main idea of $OFFSITE is that once a file or directory has been copied there, it will not be changed afterwards. Please note that the data in $OFFSITE will be deleted some time after your account has been deactivated.