Data storage concept

After a successful login, your university-wide home folder is mounted. This happens on the login node and on each compute node, so the same home folder is available on both HPC systems.

On the compute nodes, however, it is not recommended to access the home folder, for performance reasons. Instead, each HPC system provides an additional high-performance storage that is optimized for parallel usage.

magnitUDE

On magnitUDE each user has a personal folder on the high-performance storage with the path /scratch/$USER. Whenever possible, users should work in /scratch/$USER for performance reasons.

The variable $USER resolves to your login name. To see yours, execute echo $USER on magnitUDE.
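
For example (abc123d is just a placeholder login name, the same one used in the quota example further down):

echo $USER
abc123d
cd /scratch/$USER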

Warning: Data stored in your /scratch/$USER space is subject to the delete schedule described in the Nutzerrichtlinien. Be sure to download and back up your valuable scientific data regularly.
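
As a minimal sketch of such a backup (the folder name results is a made-up example), finished results can be copied from the scratch space back to your home folder:

# copy finished results from scratch to the home folder (example paths)
rsync -av /scratch/$USER/results/ $HOME/results/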

amplitUDE

On amplitUDE the data storage concept on the high-performance storage has been extended based on the experience gained with magnitUDE. Each user has a permanent storage area (HPC_HOME), available as long as access to amplitUDE is granted, and a temporary scratch space (SCRATCH), which keeps data only for a limited time. Both can be accessed easily via the environment variables $HPC_HOME and $SCRATCH.

  • HPC_HOME

    • permanent project data (e.g. program binaries, final results)

    • automatically created when access to amplitUDE is granted

    • 0.5 TB user quota

  • SCRATCH

    • parallel file system for computations and temporary working data

    • individual folders limited in time, managed by the user via the workspace environment (see the workspace documentation and the sketch after this list)

    • 10 TB user quota
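
For illustration, a minimal sketch of working with both areas. The workspace commands ws_allocate and ws_list are an assumption based on the widely used hpc-workspace tools, and the workspace name my_run and the 30-day duration are made-up values; the workspace documentation describes the exact interface on amplitUDE.

# print the locations of the personal storage areas
echo $HPC_HOME
echo $SCRATCH

# create and list a time-limited workspace (assumed hpc-workspace tools)
ws_allocate my_run 30
ws_list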

To see your currently used quotas, run the following commands (example output shown):

lfs quota -u $USER /lustre/ -h

Disk quotas for usr abc123d (uid 123):
     Filesystem    used   quota   limit   grace   files   quota   limit   grace
       /lustre/  2.677G   10.4T   10.5T       -  103178       0       0       -

lfs quota -p $(id -u $USER) /lustre/ -h
Disk quotas for prj 123 (pid 123):
     Filesystem    used   quota   limit   grace   files   quota   limit   grace
       /lustre/  2.677G    450G    500G       -  103176       0       0       -

Summary

System        Storage area   Path / variable            Quota    Lifetime
both systems  home folder    university-wide home       -        mounted on login and compute nodes; not recommended on compute nodes
magnitUDE     scratch        /scratch/$USER             -        subject to the delete schedule (Nutzerrichtlinien)
amplitUDE     HPC_HOME       $HPC_HOME                  0.5 TB   permanent while access to amplitUDE is granted
amplitUDE     SCRATCH        $SCRATCH                   10 TB    limited in time, managed via the workspace environment