HPC Policies

HPC Scratch Filesystem Policy

The Lustre scratch filesystem on the UH-HPC cluster is a shared resource intended only for temporary storage of data. The scratch filesystem is not backed up; users are responsible for backing up their own data, and UH ITS is not responsible for any loss of data.

Below are the details of the purge policy for the Lustre scratch filesystem.

  • Directory tree subject to purge
    • /lus/scratch/${USER} a.k.a. ~/lus
  • Types of file system objects subject to purge
    • Regular files
    • Symlinks
    • Block files, Character files, Named pipes and Sockets
  • Attributes of files to be purged (illustrated in the sketch after this list)
    • Creation time > 35 days and (file size > 1 MB or file size == 0 bytes)
    • Creation time > 120 days and (file size >= 1 byte and file size <= 1 MB)
  • Frequency of purge: Daily
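
The two size/age rules above partition files by size: empty files and files larger than 1 MB become eligible for purging after 35 days, while files between 1 byte and 1 MB become eligible after 120 days. The following is a minimal sketch of that eligibility check in Python; the function name and the "age in days" parameter are illustrative assumptions, not part of the actual purge tooling.

    # Illustrative sketch of the purge rules above; not the actual purge tool.
    MB = 1024 * 1024  # assuming 1 MB means 2**20 bytes

    def is_purge_candidate(age_days: float, size_bytes: int) -> bool:
        """True if a file of this creation age and size matches a purge rule."""
        # Rule 1: older than 35 days and either larger than 1 MB or empty.
        if age_days > 35 and (size_bytes > MB or size_bytes == 0):
            return True
        # Rule 2: older than 120 days and between 1 byte and 1 MB inclusive.
        if age_days > 120 and 1 <= size_bytes <= MB:
            return True
        return False

For example, a 500 KB file is kept at 40 days old (is_purge_candidate(40, 500 * 1024) is False) but becomes purge-eligible once its creation time exceeds 120 days.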

Login Node Policies And Etiquette

The UH ITS HPC Cluster login nodes serve two specific purposes: providing SSH shell access for transferring files to and from the cluster, and launching batch and interactive sessions on the compute nodes. Specifically, Globus, sftp, scp, and rsync transfers are allowed, along with launching SLURM jobs (batch and interactive) and modifying text files with a text editor; everything else should be run on a compute node. The login nodes are a shared resource and are the only access point to the cluster for hundreds of users. Running other tasks on the login nodes is therefore not allowed: offending tasks will be canceled, and repeat offenders can have their HPC accounts disabled.
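
As an illustration of the allowed workflow, the sketch below submits a SLURM batch job from a login node using Python's standard library. The script name job.slurm is a hypothetical placeholder for a batch script you have written; sbatch itself is the standard SLURM submission command.

    # Minimal sketch: launching a SLURM batch job from a login node.
    # "job.slurm" is a hypothetical batch script name, not a UH-HPC specific.
    import subprocess

    result = subprocess.run(
        ["sbatch", "job.slurm"],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if sbatch fails
    )
    print(result.stdout.strip())  # SLURM prints e.g. "Submitted batch job 12345"

The same pattern applies to interactive sessions (for example via srun), which run on a compute node rather than on the login node itself.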

HPC Cluster Maintenance

The UH ITS HPC Cluster undergoes regular maintenance to address patching, security, and system stability. The first Wednesday of each month that is not a holiday is reserved from 8am to 5pm for this maintenance. Although it is rare, jobs running on the cluster during this window may have to be stopped and possibly restarted; users are responsible for being aware of any impact a restart may have on their jobs.
