HPC Cluster Investment And Price Card Information

The UH ITS HPC Cluster is a joint investment between UH and the research community based on the condo compute model. UH made an initial investment of $1.8 million in the current cluster, consisting of 178 standard compute nodes and 6 large-memory nodes, which was installed on November 10, 2014. The research community contributes back to this resource by purchasing nodes that are added to the cluster; the first PI investment, installed on March 19, 2016, added 91 compute nodes and a community GPU node, bringing the cluster to 276 compute nodes. The goal is an efficient and sustainable compute resource for the UH system.

Cluster Price Card

ITEM NAME | ITEM DESCRIPTION | ITEM PRICE
Standard Compute Node | Compute node with 20 cores and 128 GB of RAM | $6,600
Large Memory Compute Node | Compute node with 40 cores and 1 TB of RAM | $33,900
GPU Compute Node | Compute node with 20 cores, 128 GB of RAM, and 2 NVIDIA Tesla GPU cards | $13,600
Custom Node | Compute node with TBD cores and TBD GB of RAM | TBD
1 TB of Lustre File System Storage | 1 TB of scratch file system for 5 years; requires the purchase of one compute node per 1 TB | $600
One Hour of Standard Compute Node Computation | One hour of compute on a standard 20-core, 128 GB RAM compute node; 1,000-hour minimum purchase | $0.50/hr ($500 minimum)
One Hour of Large Memory Compute Node Computation | One hour of compute on a large-memory 40-core, 1 TB RAM compute node; 250-hour minimum purchase | $2.00/hr ($500 minimum)
0.5 TB of Network ValueStorage | 0.5 TB of Scale Out storage attached to the HPC for long-term file storage (more information) | $65 per year
0.5 TB of Network ValueStorage with Replication | 0.5 TB of Scale Out storage attached to the HPC for long-term file storage, replicated (more information) | $130 per year
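
As a worked example of the hourly compute pricing above, the sketch below applies the per-hour rate together with the $500 minimum purchase. This is illustrative only; the billed_cost helper is hypothetical and not part of any UH tooling.

```python
def billed_cost(hours: float, rate_per_hour: float, minimum: float = 500.0) -> float:
    """Cost of a compute-hour purchase: hours at the hourly rate,
    subject to the $500 minimum purchase."""
    return max(hours * rate_per_hour, minimum)

# 1,000 hours on a standard node at $0.50/hr hits the $500 minimum exactly.
print(billed_cost(1000, 0.50))   # 500.0
# 250 hours on a large-memory node at $2.00/hr also comes to $500.
print(billed_cost(250, 2.00))    # 500.0
# Larger purchases simply scale with the hourly rate.
print(billed_cost(3000, 0.50))   # 1500.0
```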

What Does Node Ownership Entail?

Owning a node on the UH HPC Cluster entitles the owner to guaranteed, immediate access to the owned resource(s). In practice, this means that if another user is utilizing “community” cycles on those resources, that user’s job is preempted immediately and the owner’s job takes its place. Owners can therefore run on their owned resources and also compete for resources in the “community” portion of the cluster. Owners also allow their resources to be used in the “kill” partition of the cluster when they are not utilizing them. Finally, owners get extended wall-time on their resources: the current default is two weeks, but this can be extended depending upon initial input from owners.
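
As a rough illustration of how owners interact with these partitions, the sketch below submits the same batch script to an owner partition, the community partition, and the preemptible kill partition. It assumes a Slurm-managed cluster; the partition names (owner_lab, community, kill), the script name, and the wall-times are hypothetical placeholders, with actual values set by cluster policy.

```python
"""Minimal sketch of job submission on a condo-style Slurm cluster.
Partition names and wall-times below are hypothetical examples."""
import subprocess

def submit(script: str, partition: str, walltime: str) -> str:
    """Run sbatch for `script` on `partition` with a wall-time limit,
    returning sbatch's confirmation line (e.g. 'Submitted batch job 123')."""
    result = subprocess.run(
        ["sbatch", f"--partition={partition}", f"--time={walltime}", script],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

# On owned nodes, jobs start immediately (preempting community jobs)
# and may use the extended two-week default wall-time.
print(submit("job.slurm", "owner_lab", "14-00:00:00"))

# Anyone, owners included, can compete for idle cycles in "community".
print(submit("job.slurm", "community", "3-00:00:00"))

# Jobs in "kill" run on idle owner nodes and are preempted as soon as
# the owner submits work.
print(submit("job.slurm", "kill", "1-00:00:00"))
```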

Other Benefits

  • Node owners can purchase portions of the parallel Lustre scratch file system to park their data and results for the life of their node, up to a limit of 1 TB per node.
  • Node owners can get enhanced software installation assistance and help running on their resources from the Cyberinfrastructure team.
  • Node owners are stakeholders and therefore have more say in how the cluster is managed with respect to wall-time limits and other scheduler functions.
  • Node owners can take on accounting management for their resources and manage them within their group as they see fit.
