The WVU High-Performance Computing (HPC) facilities support computationally intensive
research that requires especially powerful computing capabilities.
HPC resources help research teams at WVU greatly reduce their computational
analysis times. This includes free access to community nodes, with both CPU and GPU
resources, for researchers at all institutions of higher learning in West Virginia.
Researchers can also purchase individual nodes; when not in use by their owners,
these nodes are shared with the community on a first-come, first-served basis.
Research Data Storage
HPC users have access to more than 500 TB of data storage accessible for processing
inside the HPC clusters. Researchers can also purchase group storage on the cluster,
which allows data to be shared easily among collaborators. The Research Office
also offers storage in the centrally managed and secure Research Data Depot, where
storage can be purchased at a cost-effective rate for five years. This data storage
is not intended for storing protected or regulated data.
HPC Clusters
HPC currently maintains four
clusters: Thorny Flat, Dolly Sods, Harpers Ferry, and a cluster for CTSI.
Thorny Flat
Thorny Flat, our general-purpose
HPC cluster, contains 182 compute nodes with a total of 6,756 CPU cores, 30 TB of
RAM, 21 Nvidia Quadro P6000 GPUs, 24 Nvidia RTX 6000 GPUs, and 2 Nvidia A100
GPUs. Of those compute nodes, 79 are community nodes; the remaining 103 were
purchased by faculty members and departments. These faculty-owned nodes are
available to community members in four-hour increments to increase the
utilization of the system.
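As a sketch of what a job targeting those shared nodes might look like (the partition and module names below are illustrative assumptions, not Thorny Flat's actual configuration), a Slurm batch script would cap its wall-clock time at the four-hour limit:

```shell
#!/bin/bash
# Hypothetical Slurm job script. Partition and module names are
# assumptions for illustration, not the cluster's real configuration.
#SBATCH --job-name=example_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=04:00:00        # four-hour wall-clock limit on shared nodes
#SBATCH --output=example_%j.out

module load example/1.0        # load whatever software stack the job needs
srun ./my_analysis             # run the analysis on the allocated cores
```

Submitted with `sbatch`, a job like this fits within a single four-hour increment on an otherwise faculty-owned node.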
Dolly Sods
Dolly Sods, our GPU-accelerated
HPC cluster, is focused on artificial intelligence and machine learning. It
consists of 30 nodes with 32 CPU cores and four A30 GPUs each, four nodes with
32 CPU cores and four A40 GPUs each, and two nodes with 64 CPU cores and eight
SXM A100 GPUs each. All nodes are connected to a high-speed, low-latency HDR100
InfiniBand fabric to support tightly coupled multi-GPU and multi-node work.
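To illustrate the kind of tightly coupled multi-node GPU work this fabric supports, a Slurm request spanning two GPU nodes might look like the following sketch (GPU type, module, and script names are hypothetical, not Dolly Sods' actual configuration):

```shell
#!/bin/bash
# Hypothetical multi-node GPU job. GRES/module/script names are
# illustrative assumptions, not the cluster's real configuration.
#SBATCH --job-name=ddp_train
#SBATCH --nodes=2              # span two GPU nodes over the HDR100 fabric
#SBATCH --gres=gpu:4           # four GPUs per node
#SBATCH --ntasks-per-node=4    # one task (rank) per GPU
#SBATCH --time=08:00:00

module load example-cuda/12.0  # illustrative module name
srun python train.py           # srun launches one rank per GPU across nodes
```

With one rank per GPU, the inter-node communication in a job like this is exactly the traffic the low-latency InfiniBand fabric is there to carry.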
Harpers Ferry
Harpers Ferry, our next
general-purpose HPC cluster and the planned replacement for Thorny Flat, contains
37 compute nodes with a total of 9,472 CPU cores and 33 TB of RAM.
CTSI
The CTSI cluster is a HIPAA-compliant
cluster used by the CTSI group. It consists of 8 compute nodes with a
total of 400 CPU cores, 4 TB of RAM, and 4 Nvidia Tesla V100s.
Contacts
Contact the Research Computing team by submitting a message, or find more information in the HPC documentation.