
High-Performance Computing

WVU's High-Performance Computing systems support large-scale computational problems across the physical, forensic, biological, and social sciences, engineering, the humanities, and business.

Cluster Specs

HPC Clusters: Thorny Flat and Dolly Sods
Compute Nodes: 159 (Thorny Flat), 34 (Dolly Sods)
CPU Cores: 6,196 (Thorny Flat), 1,184 (Dolly Sods)
Aggregated RAM: 10.2 TB
GPU Cards: 155
Aggregated CUDA Cores: 771,360
Aggregated GPU Memory: 5 TB

The WVU High-Performance Computing (HPC) facilities support computationally intensive research that requires especially powerful computing resources.

HPC resources help research teams at WVU greatly reduce their computational analysis times. This includes free access to community nodes, with both CPU and GPU units, for researchers at all institutes of higher learning in West Virginia. Researchers can also purchase individual nodes, which are otherwise shared with the community on a first-come, first-served basis.

Research Data Storage

HPC users have access to more than 500 TB of data storage accessible for processing inside the HPC clusters. Researchers can also purchase group storage on the cluster, which allows data to be shared easily among members of a research group; a sketch of one sharing setup follows below. The Research Office also offers centrally managed, secure storage in the Research Data Depot, where space can be purchased at a cost-effective rate for five years. This data storage is not intended for protected or regulated data.
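As a minimal sketch of how a group allocation can be set up for sharing once it has been provisioned (the path and Unix group name below are hypothetical; the actual layout depends on how Research Computing provisions the space), a setgid directory lets new files inherit the group:

```
import os
import shutil

# Hypothetical group storage path and Unix group name; real values are
# assigned when a group storage allocation is purchased.
GROUP_DIR = "/group/my-lab/shared"
GROUP_NAME = "my-lab"

# Create a shared subdirectory inside the group allocation.
os.makedirs(GROUP_DIR, exist_ok=True)

# Assign the lab's Unix group and set the setgid bit (mode 2770) so files
# created here inherit the group and remain readable by lab members only.
shutil.chown(GROUP_DIR, group=GROUP_NAME)
os.chmod(GROUP_DIR, 0o2770)
```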

HPC Clusters

HPC currently maintains four clusters: Thorny Flat, Dolly Sods, Harpers Ferry, and a cluster for the CTSI.

Thorny Flat

Thorny Flat, our general-purpose HPC cluster, contains 182 compute nodes with a total of 6,756 CPU cores, 30 TB of RAM, 21 Nvidia Quadro P6000 GPUs, 24 Nvidia RTX 6000 GPUs, and 2 Nvidia A100 GPUs. Of those compute nodes, 79 are community nodes; the remaining 103 were purchased by faculty members and departments. When idle, these purchased nodes are available to community members in four-hour increments to increase overall utilization of the system.
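As a hedged illustration of how a job might be sized to fit that four-hour sharing window, the sketch below assumes a Slurm-style batch scheduler; the partition name and core count are hypothetical, so consult the HPC documentation for the actual queue names and limits.

```
import subprocess

# Minimal sketch of a batch job sized to the four-hour sharing window on
# purchased nodes. The "standby" partition name and core count are
# hypothetical placeholders.
job_script = """\
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=standby
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=04:00:00

srun ./my_simulation
"""

# sbatch accepts a job script on standard input and prints the job ID.
result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())
```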

Dolly Sods

Dolly Sods, our GPU-accelerated HPC cluster, is focused on artificial intelligence and machine learning. It consists of 30 nodes with 32 CPU cores and four A30 GPUs each, four nodes with 32 CPU cores and four A40 GPUs each, and two nodes with 64 CPU cores and eight SXM A100 GPUs each. All nodes are connected to a high-speed, low-latency HDR100 InfiniBand fabric to support tightly coupled multi-GPU and multi-node work.
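As a minimal sketch of the kind of tightly coupled multi-node work that fabric supports, the example below uses mpi4py with one MPI rank per GPU, a common pattern on multi-GPU nodes; the four-GPUs-per-node mapping and the launch method (e.g. srun or mpirun) are assumptions that depend on the cluster's software stack.

```
from mpi4py import MPI

# Each MPI rank typically drives one GPU; with four A30s per node,
# a two-node job would launch eight ranks.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical per-rank GPU assignment: ranks map onto the four local GPUs.
local_gpu = rank % 4
print(f"rank {rank}/{size} would use local GPU {local_gpu}")

# A tightly coupled step: combine a per-rank partial result across nodes.
# The low-latency InfiniBand fabric is what keeps collectives like this fast.
partial = float(rank)
total = comm.allreduce(partial, op=MPI.SUM)
if rank == 0:
    print(f"allreduce total across {size} ranks: {total}")
```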

Harpers Ferry

Harpers Ferry, our next general-purpose HPC cluster and the planned replacement for Thorny Flat, contains 37 compute nodes with a total of 9,472 CPU cores and 33 TB of RAM.

CTSI

The CTSI cluster is a HIPAA-compliant cluster used by the CTSI group. It consists of 8 compute nodes with a total of 400 CPU cores, 4 TB of RAM, and 4 Nvidia Tesla V100 GPUs.

Contacts

Contact the Research Computing team by submitting a message or find more information in the HPC documentation.

Staff

Aldo Humberto Romero

Director of Research Computing

Aldo is an Eberly Distinguished Professor of Physics who leads the Research Computing team in continuing the sustained growth of HPC resources at WVU. Aldo also helps with outreach to the WVU research computing community and is an ACCESS-NSF Campus Champion.

Dr. Guillermo Franco

Senior Software Specialist

Guillermo has a strong background in the scientific programming languages C, FORTRAN, and Python. He is also an ACCESS Campus Champion and works directly with researchers to help them make the most effective use of HPC resources.

Daniel Turpen

Daniel runs cloud services for the B&E Business Data Analytics program and Global access, and provides computational support to the GoFirst program.

Dr. Irene Burkhalter

Junior Software Specialist

Irene has a background in scientific computing, including the languages C, Python, Matlab, and Java, as well as a strong background in metaprogramming. She works with researchers to help them effectively use HPC resources.

Dr. Joseph Glaser

Systems Administrator/Research Scientist

Joe is an astrophysicist with a passion for and strong background in high-performance computing. His expertise is focused on optimizing scientific software and workflows to take advantage of modern compute hardware and on deploying modular, scalable HPC infrastructure. In addition, he is an active full member of the NANOGrav Pulsar Timing Array collaboration and of WVU's Center for Gravitational Waves and Cosmology.

HPC training resources are available throughout the year for new and expert users, covering topics from parallel computing to neural networks.