High Performance Computing

West Virginia University provides a variety of services and software support for computationally intensive and digital humanities research. The WVU High Performance Computing systems can be used to solve large problems in the physical, forensic, biological, and social sciences, engineering, the humanities, and business, with far more computing power than a desktop computer or workstation can provide.

The WVU High Performance Computing (HPC) facilities support computationally intensive research that requires high computational capability and low latency, helping research teams at WVU greatly reduce their analysis times. This includes free access to community nodes, with both CPU and GPU resources, for researchers at all institutes of higher learning in West Virginia. Researchers can also purchase dedicated nodes; when not in use by their owners, these nodes are shared with the community on a first-come, first-served basis.

HPC users have access to more than 500 TB of free storage and can also purchase group storage on the cluster, allowing data to be shared easily between researchers. ITS also offers storage in the centrally managed and secure Research Data Depot, where storage can be purchased at a cost-effective rate for five years.

Our current cluster contains 79 community nodes with a total of 4,824 compute cores, 9.3 terabytes of memory, and 18 NVIDIA P6000 GPUs. Nodes purchased by faculty members and departments add another 99 nodes with 3,520 cores and 29 NVIDIA GPUs, ranging from P6000 to A100 models. These additional nodes are available to community members in four-hour increments to increase the utilization of the system.

A new cluster, focused on artificial intelligence and machine learning, is set to come online before June 2023. It will consist of 30 nodes with 32 cores and four A30 GPUs each, four nodes with 32 cores and four A40 GPUs, two nodes with 64 cores and eight SXM A100 GPUs, and one node with three A40 GPUs dedicated to visualization and testing. All nodes are connected by a high-speed, low-latency HDR100 InfiniBand fabric to support tightly coupled multi-GPU and multi-node work.
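As a rough illustration of how GPU nodes like these are typically requested, the sketch below shows a batch job script. It assumes the cluster uses the Slurm scheduler; the partition, GRES, and module names are hypothetical placeholders, not WVU's actual configuration, so consult the Research Computing documentation for the real values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script (assumes a Slurm scheduler;
# names below are illustrative, not WVU-specific).
#SBATCH --job-name=gpu_example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4            # request four GPUs on one node
#SBATCH --time=04:00:00         # four-hour limit, matching the shared-node increments

module load cuda                # load the CUDA toolkit (module name assumed)
nvidia-smi                      # report the GPUs allocated to this job
```

The four-hour time limit mirrors the increments in which owner-purchased nodes are made available to the community.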

Contact the Research Computing group of Information Technology Services (ITS) for more information about High Performance Computing or the Research Data Depot.

HPC training resources are available throughout the year for both new and expert users, on topics ranging from parallel computing to neural networks.