Description of HPCFS
Hardware
HPCFS consists of two partitions (a short core-count sketch follows the node specifications below):
- haswell (installed 2016): 20 compute nodes, 1 GPU compute node, 1 login node
- rome (installed 2021): 20 compute nodes, 1 login node
Each Haswell compute node consists of:
- 2x Intel Xeon E5-2680 v3 12-core processors, 2.5 GHz base clock
- 64 GB DDR4-2133 RAM
- 250 GB SATA HDD
- QDR InfiniBand
- 2x 1 Gbps Ethernet
Haswell GPU compute node:
- 2x Intel Xeon E5-2680 v3 12-core processors, 2.5 GHz base clock
- 256 GB DDR4-2133 RAM
- 250 GB SATA HDD
- QDR InfiniBand
- 10 Gbps Ethernet
- 3x NVIDIA K80
Haswell login node:
- 2x Intel Xeon E5-2680 v3 12-core processors, 2.5 GHz base clock
- 256 GB DDR4-2133 RAM
- 2x 1 TB SATA SSD
- QDR InfiniBand
- 10 Gbps Ethernet
- NVIDIA K40
Rome compute nodes:
- 2x AMD EPYC 7402 24-core processors, 2.8 GHz base clock
- 128 GB DDR4-3200 RAM
- 1 TB NVMe SSD
- HDR100 InfiniBand
- 2x 1 Gbps Ethernet
Rome login node:
- 2x AMD EPYC 7302 16-core processors, 3.0 GHz base clock
- 512 GB DDR4-3200 RAM
- 2x 1 TB NVMe SSD
- HDR100 InfiniBand
- 2x 10 Gbps Ethernet
- NVIDIA A100
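The two partitions differ mainly in per-node core count (2x 12 cores on Haswell, 2x 24 cores on Rome) and memory. Below is a minimal C sketch, assuming a compiler with OpenMP support (GCC or Intel), that reports how many logical CPUs and OpenMP threads a node exposes; the exact numbers also depend on whether hyper-threading/SMT is enabled, which is not specified here.

```c
#include <stdio.h>
#include <unistd.h>
#include <omp.h>

/* Minimal sketch: report the logical CPUs the OS exposes and the default
 * OpenMP thread count.  On a Haswell compute node (2x 12 cores) one would
 * expect 24 (or 48 with hyper-threading); on a Rome node (2x 24 cores)
 * 48 (or 96 with SMT).  Assumed build command:
 *   gcc -fopenmp node_info.c -o node_info      (or icc -qopenmp ...)
 */
int main(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs online */
    printf("logical CPUs online : %ld\n", ncpu);
    printf("OpenMP max threads  : %d\n", omp_get_max_threads());

    /* Each thread reports itself once, to confirm the thread pool size. */
    #pragma omp parallel
    {
        #pragma omp critical
        printf("  thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```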
HPCFS also provides the following storage systems (a small capacity-check sketch follows the list):
- Lustre filesystem:
  - 4x 24 TB Object Storage Targets (OSTs)
  - 96 TB total capacity
- Network filesystem
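As a small illustration of checking storage capacity from a program, the sketch below uses the POSIX statvfs() call on a mount point. The path /lustre is only an assumed example; the actual mount points on HPCFS are not specified here.

```c
#include <stdio.h>
#include <sys/statvfs.h>

/* Minimal sketch: report total and free capacity of a mounted filesystem.
 * The mount point "/lustre" is a hypothetical example; replace it with the
 * actual Lustre (or NFS) mount point used on the cluster. */
int main(void)
{
    const char *mount = "/lustre";            /* hypothetical mount point */
    struct statvfs fs;

    if (statvfs(mount, &fs) != 0) {
        perror("statvfs");
        return 1;
    }

    double total_tb = (double)fs.f_blocks * fs.f_frsize / 1e12;
    double free_tb  = (double)fs.f_bavail * fs.f_frsize / 1e12;
    printf("%s: %.1f TB total, %.1f TB free\n", mount, total_tb, free_tb);
    return 0;
}
```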
Software
The operating system on HPCFS is Linux, as on virtually all larger HPC systems. For compatibility reasons, the CentOS distribution was chosen.
On HPCFS we have installed the following software (a minimal build-and-run sketch follows the list):
- Ansys Multiphysics
- Ansys CFX, Fluent, Maxwell, HFSS
- OpenFOAM CFD and OpenFOAM-extend
- VisIt and ParaView postprocessors
- Intel Fortran and C/C++ compilers
- Octave, R, Mathematica, MATLAB
- OpenMP, Open MPI, HP-MPI, Intel MPI
- ATLAS, BLAS, BLACS, FFTW, GotoBLAS, MUMPS, NetCDF, HDF5, SPARSKIT, ScaLAPACK
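As a minimal illustration of using the compiler and MPI toolchain listed above, the sketch below is a plain MPI "hello world" in C. It assumes that an mpicc wrapper from Open MPI (or the equivalent Intel MPI wrapper) is available; the module names and launcher options actually used on HPCFS are not specified here, so the build and run commands in the comment are assumptions.

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal MPI sketch: each rank prints its rank, the total number of
 * ranks, and the node it runs on.  Assumed build/run commands (the actual
 * wrapper and launcher depend on the loaded MPI module):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 48 ./hello_mpi
 */
int main(int argc, char **argv)
{
    int rank, size, namelen;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(node, &namelen);

    printf("rank %d of %d on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}
```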