HPCFS

@hpcfs

documentation

Public team

Joined on Oct 13, 2021

  • Partition rome: 20 compute nodes with Rocky Linux 8.5 have the following characteristics: 2 x AMD EPYC 7402 24-core processors in multithreaded configuration, totaling 96 compute cores per node. Note that for MPI jobs --ntasks-per-core=1 should be used. At most --mem=125G of RAM can be used per node, and at most --time=72:0:0 can be requested per job. Longer jobs can use --signal=USR1 or similar to start a graceful shutdown and restart (see the sketch below). Partition haswell: 20 compute nodes with Rocky Linux 8.5
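    A minimal sketch of a rome batch script respecting the limits above; the task count, the 10-minute signal lead time, the requeue handler, and the executable name are illustrative assumptions, not site defaults:

    ```bash
    #!/bin/bash
    #SBATCH --partition=rome          # rome partition described above
    #SBATCH --ntasks=48               # illustrative MPI task count
    #SBATCH --ntasks-per-core=1       # recommended for MPI jobs
    #SBATCH --mem=125G                # per-node memory limit on rome
    #SBATCH --time=72:0:0             # maximum walltime per job
    #SBATCH --signal=B:USR1@600       # send USR1 to the batch shell 10 min before the limit

    # Illustrative handler: checkpoint and requeue on USR1 so a longer run
    # can continue in a follow-up job (assumes the job is requeueable).
    trap 'echo "time limit near, checkpointing"; scontrol requeue "$SLURM_JOB_ID"' USR1

    srun ./my_mpi_program &   # hypothetical executable
    wait
    ```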
  • Basic Guides: Access to HPCFS, Support for users, Description of HPCFS, SLURM at HPCFS, Spack builds at HPCFS, Using ANSYS at HPCFS, Using Mathematica
  • License server flex.hpc: when starting mathematica or wolframnb for the first time, an activation window pops up suggesting activation through the web. Please find the button Other ways to activate at the bottom of the dialog and then
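    A hedged command-line alternative, assuming the standard Wolfram mathpass mechanism is available on the cluster; the file location and the "!server" syntax follow Wolfram's usual Linux conventions and are assumptions here, while the dialog route above is the documented one:

    ```bash
    # Assumption: point Mathematica at the flex.hpc license server via a
    # user-level mathpass entry instead of the activation dialog.
    mkdir -p ~/.Mathematica/Licensing
    echo '!flex.hpc' >> ~/.Mathematica/Licensing/mathpass
    ```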
  • 2024-09-01 Upgrade of GPU login nodes to Rocky Linux 9.4; 2024-03-06 Upgrade of SLURM and LUSTRE; 2023-07-07 SSD disk replacements on $HOME fileserver; 2023-03-01
  • Granting access: You can access the supercomputer HPCFS by username and password OR by SSH key. In both cases, you first need approval from the computing center of UL FME. To get it, you need to send two things to hpc@fs.uni-lj.si: a completed [form](https://www.fs.uni-lj.si/wp-content/uploads/2022/12/Access-HPCFS-EN-.pdf) (fill it out, sign it, and obtain the signature of the head of the research unit (laboratory) whose computing resources you will use); and an encrypted password OR an SSH key. Password: follow the [password](http://hpc.fs.uni-lj.si/password) link to obtain the encrypted password and send it to hpc@fs.uni-lj.si. SSH key: you can create and distribute an SSH key by following the instructions below. Once the account is approved and activated, you will receive an email notification at the address you provided in the form. You will then be able to access HPCFS using the password that you provided.
  • All issues related to HPC should be sent to hpc@fs.uni-lj.si. A ticket will be created and all communication will follow through the UL FME ticketing system.
  • We use two approaches for programming parallel tasks: MPI (Message Passing Interface) and OpenMP (Open Multi Processing). The basic rule of parallel programming is that the result of the serial and the parallel program must be the same. With OpenMP this is very easy to verify. Problems can arise only in programs that use a random number generator (Monte Carlo methods). OpenMP: OpenMP programs run on a single node only, distributing the work across multiple parallel threads. The compiler itself takes care of creating the threads. How the program should be split into threads is written as comment-like directives in the program, which means the parallel program is no different from the serial one. The program can even be tested by compiling it once without OpenMP to obtain a serial program, and then compiling it with OpenMP to obtain a parallel program that runs simultaneously on several cores of one node (see the sketch below).
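    A minimal sketch of that serial-versus-OpenMP check on the command line; the source file name, the -fopenmp flag (GCC), and the thread count are illustrative assumptions:

    ```bash
    # Compile the same source twice: without OpenMP the directives are ignored
    # and a serial program results; with -fopenmp a threaded program results.
    gcc -O2          prog.c -o prog_serial   # hypothetical source file
    gcc -O2 -fopenmp prog.c -o prog_omp

    # Run both on one node and compare the results, which must match.
    ./prog_serial > serial.out
    OMP_NUM_THREADS=8 ./prog_omp > omp.out   # threads on the cores of one node
    diff serial.out omp.out && echo "serial and parallel results agree"
    ```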
  • ANSYS Fluent and SSH: Fluent is known to start MPI jobs within its own fluent script. Usually, passwordless SSH to the allocated nodes is needed before any batch script can be submitted. The following three lines of code are sufficient to ensure you have this set up:
    test -f ~/.ssh/id_rsa || ssh-keygen -t rsa -f ~/.ssh/id_rsa -q -N ""
    test -f ~/.ssh/authorized_keys || cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
    grep -qf ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys || cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    To check whether passwordless SSH works, try to get two nodes and SSH to one of them within the next 5 minutes (see the sketch below).
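    A hedged sketch of that check; the partition name and the exact allocation flags are assumptions:

    ```bash
    # Ask for two nodes for five minutes (partition name is illustrative).
    salloc --nodes=2 --time=0:05:00 --partition=rome

    # From the shell that salloc opens, list the allocated nodes and try a
    # passwordless hop to one of them; it must not prompt for a password.
    scontrol show hostnames "$SLURM_JOB_NODELIST"
    ssh "$(scontrol show hostnames "$SLURM_JOB_NODELIST" | tail -n 1)" hostname
    ```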
  • Spack provides prebuilt modules for all users on HPCFS. Specific module builds are listed below. We restrict our builds to the RHEL8.4-provided compiler GCC@8.5.0 in order to provide haswell and rome CPU compatibility (arch=zen) for the system-provided layer (SLURM, UCX, knem). Higher-version GCC compilers are used on rome with zen2 (avx2) compatibility across the rome and haswell partitions, while AOCC compilers are intended only for the AMD rome partition. User Spack development: it is also possible to build and use modules locally for your own use by setting up the following configuration for local build and deployment (a usage sketch follows below):
    rm -rf ~/.spack
    mkdir ~/.spack
    sed -e '/root:/s,$spack,/work/$USER,' /opt/spack/etc/spack/defaults/config.yaml > ~/.spack/config.yaml
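    A hedged sketch of using the prebuilt modules and the local Spack setup described above; the module name, the example package, the compiler spec, and the setup-env.sh path (assuming the usual Spack layout under /opt/spack) are placeholders or assumptions:

    ```bash
    # Prebuilt modules provided by the central Spack installation.
    module avail                 # list what has been built for all users
    module load <module-name>    # placeholder for one of the listed modules

    # For your own builds after the ~/.spack configuration above, enable the
    # Spack commands (path assumes the standard layout under /opt/spack).
    source /opt/spack/share/spack/setup-env.sh
    spack install zlib %gcc@8.5.0   # small illustrative build into /work/$USER
    spack find                      # show what your local Spack now provides
    ```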
  • Hardware: HPCFS consists of two partitions: haswell from 2016: 20 compute nodes, 1 GPU compute node, 1 login node; rome from 2021: 20 compute nodes, 1 login node. Each haswell compute node consists of: 2x Intel Xeon E5-2680V3 12-core processor, 2.5 GHz base clock, 64 GB DDR4-2133 RAM (see the sketch below for inspecting the partitions).
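    A small sketch of how these partitions and nodes can be inspected from a login node with standard SLURM commands; the node name is a placeholder:

    ```bash
    sinfo                               # list the haswell and rome partitions and their state
    sinfo -N -p rome -o "%N %c %m"      # per-node view: name, CPU count, memory (MB)
    scontrol show node <nodename>       # detailed view of one node (placeholder name)
    ```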
  • Linux: If you use Linux on your (local) computer, run in the command line: ssh-keygen -m PEM -b 2048 -t rsa
    `Generating public/private rsa key pair.`
    `Enter file in which to save the key (/home/uporabnik/.ssh/id_rsa): `
    `Created directory '/home/uporabnik/.ssh'.`
    `Enter passphrase (empty for no passphrase): `
    `Enter same passphrase again: `
    `Your identification has been saved in /home/uporabnik/.ssh/id_rsa.`
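    Once the key pair exists, a minimal sketch of displaying the public key, which is the part you provide for account activation (assumes the default id_rsa location from the run above):

    ```bash
    # Print the public half of the key; share only this file, never id_rsa.
    cat ~/.ssh/id_rsa.pub
    ```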
  • Some examples for running on HPCFS. Using the gpu02 login node: set the host in your NoMachine connection to gpu02.hpc.fs.uni-lj.si and connect. Cloning the hpc-examples repository (see the sketch below)
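    A hedged sketch of the cloning step; the repository address is not given in this excerpt, so the URL below is only a placeholder:

    ```bash
    # Placeholder URL: substitute the real address of the hpc-examples repository.
    git clone <hpc-examples-repository-url>
    cd hpc-examples
    ```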
  • View the book with "Book Mode". Examples: Book example, Slide example, YAML metadata, Features, Themes