
<font size=5 color=blue>**Workshop on High Performance Computing at KTH**</font>
:::success
**Apr. 25, 09:00 -- 17:00 (CEST), 2024**
:::
[TOC]
## [Workshop page](https://www.kth.se/en/forskning/forskningsplattformar/digitalisering/kalender/workshop-on-high-performance-computing-at-kth-1.1313518)
## 09.00 – 09.10 Welcome and Introduction
- Tobias Oechtering, KTH Digitalization Platform
- Rossen Apostolov, PDC Manager
## 09.10 – 10.15, Jordi Muela, BSC
> One of the main developers of SOD2D, a highly efficient GPU-accelerated code for cutting-edge fluid-flow simulations, scaling to several thousand GPUs.
- short intro to BSC
- exascale computing
- CPU vs GPU
- GPU programming
    - low level
        - CUDA, OpenCL, SYCL, NVPTX
    - pragma-based high-level models (see the OpenACC sketch after this list)
        - OpenACC
        - OpenMP
    - graphics engines
        - Unreal, DirectX, Vulkan, OpenGL
    - language standard parallelization
        - C/C++ and Fortran
    - ...
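
A minimal sketch of the pragma-based approach, assuming an OpenACC compiler such as `nvc++` from the NVIDIA HPC SDK (SOD2D itself uses OpenACC from Fortran; this illustrative C++ SAXPY is not from the talk):

```cpp
// Illustrative OpenACC offload: one directive moves the loop to the GPU.
// Build (assumption): nvc++ -acc -Minfo=accel saxpy.cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;
    float* xp = x.data();
    float* yp = y.data();

    // copyin/copy clauses make the host<->device data movement explicit
    #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];

    std::printf("y[0] = %.1f\n", yp[0]); // expect 5.0
    return 0;
}
```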
### SOD2D Code -- exascale in CFD
- new Continuous Galerkin high-order spectral element method (CG-SEM)
- SOD2D = Spectral high-Order coDe 2 solve partial Differential equations
- gitlab, bsc_sod2d
- Fortran + OpenACC; requires HDF5 and MPI
**scalability**
- scaling runs carried out on MareNostrum 5 ACC
**communication pattern**
- MPI communication patterns
    - point-to-point
    - collective
    - one-sided
- CUDA-aware MPI + GPUDirect (see the sketch after this block)
    - communication GPU --> GPU
    - before: GPU --> CPU --> CPU --> GPU
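
A hedged sketch of what CUDA-aware MPI + GPUDirect buys: the device pointer goes straight into `MPI_Send`/`MPI_Recv`, so the transfer can go GPU --> GPU instead of being staged through host memory. This assumes a CUDA-aware MPI build (e.g. Open MPI with UCX); it is my illustration, not code from the talk:

```cpp
// Illustrative CUDA-aware MPI exchange between two ranks.
// Run (assumption): mpirun -n 2 ./a.out on a CUDA-aware MPI stack.
#include <cuda_runtime.h>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    double* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(double)); // device-resident buffer

    if (rank == 0) {
        // The device pointer is passed directly to MPI -- no cudaMemcpy
        // to a host staging buffer (the "before" path in the notes).
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```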
**unified memory** (see the CUDA sketch below)
- SOD2D vs nekRS vs Nek5000
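
For context on the unified-memory point, a minimal CUDA C++ sketch (my illustration, not from the talk): `cudaMallocManaged` returns one pointer valid on both host and device, with pages migrated on demand.

```cpp
// Illustrative CUDA unified (managed) memory: one allocation is
// addressable from both CPU and GPU; the driver migrates pages on demand.
// Build (assumption): nvcc managed.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(double* v, int n, double a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

int main() {
    const int n = 1 << 20;
    double* v = nullptr;
    cudaMallocManaged(&v, n * sizeof(double)); // visible to host and device

    for (int i = 0; i < n; ++i) v[i] = 1.0;     // host writes
    scale<<<(n + 255) / 256, 256>>>(v, n, 3.0); // device updates in place
    cudaDeviceSynchronize();                    // wait before host reads
    std::printf("v[0] = %.1f\n", v[0]);         // expect 3.0

    cudaFree(v);
    return 0;
}
```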
**several application cases**
## 10.15 – 10.30 Break
## 10.30 – 12.00 Success Stories
### 1. Quantum Chemistry: Patrick Norman, Director of the PDC Centre for High Performance Computing
- intro to VeloxChem
### 2. Fluid Dynamics: Outi Tammisola
- complex fluids and multiphase flow
- FluTAS = Fluid Transport Accelerated Solver
- FluTAS vs OpenFOAM
- lots of codes (in publications)
- github/gitlab?
- uses PDC and MeluXina/Vega for computing
### 3. Heterogeneous Systems: Ivy Peng
- a heterogeneous way: from GPU to RISC-V
- **Sleipner research cluster**
- leverage specialized hardware for HPC
- increased heterogeneity in memory tech
- JUPITER HPC cluster will be based on the SiPearl Rhea processor + HBM-DRAM heterogeneous memory
- HBM3-DRAM memory
- RISC-V based HPC
- OpenCUBE project
- co-design of Plasma-PEPSC codes with EPI (European Processor Initiative)
- https://www.european-processor-initiative.eu/
### 4. Biomolecular simulations: Alessandra Villa
- GROMACS package
- zenodo.10683366: how to run GROMACS efficiently on LUMI
### 5. Visualisation: Tino Weinkauf, InfraVis
- https://infravis.se
---
## 12.00 – 13.00 Lunch
---
## 13.00 – 14.00 Andrey Alekseenko, Modern hardware and software stacks
- https://chipsandcheese.com/
## 14.00 – 14.30 Gert Svensson, Swedish HPC Infrastructure
- what NAISS does and does not do
- management of sensitive data?
## 14.30 – 14.45 Malin Sandström, EuroHPC: the pan-European HPC Infrastructure
- brief intro to EuroHPC JU
- CoEs, EPICURE, HPC SPECTRA, NCCs
- type of calls
- 6-pillar program
- ENCCS is highlighted as the contact organization for using EuroHPC resources
---
## 14.45 – 15.00 Break
---
## 15.00 – 16.00 HPC Communities
### 1. SeRC (Swedish e-Science Research Centre), Olivia Eriksson
- three pillars
- application area
- e-infrastructure
- method development
- common e-science experts (RSEs)
- 8 research communities
- SeSE (Swedish e-Science Education)
- another one is eSSENCE (SeRC vs eSSENCE)
### 2. EU Centres of Excellence
- Rossen Apostolov (Director BioExcel CoE)
- PMX
- ~~Niclas Jansson (Director CEEC CoE)~~
- Jeremy Johnathan Williams (Project Manager Plasma-PEPSC CoE)
- fusion energy, plasma accelerators, space physics
- four codes
- Vlasiator (UoH)
- GENE (MPG)
- PIConGPU (HZDR)
- BIT (IPP, UL)
- EPI processor
- EPI accelerator
- Quantum Computing
### 3. Peter Larsson, High-level specialised application support service in HPC
- EPICURE (funding from EuroHPC JU?)
- each granted project will be paired with a support team (not a generic helpdesk)
- LUMI info
- workshops/hackathons/webinars about EPICURE
- ==GPU hackathon next year about EPICURE==
## 16.00 – 17.00 Final session with networking
---