# EESSI paper meeting (20210511)
Call for papers: https://onlinelibrary.wiley.com/pb-assets/assets/page/journal/1097024x/SPE-SI-HPC-1607014410373.pdf
Submission deadline: Monday **May 31st 2021**
Overleaf project (contact Alan to get access): https://www.overleaf.com/project/60993d67cfe5ee67c2fe2fe0
## Attendees
- Alan O'Cais (Jülich Supercomputing Centre)
- Kenneth Hoste (HPC-UGent)
- Thomas Röblitz (HPC-UBergen)
- Victor Holanda Rusu (CSCS)
- Bob Dröge (HPC-UGroningen)
- to invite
- Terje
- Caspar
- Adam (review?)
## Follow-up sync meetings
- Wed May 19th 13:30 - 15:00 CEST
- Wed May 26th 13:30 - 15:00 CEST
## Attention points
- Need to make clear how this differs from what ComputeCanada (CC) did. Check their paper.
## Places to steal from
- FEniCS proposals
- NESSI / S4
- CZI grant proposal
- FOSDEM submission
- Alan's writeup
- EESSI docs
## Structure
- Introduction: Problem statement + motivation [Kenneth]
- community aspect
- changing HPC landscape
- x86_64, aarch64, ppc64le, RISC-V
- relevance to exascale? [Alan]
- workflows
- Project overview [Bob, Kenneth, Thomas]
- design choices
- overall structure & layers [Bob]
- software projects (CernVM-FS, Gentoo Prefix, EasyBuild/Lmod/archspec & co) [Kenneth] (see the archspec sketch at the end of these notes)
- minimal requirements for clients
- build containers (Singularity)
- "continuous" release of software layer
- bi-yearly for compat layer?
- deployment with Ansible/Terraform [Bob, Terje?]
- testing with ReFrame [Victor] (see the ReFrame sketch at the end of these notes)
- EESSI community [Kenneth]
- How it started
- Who's involved
- Current structure
- Lack of dedicated funding (bottom-up)
- open source licensed, GitHub
- Use cases [Thomas]
- HPC cluster
- Cloud
- Workstation [Victor]
- WSL, macOS
- Testing across different systems
- Building software on top [Alan]
- Training (cluster in the cloud, magic castle) [Alan]
- might take some use cases from S4 proposal
- Work together with software developers
- review installations
- how to assess quality of installation (correctness, performance)
- Facilitate testing in CI environments [Alan]
- mount EESSI in GitHub Actions
- Pilot setup demo [Bob]
- 4 steps to get access in a Linux VM
- focused effort: limited software, x86_64 + aarch64
- Discussion [Alan]
- different CPUs (see intro?) [Kenneth]
- limitations of CernVM-FS [Bob?]
- risk of central setup (S0)
- read-only
- local disk (cache)
- internet access (default setup, workaround possible) [Alan]
- OS jitter
- deployment workflow [Kenneth]
- delay in adding stuff
- workaround: build on top locally (temporarily)
- security aspects [Kenneth]
- separate section?
- limit access to S0
- signed tarballs
- automation
- transparency
- layers of security
- proprietary software (EULAs, build vs run) [Kenneth]
- Intel, CUDA, AOCC, ...
- vs containers [Kenneth]
- integration with vendor libraries (cfr. Cray, MPI) [Alan, Victor?]
- compatibility of host & EESSI MPI library
- OS jitter due to CernVM-FS + workaround (NFS export of CVMFS filesystem) [Victor]
- OpenFOAM as example?
- impact on other projects in terms of improvements, adoption, etc.
- contributions to EasyBuild/ReFrame [Victor]
- adoption of CernVM-FS/Gentoo Prefix in HPC [Bob]
- Evaluation [Alan]
- benchmarks [Alan]
- GROMACS (cfr. write-up by Alan)
- scalable workload (OpenFOAM?) [Kenneth]
- direct MPI eval: OSU (latency/bandwidth) (see the ping-pong sketch at the end of these notes)
- Acknowledgements [Thomas]
- FEniCS
- Dell
- ComputeCanada
- NESSI + Terje (contributions, Stratum-1)
- UGroningen (effort + Stratum-0 + S1)
- SURF
- HPC-UGent
- JSC (testing)
- AWS
- Azure
- OSU OSL
- Future work [Kenneth]
- automation (CI/CD) [Bob, Kenneth, Terje]
- inject host libraries via /opt/eessi/lib + CVMFS variant symlinks [Alan]
- MPICH toolchain [review Victor]
- funding [Thomas]
- flexibility in filesystem layer (CernVM-FS vs S3) + software layer (EasyBuild vs Spack vs ....) + also compat layer (cfr. Nix -> Gentoo move by CC) [Kenneth]
- proper consortium [Thomas]
- macOS [Kenneth, Terje]
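## Code sketches
A minimal archspec sketch (cfr. the software projects item in the Project overview), assuming the `archspec` Python package is available; the directory layout printed at the end is hypothetical and only illustrates how the detected microarchitecture and its fallback targets could map to per-architecture software directories.
```python
# Minimal archspec sketch: detect the host CPU and list compatible fallback targets.
import archspec.cpu

host = archspec.cpu.host()
print(f"Detected microarchitecture: {host.name} (family: {host.family})")

# Compatible targets, most specific first; the exact chain depends on the archspec version.
fallbacks = [host.name] + [anc.name for anc in host.ancestors]
print("Candidate software directories, most optimised first:")
for name in fallbacks:
    print(f"  software/linux/{host.family}/{name}   # hypothetical layout")
```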
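A minimal ReFrame test sketch (cfr. "testing with ReFrame"), assuming a GROMACS module provided via the EESSI software layer and a small input file `benchmark.tpr`; module name, input, options and output patterns are assumptions, not the actual EESSI test suite.
```python
# Illustrative ReFrame test: run GROMACS from the EESSI software stack and check it completes.
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class EESSIGromacsCheck(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.descr = 'GROMACS run using the EESSI software layer'
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.modules = ['GROMACS']          # assumed module name, provided via Lmod
        self.executable = 'gmx_mpi'
        self.executable_opts = ['mdrun', '-s', 'benchmark.tpr', '-maxh', '0.05']
        self.num_tasks = 4
        # GROMACS reports 'Finished mdrun' and a 'Performance:' line on stderr/log
        self.sanity_patterns = sn.assert_found(r'Finished mdrun', self.stderr)
        self.perf_patterns = {
            'ns_per_day': sn.extractsingle(r'Performance:\s+(?P<nsday>\S+)',
                                           self.stderr, 'nsday', float),
        }
        self.reference = {'*': {'ns_per_day': (0, None, None, 'ns/day')}}
```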
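A minimal MPI ping-pong sketch (cfr. "direct MPI eval") using mpi4py as a stand-in for the OSU latency benchmark; it assumes mpi4py and NumPy are available from the software stack and only measures small-message point-to-point latency between two ranks.
```python
# Minimal mpi4py ping-pong (stand-in for OSU latency), run with:
#   mpirun -np 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
assert comm.Get_size() == 2, "run with exactly 2 ranks"

nbytes = 8          # message size in bytes; OSU sweeps a whole range of sizes
iters = 10000
buf = np.zeros(nbytes, dtype='b')

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
t1 = MPI.Wtime()

if rank == 0:
    # average one-way latency in microseconds
    print(f"{nbytes} bytes: {(t1 - t0) / (2 * iters) * 1e6:.2f} us")
```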