# AGENDA 23. 01. 2025

### 09.15 Requirements for the 2nd Reporting Period: Interpretation from the Special review
- The revision of Dx.1 is likely due 60 days after we get the report.
- RW: needs to be more technical.
- Training: they are looking for more community training (as in the multiscale community).
- A CoE is not a normal (scientific) project; we are also expected to promote EuroHPC in some sense.
  - Be realistic about who is actively running projects on EuroHPC (development projects).
  - Suggestion: get development projects as soon as possible.
  - A plan needs to be in place for a development project on EuroHPC systems (an application needs to be written).
  - System to aim for: JUPITER @ JSC.
    - Can prepare on the JEDI system @ JSC (but that is very busy).
    - Can use the A64FX partition of Deucalion to prepare (lots of capacity there).
  - MPI on JUPITER via the MPI in EESSI may be tricky to get to scale; that is mainly a concern for EESSI, though.
- X.1: we need input from all couplings.
  - The technical level needs to be described by each partner.
  - Sorbonne/Toulouse: MDFT with Pystencils, and the same idea with LAMMPS (see the pystencils sketch after the exascale-workloads list below).
  - NIC: LAMMPS and waLBerla at the same time; the output of one package is the input of the other, with communication between them.

## Scientific WPs

### Blueprint on the couplings and risk mitigation
- Discussion
  * LAMMPS OBMD + ALL, Ultrasound: improve/optimize the code once the coupling is done. The LAMMPS OBMD + waLBerla coupling runs on CPU first; Tilen/Petra will follow up in May. As soon as the CPU coupling is done, the GPU version needs to be done. For LAMMPS and waLBerla, try with NIC first and then with USTUTT. LAMMPS will run waLBerla. Does OBMD have the waLBerla dependency?
  * ESPResSo + P3M
  * waLBerla + LEONARDO: LB solution and structure near the rotor (two parts: first turbulence modelling, second handling the moving geometry). Point out that the codes cannot be pushed to exascale? Figure out how other CoEs open-source their code (like MAX). The issue is demonstrating a run on another cluster. It is agreed to publish the code; running on CINECA infrastructure is not a problem.
  * Pystencils + LAMMPS: batteries
  * OBMD + waLBerla: Ultrasound
  * Contribution to the original codes
- What is the mapping to go to exascale? How big is each individual component; is it single- or multi-node?
- Is it algorithmic coupling or not?
- Collect information on all the codes and couplings; have the list of questions Alan mentioned.
- EoCoE: waLBerla is directly involved in that CoE. Ani will set up a meeting; Ani met Frideric. Problems on the waLBerla side: not enough manpower, they are already fully committed to EoCoE.
- Contribution to the open-source code.
- What other tools do we need for this? Flux (US).

### Exascale workloads of pilot cases
- What does the exascale workload look like? How big are we going? (A back-of-envelope core-count sketch follows the chat log at the end of these notes.)
- LEONARDO scalability on CINECA.
- The plan needs to be in place.
- NIC: uncertainty quantification - designed to use huge HPC resources, or a single simulation with a large system?
- How big a simulation do we want to run at the end of the project? Figure out the general size.
- Motivation: one paragraph on how to get from the physics to the technical implementation.
- For LEONARDO: a good chunk of resources used at CINECA.
- For Sorbonne: for the lattice part, reach scale from the electron to the system - a good representation of the system, multiscaling within the capacitor.
- Use of JEDI: 8 nodes?
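The Sorbonne/Toulouse item above plans to express MDFT through pystencils-generated kernels. A minimal sketch of that workflow, with a generic Jacobi stencil standing in for MDFT's actual operators (field names and array sizes are illustrative, not from the project):

```python
import numpy as np
import pystencils as ps

# Two 2D double-precision fields, described symbolically.
src, dst = ps.fields("src, dst: double[2D]")

# A 4-point Jacobi update as a symbolic assignment; pystencils compiles
# this into a native C kernel (a GPU target would emit CUDA instead).
update = ps.Assignment(dst[0, 0],
                       (src[1, 0] + src[-1, 0] + src[0, 1] + src[0, -1]) / 4)
kernel = ps.create_kernel(update).compile()

# Apply the generated kernel to concrete NumPy arrays.
src_arr = np.random.rand(64, 64)
dst_arr = np.zeros_like(src_arr)
kernel(src=src_arr, dst=dst_arr)
```

The point of the approach is that the same symbolic description can be retargeted (CPU/GPU) without rewriting the numerics, which is what makes it attractive for the LAMMPS coupling as well.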
### Community services
- Low-effort services:
  - Support infrastructure
    - Mailing lists: support-MultiXscale@cmm.ki.si, support@multiXscale.eu
  - Consultancy
  - Support for EuroHPC resources: supporting applications for EuroHPC resources (dev access)
  - Training (EESSI / software)
  - Adding software to EESSI (EESSI is already doing that)
- Outreach
  - Practical question: who will do the outreach / contact persons from other communities?
- Use of Lhumos
  - Recording of the training material
  - ESPResSo workshop
  - Recordings for the standard lectures: basic lectures on the website

### Travel budget and related KPIs
- Letter to CASTIEL about the KPIs: "systems supporting the application" - not 3 but 20. Ask for the specifications: what does it mean?

### Requirements for the 2nd Reporting Period: Technical WPs
* EESSI at EuroHPC sites
* EFP: the second part in 2027
* MS8: requirements

### Detailed third year planning
* Pairing between scientific/technical teams: next steps
  * Lara/Tilen: where is Tilen's code? In dev.eessi.io. 4 Feb on the GPU cluster (mixing Kokkos with CUDA), in parallel GPU in ALL. OBMD in LAMMPS: a patch on the source to do development; builds with CMake, didn't try with make; package for OBMD.
  * Pair FZJ with Lara: 5 February
  * Hassane/Caspar: new pair
    * CI in that one
    * Deployment of MDFT on dev.eessi.io
  * Jean-Noël/Satish
    * Meeting for eessi.io with Pedro and Jean-Noël
  * Pedro/Matteo: GitHub repository, building the code with the notes Matteo provided
* Expectations of the 2RP review
  * Work progress
    * D1.4 - Support for emerging system architectures (RUG): not concerning
    * D1.5 - Portable test suite for shared software stack (UGent): sufficient software there, not an issue
    * D2.5 - Technical report on performance-portable electrostatics (FZJ)
      * Status: implementing the method; validated on some unit tests, a few moving parts are not there yet; he hopes to finish before the deliverable is due. Small-scale runs on 24 nodes in this deliverable, plus how Cabana is coupled with Kokkos.
      * Scalability results won't be on the (EuroHPC) systems.
    * D2.6 - OBMD library for LAMMPS (NIC): CPU coupling working; GPU version via Kokkos
    * D2.7 - ESPResSo: Progress towards Exascale (USTUTT) (see the P3M sketch at the end of this day's notes)
    * D3.3 - Coupling ESPResSo with waLBerla (USTUTT)
    * D5.3 - Report on testing provided software (SURF)
      * Dashboard: Maxim
      * The data is on the internal cloud
    * D6.3 - Interim report on Community outreach, Education, and Training (NIC)
    * D4.2: not worried this won't be delivered
  * Deliverables (update of template)
  * Milestones 6, 7
  * FAIR
    * Release of the codes: GitHub/GitLab
    * RW would wait for software: author-list problem
  * Budget estimate
* 15.15 MultiXscale, EuroHPC & relationships with other CoEs
  * Interaction with CASTIEL 2
  * Possible scientific collaborations: POP, EoCoE, MAX (force fields)
* 15.45 Coffee break
* 16.15 MareNostrum supercomputer tour
* 18.00 End of first day
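For D2.7/D3.3 above and the ESPResSo + P3M coupling listed in the blueprint, the baseline workflow is an ESPResSo simulation scripted in Python. A minimal sketch, assuming the ESPResSo 4.2 Python API (particle count, box size, and P3M parameters are illustrative, not the pilot-case setup):

```python
import numpy as np
import espressomd
from espressomd import electrostatics

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.time_step = 0.01
system.cell_system.skin = 0.4

# Alternating +1/-1 charges give a net-neutral system, which P3M requires.
rng = np.random.default_rng(42)
for i in range(100):
    system.part.add(pos=rng.random(3) * 10.0, q=(-1.0) ** i)

# P3M tunes its mesh, cutoff and charge-assignment order to the
# requested accuracy when it is activated.
p3m = electrostatics.P3M(prefactor=1.0, accuracy=1e-4)
system.actors.add(p3m)  # 4.2 API; the development branch uses system.electrostatics.solver = p3m

print(system.analysis.energy()["coulomb"])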
# AGENDA 24. 01. 2025

### MultiXscale training, workshops and other events for 2025/2026
- Discussion and strategy on how to proceed
- CECAM workshop, 8-10: 10,000 € (6,000 € ERC, 3,000 € FZJ, 1,000 € SISSA)
  - Christoph instead of Axel
  - Finish the program on the CECAM page
  - CECAM policy: in-person format
  - Neja will ask CASTIEL2 about covering expenses
  - Recorded lectures: keynotes
  - Advertise to the NCCs
- User EasyBuild workshop: CASTIEL2 for premises
- HiPEAC workshop
- ISC tutorial
- ESPResSo workshop 2025
- Online scientific events?
- Jean-Noël giving a talk at a German RSE conference
- Teaching workshops
- ASHPC (Maša (NIC) presenting on Ultrasound)
- CASTIEL2 Code of the Month: ESPResSo - Neja will contact them (Rudolf)
- On the industrial side there should be two workshops:
  - Cloud users with cloud providers
  - HPCKP25, June
  - Other training with industry: contact other industry that uses the code; first part of next year; online with cloud users
- Industrial targeting
  - Matej will ask at the CECAM event
  - ELI: go to the company and do the training there
  - Celine: next industrial battery event in Singapore; Mathew in Singapore
  - Industry connections (Matej, Eli)
  - Matteo in industry: LEONARDO
- State of the art: has to be a small event; we need a plan to present it

### Discussion on Sustainability of MultiXscale
- Position paper released by EuroHPC: will the application and technical parts be separated?
- EESSI: EFP; look into a legal entity
- Future vision of CoEs (white paper draft)
  - Community CoE: no code development; the CoE is just the support for the community
  - A funding branch for applications will get funded: that is not a CoE anymore, you have to have your own code
  - Another branch for adapting the codes
  - The other is transversal
- CECAM perspective
  - Community service with another type of project
  - EESSI would fit in the grant scheme; there will be a separate call for CI/CD
  - For CECAM it is interesting, but this has to be led by CECAM; question whether CECAM has an interest in doing that
  - In our case the sustainability is not connected to EuroHPC
  - Their plan for EESSI is that it is sustained
  - CECAM has an interest in promoting the idea
- PRACE: write a scientific case
- HiPEAC has its vision
- Transversal side (POP CoE): EuroHPC wants to expand that; things inside it deserve funding - the tools that allow you to integrate
- For sustainability we can use what was presented in E-CAM

### Discussion on Engaging industry, Success stories
- EESSI in EFP: invited into the federation platform

### Discussion on Dissemination / Webpage

### AOB - next GA (date and location)
- Amsterdam, January 2026

### Chat log (copied questions)

Kenneth Hoste 9:40
Only 1 billion? You're not ambitious enough… 500 billion is the threshold now (see Stargate)

Jean-Noël Grad (USTUTT) 10:52
I can also offer to set up Zoom pre-meetings with Tilen to show him how to use our walberla bridge before meeting in person in the CECAM workshop.

Kenneth Hoste 11:40
JEDI is heavily overloaded though, no? So doing full-system runs there (48 nodes) is not likely to be possible quickly

Rodrigo Bartolomeu 11:42
not quickly, you are right. But if we intend to, it is good to keep in mind that even for JEDI it might take some months

Kenneth Hoste 11:43
Deucalion A64FX partition is 1632 nodes, each with 48 cores, so ~78k cores

Rodrigo Bartolomeu 11:43
Time window is much shorter than JUPITER though

Thomas Röblitz 11:50
could we upload what we have done for EESSI? all trainings, I mean

Kenneth Hoste 11:51
We can download from YouTube and upload to Lhumos, sure…

Kenneth Hoste 12:12
GREEN DEAL

Thomas Röblitz 12:15
We've spent about 80-85 % (excluding this GA). BMBF in Germany wanted to have exact events (that was 20 years ago). So Germany was ahead in that respect 😄

Kenneth Hoste 12:33
I think for us the travel budget is fully on the EU part of our MultiXscale budget (national part is purely for PMs)

Kenneth Hoste 12:36
Paris may be sensible: easy to reach by train from Barcelona, Belgium, and even the Netherlands. So probably cheaper… Hotels are probably pricey in Paris, though...

Kenneth Hoste 12:37
EuroHPC Summit 2026 will be in Cyprus BTW ;P
Just in case you want to co-locate the next GA with the Summit. You know, for cost-efficiency reasons…

Susana Hernandez - HPCNow! 13:35
Hi, we can't hear you yet

Thomas Röblitz 14:37
Richard has more experience with Grace/Hopper.

Thomas Röblitz 14:46
Have to leave in about 15 mins. SIGBARNEHAGE.

Jean-Noël Grad (USTUTT) 15:58
Regarding Neja's comment about documenting which conferences each one of us is attending, do we have a shared document where we could input this data? Possibly a spreadsheet, so we can standardize the format of each entry (conference/workshop name, date, location, list of talks/posters/BoFs with links to abstracts, beneficiaries, and a free-text field to give more context for non-standard events). Sharing this info via e-mail could be a bit overwhelming.

Caspar van Leeuwen (SURF) 15:58
@Neja Samec yeah, something must be wrong with the planned PMs in RP2 for SURF. You have a total of 54.6 PMs planned for SURF in RP2, out of the total of 63 PMs we have in the project. That can't be right - it would mean almost all our effort is focused in RP2, and it definitely isn't.
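The core counts quoted in the chat log feed directly into the "how big are we going?" question from the exascale-workloads session on day 1. A back-of-envelope sketch of that arithmetic; the Deucalion figures are Kenneth's numbers from the chat, while the target particle count is an assumed placeholder, not a number agreed at the meeting:

```python
# Deucalion A64FX partition, per the chat log: 1632 nodes x 48 cores.
nodes, cores_per_node = 1632, 48
total_cores = nodes * cores_per_node      # 78_336 cores, i.e. ~78k

# Hypothetical pilot-case size: 1e10 particles is a placeholder only.
target_particles = 1e10
per_core = target_particles / total_cores

print(f"{total_cores} cores -> {per_core:,.0f} particles per core")
```

The same two-line calculation, repeated per target system (JEDI at 48 nodes, JUPITER at full scale), would give the per-core workload figures the reviewers are asking the pilot cases to state explicitly.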