# MultiXscale sync meeting 2022-10-20
- deliverables per task?
- plus a progress report towards the end
- reporting periods: M12, M30, M48
## WP1
- first reporting period (M12)
- progress report + plan
- consolidation task for tasks 1.1-1.3
- two deliverables (progress reports)
- that would imply distributing efforts to consolidation tasks
- tasks
- T1.1 (M1-M24)
- stable: being able to rely on it
- can serve as the only software stack an HPC cluster needs
- being able to ensure that the software stack stays available, even in case of a network disconnect (see the availability sketch at the end of this WP1 section)
- doesn't have to be a very extensive software stack
- deliverables
- SLA + documentation for sites adopting it (M12)
- progress report in M12
- support for x86 + InfiniBand (IB)
- separate report on NVIDIA GPU support (M24)
- T1.2: extending support
- M9-M30
- push to consolidation task after M30
- T1.3: test suite
- M1-M30
- goal: developing test suite + automation + dashboard (to use in T5.2); see the test sketch at the end of this WP1 section
- T5.2 is consolidation of T1.3?
- deliverables
- release of test suite (M12+M24?)
- T1.4: RISC-V
- M12-M48
- extra T1.5: consolidation - M24-M48
- reacting to problems raised by WP5 (i.e. incident management in WP5, problem management done in WP1)
- expanding+maintenance of test suite + tooling (T1.3)
- current deliverables
- D1.1 (1st prog rep T1.1) - M12
- D1.2 (2nd prog rep T1.1) - M30
- D1.3 RISC-V
- D1.4 final report WP1
- new deliverables
- new-D1.1 prototype/intermediate report T1.1 - M12
- incl. policy/SLA + docs for adopting sites
- new-D1.2 final report T1.1 - M24
- new-D1.3 T1.2 report on supported emerging system arch - M30
- AMD GPUs (ROCm), Arm CPUs, ...
- new-D1.4 T1.3 design test suite + tooling report - M12
- new-D1.5 T1.3 final report on test suite + tooling - M30
- new-D1.6 T1.4 RISC-V - M48
- new-D1.7 T1.5 consolidation - M48
- timeline
- M12: D1.1 (T1.1, RUG) + D1.4 (T1.3, SURF)
- M24: D1.2 (T1.1, UB?)
- M30: D1.3 (T1.2, RUG) + D1.5 (T1.3, UGent)
- M48: D1.6 (T1.4, BSC) + D1.7 (T1.5 SURF)
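For the T1.1 availability requirement above, here is a minimal sketch of a client-side probe, assuming the stack is distributed via a CernVM-FS-style networked filesystem with a local client cache; the repository mount point below is a hypothetical assumption, not taken from these notes:

```python
#!/usr/bin/env python3
"""Minimal availability probe for a network-distributed software stack.

Hypothetical sketch: the repository mount point is an assumption,
not taken from the meeting notes.
"""
import os
import sys

REPO = '/cvmfs/pilot.eessi-hpc.org'  # assumed CernVM-FS mount point


def stack_is_available(repo: str = REPO) -> bool:
    """Return True if the stack's top-level tree can be listed.

    Touching a path under /cvmfs triggers autofs to mount the repository;
    with a warm local cache this still succeeds during a network outage.
    """
    try:
        return os.path.isdir(repo) and bool(os.listdir(repo))
    except OSError:
        return False


if __name__ == '__main__':
    ok = stack_is_available()
    print(f"{REPO}: {'available' if ok else 'NOT available'}")
    sys.exit(0 if ok else 1)
```

With a warm cache, listing the tree succeeds even while the network is down, which is the property T1.1 asks adopting sites to be able to rely on.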
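For the T1.3 test suite, a minimal sketch of what one portable test could look like, assuming a ReFrame-style framework; the module name, executable, and expected output are hypothetical, not taken from these notes:

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class StackSmokeTest(rfm.RunOnlyRegressionTest):
    """Hypothetical smoke test: check that an application from the
    shared stack loads and reports its version (names are assumptions).
    """

    valid_systems = ['*']           # run on any configured system
    valid_prog_environs = ['*']     # and in any programming environment
    modules = ['GROMACS']           # hypothetical module from the stack
    executable = 'gmx'
    executable_opts = ['--version']

    @sanity_function
    def assert_version_reported(self):
        # The test passes if the version banner shows up in stdout.
        return sn.assert_found(r'GROMACS version', self.stdout)
```

Keeping `valid_systems`/`valid_prog_environs` open like this is what would let the same test run unmodified across sites, feeding the dashboard mentioned above.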
## WP5
- for each 5.x task: split between setting up ways to support/maintain/test/contribute and the consolidation work
- reworked tasks
- T5.1: set up support "portal"
- by M12
- T5.2:
- use of test suite developed in T1.3
- start later (M6/M12), cf. T1.3
- finish by M30, then move to consolidation task
- T5.3: finish by M12 (development of the bot)
- then move effort to T5.4
- (extra) T5.4: supporting and maintaining the shared software stack
- starts M12 (for T5.1 + T5.3)
- later joined by T5.2
- large amount of effort
- progress report as deliverable in M30+M48
- two deliverables in M12
- D5.1 (for T5.3) - Thomas
- D5.2 (for T5.1) - UGent
- other deliverables
- D5.3 (for T5.2) - M30
- (D5.4 => dropped, absorbed into the M30 deliverable for T5.4)
- new-D5.4 + new-D5.5: two D's for T5.4 - M30 (summer 2025) + M48 (Dec'26)
- => 5 D's in total for WP5
- timeline
- M12: D5.1 (T5.3, UiB) + D5.2 (T5.1, UGent)
- M30: D5.3 (T5.2, SURF) + D5.4 (T5.4, UiB)
- M48: D5.5 (T5.4, UGent)
```
I am contacting you because we received a message from the project officer asking us to update our proposal MultiXscale with the following explanations:
1. All partners: send a mail to Barbara indicating whether you are members of EUROHPC JU.
2. All partners: Please check whether all participants have responsibilities according to their PMs, preferably with at least one deliverable per participant in each of the three reporting periods (not applicable for participants with low PMs). Examples: USTUTT, UB, RIJKSUNI, UiB, and IIT have deliverables in only one reporting period, etc.
3. All partners: Please address the shortcomings mentioned in the ESR:
1. However, the proposal does not sufficiently explain how its approach to co-design, scaling, and the federation of existing resources in Europe, etc., will go beyond the current state of the art, and how hardware vendors will be involved in the co-design process. These are shortcomings.
2. However, the baseline for the used codes and algorithms is insufficiently described. This is a shortcoming.
3. Open science practices and research data management aspects are generically described and how FAIR principles would be applied, for example, is not made clear. E.g. apart from one code, the interoperability of inputs/outputs to other software packages or the intended use of metadata standards enabling interoperability of the data are unclear. In addition, it is not clear which key project results are of commercial interest and which will be open source since they are not defined as deliverables and there will be a confidential business plan. This is a shortcoming.
4. However, the descriptions are in part rather generic and the impact towards specific target groups/user communities is not well outlined. In addition, not all KPIs focusing on impacts are set to measure the contributions of the CoE to these requirements. For example, the KPI to measure how many developing HPC communities will benefit is not entirely clear. This is a shortcoming.
5. Potential barriers to the expected outcomes and impacts are identified and discussed, but the management of the potential negative impacts is only very briefly addressed, which is a minor shortcoming.
6. However, in terms of joint exploitation, although the software will be exploited through training at Extended Software Development Workshops and other training events, exploitation plans and IP management for each participant are not fully detailed.
7. However, the timing of tasks is not appropriate as many tasks span the entire duration of the project (month 1 to 48). It is unclear how different tasks are organised. This is a shortcoming.
8. In addition, the description of task 2.1 lacks sufficient detail on the extension of the waLBerla code to support VLES simulations in HPC clusters with accelerators. Furthermore, the pilot cases of ultrasound simulations for biomedical applications and battery applications lack concrete details of the specific problem and configuration to be addressed.
9. However, the involvement of each participant in specific tasks is not sufficiently clear. The experience and expertise of some partners is not well explained and how these relate to the tasks to be carried out is not well defined. The role of the associated partners is not sufficiently described. These are shortcomings.
10. However, details on risks around achieving the foreseen capabilities for exascale technologies are not well addressed. This is a minor shortcoming.
11. However, the majority of tasks do not end with a deliverable that is appropriate for the content of that particular task. For instance, WP5 aims at Building, Supporting and Maintaining a Central Shared Stack of Optimized Scientific Software Installations. However, this WP5 does not have a sufficiently well-described task focusing on building the Central Shared Stack. While building this stack is foreseen in WP1, this WP1 does not have a corresponding substantial deliverable. This is a shortcoming.
12. However, the expertise in some areas, for example in ultrasound and biomedical applications, and in rotor dynamics, is inadequately described.
4. Only for partner NIC: Why is only one researcher named for the partner NIC (with 106 PMs)?
5. All partners: How are the travel costs distributed? The distribution does not seem to consider the FTEs.
6. For partners RIJKSUNI and HPCNOW: Could you please specify in more detail what the costs for “Other goods, works and services” of the partners RIJKSUNI and HPCNOW include?
7. For partner RIJKSUNI: Could you please specify in more detail what the costs for “Equipment” of the partner RIJKSUNI include?
8. For partner FZJ: Please provide an explanation why the personnel cost/PM for FZJ is higher than the rate of FZJ in other similar projects. This should by no means be perceived as mistrust towards any participant. With this procedure we simply ensure equal treatment of all participants and responsible management of public funds.
```