# RED-SEA
###### tags: `EUROHPC`
Goal: preparation of the next generation of European Exascale interconnects
Improving the current BXI technology (Bullsequana eXascale Interconnect) and contributing to the design and specification of the next generation BXI
Enriching the European ecosystem of interconnect technology
Work is in line with the work plan.
Spending: more than half --> needs to catch up.
WP2, WP4 --> 62%
WP2: odd because many tasks are in the second part of the project.
Network requirements and architecture defined; execution of applications and benchmarks on testbeds verified.
Internet protocol over BXI2 kernel module optimised (up to 4x speedup, on 3 architectures)
Technical overview of the project
Scale the industrial interconnect beyond 1000K nodes while meeting key performance and reliability targets.
WP4.1 ongoing: several modules can work together using the same custom end-to-end reliability protocol.
Trends for BXI3 - NIC implementation
Network interface component (NIC)
Power reduction --> how much?
Footprint reduction
WP4: first designs of lean network interconnect with ARM/RISC-V & accelerators DONE.
Machine delivered through the current DEEP system
Not to mix; not planned.
BXI2
BXI3: the goal is not to have a machine by the end of the project, just an internal testbed during the project.
## Collaboration with other projects
10 collaborations identified within the project.
WP2:
VEF traces of HPC workloads
optimisation of collective communication
BXI2 in DEEP system
Get new traces with AI-based applications
MAELSTROM & ADMIRE. --> will be detailed later.
WP1:
Slide 6: VEF traces obtained from different HPC systems?
How are they used in the simulator?
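On the question of how traces feed a simulator: the usual pattern is trace-driven simulation, where recorded message events are replayed through a network model. The sketch below is purely hypothetical (made-up record fields and bandwidth, a single shared link) and is not the actual VEF trace format or any of the project's simulators:

```python
# Hypothetical message records: (t_inject, src, dst, size_bytes).
# Real VEF traces carry richer information; this only sketches the idea
# of replaying recorded traffic through a network model.
TRACE = [(0, 0, 1, 4096), (1, 1, 2, 4096), (1, 0, 2, 8192)]

LINK_BW = 1024  # bytes per simulated time unit (assumed)

def replay(trace):
    """Replay trace events through one shared link; return the finish time."""
    link_free = 0.0
    for t, src, dst, size in sorted(trace):
        start = max(t, link_free)        # wait until the link is idle
        link_free = start + size / LINK_BW
    return link_free

print(replay(TRACE))  # → 16.0
```

A real simulator would model topology, routing, and congestion per link; the point is only that the trace supplies the injected traffic.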
T1.3 6 testbeds
Dibona
CEA --> to collect traces at large scale.
ExaNeSt platform
--> used to estimate congestion management.
INFN: 2 testbeds
INTI-BXI (CEA)
DEEP Cluster with BXI support
CEA: PCVS
Interested in getting traces from all DEEP-SEA applications?
Simulator to support the latest ARM/RISC-V systems.
3 simulators will be ready by M30 --> already ready.
Simulators are active now.
Scale up to 100 of …
50K MPI run.
COSIN -> analysis of integration and energy
SAURON: get traces with DEEP-SEA.
Tune simulators.
BXI3.
Same partners on both projects.
WP1: fewer resources used than initially planned.
Only one person from CEA; more spent in WP4 than WP1.
LAMMPS: try to do some refactoring within the project lifetime.
NEST
Simulators: open source?
SAURON is not open source: IPR issues, it cannot be open-sourced.
## WP2
Objectives: high-performance Ethernet.
IP format will be used at the boundary.
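On "IP format at the boundary": a gateway recognises IP traffic inside an Ethernet frame via the EtherType field (0x0800 for IPv4). A minimal, hypothetical Python illustration of that check, not BXI code:

```python
import struct

ETHERTYPE_IPV4 = 0x0800  # IANA-assigned EtherType for IPv4

def ethertype(frame: bytes) -> int:
    """Return the EtherType of an Ethernet II frame (bytes 12-13, big-endian)."""
    (etype,) = struct.unpack_from("!H", frame, 12)
    return etype

# Dummy frame: 12 bytes of zeroed MAC addresses, then the IPv4 EtherType.
frame = b"\x00" * 12 + b"\x08\x00" + b"payload"
print(hex(ethertype(frame)))  # → 0x800
```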
No further contribution from FORTH in the 2nd period.
## WP3
Also target future exascale systems.
T3.1 Optimisation of collective communications
What are the current MPI implementations of collective communication?
T3.2 Well-known MPI libraries
Assume the physical topology is a binary tree.
Overlapping communication & computation.
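The binary-tree assumption matters because tree-based collectives complete in ceil(log2 P) communication rounds instead of P-1. A small, hypothetical pure-Python simulation of a binomial-tree reduce (an illustration of the technique, not any project implementation):

```python
def tree_reduce(values):
    """Simulate a binomial-tree MPI_Reduce-style sum to rank 0.

    Returns (result_at_root, number_of_communication_rounds)."""
    vals = list(values)
    p = len(vals)
    rounds = 0
    step = 1
    while step < p:
        # Round `rounds`: each rank r = step, 3*step, ... sends to r - step.
        for r in range(step, p, 2 * step):
            vals[r - step] += vals[r]
        step *= 2
        rounds += 1
    return vals[0], rounds

result, rounds = tree_reduce(range(16))
print(result, rounds)  # → 120 4  (16 ranks reduced in log2(16) = 4 rounds)
```

All sends within one round are independent, which is also what leaves room for overlapping each round's communication with local computation.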
## WP4
Task 4.5 Towards BXI support in ParaStation MPI (PARTEC).
Is it only for ParaStation MPI?
New website: what will be different?