# Hortense phase 2 kickoff
- Fri 26 May - 10:00
- extra hardware
- +48 nodes in rome partition
- +20 GPU nodes (80GB GPU memory)
- +384 nodes in new milan partition
- +2.7 PB scratch storage (doubled to 5.4 PB now)
- over 100,000 cores in total in Hortense
- new partitions
- rome + rome_all + rome_512 (no change)
- cpu_milan
- GPU partitions (see below)
- GPU partitions
- `gpu_rome_a100` partition: mix of nodes with 40GB or 80GB GPU memory
- multi-node jobs will *not* mix 40/80GB GPUs
- jobs are limited to 20 nodes (all of same type)
- `gpu_rome_a100_40` partition (was `gpu_rome_a100` before) => only nodes with GPUs with 40GB memory
- `gpu501-gpu520`
- new `gpu_rome_a100_80` partition => only nodes with GPUs with 80GB memory
- `gpu521-gpu540`
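
A minimal job script sketch for the 80GB partition, assuming Hortense accepts standard Slurm directives (the GPU count, module name, and `my_gpu_app` command are placeholders; check the Hortense docs for the exact submission workflow):

```bash
#!/bin/bash
#SBATCH --partition=gpu_rome_a100_80   # only lands on gpu521-gpu540 (80GB A100s)
#SBATCH --nodes=2                      # multi-node jobs never mix 40GB/80GB GPUs
#SBATCH --gpus-per-node=4              # placeholder GPU count per node
#SBATCH --time=01:00:00

module load my_gpu_app                 # placeholder module name
srun my_gpu_app                        # placeholder application
```

Submitting to plain `gpu_rome_a100` instead lets the scheduler pick either node type, but a single job still gets nodes of one type only.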
- "pilot" phase for milan partition
- `node5385-node5768`
- can be used by all current Tier-1 projects, free of charge (no credits consumed)
- until 7 July 2023 (when the new Tier-1 projects from the 5 June 2023 cutoff start)
- login nodes still have AMD Rome CPUs => compile on worker nodes of the target partition (see the sketch after this list)
- differences with Rome CPUs
- same core counts (128/node), same RAM (256GB/node)
- no fat memory nodes (512GB/node) in milan partition
- no GPU nodes with Milan
- lower clock speed
- better NUMA
- HPL: 3-4% slower
- OpenFOAM: 10-15% faster
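
Because the login nodes are Rome, compiling there with `-march=native` would target Zen 2 rather than Milan's Zen 3. A minimal sketch of building inside a job on the milan partition instead, assuming standard Slurm directives (`my_app.c` is a placeholder):

```bash
#!/bin/bash
#SBATCH --partition=cpu_milan   # build on a Milan worker node, not the Rome login node
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# -march=native now targets Zen 3 (Milan) instead of Zen 2 (Rome)
gcc -O2 -march=native -o my_app my_app.c   # my_app.c is a placeholder source file
```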
- list nodes per partition with `pbsmon -P`
- debug queue
- rome
- strict limits on cores (cf. interactive/debug cluster in HPC-UGent Tier-2)
- shared GPU (Quadro?), may be slow; others may observe what you're running there
- exclusive-access GPU (V100) can be requested; job may need to wait in queue until a GPU is free
- software
- same central software stack on rome + milan
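
A hedged illustration: since the stack is shared, the same module loads work in jobs on either partition (the `foss/2022a` toolchain name is a placeholder; check `module avail` on Hortense for what is actually installed):

```bash
# identical in a rome job and a milan job, since the software stack is shared
module load foss/2022a   # placeholder toolchain/version
module list              # verify what got loaded
```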
- future Tier-1 projects
- will be assigned exclusively to either the rome or the milan partition, based on the software used in the project and to spread load across Hortense
- next cutoffs: 5 June 2023, 2 Oct 2023
- support
- high volume of incoming support requests
- large backlog of software installation requests
- `<insert plots from team meeting>`
- HPC-VUB team is helping with Tier-1 industry support
- contact compute@vscentrum.be, but be patient...
- starting grants
- will get access to both rome+milan partitions for testing
- upcoming maintenance windows
- to take loop C of the cooling infrastructure into production: to be planned with Atos + APAC
- July 11th 2023: UGent-wide Datacenter Disaster Recovery **test**
- VSC accountpage will be unavailable
- Tier-1 web portal + Tier-1 login nodes will be unreachable during the test
- point to docs: https://docs.vscentrum.be/en/latest/gent/tier1_hortense.html