# Distributed computing meeting - 31 August 2022
## News
* New KEK-SE instance to replace kek2-se02.cc.kek.jp.
* Belle II note being written on "Summary of Belle II operation in Phase3"
## Data Production
* No updates, no new meeting since last week.
* TO-DO:
* Open a chat for DP shifters.
* Number of running jobs is small for now.
* Q: Are the campaigns listed in Cedric's machinery?
* Usually Ono-san opens a ticket for new campaigns.
* Cedric will check the full list
* Concern for BGOExp... Cedric will double check.
## Operation matters
### Data placement
* Exp 24-26 are being purged from the TMP-SE now that prompt processing is completed.
* Exp 27 is not to be purged for now (during LS1).
* Exp 24-26 could have been deleted earlier, but this didn't happen due to a lack of communication.
* hRaw_delayedbhabha remains staged for now.
* hRaw_calib to be deleted once calibration is done.
* Delays are due to communication and the workload of the human DDM operators.
* But automation is currently complicated: we don't know when campaigns will end.
* Automation for procXX is different. It will be automated with the machinery implemented by Ruslan.
* In accounting, 'Unknown' campaign is shown. Cedric is looking into it.
* MC13 to be purged after getting confirmation from Physics coordination.
* Every new campaign needs to be put into the "list" for accounting
* Until this gets automated by metadata in Rucio.
* Who should be responsible for data placement?
* Should this be a DDM task?
* Of course, closely following the campaign management.
* Cedric will follow up with Ruslan. Communication is handled by tickets.
* Q: Do campaigns stay 'Unknown' if the name is added to the list only after the production ends?
* No; once added to the list, the campaign is immediately labeled properly.
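As a toy illustration of the labeling described above (the function and campaign names are hypothetical, not the actual accounting code), the mechanism amounts to a lookup against the maintained campaign list with a fallback to 'Unknown':

```python
# Hypothetical sketch of the accounting labeling discussed above; the real
# machinery and the campaign names used here are assumptions.
KNOWN_CAMPAIGNS = {"MC14rd", "procXX"}  # the maintained "list"

def label_campaign(name: str) -> str:
    """Return the campaign label used in accounting, or 'Unknown'."""
    return name if name in KNOWN_CAMPAIGNS else "Unknown"

print(label_campaign("MC14rd"))   # prints "MC14rd": a listed campaign is labeled properly
print(label_campaign("MC99xx"))   # prints "Unknown": an unlisted campaign shows up as 'Unknown'
```

Adding a campaign name to `KNOWN_CAMPAIGNS` after the fact would immediately change its label on the next lookup, matching the retroactive behavior described above.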
### Expert Shift Report
By Matt Barrett
* All shifters eligible.
* **Still lots of empty shifts in September.**
* Next expert shift is Nisar (BNL).
* Main productions during the week are Skim jobs (MC14rd).
* A low number of running jobs was observed. Campaigns are under preparation.
* No issues with occupancy on SEs.
* Free space at the sites is increasing with the purge reported by Ueda-san.
* GGUS ticket related to access issues at UVic.
* Affects many users. Investigation is ongoing.
* 4 JIRA tickets opened during the week.
* One related to high load in central services. Restarted by Ono-san.
* Several reports from users-forum. To follow up:
* More output files than number of jobs.
* Number of files is correct, but the report displayed by gb2_ds_get is incorrect. Anil is following up.
* Six users reported that files hosted at UVic were not available.
* Recovered after communication via GGUS.
* Would it be useful to have a shared space on KEKCC where experts can download files?
* Question about a channel for Data Production shifts.
* None exists right now. Should one be created in b2chat or rocketchat@KEK?
* We must be careful about divergence of information. The channel should be intended for questions rather than for reporting issues (tickets must be opened for those).
* There was a concern about putting more load on the expert shifts if they must also watch the chat in addition to handling tickets and mails.
* Q: What kind of error message was shown for the UVic downloads?
* Connection timeout. The error message will be put in the ticket.
* Q: Regarding the sawtooth-shaped peaks in waiting jobs, is Miyake-san launching productions?
* Most likely Miyake-san's activity. Ono-san is looking into it; they seem to be user jobs.
* Q: Can this be done more smoothly?
* Let's not mix production jobs and user jobs in the same slide.
### Issues after Rucio upgrade to 1.28
* A few issues noticed after the update.
* The pilot cannot register the file in Rucio, so a request to the RMS is made. The ownership of the rules is then not properly set when the registration is done by the RMS.
* Confluence records the number of files registered as `dirac_srv`; it spikes when Rucio is down.
* Do we want to change this? If the ownership of the rules is not changed, the user cannot delete the rules and gb2_ds_rm will not work as expected. The rules on the files will disappear soon, so they are not a problem, but the ones on the datasets must be handled.
* We need a mechanism to fix this. A PR was opened to (hopefully) fix the problem.
* Q: The PR changes the ownership of dataset and files? -- It changes for datasets... (to complete).
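A minimal toy model of the ownership issue described above (dataset names and accounts are hypothetical; the real registration path goes through the RMS and Rucio, and the real fix is in the opened PR):

```python
# Toy model of the rule-ownership problem: when the pilot fails to register
# and the RMS does it instead, the rule ends up owned by the service account,
# so the user's gb2_ds_rm cannot delete it. All names here are hypothetical.

def register_via_rms(dataset: str, rules: dict) -> None:
    """RMS-side registration: the rule is created under the service
    account rather than the submitting user."""
    rules[dataset] = "dirac_srv"  # ownership not set to the user

def can_user_delete(dataset: str, user: str, rules: dict) -> bool:
    """gb2_ds_rm only works as expected if the user owns the rule."""
    return rules.get(dataset) == user

rules = {}
register_via_rms("user.jdoe.mydataset", rules)
assert not can_user_delete("user.jdoe.mydataset", "jdoe", rules)

# The proposed fix amounts to reassigning the dataset rule to the user.
rules["user.jdoe.mydataset"] = "jdoe"
assert can_user_delete("user.jdoe.mydataset", "jdoe", rules)
print("ownership fixed")
```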
* A bug was found in deletion, resulting in a low deletion rate.
* A hotfix was applied and a PR opened against the Rucio code.
* Scout jobs couldn't be properly registered. Fixed.
* Not caught during validation due to the small size of the test projects.
* Multi-VO support in Rucio affects the machinery used to add collections.
* Solved by changing permissions. Will be propagated to the Belle II policy package.
* No TPC transfers were submitted for a few minutes. Work might be needed in the RucioSynchronizer to properly set the new values.
* Q: Shouldn't this also be affecting migration?
* We didn't trigger transfers during migration; transfers were disabled most of the time.
* Test productions should include transfers.
* Functional tests must include distribution and gathering.
### Other operation matters
* Ueda-san discovered empty datasets. Cedric is removing them.
* They seem to be datasets produced a long time ago. Rucio now has mechanisms that prevent this.
* Q: Has any ticket been opened for the record?
* No, Cedric will open one.
## Development matters
### Release and upgrade planning
* Aiming for a quick release v5r5p2.
* Anil: We hope to have a new feature release before DIRAC v7r3 release, otherwise we will be blocked again.
* We need to fix the timeline.
* Q: What about run boundary?
* Will follow up with Miyake-san.
### Release v5r5p1
* Q: Can the BNL team perform the update for the nodes hosting DDM and B2Rucio?
* Yes; they will confirm who will do it.
* Instructions may be needed.
### BelleDIRAC migration
* ElasticSearch
* Discussion converged? From Anil's side, yes. Ueda-san may follow up.
* Q: What about user names in ES? Is b2kibana_admin really intended for admin? -- Will follow up in mail.
* For access we can simply go for username belle.
* In the future we better use tokens instead of publishing passwords.
* Anil will check token access.
* If we use tokens, information can be synced from IAM or the DIRAC config, and groups can be used for ES access/management.
* Ueda-san will follow up by mail.
### Multicore-jobs
* Matthias will check for ATLAS, but they usually request 2 GB per core and this is enough.
* He will also check whether memory limits can be increased at KIT for testing, but for production this is very unlikely.
* HTCondor at KIT doesn't kill a job just for exceeding the requested memory size; the threshold is two or three times higher. The error comes from basf2exec.
* At ATLAS (years ago), one of the most memory-demanding components was the detector geometry, which was shared between multiple processes, so the proportional part was small. Memory usage by multi-process basf2 seems significantly larger.
* Search for publication in CHEP or similar.
* Check ULIMIT
* https://stash.desy.de/projects/B2DC/repos/belledirac/browse/gbasf2/lib/basf2exec.sh#104
```shell
# Limits set in basf2exec.sh: max resident set (-m) and virtual memory (-v) in KiB, max file size (-f)
ulimit -m 4194304 -v 10485760 -f 10485760
```
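A quick way to confirm what these limits actually end up being is to apply them in a bash shell and read the effective values back (note that bash reports -m and -v in KiB, so 4194304 KiB is 4 GiB resident and 10485760 KiB is 10 GiB virtual):

```shell
# Apply the same limits as basf2exec.sh, then print the effective soft limits.
# Lowering limits is always permitted, so this should succeed in a fresh shell.
ulimit -m 4194304 -v 10485760 -f 10485760
echo "rss=$(ulimit -m) virt=$(ulimit -v) fsize=$(ulimit -f)"
```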
### Open PRs
* Approve PRs to be included in the next patch release.
* Check pilot PR.
### AOB
* Multicore can be an item for BPAC review.
* There is no agenda for BPAC yet, but it is a good item to discuss.
* We need close communication with software group in advance.
* If memory usage is an issue, it should be reported by the SW group.
* From computing we can report work to enable multicore.