# SPC batch pipeline status
# 12th January 2023
## Summary
- [SPC pipeline](https://github.com/alan-turing-institute/spc-hpc-pipeline) is ready and documented.
- Remaining issues stem from the components of the pipeline and their interactions; running across so many different LADs acts as a kind of stress test, surfacing problems that might not have been noticed before.
- Solutions have been found for individual repos by cloning them into the Turing organisation and implementing fixes that work for the pipeline. We could open upstream PRs to the original repos, but there is a risk that our fixes could break something else in the original repo.
- Components that haven't been cloned should at least be pinned to a specific version, because updates to the remote repo could break the pipeline.
- Anyone re-running the pipeline in the future must check whether any relevant upstream updates need to be included.
- Components:
  - UKCensusAPI:
    - Original repo: https://github.com/ld-archer/UKCensusAPI.git
    - Issue: Problems found when running Scotland, due to how it unzips the original file, which needed manual input.
    - Fixes repo: https://github.com/alan-turing-institute/UKCensusAPI.git
    - Calls a bash subprocess for `unzip`, so any system using this now needs `unzip` to be installed!
    - New issue in progress affecting Scotland, described here: https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/26
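The `unzip` subprocess call mentioned above could look roughly like the sketch below — this is an illustration with hypothetical file names, not UKCensusAPI's actual code — including an explicit check that `unzip` is on the PATH:

```python
# Sketch: unzip a downloaded archive via the system `unzip` binary,
# failing early with a clear message if it is not installed.
# (Hypothetical helper; file paths are example placeholders.)
import shutil
import subprocess

def unzip_file(zip_path: str, dest_dir: str) -> None:
    """Extract zip_path into dest_dir using the system `unzip` binary."""
    if shutil.which("unzip") is None:
        raise RuntimeError("`unzip` is not installed; the pipeline needs it on PATH")
    # -o overwrites existing files without prompting (no manual input needed)
    subprocess.run(["unzip", "-o", zip_path, "-d", dest_dir], check=True)
```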
  - ukpopulation:
    - Original repo: https://github.com/ld-archer/ukpopulation.git
    - Status: Still using the original repo. It would be wise to pin a version.
  - humanleague:
    - Original repo: https://github.com/virgesmith/humanleague.git
    - Status: Still using the original repo, but a recent update (gcc compiler update) broke the pipeline setup. We should pin a version.
  - household_microsynth:
    - Original repo: https://github.com/nismod/household_microsynth.git
    - Issue: Microsimulation conversion issues for some LADs return NoneType and break the pipeline. Issue and solution documented here: https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/19
    - Fixes repo: https://github.com/alan-turing-institute/household_microsynth/tree/fix/NoneType
  - microsimulation:
    - Original repo: https://github.com/nismod/microsimulation.git
    - Issue: A type issue in the input data created with household_microsynth meant we had to run the command twice.
    - Fixes repo: https://github.com/alan-turing-institute/microsimulation/tree/fix/double_run
- Outputs:
  - Wales: Complete, example [here](https://thealanturininstitute.sharepoint.com/:f:/r/sites/DyME/Shared%20Documents/Data?csf=1&web=1&e=cBo61i)
  - Scotland: Fixing issues (latest problem described [here](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/26))
  - England: A few LADs ran successfully; a full run has yet to be submitted.
- We would like the Dyme team to approve the [example outputs](https://thealanturininstitute.sharepoint.com/:f:/r/sites/DyME/Shared%20Documents/Data?csf=1&web=1&e=cBo61i) before running all of England (the most expensive run).
- Idea for pinning repos:
  - https://github.com/alan-turing-institute/spc-hpc-pipeline/tree/experiment/submodules/submodules
  - This clones each component at a fixed commit hash.
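The pinning idea above, whether via submodules or plain clones, amounts to checking out each component at a known-good commit. A sketch (`<commit-hash>` is a placeholder for the tested revision, not a real hash):

```sh
# Pin a component so upstream changes cannot break the pipeline.
git clone https://github.com/ld-archer/ukpopulation.git
git -C ukpopulation checkout <commit-hash>

# With the submodules approach, the superproject records the hash itself:
git submodule add https://github.com/ld-archer/ukpopulation.git ukpopulation
git -C ukpopulation checkout <commit-hash>
git add ukpopulation
git commit -m "Pin ukpopulation to a tested commit"
```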
# 9th January 2023
- [name=Aoife] run Scotland on DyME
- N=numLads
# 3rd January 2023
Q: Best way to re-run failed things without repeating existing data on Azure?
## Priority checklist TODO:
I'm reluctant to say these are all working now, but I *think* they are!
- [ ] Wales 35% failed issue:
- [x] Using https://github.com/alan-turing-institute/spc-hpc-pipeline/pull/23 to work on this
- [x] Scotland issue:
- [x] https://github.com/alan-turing-institute/UKCensusAPI/commit/cdb75cfa547a086baee9602515047d0955a96ef7
- [x] Zipping problem, possibly fixed?
- [ ] Failing England LADs
    - [x] As per CRS, the fix is here: https://github.com/alan-turing-institute/household_microsynth/tree/fix/NoneType. The git URL needs to be updated in the SPC script and a run done to test!
- [ ] Costing needs to be looked into
- [name=AH] I haven't looked into this YET, other stuff running seemed more of a priority for now.
# 12th December 2022
## Notes from Camila before going on leave
- ~65% of Wales has been run (no need to rerun if we don't change anything important in the configuration in the future). The other 35% failed with the error reported in issue 19. The Wales run is documented in [issue 20](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/20).
- I did some investigation of [issue 19](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/19), which led to a quick fix [described in the issue](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/19#issuecomment-1346549589). I have only tested this fix locally on my machine and I'm not yet convinced the solution is the appropriate one. It would be good for Aoife to investigate on her debugging system as well:
  - Is the problem really a convergence issue? This might mean looking at the humanleague code and maybe discussing it with Hadrian; he might have seen this error before.
  - If we think the solution described in issue 19 is appropriate, I guess we would have to clone the [household_microsynth repo](https://github.com/nismod/household_microsynth/tree/arc) to make our changes and replace the source of this code in the SPENSER_HPC_setup.sh script.
- Looking at the expense by resource, it looks like the batch account on Dyme has cost us ~70 GBP so far (see [here](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu/~/costanalysisv3/scope/%2Fsubscriptions%2F6b9576b1-432c-4a06-a1b2-d178f1e08850/open/costanalysisv3.resources/openedBy/Subscription.CostAnalysis.CBR%3AResources)). The Wales run seems to have been about 40 GBP, comment [here](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/18#issuecomment-1344089433).
- Dyme was spending 8 GBP a month on a "bastion" service (my fault, for clicking on things without understanding what they do). I think I managed to cancel the service, but it would be good to check in the [cost analysis tool](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu/~/costanalysisv3/scope/%2Fsubscriptions%2F6b9576b1-432c-4a06-a1b2-d178f1e08850/open/costanalysisv3.resources/openedBy/Subscription.CostAnalysis.CBR%3AResources) this week that the daily charge actually stopped after today.
- I've changed the pipeline to a maximum [running time of 48h](https://github.com/alan-turing-institute/spc-hpc-pipeline/blob/b15e16d10732badf9d90625f106de98e4826987f/spc-hpc-client.py#L244) because some England jobs took more than 24h; this change is on the main branch.
# 8th December 2022
## Standing issues
- [Configuration until 2039](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/10)
- This is implemented in PR #17, hopefully we can merge this today.
- [Move to Dyme subscription and benchmark](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/18).
- Camila running in all Wales as test
- Camila to work on this and update issue with estimates.
- [Scotland run broken](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/14)
- Aoife investigating
- [Run failing in some OA11 from Wales and England](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/19)
- Camila to try to investigate before going on leave, otherwise Aoife to check
- [Refactoring of code and monitoring for failed tasks](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/9)
- Aoife working on this in PR #16
- Could we also check if all files required are created?
- A task can be successful according to Batch but still fail at some step.
- These tasks are marked as 'complete' fairly soon (less than 1h).
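The "check if all files required are created" idea above could be sketched like this — a minimal illustration, assuming the expected output file names (the patterns below are hypothetical placeholders, not the pipeline's real file names):

```python
# Sketch: verify a 'complete' Batch task actually produced its outputs,
# since Batch can mark a task successful even when a pipeline step failed.
# EXPECTED_PATTERNS holds hypothetical example file names per LAD.
from pathlib import Path

EXPECTED_PATTERNS = ["household_{lad}.csv", "population_{lad}.csv"]

def task_outputs_ok(out_dir: str, lad: str) -> bool:
    """Return True only if every expected output file exists and is non-empty."""
    for pattern in EXPECTED_PATTERNS:
        f = Path(out_dir) / pattern.format(lad=lad)
        if not f.is_file() or f.stat().st_size == 0:
            return False
    return True
```

Tasks failing this check would be the candidates for resubmission, independently of what Batch reports.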
# 21st November 2022
## Summary
- A prototype is working, running parallel batch jobs, one per LAD. It has been tested using Camila's and Aoife's personal Azure accounts.
- An example running 4 parallel LADs has been run a dozen times over more than a week (plus many more incomplete tests); total cost so far is around 20 GBP.
- A typical job takes around 2 hours, from the moment the VM has been assigned to it.
## Standing issues
- We have to understand why we need to run a step twice in the microsimulation, as documented in this [issue](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/7).
## Next development steps
- Script to download a container with the data output. Every task/LAD microsimulation is saved in the same container; currently we can download it manually, but it would be good to have a script to download it programmatically (documented in this [issue](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/8)).
- We've been testing with 4 parallel tasks, which are easy to monitor on the Azure portal. However, once we are running hundreds of LADs we need a programmatic way of monitoring and resubmitting failed tasks. Issue [9](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/9).
- Move to the Dyme subscription and benchmark running time and costs on a job of ~10 tasks (using low-priority nodes and a higher pool size, which are restricted on personal subscriptions).
- Finish documentation (still quite incomplete).
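The programmatic container download in the steps above could be sketched with the azure-storage-blob SDK; connection string and container name would come from the real storage account, and the local output directory here is an arbitrary choice:

```python
# Sketch: download every blob in an output container to a local directory.
# Requires the azure-storage-blob package; connection details are placeholders.
from pathlib import Path

def local_path_for(blob_name: str, out_dir: str = "outputs") -> Path:
    """Map a blob name like 'E06000001/hh.csv' to a local path, creating parent dirs."""
    path = Path(out_dir) / blob_name
    path.parent.mkdir(parents=True, exist_ok=True)
    return path

def download_container(conn_str: str, container: str) -> None:
    """Download all blobs in `container` into the local `outputs` directory."""
    # Imported here so the path helper above works even without the SDK installed.
    from azure.storage.blob import ContainerClient

    client = ContainerClient.from_connection_string(conn_str, container_name=container)
    for blob in client.list_blobs():
        with open(local_path_for(blob.name), "wb") as out:
            out.write(client.download_blob(blob.name).readall())
```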
## Input needed from Dyme/SCP team
- Currently, the pipeline has been designed to run as described in the initial [Dyme/SPC issue](https://github.com/alan-turing-institute/dymechh/issues/26) (10-year microsimulation). We need input from the Dyme/SPC team on the final year of microsimulation to be used. We could try the best-case scenario (2080?) for a couple of LADs and measure how long it takes to run and what it costs. Please comment in [this issue](https://github.com/alan-turing-institute/spc-hpc-pipeline/issues/10) and provide example config files.
- Outputs: Currently we are saving the following outputs on Azure (see below, or go to the portal's storagebatch account, container scp-2022-11-21-09-27-29; probably only Aoife can see these). Are these the outputs we need to save for Dyme? Anything missing or unnecessary?
**household_microsynth**
![](https://i.imgur.com/R37qUJL.png)
**microsimulation**
![](https://i.imgur.com/hC1d966.png)