# Migration to Azure Container Apps
## Analysis
We already have a good idea of the steps that must be taken (see below), but we will need a small amount of extra time for exploration and investigation.
Estimate - **4h**
## Open questions
- Can Dennis and Kobe get access to the current Azure resource group(s)?
- Request sent to Stijn and IT
- Do we also need to set up monitoring? Our current cluster runs a Grafana agent.
- Answer
        - ```For our logging we currently use Application Insights & Azure Monitor, while we are also exploring the possibilities of Grafana. In addition, we are standardizing our logging (or, more accurately, our tracing) on OpenTelemetry (https://opentelemetry.io/docs/instrumentation/php/). To what extent do you support this? My proposal is to keep your Grafana setup for now. If you do support OpenTelemetry, please make a proposal for integrating it into DigiTrace, if this has not been implemented yet.```
- Can the new setup be done in a separate account?
- Answer
        - ```Here too we first have to wait for the outcome of the Cloud Adoption track. Let's therefore keep working within the current subscriptions for now.```
- Do we also need to provide infrastructure-as-code to automate the setup of a new environment?
- VNET, subnet, traffic manager etc.
    - Answer
        - ```As part of Cloud Adoption we are involved in a track to further develop our approach here, following the guidelines of Microsoft's Cloud Adoption Framework (CAF). This means we will split our components into different 'landing zones'. For example, network infrastructure will be placed in the landing zone dedicated to connectivity. The applications will then get their own landing zone. If we want to provide IaC for DigiTrace, it has to fit within this framework. Once the framework is finalized, we will come back to this. For now it is therefore not necessary to work out a proposal.```
## Approach - steps
The following concrete steps capture our initial thoughts on how to achieve a new Azure Container Apps-based setup for the DigiTrace platform.
Importantly, the front-end infrastructure for the different applications stays completely the same. The following steps are therefore only relevant for the back end.
1. Build two container apps in one [environment](https://learn.microsoft.com/en-us/azure/container-apps/environment):
- Laravel API container app
- Containers:
- PHP-FPM
- Attach volume
- nginx (with exposed port 80)
- Queue Worker
- Container:
- PHP-FPM
- Volume
- Can be scaled differently
- Includes the PDF generation jobs (see scaling issue)
- No direct communication needed between container apps
    - Dapr is probably not needed (to be confirmed)
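
    A minimal Bicep sketch of how this could look for the API container app; the environment name, app name, registry and image tags are assumptions, and registry credentials / identity wiring is left out:
    ```bicep
    // Sketch only: two containers (nginx + PHP-FPM) in one container app,
    // running in a shared Container Apps environment.
    param location string = resourceGroup().location

    resource env 'Microsoft.App/managedEnvironments@2023-05-01' = {
      name: 'cae-digitrace-dev'                  // hypothetical environment name
      location: location
      properties: {}
    }

    resource api 'Microsoft.App/containerApps@2023-05-01' = {
      name: 'ca-digitrace-api'                   // hypothetical app name
      location: location
      properties: {
        managedEnvironmentId: env.id
        configuration: {
          ingress: {
            external: true
            targetPort: 80                       // nginx listens on port 80
          }
        }
        template: {
          containers: [
            {
              name: 'nginx'
              image: 'digitraceacr.azurecr.io/digitrace-nginx:latest'   // assumed image
            }
            {
              name: 'php-fpm'
              image: 'digitraceacr.azurecr.io/digitrace-php:latest'     // assumed image
            }
          ]
          scale: {
            minReplicas: 1
            maxReplicas: 3
          }
        }
      }
    }
    // The queue worker would be a second containerApps resource without external
    // ingress, so it can scale independently (see step 5).
    ```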
2. Investigate certificate management
    - Activate managed certificates: https://learn.microsoft.com/en-us/azure/container-apps/custom-domains-managed-certificates?pivots=azure-portal
3. Investigate storage / volumes
- https://learn.microsoft.com/en-us/azure/container-apps/storage-mounts?pivots=azure-cli
- Really needed?
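
    If a shared Azure Files volume does turn out to be needed, a hedged sketch of how it could be attached; the storage account, key and share names are assumptions:
    ```bicep
    // Sketch: register an Azure Files share on the environment, then mount it.
    @secure()
    param storageAccountKey string               // supplied at deployment time

    resource env 'Microsoft.App/managedEnvironments@2023-05-01' existing = {
      name: 'cae-digitrace-dev'                  // environment from the step 1 sketch
    }

    resource files 'Microsoft.App/managedEnvironments/storages@2023-05-01' = {
      parent: env
      name: 'digitrace-files'                    // hypothetical storage definition name
      properties: {
        azureFile: {
          accountName: 'stdigitrace'             // assumed storage account
          accountKey: storageAccountKey
          shareName: 'digitrace-share'           // assumed file share
          accessMode: 'ReadWrite'
        }
      }
    }
    // In the container app template, reference it via:
    //   volumes:      [ { name: 'files', storageType: 'AzureFile', storageName: files.name } ]
    //   volumeMounts: [ { volumeName: 'files', mountPath: '/var/www/html/storage' } ] per container
    ```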
4. Get init container up and running
- Actions needed (init.sh)
        - Migrations need to run before the new revision starts handling requests
        - Cache clear
- See https://learn.microsoft.com/en-us/azure/container-apps/containers#init-containers
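
    A hedged sketch of the init-container part of the container app template from step 1; the image, script name and exact commands inside init.sh are assumptions:
    ```bicep
    // Fragment of the container app's `template` from the step 1 sketch:
    // init containers run to completion before the nginx/php-fpm containers start.
    initContainers: [
      {
        name: 'init'
        image: 'digitraceacr.azurecr.io/digitrace-php:latest'   // assumed: same PHP image
        // init.sh would roughly run the migrations plus the cache clear commands
        command: [ '/bin/sh', '-c', './init.sh' ]
        resources: {
          cpu: json('0.25')
          memory: '0.5Gi'
        }
      }
    ]
    ```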
5. Find a proper queue solution
- Redis currently runs in-cluster
    - Azure Cache for Redis is quite expensive
    - A bonus is that container app scale rules can be Redis-based; see https://learn.microsoft.com/en-us/azure/container-apps/scale-app?pivots=azure-cli#scale-rules
    - Database queue driver = lightweight; managed Redis = heavyweight (cost: +$100)
    - Align with Reynaers' non-functional requirements for PDF generation
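
    If we keep Redis, a hedged sketch of the queue worker app with a KEDA-style Redis scale rule; the Redis address, list name, secret handling and thresholds are assumptions:
    ```bicep
    // Sketch: queue worker container app (no ingress) that scales on the length
    // of the Redis list holding pending Laravel jobs.
    param location string = resourceGroup().location

    @secure()
    param redisPassword string                   // supplied at deployment time

    resource env 'Microsoft.App/managedEnvironments@2023-05-01' existing = {
      name: 'cae-digitrace-dev'                  // environment from the step 1 sketch
    }

    resource worker 'Microsoft.App/containerApps@2023-05-01' = {
      name: 'ca-digitrace-worker'                // hypothetical app name
      location: location
      properties: {
        managedEnvironmentId: env.id
        configuration: {
          secrets: [
            {
              name: 'redis-password'
              value: redisPassword
            }
          ]
        }
        template: {
          containers: [
            {
              name: 'php-fpm'
              image: 'digitraceacr.azurecr.io/digitrace-php:latest'   // assumed image
              command: [ 'php', 'artisan', 'queue:work' ]
            }
          ]
          scale: {
            minReplicas: 0
            maxReplicas: 5
            rules: [
              {
                name: 'redis-queue-length'
                custom: {
                  type: 'redis'                  // KEDA Redis scaler
                  metadata: {
                    address: 'digitrace-redis:6379'   // assumed host:port
                    listName: 'queues:default'        // assumed Laravel queue key
                    listLength: '10'                  // scale out per ~10 pending jobs
                  }
                  auth: [
                    {
                      secretRef: 'redis-password'
                      triggerParameter: 'password'
                    }
                  ]
                }
              }
            ]
          }
        }
      }
    }
    ```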
6. Monitoring
    - Azure Monitor = lightweight
    - Grafana integration = heavyweight (lower priority)
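
    For the lightweight option, a sketch of pointing the environment's logs at a Log Analytics workspace so they become queryable via Azure Monitor; the workspace is assumed to exist already and its name is an assumption:
    ```bicep
    // Sketch: send Container Apps console/system logs to an existing
    // Log Analytics workspace.
    param location string = resourceGroup().location

    resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' existing = {
      name: 'log-digitrace'                      // hypothetical existing workspace
    }

    resource env 'Microsoft.App/managedEnvironments@2023-05-01' = {
      name: 'cae-digitrace-dev'                  // same environment as in step 1
      location: location
      properties: {
        appLogsConfiguration: {
          destination: 'log-analytics'
          logAnalyticsConfiguration: {
            customerId: logAnalytics.properties.customerId
            sharedKey: logAnalytics.listKeys().primarySharedKey
          }
        }
      }
    }
    ```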
7. Investigate logging possibilities
    - Log streaming = lightweight
- https://learn.microsoft.com/en-us/azure/container-apps/log-streaming?tabs=bash
    - Prometheus = heavyweight (lower priority)
8. Migrate secrets and env file management
- Currently managed via Helm
    - Azure Key Vault could be a solution: https://learn.microsoft.com/en-us/azure/container-apps/manage-secrets?tabs=azure-portal
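
    A hedged sketch of a Key Vault-backed secret on the container app from step 1, exposed to PHP as an environment variable; the vault, secret name and identity choice are assumptions:
    ```bicep
    // Fragment of the container app resource from the step 1 sketch:
    // the secret value stays in Key Vault and is resolved via managed identity.
    identity: {
      type: 'SystemAssigned'                     // identity also needs read access on the vault
    }
    properties: {
      configuration: {
        secrets: [
          {
            name: 'db-password'
            keyVaultUrl: 'https://kv-digitrace.vault.azure.net/secrets/db-password'   // assumed vault + secret
            identity: 'system'
          }
        ]
      }
      template: {
        containers: [
          {
            name: 'php-fpm'
            image: 'digitraceacr.azurecr.io/digitrace-php:latest'   // assumed image
            env: [
              {
                name: 'DB_PASSWORD'
                secretRef: 'db-password'         // maps the secret into the Laravel env
              }
            ]
          }
        ]
      }
    }
    ```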
9. Create a new Bicep file
- Laravel module that defines our backend stack
- Needs to be reusable
    - Port from the current Kubernetes/Helm setup
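
    Once the module exists, consuming it per environment could look roughly like this; the module path, parameter names and values are all assumptions:
    ```bicep
    // Sketch: one reusable Laravel back-end module, instantiated per environment.
    module digitraceBackend './modules/laravel-backend.bicep' = {
      name: 'digitrace-backend-dev'
      params: {
        environmentName: 'dev'                                       // hypothetical parameter
        location: resourceGroup().location
        phpImage: 'digitraceacr.azurecr.io/digitrace-php:latest'     // assumed image
        nginxImage: 'digitraceacr.azurecr.io/digitrace-nginx:latest' // assumed image
        minReplicas: 1
        maxReplicas: 3
      }
    }
    ```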
10. Set up CI/CD to automate this
- Push image to ACR
- Auto-deploy (update / revision)
- Blue / green deployment
### What is not needed?
- Data migration
    - All long-lived data is stored in the database or in storage accounts
## Phased approach
- Set up dev environment
    - Re-deploy the front end, configured with the new container app endpoint (API container app)
    - Test the 3 applications
Once we get the okay from VDI and Reynaers:
- Set up acc environment
    - Re-deploy the front end, configured with the new container app endpoint (API container app)
    - Test the 3 applications
## Go-live
The goal is to have zero downtime.
- For production, Azure Traffic Manager can be used to route requests to the new container app setup in a weighted way (see the sketch below)
We keep the old cluster running and every deploy updates both environments. If something fails or does not run correctly, we can switch all traffic back to the old cluster.
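
A hedged sketch of the weighted routing, assuming both back ends stay reachable on public FQDNs; the names, weights and health-check path are assumptions:
```bicep
// Sketch: Traffic Manager profile that keeps most traffic on the existing
// cluster ingress and sends a small share to the new container app; adjust the
// weights (or disable an endpoint) to shift or roll back traffic without downtime.
resource tm 'Microsoft.Network/trafficManagerProfiles@2018-08-01' = {
  name: 'tm-digitrace-api'                     // hypothetical profile name
  location: 'global'
  properties: {
    profileStatus: 'Enabled'
    trafficRoutingMethod: 'Weighted'
    dnsConfig: {
      relativeName: 'digitrace-api'            // assumed DNS label
      ttl: 60
    }
    monitorConfig: {
      protocol: 'HTTPS'
      port: 443
      path: '/health'                          // assumed health endpoint
    }
    endpoints: [
      {
        name: 'aks-cluster'
        type: 'Microsoft.Network/trafficManagerProfiles/externalEndpoints'
        properties: {
          target: 'api.digitrace-old.example.com'                  // assumed FQDN of the current ingress
          endpointStatus: 'Enabled'
          weight: 90
        }
      }
      {
        name: 'container-app'
        type: 'Microsoft.Network/trafficManagerProfiles/externalEndpoints'
        properties: {
          target: 'ca-digitrace-api.example.azurecontainerapps.io' // assumed container app FQDN
          endpointStatus: 'Enabled'
          weight: 10
        }
      }
    ]
  }
}
```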