# FADI Brainstorm Fri, 12. Mar 2021
Quentin, Alex, Cedric, Chiara, Lukasz, Ross
- [x] Chiara: post this in the issue. Quentin: please edit into actual next steps.
## Next Steps
- [ ] Quentin will update FADI and ask for 5 projects. Then we can get started with Cedric.
- [ ] Ross and Lukasz continue with research.
## Summary:
- Expectation: Have something ready to deploy master branches to production before Yann leaves - end of April.
- Quentin’s next step: Build an initial solution to create a master env for prod. So just deploy everything to the clusters first. Maintain transistor until we have a better solution. Lukasz and Ross are not yet blocking.
- Other notes
- We talked a lot about how we generate Helm charts (or whether to use them at all), the multiple different environments, and what triggers events.
- Both Helm and Kustomize suck. Pick the devil you know. We probably need to discuss with the teams what they are more comfortable with.
## What are the biggest open questions?
- **Main question: Do we have an idea of a single service or component to target for a POC?**
- If we can do that (take images and deploy them somewhere), then we can think about the others.
- This is no problem. Just ask Yann which service is a good candidate.
- Until we can do this POC, we can’t tell what makes sense.
- **What is Quentin’s next step?**
- Build an initial solution to create a master env for prod. So just deploy everything to the clusters first. Maintain transistor until we have a better solution. Lukasz and Ross are not yet blocking?
- [ ] Get 5 projects and look at them. Is it easy to transform them to a Helm chart or Kustomize? Can we transform them with a job or something similar?
- [ ] We need an installation to test against.
- [ ] Decide: Use Flux for initial CD part to deploy everything to clusters?
- [ ] Help them migrate
Once we have this...
- [ ] Feature branches - with JenkinsX or Argo, whatever - to help them deploy to our cluster.
- **How do we generate helm charts?**
- Where do we need helm charts?
- With Flux CD, you need k8s primitives, not Helm charts.
- The idea came from Pau: just take the app.yaml file and treat it as the values input file for a Helm chart.
- Assumption: Most deployed services look the same, and there is a definition of what that looks like. One Helm chart could recreate any of the deployments.
- Are the repos so similar that we could combine them into a … chart YAML and build it?
- Not sure; the team decides. There is an app.yaml; the service definition can be anywhere - in this repo, that repo, a shared repo, etc. K8s primitives? No, they created their own format; the emiter transforms it into k8s resources.
- With the custom service object… the service definition just creates a Service, an Ingress, and a Deployment. Why a separate repo? The team wants an overview. We can ask if they want to change that.
- The service definition can live in another repo. The project repo may contain no indication of the service, just an app.yaml that shows where these files are. They are CRD-like, but not k8s CRDs.
- What they do now
- They don’t have Helm charts right now. They have custom config files covering just a subset of k8s resources: one for cron jobs, one for secrets, one for Service, Deployment, and Ingress.
- Is the app definition described in YAML? Like a chart.yaml (name of the chart, the env it’s installed in, which … is installed in the cluster, the Docker image used)?
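To make the discussion above concrete, here is a minimal sketch of what the emiter conceptually does: expand one custom service definition into the Service/Ingress/Deployment primitives mentioned above. All field names (`name`, `image`, `port`, `host`) and the manifest shapes are assumptions for illustration, not the real transistor schema.

```python
def expand_service_definition(svc: dict) -> list[dict]:
    """Expand one custom service definition into Deployment, Service, Ingress.

    A hypothetical sketch; the input keys are assumed, not transistor's schema.
    """
    name, port = svc["name"], svc.get("port", 80)
    deployment = {
        "apiVersion": "apps/v1", "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [
                    {"name": name, "image": svc["image"],
                     "ports": [{"containerPort": port}]}]},
            },
        },
    }
    service = {
        "apiVersion": "v1", "kind": "Service",
        "metadata": {"name": name},
        "spec": {"selector": {"app": name}, "ports": [{"port": port}]},
    }
    ingress = {
        "apiVersion": "networking.k8s.io/v1", "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {"rules": [{"host": svc["host"]}]},
    }
    return [deployment, service, ingress]

# Hypothetical example input:
manifests = expand_service_definition(
    {"name": "checkout", "image": "registry.example/checkout:1.0",
     "host": "checkout.example.com"})
print([m["kind"] for m in manifests])  # → ['Deployment', 'Service', 'Ingress']
```

If the 5 sample projects really do fit one shape like this, the same expansion could be expressed as either a shared Helm chart or a Kustomize base.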
- **How do we actually deploy them?**
- JenkinsX doesn’t have… they have a repo where you… Then you can add any other repo. It generates everything you need to get a pipeline deployed, plus preview repos. Repos are not auto-detected, but they are easy to add.
- Currently, they define webhooks … something to transistor; a webhook detects a new environment.
## Others
- What about environments? They need multiple different environments?
- By default there are three: dev, staging, and prod. Do they create an env for every dev commit as well? For VPL (?), vspot??? - feature branch spot. A dev namespace for …? Everything deployed there is on the master branch. A feature branch creates a namespace; they push only… In the namespace, they deploy services that redirect to the dev services… How does the … know which branch it is communicating with?? It’s just redirection.
- What triggers events?
- Actual trigger: a webhook call from the Docker registry. Their contract is the Docker image.
- How the Docker image is built is up to the teams; there is a variety of CI servers. From transistor’s perspective, we don’t care how it’s built.
- The whole workflow starts with the app yaml. There’s an image in the repo, the team drops a CR, then transistor starts? That’s all the team has to do. (?)
- Actually: write the app definition in the GitHub repo. There is a collector that scans GitHub repos with metadata. ??? writes resources into k8s, like env and realm, depending on what it finds on GitHub.
- Jenkins cannot do this. You have to do it manually, but that doesn’t seem like a big issue - people don’t create 50 repos per day.
- Mostly commits on GitHub and Docker registry webhooks.
- Is the namespace the trigger? When there’s a new feature branch - how does its existence trigger anything? They use webhooks that have… bla bla in transistor, signalling that there’s a new PR or a PR was closed, which creates new envs. Transistor creates CRs; if it detects a new env, it creates a new namespace and deploys the emiter. The emiter works out what should and shouldn’t be there, then deploys what should be in the namespace. Once… it bla bla. That’s why there are no leftovers.
- The Argo Events thing might help with that. It has 20 different event sources, like webhooks and k8s resources.
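The “no leftovers” behaviour described above boils down to a reconcile step: compare what *should* be in the namespace with what *is* there, create the missing resources, and delete the rest. A minimal sketch, with hypothetical resource names:

```python
def reconcile(desired: set[str], actual: set[str]) -> tuple[set[str], set[str]]:
    """Return (to_create, to_delete) so the namespace converges on `desired`.

    Sketch of the emiter's conceptual job; resource IDs here are made up.
    """
    return desired - actual, actual - desired

# Hypothetical state: one stale deployment from a closed feature branch.
to_create, to_delete = reconcile(
    desired={"svc/checkout", "deploy/checkout"},
    actual={"deploy/checkout", "deploy/old-feature"})
print(sorted(to_create))  # → ['svc/checkout']
print(sorted(to_delete))  # → ['deploy/old-feature']
```

Deleting everything not in the desired set is exactly why closed PRs leave no leftovers behind.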
- Thinks both JenkinsX and Argo could be good solutions.
- We need to be careful because we’re not sure who maintains it.
- Argo CD - do we need to configure a lot? Might be harder to maintain.
- Very flexible, but less is built-in than in JenkinsX; you have to build things yourself.
- JenkinsX - works out of the box?
- Lukasz’s experience has been terrible - it just doesn’t work. He has been talking with a maintainer on Slack. If we can’t make a custom Tekton pipeline for a separate repo, it’s a no-go. This is for feature branches?
- What they have: 3 envs, plus a preview env per commit when you commit something to a repo, plus auto-promotion strategies.
- Default strategy: if everything passes in dev, it’s auto-promoted to staging. From staging, manual approval is required to promote to prod.
- The idea is cool, but the execution, docs, etc. are so chaotic.
- Lukasz’s summary so far
- Some team builds an image; the image goes to the .. registry. From there on, we don’t care how it was built. Then we take over, … change, run tests. So far: if we can make JenkinsX work, it can work neatly.
- Concerns
- Preview environments?
- You create a branch; it creates a namespace and brings only the service changed in that branch into the namespace. It creates k8s Service resources that just redirect to the dev namespace. You always have the dev namespace ready, running the master branch.
- What about the other direction, where the service is called by.. incoming, not outgoing…? Incoming is harder, but it’s not for acceptance testing - a non-issue. They’re not testing the full pipeline, only your service, so only the downstream-facing side is important.
- This makes previews easier. The only challenge: if the same branch name exists in two repos, both will end up in the same namespace on the k8s clusters, because of the naming convention.
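The preview mechanics above can be sketched as follows. The naming convention and the use of ExternalName Services for the redirect are assumptions (the notes only say the Services “just redirect to the dev namespace”); the point of the sketch is that a namespace derived from the branch name alone is what causes the collision between repos.

```python
import re

def branch_namespace(branch: str) -> str:
    # Assumed naming convention: sanitize the branch name into a DNS-safe
    # label. Note it depends ONLY on the branch, not the repo - hence the
    # collision when two repos share a branch name.
    return "preview-" + re.sub(r"[^a-z0-9-]", "-", branch.lower())

def redirect_service(name: str, target_ns: str = "dev") -> dict:
    """Service in the preview namespace that just points at the dev copy."""
    return {
        "apiVersion": "v1", "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "type": "ExternalName",
            "externalName": f"{name}.{target_ns}.svc.cluster.local",
        },
    }

print(branch_namespace("feature/login"))  # → preview-feature-login
# Same branch name in repo A and repo B maps to the same namespace:
print(branch_namespace("feature/login") == branch_namespace("feature/login"))  # → True
```

Including a repo identifier in `branch_namespace` would be one way to avoid the clash.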
- If we can get any pipeline - Argo Workflows, Tekton, etc. - we can implement whatever we want. The only question is how many resources we want to spend.
- Next part: spin up env.
- For QA, create a new namespace and deploy everything there.
- No idea what we’re talking about
- Combine Cedric and Quentin’s ideas.
- Re: Helm chart or not?
- Cedric doesn’t see why we’d use Helm charts; push the teams towards k8s primitives.
- Lukasz: We need to take the config from app.yaml. Does that make for an easy migration path? Semi-automatically turn the bunch of primitives into a chart - into the templates directory - then turn the config from app.yaml into values.yaml?
- Alex shows his screen /getting-started/
- Lukasz’s idea: get 5 projects and see if they’re the same, and whether it’s reasonable to convert them to a Helm chart.
- Cedric: Let’s keep an open mind - e.g. Flux uses Kustomize.
- Both Helm and Kustomize suck. Pick the devil you know. We probably need to discuss with the teams what they are more comfortable with.
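Lukasz’s semi-automatic migration idea could look roughly like this: move the existing primitives into a chart’s `templates/` directory and derive `values.yaml` from `app.yaml`. The input keys below are guesses at what the 5 sample projects might contain, not the real schema - that is exactly what looking at the 5 projects should confirm.

```python
import json

def app_yaml_to_values(app: dict) -> dict:
    """Map an assumed app.yaml structure onto Helm-style values.

    Sketch only: key names on both sides are hypothetical.
    """
    repo, _, tag = app["image"].rpartition(":")
    return {
        "nameOverride": app["name"],
        "image": {"repository": repo, "tag": tag},
        "service": {"port": app.get("port", 80)},
        "ingress": {"enabled": "host" in app, "host": app.get("host")},
    }

# Hypothetical app.yaml content, already parsed:
values = app_yaml_to_values(
    {"name": "checkout", "image": "registry.example/checkout:1.2.3",
     "host": "checkout.example.com"})
print(json.dumps(values, indent=2))
```

If this mapping holds across the 5 projects, one shared chart plus a small converter job (as proposed above) would cover the migration; if not, the exceptions tell us where a generic chart breaks down.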
- Hard ETAs?
- Expectation: Have something ready to deploy master branches to production before Yann leaves - end of April.