# Transistor
Transistor watches every repo for custom CRDs:
- app.yaml <=> Chart.yaml
- services <=> Deployment + Service
- cronjobs
- config
- secret
Transistor can create and delete namespaces on the fly using the FADI Realm and Environment CRs.
By default, every data tier (dev, staging, production) has a different set of databases, cloud services, and so on.
The app.yaml should live in the app repo, but the deployment resources can live in another repository (one per realm stack, for example).
[Recording of Transistor Discovery Session](https://drive.google.com/file/d/13o-4KMUnz0BNpIe0HGRYReBpBsB0Fomb/view?usp=sharing)
## Glossary
__Realm__: Operational set of services (ecom, dotcom, crm), mostly managed by one team. The services in a realm communicate via Kubernetes services; inter-realm communication goes through DNS.
__Environment__: Installation of a set of services for a given realm (dev, staging, production) backed by a namespace (realm-environment)
__Privileged environment__: Flag at the cluster and the environment level that determines whether cronjobs should run and whether certain properties should be set.
## CI
Each team can use whatever CI tool they want.
The only contract they have with Transistor is an app.yaml file and a Docker image pushed to a Docker registry that can send webhook events.
If we implement something like Jenkins X, we could also help teams migrate their CI to it if they so desire.
## Insight service and commit history
Event dispatcher for Kubernetes events.
- Used by other apps to provide extra services such as garbage collection, DNS sync, and so on.
- Makes it possible to tell whether a service is ready (commits, Emitter, errors, and so on).
- Has a web UI for checking the status of deployments and commits, as well as changelogs.
- Used to determine which steps of the deployment process are done for a specific commit.
- Can produce a changelog per service, per environment, or for the whole cluster.
- Helps pinpoint where an error occurred in the development process.
## Integrations
- Teams, Slack, and GitHub integration
The status of each deployment (master or branch) can be found either in Teams, in the channel [FaDi] Transistor 2-O365 -> Release Log, or in our Dashboard. The messages in this channel show whether Transistor is aware of a commit and which steps in the deploy pipeline are already completed.
For each production deployment, an additional message is sent to the channel f-it-release-log or t-crm-release-log, depending on which team is responsible for the change.
## Workflows:
By default, every app's master branch is installed for every realm (e.g. crm-production, crm-staging, crm-dev).
Images are tagged with the commit hash, and the deployment pipeline is only triggered once the PR is built and the Docker image is present in the registry.
Applications in dev, staging, and prod run the same code state, the master branch. The only difference between them is the data tier they use.
Once code is pushed to a repository and the image is present in the docker registry:
- The **collector** receives the webhook events and generates the FIDRepository and FIDImageRepository. If the realm and environment are new, this is where the new namespace is created. An Emitter is then deployed in this namespace.
- The **Emitter**, a component installed in every namespace, transforms the service definitions (Service, Config, Secret, and CronJob) into Kubernetes resources
- 4 different service definitions:
  - Service => Deployment, HPA, Service or Ingress, Service Account, and Network Policies
  - We could define 2 strategies:
    - Have the teams create Helm charts (this requires migrating the teams, so it could be the end goal)
    - Have the Emitter create Helm charts
- The **appyaml service** reads and validates the app.yaml files
They created their own subset of resources to make template validation easier, and they were not aware of validating webhooks at the time. We could develop a validating webhook with them and move to Helm charts.
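If we move the app.yaml validation into a validating admission webhook, the registration could look roughly like this. This is a hedged sketch: the webhook name, service name, namespace, and path are all hypothetical, and the `combound` API group is taken from the CRD examples below.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: appyaml-validator            # hypothetical name
webhooks:
  - name: appyaml.validation.example.com
    rules:
      - apiGroups: ["combound"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["services", "cronjobs"]
    clientConfig:
      service:
        name: appyaml-service        # hypothetical in-cluster service
        namespace: transistor
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```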
### PR / branch
Once a branch is created, Transistor creates a test environment for this version of the repository (feature-environment):
- Deploy all the services from this repository in the namespace.
- All requests from this test environment will be routed to the default development environment.
- Branch name conventions are used: if more than one repository needs to be changed for a Jira ticket, the same branch name should be used for all involved repositories. They will then be deployed in the same namespace.
For example, for the branch name `feature/PPP-123_frontend_change`
Transistor creates:
- namespace: {realm-name}-f-ppp-123 (i or b instead of f depending on the prefix)
- service url: {service-name}--f-ppp-123--{realm-name}.pucis.net
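The naming scheme can be sketched as a small helper. This is illustrative only: the mapping of branch prefixes to the `f`/`i`/`b` markers is assumed from the note above, and `qa_names` is not an actual Transistor function.

```python
import re

# Assumed prefix-to-marker mapping, based on the note
# "i or b instead of f depending on the prefix".
MARKERS = {"feature": "f", "improvement": "i", "bugfix": "b"}

def qa_names(branch: str, realm: str, service: str) -> tuple[str, str]:
    """Derive the test namespace and service URL from a branch name."""
    prefix, _, rest = branch.partition("/")
    marker = MARKERS.get(prefix, "f")
    # Extract the Jira ticket id (e.g. PPP-123) and lowercase it.
    match = re.match(r"([A-Za-z]+-\d+)", rest)
    ticket = match.group(1).lower() if match else rest.lower()
    namespace = f"{realm}-{marker}-{ticket}"
    url = f"{service}--{marker}-{ticket}--{realm}.pucis.net"
    return namespace, url
```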
Once PRs are merged or closed, the namespace is deleted.
To allow services to communicate with the ones in the dev namespaces, the Emitter creates a service per deployment in the namespace that redirects to the correct pod. All containers use local service names (without the namespace).
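One way to implement that redirection (an assumption, not necessarily how the Emitter actually does it) is an `ExternalName` Service in the feature namespace that resolves to the dev namespace via cluster DNS. All names below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: t8r2-hello-world          # the local name callers already use
  namespace: crm-f-ppp-123        # the feature namespace
spec:
  type: ExternalName
  externalName: t8r2-hello-world.crm-dev.svc.cluster.local
```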
### QA feature testing:
If the PR has one of the labels defined below, the QA assistant (a service inside the cluster) does almost the same thing as the PR workflow described above, but it deploys all the services into one namespace for isolation instead of using existing ones.
This creates a new Environment CRD.
#### List of possible labels
`qa-assist/environment-up` - Start a new QA environment for this PR.
`qa-assist/request-status` - Request the status of the environment. The status will be posted to the PR in Github.
`qa-assist/enable-metrics` - Add label to send metrics for this qa-environment to wavefront.
`qa-assist/full-scale` - Add label to allow this qa-environment to scale as much as it needs.
`qa-assist/data-tier/development` - Set data tier for this QA environment to development.
`qa-assist/data-tier/staging` - Set data tier for this QA environment to staging.
`qa-assist/data-tier/production` - Set data tier for this QA environment to production.
`qa-assist/auto-boot` - Add label to allow this QA environment to boot automatically in the morning.
If no data tier label is set, the staging data tier is used by default. Once the environment has been deployed, the name of the associated namespace and the QA URL are posted to the PR.
All QA environments shut down over the weekend. During the week, QA environments stay up until 22:00. With the auto-boot label set on your PR, the QA environment boots up again at 6:30 the next morning; otherwise it is shut down at the end of the day and does not boot up the next day.
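For an environment carrying the auto-boot label, the uptime rules boil down to a small predicate. This is a sketch with assumed semantics, not actual scheduler code:

```python
from datetime import datetime, time

def autoboot_window(now: datetime) -> bool:
    """Return True if an auto-boot QA environment should be up at `now`.

    Assumed semantics of the schedule described above: weekends are
    always down; on weekdays the environment runs from the 6:30 boot
    until the 22:00 shutdown.
    """
    if now.weekday() >= 5:  # Saturday (5) and Sunday (6)
        return False
    return time(6, 30) <= now.time() < time(22, 0)
```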
### Production
Once the PR has been tested sufficiently, it is merged and the branch deleted. The merge to master automatically triggers a deployment of the new master version to the production system (as well as to the dev and QA environments).
## SIPS
Custom ingress controller (a sort of API gateway).
It can define targets and policies per cluster, per environment, per realm, or per service, as well as provide authentication.
SIPS is aware of environments and realms through the FIDRealm and FIDEnvironment CRDs, but apart from this it is decoupled from Transistor.
We might still have to create the CRs if they decide to keep SIPS.

## CRD
app.yaml
```yaml
version: 1
meta:
  owner:
    - team: infrastructure
realm:
  name: lazertools
deployments:
  - definition:
      localRef: deployment/*.yaml
    provides:
      - imageRef: fashionid/t8r2-docs
```
ConfigMap:
```yaml
apiVersion: configmap/v1
kind: ConfigMap
name: t8r2-hello-world-config
data:
  - key: testkey
    value: testvalue
  - key: privilegedtestkey
    value: testvalue2
    condition:
      privileged: "yes"
      tier: "production"
  - key: privilegedtestkey2
    value: testvalue23
    condition:
      tier: "staging,production"
```
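The `condition` blocks above could be resolved along these lines. This is a sketch with assumed matching semantics (`privileged` must match the environment's flag, and `tier` is treated as a comma-separated list); `resolve_config` is illustrative, not an actual Transistor function.

```python
def resolve_config(data, tier, privileged):
    """Pick the ConfigMap entries whose conditions match the environment."""
    resolved = {}
    for entry in data:
        cond = entry.get("condition", {})
        want_privileged = cond.get("privileged")
        # Skip keys restricted to privileged (or unprivileged) environments.
        if want_privileged is not None and (want_privileged == "yes") != privileged:
            continue
        want_tiers = cond.get("tier")
        # Skip keys restricted to other data tiers.
        if want_tiers is not None and tier not in want_tiers.split(","):
            continue
        resolved[entry["key"]] = entry["value"]
    return resolved
```

For the ConfigMap above, a privileged production environment would get all three keys, while a dev environment would only get `testkey`.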
```yaml
apiVersion: combound/v1
kind: Service
service:
  name: t8r2-hello-world
  image: fashionid/t8r2-hello-world
  configs:
    env:
      TEST_CONFIG: t8r2-hello-world-config.testkey
      OTHER_TEST_CONFIG: t8r2-hello-world-config.otherkey
    volume:
      - t8r2-hello-world-config.testkey
      - t8r2-hello-world-config.otherkey
```
Secret:
```yaml
apiVersion: secret/v1
kind: Secret
name: example-secret
data:
  - key: database_pw
    values:
      development: password
      production: gYJYIZIAWUDBAEuMBEEDMdT8EzCOLtmfZbN
      staging: qadbpw
```
Cron Job:
```yaml
apiVersion: combound/v1
kind: CronJob
cron:
  name: cronjob-example
  image: fashionid/cronjob-example
  schedule: "*/2 * * * *"
  logTags:
    - cronjob-example
  metrics:
    enabled: true
```
Services:
```yaml
apiVersion: combound/v1
kind: Service
service:
  name: t8r2-hello-world
  image: fashionid/t8r2-hello-world
  command: ["node", "/home/node/app/app.js"]
  dnsNames:
    - t8r2-hello-world.puc.services
    - www.t8r2-hello-world.puc.services
  replicas:
    min: 1
    max: 1
    metrics:
      cpu: 30
  resources:
    requests:
      cpu: 50m
      memory: 100Mi
  metrics:
    enabled: false
  env:
    EXAMPLE_ENV: "50"
  expositionType: shared # dedicated for a dedicated load balancer, shared for ingress
  secrets:
    K8S_SECRET: secretname.examplesecret
  configs:
    env:
      TEST_CONFIG: t8r2-hello-world-config.testkey
    volume:
      - t8r2-hello-world-config.testkey
```
## Future:
Going forward, FADI wants to use more standard components while keeping the flexibility Transistor gives them.
For Yann, the challenge is mostly the feature branch / QA workflow.
They want to keep a very fast release cycle that lets them test every change in parallel in a very confined way. Their ecom team has 60 people and does between 5 and 15 deploys to production a day.
The first step would be to replace the CD for the dev, staging, and production environments, leave the feature branch and QA workflows in Transistor, and then work on moving them over.
## Questions:
- Policy on privacy? Can the Docker and app repositories be public, e.g. when using Helm charts?
  - Everything needs to be private.
- Can we set up a preproduction cluster in your environment to test our production pipeline?
  - We can create clusters and environments.