## Current Ironhide Architecture
Here's the `docker build` command from the Jenkins Ironhide job:
```shell=
docker build \
--file ./cybertron-ironhide/Dockerfile \
--tag $DOCKER_HUB/crs_staging_ironhide:$DEPLOY_VERSION \
--build-arg STAGE=$environment \
--build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
--build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
--build-arg AWS_REGION=ap-southeast-1 \
--build-arg CLUSTER_IDENTIFIER=apse1-cluster \
--build-arg SG_ID=sg-06f01003bee3fcb9b \
--build-arg SN_ID1=subnet-b54071fc \
--build-arg SN_ID2=subnet-dc7671bb \
--build-arg VPCE=vpce-0d95629b58eb95bb4 \
```
While building the Docker image from `cybertron-ironhide/Dockerfile`, we run this command:
```shell=
RUN npm run $STAGE:build
```
This command does the following:
- fetches the service endpoint URLs from AWS Secrets Manager (including the GraphQL URL)
- fetches all the tenants using the `/tenants` API, and fetches the Metabase secret key and printer IP secrets from AWS Secrets Manager
- bundles the JavaScript and CSS files, adding the tenant configurations to the bundle as a `__CONFIG__` variable
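The steps above can be sketched roughly as follows. This is a minimal illustration, not the actual Ironhide build code: the function, secret-path, and field names are assumptions.

```javascript
// Hedged sketch of the build-time config assembly; secret paths and
// field names here are illustrative, not the real Ironhide schema.
function buildTenantConfig(tenants, secrets) {
  // Collect per-tenant settings that get baked into the bundle.
  const config = {};
  for (const tenant of tenants) {
    config[tenant.id] = {
      metabaseSecretKey: secrets[`${tenant.id}/metabase`],
      printerIps: secrets[`${tenant.id}/printer-ips`],
    };
  }
  return config;
}

// Webpack would then inline this object as the global `__CONFIG__`,
// e.g. via DefinePlugin:
//   new webpack.DefinePlugin({ __CONFIG__: JSON.stringify(config) })
const config = buildTenantConfig(
  [{ id: 'treebo' }],
  { 'treebo/metabase': 'mb-key', 'treebo/printer-ips': ['10.0.0.5'] }
);
console.log(config.treebo.printerIps[0]); // → 10.0.0.5
```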
Since Ironhide also includes a server (we wrap the Express server in the serverless module and define a handler for the Lambda function), we run the webpack compilation for the server code as well.
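Conceptually, the serverless wrapping turns an Express-style app into a Lambda handler. The hand-rolled adapter below is only a stand-in to show the idea; the real code uses a serverless wrapper module, and all names here are illustrative.

```javascript
// Minimal sketch of adapting an API Gateway event to an Express-style
// handler; this stands in for the serverless wrapper the real code uses.
function wrapForLambda(app) {
  // `app` is any (req, res) style handler; the wrapper translates the
  // API Gateway event into a request-like object and collects the response.
  return async function handler(event) {
    const req = { method: event.httpMethod, url: event.path, headers: event.headers || {} };
    let body = '';
    let statusCode = 200;
    const res = {
      status(code) { statusCode = code; return this; },
      send(payload) { body = payload; },
    };
    app(req, res);
    return { statusCode, body };
  };
}

// Usage: wrap a trivial app and invoke it with a fake event.
const handler = wrapForLambda((req, res) => res.status(200).send(`hello from ${req.url}`));
handler({ httpMethod: 'GET', path: '/staging' }).then(r => console.log(r.body)); // → hello from /staging
```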
Once the compilation is done, we package it and keep it ready for deployment:
```javascript=
spawnSync('serverless', ['package'], { stdio: 'inherit' });
```
Now, we run a Docker container from this image:
```shell=
docker run \
--name cybertron-ironhide \
--env AWS_REGION=ap-southeast-1 \
--env CLUSTER_IDENTIFIER=apse1-cluster \
--env SG_ID=sg-06f01003bee3fcb9b \
--env SN_ID1=subnet-b54071fc \
--env SN_ID2=subnet-dc7671bb \
--env VPCE=vpce-0d95629b58eb95bb4 \
$DOCKER_HUB/crs_staging_ironhide:$DEPLOY_VERSION \
npm run $environment:serve
```
Notice the `npm run $environment:serve` command. Here's what happens in the `serve` step:
- Run the deploy step of serverless
```javascript=
spawnSync('serverless', ['deploy', '--package', '.serverless'], { stdio: 'inherit' });
```
The deploy step updates the Lambda function and puts the updated `cybertron-ironhide` zip file in S3.
The Docker container exits once this is done. (Note that the container is not running in detached mode, so it terminates as soon as the `serve` command finishes.)
The Route 53 entry for `pms.treebo.com` points to a CloudFront distribution, whose origin domain is the S3 bucket.
There are two API Gateway endpoints that invoke the Lambda function with the event payload, for example:
https://ppm3usupc5.execute-api.ap-southeast-1.amazonaws.com/staging/
https://ppm3usupc5.execute-api.ap-southeast-1.amazonaws.com/staging/{proxy+}
The Express server app is wrapped in the serverless module, and a middleware intercepts the request and sends back the HTML skeleton with the script and CSS links. These static assets are then cached in CloudFront, and subsequent requests are served from CloudFront.
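The skeleton-serving middleware can be sketched like this. The helper name and asset paths are assumptions for illustration, not the actual implementation.

```javascript
// Hedged sketch of the middleware that returns the HTML skeleton with
// the bundled script and CSS links; names and paths are illustrative.
function htmlSkeleton(assets) {
  // Build the HTML shell that links the bundled JS/CSS; CloudFront
  // caches this response for subsequent requests.
  return [
    '<!DOCTYPE html>',
    '<html><head>',
    `<link rel="stylesheet" href="${assets.css}">`,
    '</head><body>',
    '<div id="root"></div>',
    `<script src="${assets.js}"></script>`,
    '</body></html>',
  ].join('\n');
}

// Express-style middleware using the helper.
function skeletonMiddleware(req, res) {
  res.send(htmlSkeleton({ css: '/static/main.css', js: '/static/main.js' }));
}
```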
In short, the Docker container's job is to update the Lambda and put the bundled code on S3, and the Lambda function's job is to serve the HTML shell whose static assets CloudFront caches for subsequent requests.
## Proposed Ironhide Architecture
From what I have been reading, people keep static website deployments quite simple! We can put the assets on S3 directly using the `aws s3 sync` command. This syncs the `build` folder (generated by webpack) to the S3 bucket, and we point the CloudFront origin domain at this bucket. CloudFront can then serve the assets directly from S3 (in case of a cache miss).
**There is no serverless now! The Docker container syncs the build folder to S3, and CloudFront uses this S3 bucket to serve the static assets.**
We would remove the AWS Secrets Manager and tenant calls from the build step in Ironhide; we'll no longer create the configs of all the tenants at build time.
Instead, Ironhide would call GQL at runtime to get the required configs and AWS secrets.
*How would Ironhide know the GQL endpoint?*
We can do one of these:
- We can keep it simple! We already know the cluster identifier `graphql-aps1.hotelsuperhero.com`. We can keep the service endpoints and AWS secrets for the tenant in the GQL cache and serve them to Ironhide from there.
- Ironhide can call AWS Secrets Manager using the browser `fetch` API on load, but with this approach we would end up calling Secrets Manager many times from the client.
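Under the first option, the runtime config fetch might look roughly like this. The query shape, field names, and `/graphql` path are assumptions, not the real schema.

```javascript
// Hedged sketch of Ironhide fetching tenant config from GQL on load;
// the query, field names, and endpoint path are illustrative.
function configQuery(tenantId) {
  return {
    query: `query TenantConfig($id: ID!) {
      tenantConfig(id: $id) { serviceEndpoints metabaseSecretKey printerIps }
    }`,
    variables: { id: tenantId },
  };
}

// On load, the client would POST this to the known endpoint, e.g.:
// fetch('https://graphql-aps1.hotelsuperhero.com/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(configQuery('treebo')),
// }).then(r => r.json());
console.log(configQuery('treebo').variables.id); // → treebo
```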
Now the architecture is quite simple!
After bundling the assets, we sync the `build` folder to the S3 bucket, and CloudFront talks to this S3 bucket. Note that this syncing is done inside the Docker container, at the build step of the deployment job. Ironhide would be deployed to S3 like a static website.
We'll remove:
- serverless packaging
- serverless deployment
- tenant calls
- AWS Secrets Manager calls from the build step
We won't have to redeploy when a new tenant is added or a secret changes; we just have to clear the GQL cache in those cases.