# Design decisions

The diagram below illustrates a high-level view of the serverless stack:

![diagram](https://github.com/fleekxyz/non-fungible-apps/blob/feat/sc-912/deployment-fixes-and-script/serverless/Serverless%20Stack%20Diagram.png?raw=true)

## Layers

While following [this tutorial](https://dev.to/eddeee888/how-to-deploy-prisma-in-aws-lambda-with-serverless-1m76) for setting up Prisma with Serverless (the tutorial itself doesn't work), the recommendation was to split the Prisma client, node modules, and library dependencies into separate layers. I agree with that decision, so that's how the layers are currently structured.

## The deployment script

There are quite a few steps we have to run each time we deploy the module to AWS Lambda. To make this easier, I have written a shell script that takes care of running the different commands and loading the ENV vars as needed. You can check it out [here](https://github.com/fleekxyz/non-fungible-apps/blob/feat/sc-912/deployment-fixes-and-script/serverless/scripts/deploy.sh). For anyone interested in the deployment flow, I believe this is the best starting point.

# Issues

This is a brief list of the major issues with the previous deployment / invocation flow.

## Dependency issues and why we need layers

If we publish the current implementation without the three layers, the lambda throws dependency errors whenever a handler is triggered. This is because Serverless does not include dependencies in its final build, so we need to upload them separately as "layers". We currently have three layers in our latest branch:

- Node Modules: contains the contents of the `node_modules` directory.
- Prisma Client: contains the `@prisma/client` directory from `node_modules`.
- Libs: contains the helper functions in `src/libs`.

## Environment variables

### The first issue

The `serverless.yml` file can define environment variables for each handler function. Surprisingly, even when configured to use the main `.env` file, it is unable to find the variables and throws errors at deploy time. The workaround I have been using is setting the variables in the Lambda dashboard on the AWS website. This is a temporary solution, but I would also like to present it as a possible final solution to this issue.

### The second issue

If the `.env` file is not located in the `dist/serverless` directory, the following error is logged to the console every time a function is triggered:

```
2023-05-24T15:05:45.830Z prisma:tryLoadEnv Environment variables not found at /var/task/nfa-serverless/dist/serverless/.env
2023-05-24T15:05:45.831Z prisma:tryLoadEnv Environment variables not found at /var/task/nfa-serverless/dist/serverless/.env
2023-05-24T15:05:45.831Z prisma:tryLoadEnv No Environment variables loaded
```

This happens because the `schema.prisma` file tries to read the `DATABASE_URL` variable from the `.env` file. There is a solution to this problem, but I think we might need to reconsider our approach here. The temporary fix is copying the `.env` file into the `dist/serverless` directory during the pre-build phase. The deployment script currently takes care of that, and it does not touch the `src` directory: the only affected path is `dist/`, and the whole `dist` folder is gitignored, so it stays a purely local build step. Overall I find this to be a good temporary solution, but I have reason to believe it is related to the next issue.
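As a rough illustration, the pre-build workaround boils down to something like the sketch below. The commands and paths here are assumptions based on the layout described above (the `dist/serverless` output of `yarn tsc`); the authoritative version of this logic lives in the linked `deploy.sh`.

```sh
#!/bin/bash
# Sketch of the pre-build .env workaround (assumed paths; see deploy.sh for the real steps).
set -e

# Compile the TypeScript sources into dist/.
yarn tsc

# Copy the root .env next to the compiled handlers so Prisma's env lookup
# (prisma:tryLoadEnv) can find DATABASE_URL under dist/serverless at runtime.
mkdir -p dist/serverless
cp .env dist/serverless/.env
```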
## The schema.prisma issue

In our codebase, the schema file is stored in the `prisma` directory, and local deployment works fine with that path. But once the module is deployed to AWS Lambda, Serverless looks for this file in the same folder as the handler:

```
{
  "errorType": "Error",
  "errorMessage": "ENOENT: no such file or directory, open '/var/task/dist/serverless/src/functions/mints/schema.prisma'",
  "trace": [
    "Error: ENOENT: no such file or directory, open '/var/task/dist/serverless/src/functions/mints/schema.prisma'",
    "    at Object.openSync (node:fs:601:3)",
    "    at Object.readFileSync (node:fs:469:35)",
    "    at new LibraryEngine (/var/task/dist/serverless/src/functions/mints/handler.js:99:2538)",
    "    at c.getEngine (/var/task/dist/serverless/src/functions/mints/handler.js:179:6130)",
    "    at new PrismaClient (/var/task/dist/serverless/src/functions/mints/handler.js:179:5711)",
    "    at /var/task/dist/serverless/src/functions/mints/handler.js:180:9889",
    "    at /var/task/dist/serverless/src/functions/mints/handler.js:1:231",
    "    at Object.<anonymous> (/var/task/dist/serverless/src/functions/mints/handler.js:411:24828)",
    "    at Module._compile (node:internal/modules/cjs/loader:1254:14)",
    "    at Module._extensions..js (node:internal/modules/cjs/loader:1308:10)"
  ]
}
```

I spent a lot of time trying to fix this issue and searching online for alternative approaches, but the only working fix I found was copying the schema into the handler directories during the pre-build phase. Just like the `.env` fix, this does not touch any path tracked by git: everything happens in the `dist` folder, which is gitignored.

Relation to the `.env` problem: the `schema.prisma` file loads its `DATABASE_URL` variable from the env source ([this line](https://github.com/fleekxyz/non-fungible-apps/blob/4924307b78bbd54e6908c4fce4fdb2756b77346f/serverless/prisma/schema.prisma#L8)). So, naturally, the schema expects a `.env` file in the root of the project.

## The Prisma engine issue

Apparently, AWS Lambda needs the `rhel-openssl-1.0.x` engine to run Prisma ([specified here in the config](https://github.com/fleekxyz/non-fungible-apps/blob/4924307b78bbd54e6908c4fce4fdb2756b77346f/serverless/prisma/schema.prisma#LL3C51-L3C51)), and Prisma tries to locate it in several paths at runtime. This engine needs to be included in the final zip file uploaded to the lambda, so in the pre-build phase the script copies the engine into the handler directories.
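A minimal sketch of that copy step is below. The handler directory glob is taken from the error trace above, and the engine file name and `.prisma/client` location are assumptions based on how Prisma usually names its generated library-engine binaries; the real commands are in `deploy.sh`.

```sh
#!/bin/bash
# Sketch of the schema/engine copy step (assumed file names; see deploy.sh for the real steps).
set -e

# Prisma's generated query engine usually lands in node_modules/.prisma/client with a
# platform-specific name; for the rhel-openssl-1.0.x binary target that is typically:
ENGINE="node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node"

# Copy the schema and engine next to every compiled handler, since that is
# where Prisma looks for them at runtime inside the lambda.
for handler_dir in dist/serverless/src/functions/*/; do
  cp prisma/schema.prisma "$handler_dir"
  cp "$ENGINE" "$handler_dir"
done
```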
## The index.mjs issue

This is also one of the issues that took me the longest to solve. It occurs when everything else is set up correctly but the handler path has not been updated to match the `dist` directory and its structure. There are two paths that contain the source code for the handlers:

- `src`: this is where we edit and write the files.
- `dist`: this is where the built JS scripts are emitted by the `yarn tsc` command.

For local testing, the `src` path is the best to work with since it contains the latest changes and there is no need to build the scripts (at least it was, before the deployment shell script we have now). The Serverless manifest takes the handler path from the `serverless.yml` file, which was pointing to the `src` directory. The first step in fixing this issue is updating the handler path to point at the newly generated scripts in the `dist` folder. But even then, the error doesn't go away.

The last thing needed to make this fix final is to look at the source code in the AWS Lambda dashboard. There we see that the entire content of `dist` is pushed inside a directory named `nfa-serverless`, which is also the name of the ZIP file generated by the `npx sls deploy` command. So, by adding that directory's name to the handler path, we can fix this issue.

## The event.body issue

At this point, the function is working well and there are no errors anymore. When sending a test invocation request to the function through the AWS Lambda dashboard, we can set the JSON data we want to send along with the request. I used a sample JSON body that I had previously used to test the mint function locally, but when I sent the test request, an error regarding the `event.body` field was thrown. By logging the event variable and its values, I quickly found out that the JSON sent from the Lambda dashboard is the event object itself (and its context), unlike the local environment (tools like `curl` handle this themselves). But even when sending requests through `curl` I keep getting a 500 response from the handler. It's not quite clear to me what's wrong at the moment, but I don't think this is a serious issue. Next, I am going to deploy everything from scratch with the deployment script to test it, and then send test requests to the functions with `curl`; see the sketch at the end of this section.

### The function url issue

This isn't a big deal, but I used the AWS Lambda dashboard to generate a function URL for the `submitMintInfo` function. I then used that URL in the Alchemy Notify service, but none of the requests were coming through. I usually check the logs for this, and there are no records of any incoming requests (the same thing happened when I sent requests through `curl`). I'm not sure whether the problem is the generated URL itself, but I will update this section after testing the specific URL I get from the deployment command (the one that ends with `/mint`).
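For reference, the kind of `curl` test request described above looks roughly like the sketch below. The URL and JSON body are placeholders, not the project's actual endpoint or mint payload.

```sh
#!/bin/bash
# Hypothetical curl test against the deployed endpoint; FUNCTION_URL and the JSON
# body are placeholders. Note that an HTTP request like this arrives with the JSON
# as a string in event.body, whereas a Lambda dashboard test event is passed as the
# event object itself.
FUNCTION_URL="https://example.lambda-url.us-east-1.on.aws/"  # replace with the real URL from the deploy output

curl -i -X POST "$FUNCTION_URL" \
  -H "Content-Type: application/json" \
  -d '{"example": "sample mint payload"}'
```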