# Introduction to Web API Security

## Importance of securing Web APIs

Backend servers are the powerhouse of modern-day applications; hence, a high level of expertise goes into building them. However, it's equally important to ensure that these backend servers are well secured against malicious actors such as hackers and phishers. These bad actors access backend servers through vulnerable points in the gateways to wreak havoc, stealing sensitive information and degrading application performance and efficiency via various forms of API attacks such as SQL and NoSQL injection, DDoS attacks, malware, and other methods of exploiting vulnerabilities.

In this article, I will be focusing on rate limiting, an important technique that helps protect a backend API from being exploited via Distributed Denial of Service (DDoS) attacks, brute-force attacks and other related malicious activities. But first of all, what does rate limiting mean?

## Introduction to Rate Limiting

Rate limiting simply refers to a mechanism put in place to regulate the frequency of requests made by a client to the backend server. It limits how many requests a client can make within a defined time frame. Why, then, do we have to implement rate limiting in API development?

## Importance of Rate Limiting

Here are some of the reasons why rate limiting is used in backend application development.

### DDoS attacks

First of all, it serves as a preventive measure to mitigate DDoS attacks. DDoS attacks are malicious attacks on servers which involve flooding the server endpoints with multiple requests, often millions, resulting in reduced server efficiency and disruption of server functions. Mostly, these occur with the use of automated bots. These attacks can be volumetric, protocol-based or application-layer based. A key example of this form of attack occurred on the GitHub website in 2018.

### Web Scraping

Rate limiting also plays a role in protecting web applications and web servers from unauthorized web scrapers and crawlers. These, which are also usually automated, send requests continually to collect website data that could end up exposed to unauthorized persons. Having a good rate limiter in place helps to prevent this.

### Brute force attack

A brute-force attack involves trying to gain access to a server's resources by attempting every possible combination (of passwords, tokens or keys) until one succeeds. This can be done manually, but it is mostly automated with bots, as it is resource-consuming. Rate limiting also proves effective in preventing these forms of attacks by blocking requests once they exceed the allowed number within a specific time frame.

### Resources optimization

Server requests usually cost API owners money in running and maintenance costs. Having a rate limiter in place helps regulate the number of requests the server has to handle, which conserves cost and maximizes efficiency.

Subsequently, we will highlight some algorithms on which rate limiters are built.

## Adoption and Usage of Rate-Limiting by Popular Sites

Rate limiting as a security measure has been adopted by a lot of tech products, ranging from large-scale to small-scale applications. For example, Twitter (X) has a rate limit feature implemented in the application programming interfaces it provides to developers. These interfaces allow access to Twitter sign-up integrations and other features made available by Twitter.
To guarantee the efficient running of these interfaces, Twitter imposes a rate limit of 50 tweet post requests per user every 24 hours. More details about this can be found [here](https://developer.twitter.com/en/docs/twitter-api/rate-limits).

## Other Real-life Use Cases of API Rate Limiting

The use of rate limiting isn't restricted to what popular sites like Twitter use it for. Here are some other real-life applications of rate limiting in today's world.

### Reducing the incidence of spamming

[Research](https://www.emailtooltester.com/en/blog/spam-statistics/#:~:text=Survey%20Methodology-,Key%20statistics,spam%20messages%20in%20some%20form.) reveals that over 160 billion spam emails are sent daily. This has prompted the implementation of rate limiting to curb the spread of unsolicited messages and spam content via messaging and emailing platforms over a specific time range. By doing so, it encourages responsible use of these platforms.

### Tackling Fraudulent Activities

Rate limiting is currently implemented across web applications to help detect unusual activity by users who may have fraudulent intent. This measure serves to prevent and mitigate fraudulent transactions being performed against the application server.

### Disabling Malicious User Authentication

Individuals with malicious intent may try to compromise web servers through brute force, DDoS and other techniques in order to take over other users' accounts. However, many sites have efficient rate limit systems in place which restrict the number of login attempts an individual can make within a specific time range. This also contributes to overall web security.

## How Does Rate Limiting Work?

Rate-limiting tools used in applications are implemented based on different algorithms. These algorithms define how the rate limiter behaves, and their end goal is to limit the number of requests a server receives per unit of time in order to preserve its efficiency.

## Examples of Rate Limiting Algorithms

Here are some of the most popular algorithms currently in use.

### Fixed window algorithm

This algorithm fixes a static time interval (the window) and a maximum number of requests the server will accept within it, irrespective of the number of clients accessing the API. For example, with a 5-minute window, once the request limit is reached, no further requests are accepted until the 5 minutes expire and the counter is reset. This model is simple, but a burst of traffic around the window boundary can briefly let through close to twice the intended limit.

### Sliding window algorithm

This algorithm is similar in configuration to the fixed window algorithm, but it addresses the fixed window's shortcoming by individualizing client access to a given number of requests within a specific time interval, creating an independent time window for each client. For example, if Client A makes its first request at 10:00, it is allowed to make 10 requests until its window expires at 10:03, while Client B, whose first request arrives at 10:02, is allowed to make 10 requests until its window expires at 10:05.

### Leaking bucket algorithm

This algorithm is based on the literal meaning of its name, the leaking bucket. Incoming requests are queued in the bucket and processed by the server at a constant rate; once the bucket is full, any further requests are discarded and answered with an ***error 429***. This ensures that the server is not overloaded and guarantees the maintenance of server efficiency and speed.

### Token bucket algorithm

This model is similar to the leaking bucket in that a hypothetical bucket serves as the rate limiter. The bucket holds tokens, and new tokens are added to it periodically. Each request consumes one token; once all the tokens in the bucket are depleted, any further requests are discarded with an ***error 429*** until the bucket is refilled. This also helps to prevent server congestion and ensure maximal efficiency.
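To make the token bucket idea concrete, here is a minimal, illustrative sketch in plain JavaScript. It is not code from any of the packages used later in this article; the class name, capacity and refill rate are arbitrary choices for the example, and a production limiter would also need a separate bucket per client and persistent storage.

```
// Minimal token bucket sketch (illustrative only, not production-ready).
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;               // maximum tokens the bucket can hold
    this.tokens = capacity;                 // start with a full bucket
    this.refillPerSecond = refillPerSecond; // tokens added back per second
    this.lastRefill = Date.now();
  }

  // Top up tokens based on the time elapsed since the last refill.
  refill() {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
  }

  // Returns true if the request may proceed, false if it should receive a 429.
  tryConsume() {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Example usage: a burst capacity of 5 requests, refilled at 1 token per second.
const bucket = new TokenBucket(5, 1);
console.log(bucket.tryConsume()); // true while tokens remain, false once the bucket is empty
```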
## Rate limiting best practices

Efficient web API development is largely achieved by following established best practices. To get the most out of a rate limiter as an API security measure, the following should be implemented.

- **Choose a compatible rate-limiting algorithm.** Having a strong rate-limiting algorithm in place is essential to achieve the desired result, so pick the algorithm that best matches the behaviour of your API endpoints.
- **Ensure that the limits are set within reasonable ranges.** Arbitrary rate limit parameters can negatively affect the user experience and defeat the limiter's purpose. Setting reasonable limits that preserve the user experience while still mitigating attacks has proven to be much more effective.
- **Ensure efficient error handling and provide the necessary feedback to the client.** The standard rate-limiting status code is HTTP 429 (Too Many Requests). Handling the errors that occur during API usage, especially those caused by abuse of the API, is necessary to give users useful feedback (see the short sketch after this list).
- **Implement flexible rate-limiting mechanisms across several parameters.** Setting one fixed limit across all endpoints is bad practice, as some API endpoints are much more data-sensitive than others. A flexible rate limiter that sets parameters according to each endpoint's sensitivity helps maximize server efficiency and security.
- **Ensure the provision of appropriate application logging, monitoring and observability tools.** API metrics, logging, monitoring and observability tools serve as an additional security layer for web APIs: they track server activity and, through monitoring alerts, notify the developer when suspicious requests are detected on the server.
- **Ensure rate limiting works in harmony with other API security measures.** Rate limiters should be combined with other API security measures so that they reinforce one another. Adequate knowledge of the existing security measures is needed so that the limiter does not counteract them.
- **Ensure proper API documentation.** Adequate API documentation is needed so that users, developers and clients alike are aware of the rate-limiting rules in place and can comply with them.
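As an illustration of the error-handling point above, here is a small, hypothetical Strapi/Koa-style middleware fragment that returns a 429 response together with a `Retry-After` header so clients know when they may try again. The `allowed` flag and the 60-second value are placeholders for the example; in a real limiter, the decision would come from one of the algorithms discussed earlier.

```
// Illustrative only: returning a helpful 429 response with a Retry-After header.
module.exports = (config, { strapi }) => {
  return async (ctx, next) => {
    const allowed = false; // placeholder: a real limiter computes this per client

    if (!allowed) {
      ctx.set('Retry-After', '60'); // seconds until the client may retry (example value)
      ctx.status = 429;
      ctx.body = {
        statusCode: 429,
        error: 'Too Many Requests',
        message: 'Rate limit exceeded. Please retry after 60 seconds.',
      };
      return;
    }

    await next();
  };
};
```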
## Strapi Rate limit tools

Implementing a rate limiter in your Strapi application has been made easy by a number of packages. Here are some of the popular ones:

* [Koa2 rate limit package](https://www.npmjs.com/package/koa2-ratelimit)
* [Express rate limit package](https://www.npmjs.com/package/express-rate-limit)

Alternatively, a custom Strapi rate limiter can be built and integrated into the Strapi middleware stack. This article will cover all of these in detail with a demo project.

## Demo Project

We will be building an e-commerce site using [Strapi](https://strapi.io/) as our backend framework. We will then set up a rate limiter in our Strapi application to help secure our backend. Postman will serve as our tool to test the API endpoints.

Let's now go on to create a default Strapi application. To create a Strapi application, enter `npx create-strapi-app@latest {project name}` on the command line and follow the prompts. To keep the installation straightforward, stick with the *quick start* installation method and, voilà, your app is ready. This installation method automatically sets up an easy-to-use SQLite database. However, you can choose any other SQL database supported by Strapi.

Alternatively, you can download the starter repo for the project from [here](https://github.com/oluwatobi2001/Strapi-default) and install the necessary dependencies via `npm install`. Thereafter, you can run the Strapi application by navigating to the application folder on the command line and executing `npm run develop`.

![Captur](https://hackmd.io/_uploads/BkRn2PqrR.png)

On successful execution, you will be provided with a link to the localhost address where you can customize the application.

![act](https://hackmd.io/_uploads/SkkSavcS0.png)

Navigating to the link will require you to create an admin login email and password. Completing this step will give you access to the backend dashboard.

![logi](https://hackmd.io/_uploads/S1Vqxd5B0.png)

You can use the Strapi dashboard UI to create APIs, or alternatively generate an API with the Strapi CLI via `npm run strapi generate`. The APIs created will be used to complete the setup of the rate-limiting functionality.

We will be creating a product store for our e-commerce site. To set up products, navigate to the Content-Type Builder tab on the sidebar.

![dash](https://hackmd.io/_uploads/r1RzbO5BC.png)

The Content-Type Builder allows us to create the collections which will come in handy when setting up our APIs. In this case, the product and category collections will be created to enable us to set up our product catalogue.

![cat](https://hackmd.io/_uploads/B16rbu5rA.png)
![entry](https://hackmd.io/_uploads/SJhdb_qSR.png)

After creating the collection types, you can easily add your various products to the backend database. In my case, I created phone brand products for sale.

![completPr](https://hackmd.io/_uploads/HyR9JT6fR.jpg)

Also noteworthy is that the collections created in the Strapi dashboard automatically generate corresponding API folders within our codebase. We will be working in the project codebase from here onwards. The next step in this tutorial is setting up an efficient rate limiter for the Strapi APIs created in the repo, using the tools discussed above.

### koa2-ratelimit

In this section, we will use the koa2-ratelimit package to build our project's rate limiter. To install the package, navigate to your project folder on the command line and execute `npm i koa2-ratelimit`.

On successful installation, navigate to the `src/middlewares` folder (create it if it doesn't exist) and create a new file. For ease of integration, name it `rateLimit.js`. Within this file, import and initialize the koa2-ratelimit package:

`const RateLimit = require("koa2-ratelimit").RateLimit;`

Afterwards, we will configure the rate limiter with a time interval and the maximum number of requests allowed within it.
```
module.exports = (config, { strapi }) => {
  // Configure the rate limiter middleware
  const limiter = RateLimit.middleware({
    interval: { min: 1 }, // Time window in minutes
    max: 3, // Maximum number of requests per interval
  });
```

In the code above, the rate limiter middleware is created, the time window in which the rate limit applies is set to 1 minute, and the maximum number of requests (`max`) is set to 3 for this tutorial. You can, however, tweak this to suit your preference.

```
  return async (ctx, next) => {
    try {
      // Apply the rate limiter to the current request
      await limiter(ctx, next);
    } catch (err) {
      if (err.status === 429) {
        // Handle rate limit exceeded error
        strapi.log.warn('Rate limit exceeded.');
        ctx.status = 429;
        ctx.body = {
          statusCode: 429,
          error: 'Too Many Requests',
          message: 'You have exceeded the maximum number of requests. Please try again later.',
        };
      } else {
        // Re-throw other errors to be handled by Strapi's error-handling middleware
        throw err;
      }
    }
  };
```

The code above defines the middleware function that runs for every request made to an API. If the requests exceed the configured maximum, a 429 error is returned. Below is the full code.

```
'use strict';

/**
 * `RateLimit` middleware
 */

const RateLimit = require("koa2-ratelimit").RateLimit;

module.exports = (config, { strapi }) => {
  // Configure the rate limiter middleware
  const limiter = RateLimit.middleware({
    interval: { min: 1 }, // Time window in minutes
    max: 3, // Maximum number of requests per interval
  });

  return async (ctx, next) => {
    try {
      // Apply the rate limiter to the current request
      await limiter(ctx, next);
    } catch (err) {
      if (err.status === 429) {
        // Handle rate limit exceeded error
        strapi.log.warn('Rate limit exceeded.');
        ctx.status = 429;
        ctx.body = {
          statusCode: 429,
          error: 'Too Many Requests',
          message: 'You have exceeded the maximum number of requests. Please try again later.',
        };
      } else {
        // Re-throw other errors to be handled by Strapi's error-handling middleware
        throw err;
      }
    }
  };
};
```

To apply the rate limiter to all APIs within the Strapi project, register it by its `global::rateLimit` name in the `config/middlewares.js` file:

```
module.exports = [
  'strapi::logger',
  'strapi::errors',
  'strapi::security',
  'strapi::cors',
  'strapi::poweredBy',
  'strapi::query',
  'strapi::body',
  'strapi::session',
  'strapi::favicon',
  'strapi::public',
  {
    name: 'global::rateLimit',
    config: {},
  },
];
```

With this, we have successfully configured the rate limiter powered by koa2-ratelimit. Here is a picture of its execution.

![Success message](https://hackmd.io/_uploads/Bybbd-hj0.png)
![error response](https://hackmd.io/_uploads/r1Zb_-3jC.png)

## Custom Strapi API rate limiter

Within the `rateLimit.js` file in the `src/middlewares` folder, we can instead build our own rate limiter, starting by initializing an in-memory store:

`const requestCounts = new Map();`

Thereafter, we define our rate limit function and configure the rate limiter.

```
module.exports = (config, { strapi }) => {
  const rateLimitConfig = strapi.config.get('admin.rateLimit', {
    interval: 60 * 1000, // Time window in milliseconds (1 minute)
    max: 3, // Maximum number of requests per interval
  });
```

The time interval above is 1 minute, while the maximum number of requests that can be made within that interval is 3.
You can, however, tweak it to suit your preference.

```
  return async (ctx, next) => {
    const ip = ctx.ip;
    const currentTime = Date.now();

    if (!requestCounts.has(ip)) {
      // First request from this IP: start a new window
      requestCounts.set(ip, { count: 1, startTime: currentTime });
    } else {
      const requestInfo = requestCounts.get(ip);

      if (currentTime - requestInfo.startTime > rateLimitConfig.interval) {
        // The window has expired: reset the counter
        requestInfo.count = 1;
        requestInfo.startTime = currentTime;
      } else {
        requestInfo.count += 1;
      }

      if (requestInfo.count > rateLimitConfig.max) {
        strapi.log.warn(`Rate limit exceeded for IP: ${ip}`);
        ctx.status = 429;
        ctx.body = {
          statusCode: 429,
          error: 'Too Many Requests',
          message: 'You have exceeded the maximum number of requests. Please try again later.',
        };
        return;
      }
    }

    await next();
  };
};
```

The middleware above obtains the client's IP address and stores it in the memory store together with the time of its first request and a request count. The count is incremented with every new request, and the window is reset once the interval has elapsed. If the requests exceed the maximum allowed within the time interval (1 minute in our case), a 429 error is returned. Here is the full code.

```
'use strict';

const requestCounts = new Map();

module.exports = (config, { strapi }) => {
  const rateLimitConfig = strapi.config.get('admin.rateLimit', {
    interval: 60 * 1000, // Time window in milliseconds (1 minute)
    max: 3, // Maximum number of requests per interval
  });

  return async (ctx, next) => {
    const ip = ctx.ip;
    const currentTime = Date.now();

    if (!requestCounts.has(ip)) {
      // First request from this IP: start a new window
      requestCounts.set(ip, { count: 1, startTime: currentTime });
    } else {
      const requestInfo = requestCounts.get(ip);

      if (currentTime - requestInfo.startTime > rateLimitConfig.interval) {
        // The window has expired: reset the counter
        requestInfo.count = 1;
        requestInfo.startTime = currentTime;
      } else {
        requestInfo.count += 1;
      }

      if (requestInfo.count > rateLimitConfig.max) {
        strapi.log.warn(`Rate limit exceeded for IP: ${ip}`);
        ctx.status = 429;
        ctx.body = {
          statusCode: 429,
          error: 'Too Many Requests',
          message: 'You have exceeded the maximum number of requests. Please try again later.',
        };
        return;
      }
    }

    await next();
  };
};
```

Here is a demo of the project.

![successful response](https://hackmd.io/_uploads/BkIyHZ2j0.png)
![rate limiting error](https://hackmd.io/_uploads/HyxgHW2i0.png)
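To quickly verify that the limiter behaves as expected, here is a small, optional Node.js script (Node 18+ for the built-in `fetch`) that fires five consecutive requests at the products endpoint. The URL assumes Strapi's default port (1337) and that the public `find` permission has been enabled for the product collection under Settings → Roles; adjust both to match your setup. With the configuration above, the first three requests should return 200 and the remaining two 429.

```
// test-rate-limit.js — quick check of the rate limiter (illustrative; adjust the URL to your setup).
const ENDPOINT = 'http://localhost:1337/api/products';

async function run() {
  for (let i = 1; i <= 5; i++) {
    const res = await fetch(ENDPOINT);
    console.log(`Request ${i}: HTTP ${res.status}`);

    if (res.status === 429) {
      // The middleware returns a JSON body explaining the limit
      const body = await res.json();
      console.log(`  -> ${body.message}`);
    }
  }
}

run().catch(console.error);
```

Run it with `node test-rate-limit.js` while the Strapi server is up.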
### Express-rate-limit implementation

The express-rate-limit package is another popular option for implementing rate limiting in our project. Here, it will be used to implement route-specific rate limiting. To set it up, we will be working mainly in the `routes` files. These live inside the `src/api` folder in the project root directory; the `api` folder contains a subfolder for each collection created in the Strapi dashboard.

![routes](https://hackmd.io/_uploads/S1ERbxndR.png)

* product/
  * content-types/
    * product/
      * schema.json
  * controllers/
    * product.js
  * routes/
    * product.js
  * services/
    * product.js

The rate limiter will be enforced in the routes section of each API. For this tutorial, I will be using the products API as the demo API.

```
'use strict';

/**
 * product router
 */

const { createCoreRouter } = require('@strapi/strapi').factories;

module.exports = createCoreRouter('api::product.product');
```

This is the initial code in the `product.js` file within the **routes** folder of our product API. express-rate-limit was chosen for this part of the tutorial because it combines simplicity and user-friendliness with efficiency; here is a link to its [documentation](https://www.npmjs.com/package/express-rate-limit). To install it, navigate to the command line in the project directory and run `npm install express-rate-limit`.

On completion of the installation, we initialize it in the **product** routes file as follows:

`const rateLimit = require('express-rate-limit');`

We will now go on to configure the rate limiter to our desired specifications.

```
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 3 * 60 * 1000, // 3 minutes
  max: 2, // limit each IP to 2 requests per windowMs
  handler: async (req, res, next) => {
    const ctx = strapi.requestContext.get();
    ctx.status = 429;
    ctx.body = { message: "Too many requests", policy: "rate limit" };
    // Ensure the response is ended after setting the response body and status
    ctx.res.end();
  }
});
```

The code above configures the rate-limiting parameters we intend to use for this route. `windowMs` represents the time interval, in milliseconds, for the number of requests; in our case, we specified 3 minutes. We also specified the maximum number of requests that can be made within that time frame, set to 2 for demo purposes. (In newer versions of the package, the `limit` option serves as an alternative to `max`.) Also included is the handler function that gets executed whenever the requests exceed the configured number. It returns an **error 429** with a body containing "Too many requests".

```
const { createCoreRouter } = require('@strapi/strapi').factories;

module.exports = createCoreRouter('api::product.product', {
  config: {
    find: {
      middlewares: [
        async (ctx, next) => {
          await new Promise((resolve, reject) => {
            limiter(ctx.req, ctx.res, (error) => {
              if (error) {
                ctx.status = 429;
                ctx.body = { error: error.message };
                reject(error);
              } else {
                resolve();
              }
            });
          });
          await next();
        }
      ]
    }
  }
});
```

The code above attaches a route middleware to the `find` route so that the rate limit check runs before the request is executed any further, and terminates the request when the rate limit is exceeded. Here is the final code for the project.
```
'use strict';

/**
 * product router
 */

const { createCoreRouter } = require('@strapi/strapi').factories;
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 3 * 60 * 1000, // 3 minutes
  max: 2, // limit each IP to 2 requests per windowMs
  handler: async (req, res, next) => {
    const ctx = strapi.requestContext.get();
    ctx.status = 429;
    ctx.body = { message: 'Too many requests', policy: 'rate limit' };
    // Ensure the response is ended after setting the response body and status
    ctx.res.end();
  }
});

module.exports = createCoreRouter('api::product.product', {
  config: {
    find: {
      middlewares: [
        async (ctx, next) => {
          await new Promise((resolve, reject) => {
            limiter(ctx.req, ctx.res, (error) => {
              if (error) {
                ctx.status = 429;
                ctx.body = { error: error.message };
                reject(error);
              } else {
                resolve();
              }
            });
          });

          if (ctx.status !== 429) {
            await next();
          }
        }
      ]
    }
  }
});
```

Here is an image showing the rate-limiting functionality.

![prod](https://hackmd.io/_uploads/S116Wu9BR.png)
![ratel](https://hackmd.io/_uploads/S1zMGO5B0.png)

You can also download the final code for the project [here](https://github.com/oluwatobi2001/Strapi-project). Having completed this, you can go ahead and test the rate-limiting functionality of your API. The Strapi application can be run by executing `npm run develop` on the command line.

## Conclusion

With this, we have come to the end of the tutorial. We hope you've learned the essentials of rate limiting: its uses, tools and best practices. You can also define multiple rate limiters within the code and apply them to any endpoints of your choice to test them out. Feel free to drop any questions or comments in the comment box below. Till next time, keep on coding!