---
title: ioredis
tags: Nodejs
description: ioredis
---
# ioredis
## Set up
**ioredis docs**: https://www.npmjs.com/package/ioredis
**redis-config.ts**
```typescript!
const redisConfig = {
  port: 6379,
  host: '127.0.0.1',
  password: '',
};
export default redisConfig;
```
**redis-base.ts**
```typescript!
import Redis from 'ioredis';
import redisConfig from './redis-config';

// Note: this runs once per module instance; in serverless environments each
// instance imports this file anew and creates its own connection (see below).
const redis = new Redis(redisConfig);

redis.on('connect', () => {
  console.log('---------- REDIS CONNECTED ----------');
});
redis.on('close', () => {
  console.log('---------- REDIS CLOSED ----------');
});
redis.on('error', (err: Error) => {
  console.log('---------- REDIS FAILED ----------');
  console.error(err);
});

export default redis;
```
---
## Usage
In your Next.js API route, you can get data from Redis like this:
```typescript!
import redis from '@utils/redis-base';
import { NextApiRequest, NextApiResponse } from 'next';
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // query values can be string | string[], so narrow to string
  const strData = await redis.get(req.query.key as string);
  res.status(200).json({ strData });
}
```
However, you will quickly notice that if multiple API routes use Redis, a new Redis connection is created for each API call in **dev mode**.
It behaves differently in **production mode**: API routes in the `pages/api` folder seem to **share one Redis instance**, and server components in the `app` folder seem to **share another Redis instance**.
---
## New Redis Connections
Even though we exported a single `redis` instance from `redis-base.ts`, we are using Next.js which uses [**serverless** API endpoints](https://nextjs.org/learn/basics/api-routes). Each API route in Next.js is a **separate serverless function and cannot share memory state with others**.
In **serverless environments**, each request can be handled by a different instance of your server, each with its own memory and module cache. If a different instance handles the request, it would import the file anew, create a new Redis connection, and thus trigger the "connect" event again.
***Note**: We tested the above theory, and it seems to only apply to dev mode.*
### How to prevent too many connections?
One way is to call `redis.disconnect()` after each API call. But if you use the same Redis client initialized in **redis-base.ts**, this closes the current connection and **it cannot be used again**. If you try to reuse the connection after disconnecting, you'll get the error "*Connection is closed.*".
You could **establish a new connection whenever you need to use Redis**, and disconnect when you're done. This ensures that you always have a valid connection when you need one, and you don't leave connections open when they're not in use.
Example:
```typescript!
import redisConfig from '@utils/redis-config';
import Redis from 'ioredis';
import { NextApiRequest, NextApiResponse } from 'next';
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const redis = new Redis(redisConfig);
  const strData = await redis.get(req.query.key as string);
  // quit() sends QUIT and waits for pending replies;
  // disconnect() would close the connection immediately
  await redis.quit();
  res.status(200).json({ strData });
}
```
However, there's a caveat. **Disconnecting and reconnecting to Redis for each API call can add a significant overhead**, especially if your API is under heavy usage. So it's generally not recommended to do it this way.
---
## Kill idle connections in Redis automatically
On the Redis server, you can use the `timeout` config option to set a **timeout value (in seconds)**. The **default is 0**, so if you don't set it, idle connections stay open forever.
Remember to be careful with this setting, as clients might expect connections to stay open even if they're idle.
For reference, [Heroku's default setting kills idle connections after **300 seconds**](https://blog.heroku.com/real-world-redis-tips).
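The timeout can also be set at runtime with the `CONFIG SET` command. The sketch below does this through ioredis and assumes a reachable local server; the host, port, and 300-second value are placeholders.

```typescript
import Redis from 'ioredis';

// Admin sketch: set the server-side idle timeout to 300 seconds at runtime
// (matching Heroku's default). The change lasts until the server restarts;
// set `timeout 300` in redis.conf to make it permanent.
async function setIdleTimeout(seconds: number) {
  const redis = new Redis({ host: '127.0.0.1', port: 6379 });
  await redis.config('SET', 'timeout', String(seconds));
  const [, current] = (await redis.config('GET', 'timeout')) as string[];
  console.log(`idle timeout: ${current}s`);
  await redis.quit();
}

setIdleTimeout(300).catch(console.error);
```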
Official Redis docs:
https://redis.io/docs/reference/clients/
More info:
https://pranavprakash.net/2016/02/05/kill-idle-connections-to-redis/
---
## Timeout errors under heavy load
Reference: https://github.com/luin/ioredis/issues/1613
In the above link, someone recommended using [autopipelining](https://github.com/luin/ioredis#autopipelining) to help prevent this.
Note: We tested this out, and it didn't make much of a difference.
---
## Connect to multiple Redis servers
ioredis supports [Redis Sentinel or Redis Clusters](https://linuxhint.com/redis-sentinel-vs-cluster/).
However, if you are not using Sentinel or Clusters, and you have one master and multiple slave Redis servers, you'll need to manage the connections yourself in your application code.
You would **create different Redis instances, each connected to a different Redis server (either a master or a slave)**, then decide in your application logic when to use each connection.
The master supports both READ/WRITE operations, while the slaves only support READ operations.
### Example set up
**redis-config.ts**
```typescript!
import { RedisOptions } from 'ioredis';

export const commonConfig: RedisOptions = {
  password: '',
  lazyConnect: true,
  enableAutoPipelining: true,
  maxRetriesPerRequest: 3,
};

// the first one in the config array will be the "master"
export const redisConfigs =
  process.env.NODE_ENV === 'development'
    ? [{ host: '127.0.0.1', port: 30001 }]
    : [
        { host: '127.0.0.1', port: 30006 },
        { host: '127.0.0.1', port: 30007 },
        { host: '127.0.0.1', port: 30008 },
      ];
```
**redis-base.ts**
```typescript!
import { LogLevel, serverSideLog } from '@utils/log/logging';
import Redis, { RedisOptions } from 'ioredis';
import { commonConfig, redisConfigs } from './redis-config';

export function genNewRedis(redisConfig: RedisOptions) {
  const redis = new Redis(redisConfig);
  redis.on('connect', () => {
    console.log('ENV:', process.env.NODE_ENV);
    console.log(`---------- REDIS CONNECTED ${redisConfig.port} ----------`);
  });
  redis.on('close', () => {
    console.log(`---------- REDIS CLOSED ${redisConfig.port} ----------`);
  });
  redis.on('error', (err: Error) => {
    console.log(`---------- REDIS FAILED ${redisConfig.port} ----------`);
    console.error(err);
    serverSideLog(
      `[redis-base][connect error][port:${redisConfig.port}]Error: ${err.toString()}`,
      LogLevel.Error
    );
  });
  return redis;
}

const redisClients: Redis[] = redisConfigs.map((config) =>
  genNewRedis({ ...commonConfig, ...config })
);

function isJson(val: string) {
  try {
    JSON.parse(val);
  } catch (e) {
    return false;
  }
  return true;
}

export class RedisBase {
  masterServer = redisClients[0]; // READ/WRITE
  randomServer: Redis; // READ only
  isJson = isJson;
  _keys?: string;

  constructor() {
    // pick a random server (master or slave) for reads
    this.randomServer = redisClients[Math.floor(Math.random() * redisClients.length)];
  }

  async $get(key: string, forCache = false) {
    try {
      const result = await this.randomServer.get(key);
      if (result) {
        return this.isJson(result) ? JSON.parse(result) : result;
      }
      if (!forCache) {
        serverSideLog('[redis-base][$get]key not found', LogLevel.Error, { key });
      }
      return null;
    } catch (err: any) {
      console.error(err);
      serverSideLog(`[redis-base][$get]redis fail error: ${err.message}`, LogLevel.Error, { key });
      return null;
    }
  }

  async $array<T>(...keys: string[]): Promise<T[]> {
    return await this._array(false, ...keys);
  }

  async _array<T>(includeAll: boolean, ...keys: string[]): Promise<T[]> {
    this._keys = keys.join(', ');
    const result: any[] = [];
    for (const k of keys) {
      const data = await this.$get(k);
      if (data || includeAll) {
        result.push(data);
      }
    }
    return result;
  }

  async $object<T>(...keys: string[]): Promise<{ [key: string]: T }> {
    const result: { [key: string]: any } = {};
    const data = await this._array(true, ...keys);
    data.forEach((item, index) => {
      result[keys[index]] = item;
    });
    return result;
  }

  async $set(data: { [key: string]: string }): Promise<boolean> {
    try {
      this._keys = Object.keys(data).join(', ');
      for (const k in data) {
        await this.masterServer.set(k, data[k]);
      }
      return true;
    } catch (err: any) {
      console.error(err);
      serverSideLog(`[redis-base][$set] error: ${err.message}`, LogLevel.Error, { keys: this._keys });
      return false;
    }
  }

  async $setCacheKey(key: string, value: string, expireSeconds: number): Promise<boolean> {
    try {
      // write to the master directly; the value is already a string,
      // so there is no need to JSON.stringify it again
      await this.masterServer.set(key, value, 'EX', expireSeconds);
      return true;
    } catch (error: any) {
      console.error(error);
      serverSideLog(`[redis-base][$setCacheKey] error ${error.message}`, LogLevel.Error);
      return false;
    }
  }
}
```
### Example usage
```typescript!
import { RedisBase } from '@utils/redis-base';
const redis = new RedisBase();
// get one Redis key's value (the value could be any type: string, number, JSON, etc.)
const data = await redis.$get('key1'); // 'val1'

// get an array of Redis values
const arr = await redis.$array('key1', 'key2', 'key3'); // ['val1', 'val2', 'val3']

// get an object of Redis values
const obj = await redis.$object('key1', 'key2'); // { key1: 'val1', key2: 'val2' }

// set multiple values
await redis.$set({ key1: 'new value 1', key2: 'new value 2' });

// get one cache value
const strData = await redis.$get('cacheCatInfo', true); // { name: 'Fluffy', age: 2 }

// set one cache value (expires after 30 seconds)
const isSuccess = await redis.$setCacheKey('cacheDogInfo', '{ "name": "Lucky", "age": 5 }', 30); // true
```
---
## Stress Testing
References:
- https://lufor129.medium.com/%E6%B8%AC%E5%A5%BD%E6%B8%AC%E6%BB%BF-%E4%B8%80-%E5%A3%93%E5%8A%9B%E6%B8%AC%E8%A9%A6jmeter-5356b5335628
- https://www.simplilearn.com/tutorials/jmeter-tutorial/jmeter-performance-testing
- https://loadium.com/blog/how-to-send-jmeter-post-requests
- https://www.redline13.com/blog/2018/05/guide-jmeter-thread-groups/
Open the `apache-jmeter-5.5/bin` folder and double-click **jmeter.bat** to enter GUI mode.

---
## References
- Vercel's demo with Next.js and ioredis: https://github.com/vercel/next.js/tree/canary/examples/with-redis