
How to Boost JavaScript Object Storage on Redis

It’s our duty to give users the services they want

As software engineers, one of our duties is to build services that satisfy users’ needs. In today’s world, users demand fast services, able to retrieve their information in as little time as possible.

To achieve that we need, of course, the right architecture, and most of the time Redis is part of it.

But… can we go a step further and improve both the way we integrate it and the overall performance?

YES! And I will show you how, using some basic examples.

But first… what is Redis?

Redis is an open-source, in-memory data structure store that is widely used for its exceptional performance and versatility. With its simple key-value data model, Redis enables developers to implement various use cases such as real-time analytics, pub/sub messaging, job queues, and caching.
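To give a feel for how simple that key-value model is from Node.js, here is a minimal ioredis sketch (assuming a Redis instance on the default localhost:6379):

js
import Redis from 'ioredis'

// By default, ioredis connects to 127.0.0.1:6379
const redis = new Redis()

await redis.set('greeting', 'hello')
console.log(await redis.get('greeting')) // => 'hello'

await redis.quit()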

Today we will focus on the last of those use cases, caching, and on how we can use it to achieve our goals.

Great! What do I need?

To follow all the steps, I highly suggest cloning the code publicly available in the dedicated GitHub repository. To execute it, this is what you’re going to need:

  • Node.js version 18 or above; I suggest NVM to manage it;
  • Docker and Docker Compose, to create a local Redis instance. If you already have one, feel free to skip this;
  • RedisInsight, a GUI for Redis. If you already have some experience with the Redis CLI, you can skip this;
  • Minimal knowledge of Fastify, the web framework we will use in each example;
  • The basic parameters to invoke autocannon, the HTTP/1.1 benchmarking tool we will use during our comparisons.

Let’s start coding!

Firstly, we are going to write a reusable function to create a new Fastify instance.

js
import fastifyRedis from '@fastify/redis'
import fastify from 'fastify'

export const buildFastifyInstance = async (postHandler) => {
    const fastifyInstance = fastify()

    // Decorates the instance with `fastifyInstance.redis`, an ioredis client
    await fastifyInstance.register(fastifyRedis)

    // Start every example from a clean Redis state
    await fastifyInstance.redis.flushall()

    fastifyInstance.post('/', postHandler)

    await fastifyInstance.listen({ port: 3000 })
}

src/fastify.mjs

As you can see, we are:

  • Registering the @fastify/redis plugin, which instantiates a Redis client under the hood using the popular library ioredis;
  • Using ioredis to flush all of Redis’s data before starting our tests.
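By default, @fastify/redis connects to localhost:6379. If your Redis lives elsewhere, you can pass ioredis connection options when registering the plugin; a quick sketch, where the host and port values are purely illustrative:

js
// Hypothetical values: point the plugin at a non-default Redis instance
await fastifyInstance.register(fastifyRedis, {
    host: 'redis.internal.example',
    port: 6380
})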

Secondly, we are going to implement a basic postHandler, just to explore the simplest Redis functionality: saving and retrieving data as plain strings.

js
import { buildFastifyInstance } from './fastify.mjs';

const REDIS_KEY = 'step_1';

async function postHandler() {
    let cachedValue = await this.redis.get(REDIS_KEY);

    if (!cachedValue) {
        cachedValue = "success!"
        await this.redis.set(REDIS_KEY, cachedValue)
    }

    return cachedValue
}

await buildFastifyInstance(postHandler)

src/step_1.mjs

Using this handler, we look up the key step_1 in Redis. If it’s there, we directly return the cached value; otherwise we populate it with the value success! before returning it.
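In a real cache you would usually also set an expiration, so stale entries disappear on their own. ioredis exposes the standard SET options for that; a minimal sketch:

js
// Cache the value for 60 seconds using the EX option of SET
await this.redis.set(REDIS_KEY, cachedValue, 'EX', 60)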

Finally, to test what we made, we must run the following scripts:

  • npm run redis:up: this script creates a Redis server in a container; you only need to run it once;
  • npm run step_1: this script instantiates a Fastify server, which will be invoked once by autocannon.

If everything is OK, you should now see the step_1 key and its value inside RedisInsight.

Saving non-nested JavaScript objects

I know, I know, it doesn’t make any sense to store a static value inside a cache… but hey, we are just warming up!

It’s time to write another handler, and this time we will store a different entity: a non-nested JavaScript object!

As it’s not a plain string, we must also change the structure that will host our data: we can opt for hashes, which represent record types modeled as collections of field-value pairs.

js
import { buildFastifyInstance } from './fastify.mjs';

const REDIS_KEY = 'step_2';

async function postHandler() {
    // hgetall returns an empty object when the key does not exist
    let cachedValue = await this.redis.hgetall(REDIS_KEY);

    if (Object.keys(cachedValue).length === 0) {
        cachedValue = {
            step: 2,
            createdAt: Date.now()
        }
        await this.redis.hset(REDIS_KEY, cachedValue)
    }

    return cachedValue
}

await buildFastifyInstance(postHandler)

src/step_2.mjs

The structure of the handler is similar to the previous one, but this time we are using the hset method, where the initial h stands for hashes.
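One caveat worth knowing: Redis stores hash field values as strings, so numbers come back as strings when you read them. A quick sketch, where redis is any ioredis client:

js
// Hash field values always come back as strings, regardless of how they were written
await redis.hset('step_2', { step: 2, createdAt: 1690000000000 })

const value = await redis.hgetall('step_2')
// value => { step: '2', createdAt: '1690000000000' }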

To test this, we can run the step_2 script and check the result in RedisInsight, as we did before.

Saving nested JavaScript objects

Now things get a bit more complicated!

Unfortunately, hashes are not the right structure for this job, since they store the .toString() representation of nested fields: this means we are losing our precious data!

One possible solution is to use a library like flat in order to remove any kind of nesting before writing.
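To see what flat actually does, here is a quick sketch of its two core functions:

js
import flat from 'flat'

const { flatten, unflatten } = flat

// Nested keys are collapsed into dot-separated paths, and restored on the way back
flatten({ nested: { object: true } })  // => { 'nested.object': true }
unflatten({ 'nested.object': true })   // => { nested: { object: true } }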

js
import { buildFastifyInstance } from './fastify.mjs';

import flat from 'flat';

const { flatten, unflatten } = flat

const REDIS_KEY = 'step_3_flatten';

async function postHandler() {
    let cachedValue = await this.redis.hgetall(REDIS_KEY);

    if (Object.keys(cachedValue).length === 0) {
        cachedValue = {
            step: 3,
            createdAt: Date.now(),
            nested: {
                object: true
            }
        }
        await this.redis.hset(REDIS_KEY, flatten(cachedValue))
    } else {
        cachedValue = unflatten(cachedValue)
    }

    return cachedValue
}

await buildFastifyInstance(postHandler)

src/step_3_flatten.mjs

But since we are deeply transforming the structure of each object, we will notice a slowdown in the API’s performance: we will measure it using autocannon.

Let’s try it by running the step_3:flatten script:

sh
% npm run step_3:flatten

> redis-next-level@1.0.0 step_3:flatten
> concurrently 'node src/step_3_flatten.mjs' 'npm run autocannon' --kill-others

[1] 
[1] > redis-next-level@1.0.0 autocannon
[1] > autocannon -m POST 'http://localhost:3000'
[1] 
[1] Running 10s test @ http://localhost:3000
[1] 10 connections
[1] 
[1] 
[1] ┌─────────┬──────┬──────┬───────┬──────┬─────────┬─────────┬───────┐
[1] │ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%  │ Avg     │ Stdev   │ Max   │
[1] ├─────────┼──────┼──────┼───────┼──────┼─────────┼─────────┼───────┤
[1] │ Latency │ 0 ms │ 1 ms │ 1 ms  │ 1 ms │ 0.86 ms │ 0.41 ms │ 20 ms │
[1] └─────────┴──────┴──────┴───────┴──────┴─────────┴─────────┴───────┘
[1] ┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
[1] │ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
[1] ├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
[1] │ Req/Sec   │ 7683    │ 7683    │ 9119    │ 9191    │ 8980.55 │ 414.23  │ 7682    │
[1] ├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
[1] │ Bytes/Sec │ 1.83 MB │ 1.83 MB │ 2.17 MB │ 2.19 MB │ 2.14 MB │ 98.5 kB │ 1.83 MB │
[1] └───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
[1] 
[1] Req/Bytes counts sampled once per second.
[1] # of samples: 11
[1] 
[1] 99k requests in 11.01s, 23.5 MB read

Thanks to RedisInsight, we can look at how the data is now stored.

We should always aim for the best performance so, alternatively, we can choose to follow one of these paths:

  • Using the JSON data type;
  • Stringifying our object and storing it as a string.

As ioredis requires us to stringify even for the JSON data type, I will follow the latter approach, as it’s simpler.

js
import { buildFastifyInstance } from './fastify.mjs';

const REDIS_KEY = 'step_3';

async function postHandler() {
    let cachedValue = await this.redis.get(REDIS_KEY);

    if (!cachedValue) {
        cachedValue = {
            step: 3,
            createdAt: Date.now(),
            nested: {
                object: true
            }
        }
        await this.redis.set(REDIS_KEY, JSON.stringify(cachedValue))
    } else {
        cachedValue = JSON.parse(cachedValue)
    }

    return cachedValue
}

await buildFastifyInstance(postHandler)

src/step_3.mjs

Running the step_3 script:

sh
% npm run step_3

> redis-next-level@1.0.0 step_3
> concurrently 'node src/step_3.mjs' 'npm run autocannon' --kill-others

[1] 
[1] > redis-next-level@1.0.0 autocannon
[1] > autocannon -m POST 'http://localhost:3000'
[1] 
[1] Running 10s test @ http://localhost:3000
[1] 10 connections
[1] 
[1] 
[1] ┌─────────┬──────┬──────┬───────┬──────┬─────────┬─────────┬───────┐
[1] │ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%  │ Avg     │ Stdev   │ Max   │
[1] ├─────────┼──────┼──────┼───────┼──────┼─────────┼─────────┼───────┤
[1] │ Latency │ 0 ms │ 1 ms │ 1 ms  │ 1 ms │ 0.75 ms │ 0.56 ms │ 29 ms │
[1] └─────────┴──────┴──────┴───────┴──────┴─────────┴─────────┴───────┘
[1] ┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬────────┬─────────┐
[1] │ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev  │ Min     │
[1] ├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼─────────┤
[1] │ Req/Sec   │ 7211    │ 7211    │ 9423    │ 9535    │ 9211.1  │ 642.99 │ 7208    │
[1] ├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼─────────┤
[1] │ Bytes/Sec │ 1.67 MB │ 1.67 MB │ 2.19 MB │ 2.21 MB │ 2.14 MB │ 149 kB │ 1.67 MB │
[1] └───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴────────┴─────────┘
[1] 
[1] Req/Bytes counts sampled once per second.
[1] # of samples: 11
[1] 
[1] 101k requests in 11.01s, 23.5 MB read

We notice that even though we achieve the same result, the stringify/parse combination is faster, allowing us to serve more requests per second.

Looking at RedisInsight, we can see the magic happen ✨

We’re finally able to store nested objects with a dynamic shape…

… almost!

Nested objects… but with some limitations

Let’s suppose for a second that you have an object which contains the results of a huge number of calculations.

It’s true that with this technique you can store objects with a dynamic shape, but in V8, the JavaScript engine used by Node.js and Chromium, there’s a limit of roughly 512 MB on the size of a single string.

Imagine having a 515 MB JSON file (in the code repository, it is automatically generated once you install the dependencies) and trying to load its entire content as a string: it will throw an error like this one:

js
% node
Welcome to Node.js v18.14.2.
Type ".help" for more information.
> const fs = require('fs')
undefined
> const largeFileContent = fs.readFileSync('./src/large-json.json', 'utf-8')
Uncaught Error: Cannot create a string longer than 0x1fffffe8 characters
    at Object.slice (node:buffer:605:37)
    at Buffer.toString (node:buffer:824:14)
    at Object.readFileSync (node:fs:512:41) {
  code: 'ERR_STRING_TOO_LONG'
}

This means that we can’t simply stringify every kind of object.
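If you’re curious, Node.js exposes this limit programmatically; a quick check (the value shown is for 64-bit Node.js 18):

js
import buffer from 'node:buffer'

// 0x1fffffe8 = 536,870,888 characters, i.e. roughly 512 MB of one-byte characters
console.log(buffer.constants.MAX_STRING_LENGTH)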

Nested objects: the solution

It seems to be an insurmountable obstacle…

Well, even if we can’t configure this limit yet, we can still represent data using a different format… like buffers! 1️⃣0️⃣1️⃣0️⃣0️⃣1️⃣

Do you know how to serialize objects to buffers without stringifying them first? Let me introduce you to MessagePack: an efficient binary serialization format.

Feel free to consult the entire specification, but I advise you to use the msgpackr library, which implements it with optional native acceleration to boost performance.
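At its simplest, msgpackr turns objects into buffers and back; a minimal sketch:

js
import { pack, unpack } from 'msgpackr'

// Serialize a plain object to a binary Buffer, then restore it
const buf = pack({ hello: 'world', nested: { object: true } })
console.log(unpack(buf)) // => { hello: 'world', nested: { object: true } }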

js
import { Packr } from 'msgpackr';
import { buildFastifyInstance } from './fastify.mjs';

const REDIS_KEY = 'step_4';

const packr = new Packr({
    // Let msgpackr record object shapes as shared structures (8160 is the maximum),
    // so repeated keys are encoded once instead of on every pack
    maxSharedStructures: 8160,
    structures: []
})

const pack = packr.pack.bind(packr)
const unpack = packr.unpack.bind(packr)

async function postHandler() {
    // getBuffer returns the raw bytes; a plain get() would decode them
    // as a UTF-8 string and corrupt the binary payload
    let cachedValue = await this.redis.getBuffer(REDIS_KEY);

    if (!cachedValue) {
        cachedValue = {
            step: 4,
            createdAt: Date.now(),
            nested: {
                object: true
            }
        }
        await this.redis.set(REDIS_KEY, pack(cachedValue))
    } else {
        cachedValue = unpack(cachedValue)
    }

    return cachedValue
}

await buildFastifyInstance(postHandler)

src/step_4.mjs

We can now run our last step through the step_4 script:

sh
% npm run step_4

> redis-next-level@1.0.0 step_4
> concurrently 'node src/step_4.mjs' 'npm run autocannon' --kill-others

[1] 
[1] > redis-next-level@1.0.0 autocannon
[1] > autocannon -m POST 'http://localhost:3000'
[1] 
[1] Running 10s test @ http://localhost:3000
[1] 10 connections
[1] 
[1] 
[1] ┌─────────┬──────┬──────┬───────┬──────┬─────────┬─────────┬───────┐
[1] │ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%  │ Avg     │ Stdev   │ Max   │
[1] ├─────────┼──────┼──────┼───────┼──────┼─────────┼─────────┼───────┤
[1] │ Latency │ 0 ms │ 1 ms │ 1 ms  │ 1 ms │ 0.74 ms │ 0.48 ms │ 19 ms │
[1] └─────────┴──────┴──────┴───────┴──────┴─────────┴─────────┴───────┘
[1] ┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬────────┬────────┐
[1] │ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev  │ Min    │
[1] ├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼────────┤
[1] │ Req/Sec   │ 7779    │ 7779    │ 9455    │ 9575    │ 9304.91 │ 489.66 │ 7779   │
[1] ├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼────────┤
[1] │ Bytes/Sec │ 1.81 MB │ 1.81 MB │ 2.19 MB │ 2.22 MB │ 2.16 MB │ 113 kB │ 1.8 MB │
[1] └───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴────────┴────────┘
[1] 
[1] Req/Bytes counts sampled once per second.
[1] # of samples: 11
[1] 
[1] 102k requests in 11.01s, 23.7 MB read

For small objects, we get the same performance as JSON.stringify, without its drawbacks and with less space used on Redis.

In our simple example, we are consuming ~78% less memory to store the same data!
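You can verify the size difference yourself by comparing the serialized payloads; a quick sketch (exact byte counts depend on the object’s content):

js
import { pack } from 'msgpackr'

const obj = { step: 4, createdAt: Date.now(), nested: { object: true } }

// MessagePack encodes keys and values in a compact binary form,
// so its payload is consistently smaller than the equivalent JSON string
console.log(Buffer.byteLength(JSON.stringify(obj))) // JSON size in bytes
console.log(pack(obj).length)                       // MessagePack size in bytes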

Benchmark time

Now that we know how to proceed, it’s time to compare the MessagePack and JSON formats more thoroughly on arrays of different sizes. Will we reach the same results with heavier objects?

I’m measuring 3 different things in these tests:

  • Number of items in the array. To make things harder, each object has different content, based on the following shape:
js
{
	index: i,
	createdAt: Date.now(),
	nested: { isEven: i % 2 === 0 }
}

The time needed to create each array is not considered (a sketch of a generation loop follows this list);

  • The amount of memory consumed in Redis for both JSON and MessagePack;
  • The number of requests/sec served using both JSON and MessagePack.
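For reference, here is a minimal sketch of how such test arrays could be generated (illustrative only; the repository’s actual code may differ):

js
// Build an array of `count` items, each with slightly different content
const buildItems = (count) =>
    Array.from({ length: count }, (_, i) => ({
        index: i,
        createdAt: Date.now(),
        nested: { isEven: i % 2 === 0 }
    }))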

All the tests have been executed on a 16-inch 2019 MacBook Pro, running Fastify and autocannon concurrently as we did in the previous steps. Its specs are:

  • CPU: 2.6 GHz 6-Core Intel Core i7
  • Memory: 16 GB 2667 MHz DDR4
  • OS: macOS Ventura 13.4.1

And here is the outcome.

Requests/sec

| # items in the array | JSON                 | MessagePack |
| -------------------- | -------------------- | ----------- |
| 1                    | 101k                 | 101k        |
| 10                   | 87k                  | 96k         |
| 100                  | 52k                  | 60k         |
| 1000                 | 8k                   | 16k         |
| 10000                | 850                  | 1k          |
| 100000               | 65                   | 138         |
| 1000000              | 20 (with 6 timeouts) | 25          |

As you can see, we reach the best scenario at 1000 items, where we double our throughput!

Memory consumption on Redis

| # items in the array | JSON   | MessagePack |
| -------------------- | ------ | ----------- |
| 1                    | 128 B  | 72 B        |
| 10                   | 816 B  | 208 B       |
| 100                  | 7 KB   | 2 KB        |
| 1000                 | 64 KB  | 16 KB       |
| 10000                | 768 KB | 160 KB      |
| 100000               | 7 MB   | 2 MB        |
| 1000000              | 80 MB  | 20 MB       |

In this case, the best ratio is reached with 10000 items, where we use roughly 80% less space (JSON takes 4.8 times as much memory).

In all the presented scenarios, MessagePack is better than JSON. Things could change a bit on the requests/sec metric if we start using libraries like fast-json-stringify, but the memory footprint will stay the same.
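For completeness, here is a minimal sketch of what using fast-json-stringify looks like: it compiles a stringifier from a JSON Schema ahead of time, which is why it can beat JSON.stringify on known shapes (the schema below is illustrative):

js
import fastJson from 'fast-json-stringify'

// Compile a stringifier for a fixed object shape
const stringify = fastJson({
    type: 'object',
    properties: {
        step: { type: 'integer' },
        createdAt: { type: 'integer' },
        nested: {
            type: 'object',
            properties: {
                object: { type: 'boolean' }
            }
        }
    }
})

stringify({ step: 3, createdAt: Date.now(), nested: { object: true } })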

Summary

By leveraging the power of MessagePack and Redis, developers can significantly reduce memory consumption in their applications and speed up their services. MessagePack offers a compact binary format that is efficient not only in storage footprint but also in serialization speed.

With its wide language support and easy integration, MessagePack is a powerful tool for optimizing memory usage in various applications: you can also use it to store data inside IndexedDB, for example.

Will you give it a chance?
