Caching API responses with Redis and Node


1. The problem

Often your backend server needs to communicate with downstream services, third-party APIs, and databases. These calls cost both time and money. For example, consider an API of yours that fetches popular movies from a movie API that charges you per call, or suppose you need the date a user joined your app so you can display it on their profile. The list of popular movies is unlikely to change within a few hours, while the user's join date never changes at all.

Yet to serve the popular-movie data, you call the third-party API every time, only to receive the same information you got a minute ago. This adds noticeable latency and cost to your application. Similarly, for a frequently accessed record like the join date, the database is queried over and over for data that never changes, which inevitably shows up as a heavy bill.

There is a simple solution to this problem – Caching.

2. Solution

Caching

Caching is the process of storing data in a high-speed storage layer (the cache). The cache usually lives on fast-access hardware, so reading from it is more efficient than fetching the data from the application's primary data store. A very basic example of caching is memoization. Note that memoization is a specific form of caching in which a function's return value is stored keyed by its arguments.

Calculate the nth number in the Fibonacci sequence:
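A naive recursive implementation in JavaScript (a minimal sketch of the idea) might look like this:

    // Naive recursive Fibonacci: the same sub-results are recomputed over and over.
    function fibonacci(n) {
      if (n <= 1) return n;
      return fibonacci(n - 1) + fibonacci(n - 2);
    }

    console.log(fibonacci(4)); // 3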

Basically, the snippet above recursively calls the same function for (n - 1) and (n - 2) and adds the results together. Taking n = 4 as an example, this is what the call stack would look like:

Call stack for n = 4 without memoization

As you can see, we calculate fibonacci(2) twice, and for larger inputs this kind of repeated work becomes genuinely expensive. Instead, we can store the value of fibonacci(2) somewhere the first time we calculate it and reuse the stored value the second time to speed up the process.
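One way to memoize the function is to keep already computed values in a plain object and check it before recursing; the sketch below is one possible shape, not the only one:

    // Memoized Fibonacci: each value is computed once and then read from the cache.
    function fibonacciMemo(n, cache = {}) {
      if (n <= 1) return n;
      if (cache[n] !== undefined) return cache[n];
      cache[n] = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
      return cache[n];
    }

    console.log(fibonacciMemo(4)); // 3, and fibonacci(2) is computed only once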

Here is the updated call stack with memoization:

Call stack for n = 4 with memoization

As you can see, we were able to reduce the computation time with memoization, which is just one form of caching. Now let's use the same technique to cache responses from a third-party API using Redis.

Redis

Redis is an open-source, in-memory data store used as a database, cache, and message broker. You can find instructions for installing it on your local machine here.

3. Demo

Let's set up a simple Node project to test this. In your project directory, run npm init to start the project. Respond to the prompts appropriately, and then create a new file called index.js in the project directory.

Install all the dependencies we will use for this demo:
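Assuming the demo uses express for the web server, axios for the outgoing HTTP request, and the redis client package (the exact dependency list is an assumption), the install command would be:

    npm install express axios redis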

We have a simple endpoint that provides details about the latest launches of SpaceX.
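A minimal index.js for such an endpoint could look like the sketch below; the SpaceX API URL is an assumption, and any slow third-party endpoint would behave the same way:

    // index.js – a plain endpoint with no caching yet
    const express = require('express');
    const axios = require('axios');

    const app = express();

    // Assumed third-party endpoint; swap in whichever API you are calling.
    const SPACEX_API = 'https://api.spacexdata.com/v4/launches/latest';

    app.get('/spacex/launches', async (req, res) => {
      try {
        const { data } = await axios.get(SPACEX_API);
        res.json(data);
      } catch (err) {
        res.status(500).json({ error: err.message });
      }
    });

    app.listen(4000, () => console.log('Server listening on port 4000'));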

After you run the server (with node index.js, for example), it will boot at localhost:4000. I am using Postman to test my API, resulting in:

API response time 489 ms

Note the time in the red box on the screengrab above. That is 489 ms. Now let's add caching with Redis. Make sure you have Redis running on your local machine. In a new terminal window, run:
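On a typical local install this is simply the Redis server binary:

    redis-server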

It will look like this:

Redis server screengrab

Now, let's add middleware that checks whether the requested key exists in the cache; if it does, we respond from the cache, and if not, we fetch the data from the third-party API and update the cache.
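Here is a sketch of the cached version. It assumes the node-redis v4 client, where commands return promises; older versions of the client use callbacks instead, but the flow is the same:

    // index.js – the same endpoint, now with a Redis-backed cache in front of it
    const express = require('express');
    const axios = require('axios');
    const { createClient } = require('redis');

    const app = express();
    const redisClient = createClient(); // connects to localhost:6379 by default
    redisClient.on('error', (err) => console.error('Redis error:', err));

    const SPACEX_API = 'https://api.spacexdata.com/v4/launches/latest';

    // Middleware: answer straight from the cache when the key exists, otherwise continue.
    async function cache(req, res, next) {
      try {
        const cached = await redisClient.get('launches');
        if (cached !== null) {
          return res.json(JSON.parse(cached));
        }
        next();
      } catch (err) {
        next(err);
      }
    }

    app.get('/spacex/launches', cache, async (req, res) => {
      try {
        const { data } = await axios.get(SPACEX_API);
        // Store the fresh response for subsequent requests (no expiry yet).
        await redisClient.set('launches', JSON.stringify(data));
        res.json(data);
      } catch (err) {
        res.status(500).json({ error: err.message });
      }
    });

    redisClient.connect().then(() => {
      app.listen(4000, () => console.log('Server listening on port 4000'));
    });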

The first time you hit GET localhost:4000/spacex/launches, it will still take about as long as it did before adding Redis, because the cache does not yet contain that key and is only now being populated. When you run it a second time, you should see the difference.

API response time 23 ms

A very obvious pitfall in this implementation is that once we add a value to the cache, we never fetch the updated value from the third-party API again. This is probably not the intended behavior. One way to combat this is to use setex, which takes an expiry argument: it essentially runs two operations, SET and EXPIRE. Once the key expires, we retrieve the data from the third-party API again and update the cache.
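With the node-redis v4 client from the sketch above, the SETEX command is exposed as setEx (older clients spell it setex). Swapping it in for the plain set call might look like this:

    // Inside the route handler, replacing the redisClient.set call:
    // cache the launches for one hour; after that the key expires and the
    // next request falls through to the third-party API again.
    const ONE_HOUR_IN_SECONDS = 60 * 60;
    await redisClient.setEx('launches', ONE_HOUR_IN_SECONDS, JSON.stringify(data));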

4. Conclusion

Caching is a powerful tool when used properly. Depending on the type of data and how important it is to always have the very latest value, a cache can be added to improve performance and reliability and to reduce costs.



Source: Viblo