
Redis Caching in Practice: Speed Up Your App Without the Headaches

A practical walkthrough of Redis caching strategies, data structures, cache invalidation, and integration with Node.js, Next.js, and serverless platforms like Upstash.

Anurag Sharma
15 min read

Why Your App Feels Slow (And Why Caching Fixes It)

Here is a number that should bother every backend developer: a typical database query to PostgreSQL or MySQL takes somewhere between 1 and 50 milliseconds depending on complexity. A Redis lookup? About 0.1 to 0.5 milliseconds. That is often a 10x to 100x difference, and when your API chains together multiple queries per request, those milliseconds stack up brutally.

I have seen applications go from 800ms response times to under 100ms just by adding a Redis caching layer in front of the database. No query optimization wizardry. No infrastructure overhaul. Just a smart caching layer that intercepts repeated reads before they ever hit the database.

But here is the thing most tutorials skip over: caching is not just "put data in Redis, read data from Redis." The real challenge is knowing when to cache, what to cache, and most importantly, when to throw the cache away. Cache invalidation, as Phil Karlton famously said, is one of the two hardest problems in computer science (the other being naming things).

So this is going to be the practical, opinionated guide I wish I had when I started using Redis seriously. We will cover data structures, strategies, integration patterns, and the gotchas that bite you in production.

Understanding Redis Data Structures

Redis is not just a key-value store. That is a common misconception. It is a data structure server, and picking the right structure for your use case makes all the difference.

Strings

The simplest structure. Store a value against a key. This is what most people think of when they hear "Redis."

SET user:1001:name "Anurag Sharma"
GET user:1001:name
# "Anurag Sharma"

# With expiry
SET session:abc123 "{\"userId\":1001}" EX 3600

Use strings for session data, simple cached responses, counters (with INCR and DECR), and rate limiting tokens.
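To make the rate-limiting use case concrete, here is a sketch of a fixed-window limiter built on the INCR + EXPIRE pattern. A tiny in-memory stub stands in for Redis so the logic runs standalone; in a real app you would swap in a client like ioredis, whose `incr` and `expire` calls have the same shape.

```javascript
// In-memory stand-in for Redis so this sketch is self-contained.
const store = new Map();

const fakeRedis = {
  async incr(key) {
    const next = (store.get(key) ?? 0) + 1;
    store.set(key, next);
    return next;
  },
  async expire(key, seconds) {
    // Simulate key expiry; unref so the timer never holds the process open.
    setTimeout(() => store.delete(key), seconds * 1000).unref?.();
  },
};

// Fixed-window rate limiter: one counter per user per time window.
async function allowRequest(userId, limit = 5, windowSeconds = 60) {
  const bucket = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `ratelimit:${userId}:${bucket}`;
  const count = await fakeRedis.incr(key);
  if (count === 1) await fakeRedis.expire(key, windowSeconds); // first hit opens the window
  return count <= limit;
}
```

The first INCR on a fresh key returns 1, which is the signal to attach the TTL; every request after the limit within the same window is rejected.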

Hashes

Think of hashes as objects. Instead of serializing an entire user object into a JSON string, you can store individual fields.

HSET user:1001 name "Anurag" email "[email protected]" role "admin"
HGET user:1001 name
# "Anurag"
HGETALL user:1001
# name, Anurag, email, [email protected], role, admin

The advantage? You can update a single field without fetching and re-serializing the entire object. If your user object has 20 fields but you only need to update the lastLogin timestamp, hashes save you bandwidth and processing time.
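The single-field update looks like this in application code. Again an in-memory stub stands in for Redis (a real client such as ioredis exposes the same `hset`/`hget` shape), so the sketch runs on its own.

```javascript
// In-memory stand-in for Redis hashes.
const hashes = new Map();

const fakeRedis = {
  async hset(key, field, value) {
    if (!hashes.has(key)) hashes.set(key, new Map());
    hashes.get(key).set(field, String(value)); // Redis stores hash values as strings
  },
  async hget(key, field) {
    return hashes.get(key)?.get(field) ?? null;
  },
};

// Touch one field without fetching and re-serializing the other 19.
async function touchLastLogin(userId, timestamp) {
  await fakeRedis.hset(`user:${userId}`, 'lastLogin', timestamp);
}
```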

Lists

Ordered collections. Perfect for message queues, activity feeds, and recent items.

LPUSH notifications:1001 "New comment on your post"
LPUSH notifications:1001 "Someone liked your photo"
LRANGE notifications:1001 0 9
# Returns the 10 most recent notifications
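One detail worth adding: without a cap, a notifications list grows forever. The usual fix is to follow each LPUSH with an LTRIM that keeps only the most recent N items. Here is a runnable sketch of that logic with an in-memory array standing in for the Redis list.

```javascript
// In-memory stand-in for Redis lists.
const lists = new Map();

// LPUSH + LTRIM: prepend the new item, then cap the list length.
async function pushNotification(userId, message, max = 100) {
  const key = `notifications:${userId}`;
  const list = lists.get(key) ?? [];
  list.unshift(message);                      // LPUSH: newest item goes to the front
  list.length = Math.min(list.length, max);   // LTRIM key 0 max-1
  lists.set(key, list);
}

// LRANGE key 0 count-1: read the most recent items.
async function recentNotifications(userId, count = 10) {
  return (lists.get(`notifications:${userId}`) ?? []).slice(0, count);
}
```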

Sets

Unordered collections of unique elements. Great for tracking unique visitors, tags, or mutual friends.

SADD online_users "user:1001" "user:1002" "user:1003"
SISMEMBER online_users "user:1001"
# 1 (true)
SCARD online_users
# 3

Sorted Sets

Like sets, but every element has a score. This is where Redis truly shines for leaderboards, priority queues, and time-series data.

ZADD leaderboard 1500 "player:anurag" 2200 "player:priya" 1800 "player:rajesh"
ZREVRANGE leaderboard 0 2 WITHSCORES
# player:priya, 2200, player:rajesh, 1800, player:anurag, 1500
Data Structure | Best For                         | Time Complexity (Common Ops)
Strings        | Sessions, counters, simple cache | O(1)
Hashes         | User profiles, object fields     | O(1) per field
Lists          | Queues, activity feeds           | O(1) push/pop, O(N) range
Sets           | Unique tracking, tags            | O(1) add/check
Sorted Sets    | Leaderboards, rankings           | O(log N) add, O(log N + M) range

Caching Strategies That Actually Work

Not all caching patterns are created equal. The strategy you pick depends on your read-to-write ratio, consistency requirements, and how angry your users get when they see stale data.

Cache-Aside (Lazy Loading)

This is the most common pattern and the one you should default to unless you have a specific reason not to.

The flow is straightforward:

  1. Application receives a request
  2. Check Redis for the data
  3. If found (cache hit), return it
  4. If not found (cache miss), query the database
  5. Store the result in Redis with a TTL
  6. Return the data

async function getUser(userId) {
  const cacheKey = `user:${userId}`;

  // Step 1: Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Step 2: Query database
  const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);

  // Step 3: Populate cache with 1-hour TTL
  await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600);

  return user;
}

Pros: Only caches data that is actually requested. Simple to implement. Cache misses are handled gracefully.

Cons: First request is always slow (cold cache). Potential for stale data if the database is updated directly.

Write-Through

Every write goes to both the cache and the database simultaneously. The cache is always up to date.

async function updateUser(userId, data) {
  const cacheKey = `user:${userId}`;

  // Write to database
  await db.query('UPDATE users SET name = $1 WHERE id = $2', [data.name, userId]);

  // Write to cache
  await redis.set(cacheKey, JSON.stringify(data), 'EX', 3600);
}

Pros: Cache is never stale (assuming all writes go through your application). Read performance is consistently fast.

Cons: Write latency increases because you are writing to two places. You might cache data that is rarely read.

Write-Behind (Write-Back)

The application writes to Redis first, and a background process asynchronously syncs to the database. This gives you the fastest write performance but adds complexity.

I would not recommend this for most applications. If Redis crashes before the data is persisted to the database, you lose writes. That said, for high-throughput scenarios like analytics counters or view counts, it makes perfect sense. You do not care if you lose a few page view counts during a server restart.
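For the view-count case, the write-behind idea can be sketched in a few lines: increments accumulate in memory (standing in for Redis INCR) and a periodic flush pushes batched deltas to the database. The `db.incrementViews` call is a hypothetical stub, not a real API.

```javascript
// Un-flushed view counts, pageId -> delta. In production this buffer
// would live in Redis (INCR), not in process memory.
const pending = new Map();

function recordView(pageId) {
  // Fast path: no database write on the request path.
  pending.set(pageId, (pending.get(pageId) ?? 0) + 1);
}

// Run this on a timer (e.g. every few seconds) from a background worker.
async function flushViews(db) {
  for (const [pageId, delta] of pending) {
    // One batched UPDATE per page instead of one write per view.
    await db.incrementViews(pageId, delta);
  }
  pending.clear();
}
```

If the process dies between flushes, the un-flushed deltas are lost, which is exactly the trade-off described above.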

TTL and Eviction: When Data Should Die

Every cached value should have a Time To Live (TTL). If you are not setting TTLs, you are building a memory leak, plain and simple.

SET product:5001 "{...}" EX 1800  # Expires in 30 minutes
TTL product:5001                   # Check remaining time

Here is how I think about TTL values:

  • User sessions: 24 hours (or match your session cookie expiry)
  • Product listings: 5 to 15 minutes (prices change, stock changes)
  • Blog posts or static content: 1 to 6 hours
  • Configuration/feature flags: 1 to 5 minutes (short TTL so changes propagate fast)
  • API rate limits: Window duration (60 seconds for per-minute limits)

Eviction Policies

When Redis runs out of memory, it needs to decide what to throw away. You configure this with maxmemory-policy.

Policy        | Behavior                           | Best For
noeviction    | Returns error on writes            | When data loss is unacceptable
allkeys-lru   | Evicts least recently used keys    | General-purpose caching
allkeys-lfu   | Evicts least frequently used keys  | When hot data matters most
volatile-lru  | LRU among keys with TTL set        | Mixed persistent + cached data
volatile-ttl  | Evicts keys closest to expiration  | When TTL reflects importance

For most caching use cases, allkeys-lru is the right default. It keeps frequently accessed data warm and dumps old stuff automatically.
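If you are wondering where this gets configured, the relevant redis.conf directives look like the sketch below. The 256mb cap is an arbitrary example; set a limit comfortably below your instance's available RAM.

```conf
# redis.conf — cap memory and evict least recently used keys first
maxmemory 256mb
maxmemory-policy allkeys-lru
```

Both settings can also be changed at runtime with CONFIG SET (for example, CONFIG SET maxmemory-policy allkeys-lru), which is handy for experimenting before committing the change to the config file.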

Cache Invalidation: The Actually Hard Part

You have probably heard the joke. But it stops being funny when your users see stale prices, outdated inventory counts, or yesterday's profile picture.

Pattern 1: TTL-Based Expiry

The simplest approach. Set a TTL and accept that data might be stale for up to that duration. For many use cases, this is perfectly fine. Does it really matter if a blog post is cached for 10 minutes after an edit? Probably not.

Pattern 2: Explicit Invalidation

When a write happens, explicitly delete or update the cached value.

async function updateProduct(productId, data) {
  await db.query('UPDATE products SET price = $1 WHERE id = $2', [data.price, productId]);

  // Delete the cache entry — next read will fetch fresh data
  await redis.del(`product:${productId}`);

  // Also invalidate any list caches that might contain this product
  await redis.del('products:featured');
  await redis.del(`products:category:${data.categoryId}`);
}

The tricky part here is knowing all the cache keys that might contain the stale data. A product might be cached individually, as part of a category listing, in search results, and in a "featured products" list. Miss one, and you have inconsistency.
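One hedged way to avoid missing a key is to record the dependency explicitly: every time a product is written into some cache entry, add that entry's key to a per-product "tag" set, and on invalidation delete everything in the set. In Redis this maps to SADD / SMEMBERS / DEL; the sketch below uses in-memory maps so it runs standalone.

```javascript
// In-memory stand-ins for the cache and the tag sets.
const cache = new Map();
const tags = new Map(); // tag -> Set of cache keys that contain the tagged entity

function cacheWithTag(key, value, tag) {
  cache.set(key, value);
  if (!tags.has(tag)) tags.set(tag, new Set());
  tags.get(tag).add(key); // SADD tag key
}

function invalidateTag(tag) {
  // DEL every cache key that was registered under this tag.
  for (const key of tags.get(tag) ?? []) cache.delete(key);
  tags.delete(tag);
}
```

Now the individual product entry, the category listing, and the featured list all register under the same tag, and one `invalidateTag` call clears them together.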

Pattern 3: Cache Tags / Namespacing

Use a version number or timestamp in your cache keys. When you want to invalidate, just bump the version.

async function getProductCacheVersion() {
  return await redis.get('products:version') || '1';
}

async function getProduct(productId) {
  const version = await getProductCacheVersion();
  const cacheKey = `product:v${version}:${productId}`;

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  const product = await db.query('SELECT * FROM products WHERE id = $1', [productId]);
  await redis.set(cacheKey, JSON.stringify(product), 'EX', 3600);
  return product;
}

async function invalidateAllProducts() {
  await redis.incr('products:version');
  // Old keys will expire naturally via TTL
}

This is elegant but wasteful — old cached data sits around until its TTL expires. Fine if memory is not tight.

Redis with Node.js and Express

Here is a production-ready setup using ioredis, which is the best Redis client for Node.js. The built-in redis package has improved, but ioredis still offers better cluster support and Lua scripting.

import Redis from 'ioredis';
import express from 'express';

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: 6379,
  password: process.env.REDIS_PASSWORD,
  retryStrategy(times) {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  maxRetriesPerRequest: 3,
});

redis.on('error', (err) => console.error('Redis connection error:', err));
redis.on('connect', () => console.log('Connected to Redis'));

const app = express();

// Caching middleware
function cacheMiddleware(ttl = 300) {
  return async (req, res, next) => {
    const key = `cache:${req.originalUrl}`;

    try {
      const cached = await redis.get(key);
      if (cached) {
        return res.json(JSON.parse(cached));
      }
    } catch (err) {
      console.error('Cache read error:', err);
      // Fall through to handler — cache failures should not break your app
    }

    // Override res.json to intercept the response
    const originalJson = res.json.bind(res);
    res.json = (data) => {
      redis.set(key, JSON.stringify(data), 'EX', ttl).catch(console.error);
      return originalJson(data);
    };

    next();
  };
}

app.get('/api/products', cacheMiddleware(600), async (req, res) => {
  const products = await db.query('SELECT * FROM products WHERE active = true');
  res.json(products);
});

One thing I want to highlight: never let cache failures crash your application. Redis going down should mean your app gets slower, not that it stops working entirely. Always wrap Redis calls in try-catch blocks and fall through to the database on errors.
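That fallback discipline is worth encoding once as a helper rather than scattering try-catch blocks everywhere. A minimal sketch, assuming any promise-based Redis client:

```javascript
// Wrap cache reads so a Redis outage degrades to a cache miss
// instead of a thrown error. `client` is any promise-based Redis client.
async function safeGet(client, key) {
  try {
    return await client.get(key);
  } catch (err) {
    console.error('Cache read failed, treating as miss:', err.message);
    return null; // caller falls through to the database
  }
}
```

Callers treat `null` exactly like a cache miss, so a Redis outage just means every request takes the database path.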

Redis with Next.js

Next.js has its own caching mechanisms (ISR, fetch cache, Data Cache), but Redis gives you more control, especially for API routes and server actions.

// lib/redis.js
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

export async function getCached(key, fetcher, ttl = 300) {
  try {
    const cached = await redis.get(key);
    if (cached) return JSON.parse(cached);
  } catch {
    // Silently fail — database is the source of truth
  }

  const data = await fetcher();

  try {
    await redis.set(key, JSON.stringify(data), 'EX', ttl);
  } catch {
    // Log but do not throw
  }

  return data;
}

// app/api/products/route.js
import { getCached } from '@/lib/redis';
import { db } from '@/lib/db';
// Assumes a Drizzle schema module exporting the products table definition
import { products } from '@/lib/schema';
import { eq } from 'drizzle-orm';

export async function GET() {
  // Named `data` to avoid shadowing the imported `products` table
  const data = await getCached(
    'products:all',
    () => db.select().from(products).where(eq(products.active, true)),
    600
  );

  return Response.json(data);
}

Pub/Sub for Real-Time Features

Redis is not just for caching. Its publish/subscribe system is surprisingly useful for real-time notifications, chat applications, and event broadcasting.

// Publisher (when something happens)
async function publishNotification(userId, message) {
  await redis.publish(`notifications:${userId}`, JSON.stringify({
    message,
    timestamp: Date.now(),
  }));
}

// Subscriber (listening for events)
const subscriber = new Redis(process.env.REDIS_URL);

subscriber.subscribe('notifications:1001', (err, count) => {
  if (err) console.error('Subscribe error:', err);
  console.log(`Subscribed to ${count} channels`);
});

subscriber.on('message', (channel, message) => {
  const data = JSON.parse(message);
  // Send to WebSocket client, trigger UI update, etc.
  console.log(`Received on ${channel}:`, data);
});

Pub/Sub is fire-and-forget — if no subscriber is listening when a message is published, it is lost. For durable messaging, look at Redis Streams instead.

Redis Stack: JSON and Search

Plain Redis requires you to serialize everything to strings. Redis Stack adds native JSON support and full-text search, which eliminates a lot of boilerplate.

# Store a JSON document
JSON.SET product:5001 $ '{"name":"MacBook Air M4","price":114900,"category":"laptops","tags":["apple","ultrabook"]}'

# Query nested fields
JSON.GET product:5001 $.name
# ["MacBook Air M4"]  (JSONPath queries return an array of matches)

# Update a single field
JSON.SET product:5001 $.price 109900

The search module lets you create indexes and run queries without a separate search engine like Elasticsearch:

# Create an index
FT.CREATE idx:products ON JSON PREFIX 1 product: SCHEMA
  $.name AS name TEXT
  $.price AS price NUMERIC
  $.category AS category TAG

# Search
FT.SEARCH idx:products "@category:{laptops} @price:[50000 120000]"

For small to medium datasets (under a few hundred thousand documents), Redis Search is genuinely competitive with Elasticsearch and far simpler to operate.

Upstash: Redis for Serverless

If you are deploying on Vercel, Cloudflare Workers, or any serverless platform, traditional Redis does not work well. Long-lived TCP connections and serverless cold starts are a bad combination.

Upstash solves this with an HTTP-based Redis client. Every command is an HTTP request — no persistent connections needed.

import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
});

// Works exactly like regular Redis, but over HTTP
await redis.set('greeting', 'hello', { ex: 3600 });
const value = await redis.get('greeting');

Upstash offers a generous free tier — 10,000 commands per day with 256 MB storage. For side projects and small apps, you might never need to pay. Their pricing beyond the free tier is pay-per-request, which aligns perfectly with serverless economics.

I have been using Upstash for my Next.js projects on Vercel for about a year now, and it has been rock solid. Latency from Indian servers is around 50-80ms (their closest region is Singapore), which is acceptable for most caching scenarios.

Monitoring with RedisInsight

You cannot optimize what you cannot measure. RedisInsight is Redis's official GUI tool, and it is free. It gives you:

  • Real-time memory analysis — which keys are consuming the most memory
  • Slow log viewer — commands taking longer than expected
  • Key browser — visually inspect your data
  • CLI integration — run commands with autocompletion
  • Profiling — watch commands in real-time

Install it from the Redis website or run it as a Docker container:

docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest

Some metrics to keep an eye on:

  • Hit rate — ideally above 90%. Below 80% means your caching strategy needs work.
  • Memory usage — set alerts before you hit maxmemory.
  • Connected clients — sudden spikes might indicate connection leaks.
  • Eviction count — if keys are being evicted frequently, you need more memory or shorter TTLs.
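The hit rate is not reported directly; you compute it from the keyspace_hits and keyspace_misses counters in the "# Stats" section of the INFO command output. A small helper, assuming that text format:

```javascript
// Parse keyspace_hits / keyspace_misses out of `INFO stats` output
// and return the hit rate, or null if there has been no traffic yet.
function hitRate(infoText) {
  const read = (name) =>
    Number((infoText.match(new RegExp(`${name}:(\\d+)`)) ?? [])[1] ?? 0);
  const hits = read('keyspace_hits');
  const misses = read('keyspace_misses');
  return hits + misses === 0 ? null : hits / (hits + misses);
}
```

Feed it the raw INFO text (most clients expose it via an `info()` call) and alert when the ratio drifts below your threshold.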

Common Mistakes and How to Avoid Them

After running Redis in production across several projects, here are the patterns that cause the most grief:

1. Caching everything. Not all data benefits from caching. If a query is fast and data changes frequently, the overhead of cache management might outweigh the performance gain. Profile first, cache second.

2. No TTL on keys. I mentioned this earlier, but it bears repeating. Every cached key must have an expiration. No exceptions.

3. Giant serialized objects. If you are caching a 500 KB JSON blob, you are probably doing something wrong. Cache smaller, more targeted pieces of data. Your network and memory will thank you.

4. Not handling Redis failures gracefully. Redis is a cache, not your database. If it goes down, your app should slow down, not crash. Always have a fallback path to the source of truth.

5. Using the KEYS command in production. KEYS scans every key in the database and blocks the server while doing so. Use SCAN instead for production key iteration.

// BAD — blocks Redis
const allKeys = await redis.keys('user:*');

// GOOD — iterates without blocking
let cursor = '0';
const keys = [];
do {
  const [newCursor, batch] = await redis.scan(cursor, 'MATCH', 'user:*', 'COUNT', 100);
  cursor = newCursor;
  keys.push(...batch);
} while (cursor !== '0');

6. Ignoring connection pooling. A single Redis connection can handle a lot of throughput thanks to pipelining, but under high concurrency, you should use connection pooling. ioredis handles this well with its Cluster and Sentinel modes.

When Not to Use Redis

Redis is excellent, but it is not the answer to everything. Skip Redis if:

  • Your dataset fits entirely in application memory and you only have one server (use an in-process cache like node-cache or lru-cache)
  • You need complex queries across cached data (use a database with proper indexing)
  • You need strong consistency guarantees (Redis replication is asynchronous by default)
  • Your data is write-heavy with few reads (caching helps read performance, not write performance)

Wrapping Up

Redis caching boils down to a few principles: cache what is read often and changes infrequently, always set TTLs, handle failures gracefully, and measure your hit rates. The specific strategy — cache-aside, write-through, or a hybrid — depends on your application's consistency and latency requirements.

Start with the cache-aside pattern. Add it to your slowest endpoints first. Measure the improvement. Then gradually expand. Do not try to cache everything on day one — that path leads to stale data bugs and debugging nightmares at 2 AM.

If you are building on serverless, Upstash makes the infrastructure side trivial. For traditional servers, a managed Redis instance from your cloud provider (ElastiCache on AWS, Memorystore on GCP) saves you from operational headaches. Either way, Redis is one of those tools that delivers disproportionate value for the effort required to set it up. A few hours of work can transform your application's performance profile entirely.
