the quick notes

fast, quirky, and occasionally buggy: where tech meets sticky notes!

hybrid cache: in-memory L1 + redis L2

tech · c# · dotnet · caching · redis

hybrid cache = in-memory (L1) + Redis (L2). reads hit L1 first (fast), then L2, then the data source. Redis keeps the cache shared across multiple server instances.


how it works

client → L1 (memory) → L2 (redis) → database
         ↑ cache hit   ↑ cache hit  ↑ cache miss
  • L1 (in-memory): fastest. local to each server process.
  • L2 (redis): shared across all server instances. survives restarts.
  • database: only called on a full cache miss.
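in code, that whole chain is a single call: HybridCache.GetOrCreateAsync checks L1, then L2, and only runs the supplied factory on a full miss, writing the result back to both layers. a minimal sketch — Product, ProductService, and LoadFromDbAsync are stand-ins, not names from this project:

```csharp
using Microsoft.Extensions.Caching.Hybrid;

public class ProductService(HybridCache cache)
{
    public async Task<Product?> GetProductAsync(int id, CancellationToken ct = default)
    {
        // walks L1 → L2 → factory; the factory only runs on a full miss,
        // and its result is written back to both cache layers
        return await cache.GetOrCreateAsync(
            $"product:{id}",
            async token => await LoadFromDbAsync(id, token),
            cancellationToken: ct);
    }

    // stand-in for the real data source
    private static Task<Product?> LoadFromDbAsync(int id, CancellationToken ct)
        => Task.FromResult<Product?>(null);

    public record Product(int Id, string Name);
}
```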

setup

register hybrid cache with Redis as the distributed backend:

builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        Expiration = TimeSpan.FromMinutes(5),          // L2 (distributed) lifetime
        LocalCacheExpiration = TimeSpan.FromMinutes(5) // L1 (in-memory) lifetime
    };
});

// registering an IDistributedCache makes redis the L2 backend for HybridCache
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379";
    options.InstanceName = "hybrid-cache-demo";
});

cache entries expire from both L1 and L2 after 5 minutes. HybridCache automatically picks up the registered IDistributedCache (here, redis) as its L2 — no extra wiring needed.
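both registrations come from NuGet; assuming the standard package names, that's:

```shell
dotnet add package Microsoft.Extensions.Caching.Hybrid
dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis
```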

sample endpoint

app.MapGet("/v1/categories", async (ILookupService service) =>
{
    var result = await service.GetCategoriesAsync();
    return Results.Ok(result);
});
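a plausible shape for the ILookupService behind that endpoint — a sketch, not the project's actual code; AppDbContext and Category are assumed names:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Hybrid;

public class LookupService(HybridCache cache, AppDbContext db) : ILookupService
{
    public async Task<List<Category>> GetCategoriesAsync(CancellationToken ct = default)
    {
        // one key for the whole list; entries use the 5-minute
        // defaults registered in AddHybridCache
        return await cache.GetOrCreateAsync(
            "categories",
            async token => await db.Categories.AsNoTracking().ToListAsync(token),
            cancellationToken: ct);
    }
}
```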

load test results

using bombardier — 10,000 requests, 125 concurrent connections:

Reqs/sec    6937 avg  (peak: 82,646)
Latency     p50: 2.62ms  p95: 4.24ms  p99: 1.36s
HTTP 2xx    10,000 / 10,000
Throughput  2.34 MB/s

the p99 spike (1.36s) is expected — those are cache misses hitting the data source cold.
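the numbers above came from a bombardier run along these lines (port and path assumed):

```shell
# -n: total requests, -c: concurrent connections, -l: print latency distribution
bombardier -n 10000 -c 125 -l http://localhost:5000/v1/categories
```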

trade-offs

  • ✅ Fast reads: L1 hit returns in microseconds
  • ✅ Scalable: Redis shares cache across instances
  • ✅ Simple API: GetOrCreateAsync handles all layers
  • ⚠️ Redis required: Adds infra dependency
  • ⚠️ Brief inconsistency: Each server has its own L1; may serve stale data until L1 expires
  • ⚠️ Cache invalidation: Not trivial — design expiry strategy upfront
