2026-03-09

Real-time cache invalidation with SSE, tRPC & Redis

You have a multi-user app. User A completes a task. User B is staring at a stale list. The classic problem.

Polling works until it doesn’t. You’re trading latency for bandwidth, and it always feels like a compromise. WebSockets are powerful but often overkill when all you need is server → client push.

This article walks through a pattern I’ve used in production: SSE + tRPC subscriptions + Redis Pub/Sub + TanStack Query invalidation for real-time cache invalidation. No WebSocket server, no polling, fully type-safe end-to-end.

WebSocket vs SSE

Before diving into the implementation, let’s clarify why SSE is often the better choice.

[Interactive demo: WebSocket opens a bidirectional channel between client and server; SSE is unidirectional, server → client.]

WebSocket opens a persistent, bidirectional channel. Both client and server can send messages at any time. Great for chat apps, collaborative editing, or gaming.

SSE (Server-Sent Events) is a one-way stream from server to client over plain HTTP. The client opens a connection, and the server pushes events. That’s it.

For cache invalidation, we only need to tell the client “hey, this data changed, refetch it.” That’s a server → client message. SSE is perfect.

The real power shows when one user’s action needs to notify every connected client. A single mutation fans out through the server to all SSE streams:

[Diagram: SSE one-to-many broadcast — User A's mutation hits the server, which fans out over Redis and SSE streams to Users A, B, and C. One mutation triggers an update for every connected client.]

The architecture

Here’s how every piece fits together:

Event flow, from mutation to refetch:

Mutation (completeTodo())
→ API handler (tRPC procedure)
→ Redis Pub/Sub (publish(channel, event))
→ SSE stream (EventSource)
→ client handler (onInvalidate(event))
→ queryClient.invalidate
→ automatic refetch

The key insight: we don’t push the data itself through SSE. We push invalidation events. The client then uses TanStack Query’s built-in refetching to get fresh data through the normal API. This keeps the SSE payload tiny and leverages all the caching, deduplication, and retry logic TanStack Query already provides.

Tracing the flow

Before diving into each piece, let’s walk through a concrete example. A user completes a todo:

mutation completeTodo(id: 42)
→ db.update(todo, { status: "completed" })
→ publish("todo:42", { type: "todo.updated", todoId: 42 })

What happens next:

1. Redis delivers the event to every server instance subscribed to todo:42.
2. Each instance forwards it down its open SSE streams.
3. The client validates the event and invalidates the matching TanStack Query entries.
4. TanStack Query refetches the affected queries through the normal API.

Now let’s build each piece.

Redis Pub/Sub as the event bus

In production you’ll have multiple server instances behind a load balancer. User A’s mutation hits instance 1, but User B’s SSE connection lives on instance 2. In-memory events won’t cross that boundary, so Redis Pub/Sub bridges the gap.

Channel naming

Namespace channels by environment to avoid cross-contamination. A channel looks like prod_invalidate:todo:42 or dev_invalidate:user:7; the prefix isolates environments, and the rest identifies the entity.
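A small helper keeps channel construction in one place. This is a sketch; the function and entity names are assumptions, not the article's actual codebase:

```typescript
// Build a namespaced Pub/Sub channel name, e.g. "prod_invalidate:todo:42".
// The environment prefix keeps prod and dev events from crossing over when
// both point at the same Redis instance.
type Entity = "todo" | "user";

function channelFor(env: string, entity: Entity, id: number): string {
  return `${env}_invalidate:${entity}:${id}`;
}
```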

Reference counting

When multiple SSE connections subscribe to the same channel, we don’t want duplicate Redis subscriptions. A reference counter tracks how many listeners exist per channel and only subscribes/unsubscribes at the Redis level when the count transitions between 0 and 1.

[Interactive demo: Redis subscription reference counting — Redis only subscribes when the first client connects to a channel and unsubscribes when the last one leaves.]

subscribe("todo:42", handler)
→ if refCount was 0 → redis.subscribe("todo:42")
→ refCount++
unsubscribe("todo:42", handler)
→ refCount--
→ if refCount is 0 → redis.unsubscribe("todo:42")
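The counting logic above can be sketched as a thin layer over an injected subscriber client. The `subscribe`/`unsubscribe` method shapes mirror ioredis, but that mapping is an assumption; adapt it to your client:

```typescript
// Reference-counted subscription layer. Redis only sees one subscription per
// channel, no matter how many SSE connections listen locally.
type Handler = (message: string) => void;

interface SubscriberClient {
  subscribe(channel: string): void;   // assumed to match your Redis client
  unsubscribe(channel: string): void;
}

class RefCountedSubscriber {
  private redis: SubscriberClient;
  private handlers = new Map<string, Set<Handler>>();

  constructor(redis: SubscriberClient) {
    this.redis = redis;
  }

  subscribe(channel: string, handler: Handler): void {
    let set = this.handlers.get(channel);
    if (!set) {
      set = new Set();
      this.handlers.set(channel, set);
      this.redis.subscribe(channel); // first listener: refCount 0 -> 1
    }
    set.add(handler);
  }

  unsubscribe(channel: string, handler: Handler): void {
    const set = this.handlers.get(channel);
    if (!set) return;
    set.delete(handler);
    if (set.size === 0) {
      this.handlers.delete(channel);
      this.redis.unsubscribe(channel); // last listener: refCount 1 -> 0
    }
  }

  // Fan a raw Redis message out to every local handler for that channel.
  dispatch(channel: string, message: string): void {
    this.handlers.get(channel)?.forEach((h) => h(message));
  }
}
```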

Publishing events

Publishing is straightforward: serialize the event and push it to the right channel:

publish("todo:42", { type: "todo.updated", todoId: 42 })
→ redis.publish(channel, JSON.stringify(event))
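In code this is a one-liner; the publisher is injected here so the sketch stays self-contained (with ioredis the call would be `redis.publish(channel, payload)`, which is the assumed shape):

```typescript
// Serialize the event and push it to the channel. Subscribers JSON.parse and
// validate on the other side.
function publishInvalidation(
  redis: { publish(channel: string, message: string): void },
  channel: string,
  event: { type: string; [key: string]: unknown },
): void {
  redis.publish(channel, JSON.stringify(event));
}
```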

tRPC subscriptions via SSE

tRPC v11 supports subscriptions over SSE natively, with no WebSocket server needed. The server uses async generators, and the client connects via httpSubscriptionLink.

Defining events with Zod

Every event flowing through the system is validated at runtime with a discriminated union. This gives us a single source of truth for both runtime validation and TypeScript types:

InvalidationEvent = discriminatedUnion("type", [
  { type: "todo.updated", todoId: number },
  { type: "todo.completed", todoId: number },
  { type: "user.assigned", userId: number },
])
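The same union can be written directly in TypeScript. To keep this sketch dependency-free, the runtime check Zod's `safeParse` performs is hand-rolled here; in the real system `z.infer` derives the type from the schema instead:

```typescript
// The discriminated union as plain TypeScript types.
type InvalidationEvent =
  | { type: "todo.updated"; todoId: number }
  | { type: "todo.completed"; todoId: number }
  | { type: "user.assigned"; userId: number };

// Hand-rolled stand-in for schema.safeParse: returns null on malformed input.
function parseEvent(raw: string): InvalidationEvent | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null;
  }
  if (typeof data !== "object" || data === null) return null;
  const e = data as Record<string, unknown>;
  if (
    (e.type === "todo.updated" || e.type === "todo.completed") &&
    typeof e.todoId === "number"
  ) {
    return { type: e.type, todoId: e.todoId };
  }
  if (e.type === "user.assigned" && typeof e.userId === "number") {
    return { type: "user.assigned", userId: e.userId };
  }
  return null;
}
```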

Because InvalidationEvent is a discriminated union, TypeScript narrows the type when you switch on event.type. Add a default branch with never to catch unhandled variants at compile time. If you add a new event to the schema but forget to handle it, TypeScript will error:

Compile-time exhaustiveness checking, schema and handler side by side:

z.discriminatedUnion("type", [
  z.object({ type: z.literal("todo.updated"), todoId: z.number() }),
  z.object({ type: z.literal("todo.completed"), todoId: z.number() }),
  z.object({ type: z.literal("user.assigned"), userId: z.number() })
])

switch (event.type) {
  case "todo.updated": invalidate(['todo', id]); break
  case "todo.completed": invalidate(['task', 'list']); break
  case "user.assigned": invalidate(['user', id]); break
  default: {
    const _ex: never = event
  }
}

Add a variant to the schema without handling it, and the never assignment in the default branch fails to compile.

Server-side: async generator

The subscription procedure creates an async generator that bridges Redis Pub/Sub into the SSE stream. Each connected client gets its own generator, but they share Redis subscriptions via the reference counting layer above.

subscription onInvalidate({ todoId?, userId? })
→ subscribe to relevant channels (global, todo:id, user:id)
→ on redis message → validate with Zod → yield to SSE stream
→ on client disconnect → unsubscribe from all channels

The generator yields validated events one at a time. tRPC serializes them as SSE and streams them to the client. When the client disconnects, the finally block cleans up Redis subscriptions.
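A minimal sketch of that generator, with an in-memory bus standing in for the reference-counted Redis layer (the `Bus` class and the `onInvalidate` signature are assumptions for illustration, not tRPC's API):

```typescript
// In-memory stand-in for the Redis Pub/Sub layer.
type Listener = (message: string) => void;

class Bus {
  private listeners = new Map<string, Set<Listener>>();
  subscribe(channel: string, fn: Listener) {
    let set = this.listeners.get(channel);
    if (!set) { set = new Set(); this.listeners.set(channel, set); }
    set.add(fn);
  }
  unsubscribe(channel: string, fn: Listener) {
    this.listeners.get(channel)?.delete(fn);
  }
  publish(channel: string, message: string) {
    this.listeners.get(channel)?.forEach((fn) => fn(message));
  }
}

// Yields each message published to `channel` until `signal` aborts, mirroring
// the shape of tRPC v11's async-generator subscriptions.
async function* onInvalidate(bus: Bus, channel: string, signal: AbortSignal) {
  const queue: string[] = [];
  let wake: (() => void) | null = null;
  const listener = (message: string) => {
    queue.push(message);
    wake?.();
  };
  bus.subscribe(channel, listener);
  try {
    while (!signal.aborted) {
      if (queue.length === 0) {
        // Sleep until a message arrives or the client disconnects.
        await new Promise<void>((resolve) => {
          wake = resolve;
          signal.addEventListener("abort", () => resolve(), { once: true });
        });
        wake = null;
      }
      while (queue.length > 0) yield queue.shift()!;
    }
  } finally {
    bus.unsubscribe(channel, listener); // cleanup on client disconnect
  }
}
```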

Route subscriptions through SSE, everything else through regular HTTP:

trpc.createClient({
links: splitLink(
subscriptions → httpSubscriptionLink("/api/trpc")
everything else → httpBatchLink("/api/trpc")
)
})
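With tRPC v11's client, that routing looks roughly like this (the `AppRouter` import path is an assumption):

```typescript
import { createTRPCClient, httpBatchLink, httpSubscriptionLink, splitLink } from "@trpc/client";
import type { AppRouter } from "./server"; // path assumed

const client = createTRPCClient<AppRouter>({
  links: [
    splitLink({
      // Subscriptions go over SSE; queries and mutations over batched HTTP.
      condition: (op) => op.type === "subscription",
      true: httpSubscriptionLink({ url: "/api/trpc" }),
      false: httpBatchLink({ url: "/api/trpc" }),
    }),
  ],
});
```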

No WebSocket server, no ws dependency, no special proxy configuration. SSE flows over standard HTTP. Here’s what the browser actually receives on the wire:

SSE wire format — what your browser actually receives is plain text over HTTP, one event at a time:

GET /api/trpc/onInvalidate HTTP/1.1
Accept: text/event-stream

:ok

Cache invalidation on the client

This is where it all comes together. A single hook subscribes to the SSE stream and invalidates the right TanStack Query cache entries.

[Interactive demo: TanStack Query cache — entries such as todo.getById({ id: 42 }), todo.list(), user.getById({ id: 7 }), task.list({ todoId: 42 }), and stats.dashboard(), each shown with its freshness state.]

The invalidation hook

useInvalidationSubscription(todoId?, userId?)
→ subscribe to SSE via trpc
→ on event:
"todo.updated" → invalidate(todo.getById, todo.list, stats)
"todo.completed" → invalidate(task.list, todo.getById)
"user.assigned" → invalidate(user.getById)

Then drop it into any page. Every query stays fresh without polling. When another user makes a change, the invalidation flows through Redis → SSE → hook → TanStack Query → automatic refetch.
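The mapping at the heart of the hook can be pulled out as a pure function, which also makes it trivially testable. The query keys below follow TanStack Query's array convention but are assumptions, not the article's actual router:

```typescript
type InvalidationEvent =
  | { type: "todo.updated"; todoId: number }
  | { type: "todo.completed"; todoId: number }
  | { type: "user.assigned"; userId: number };

// Map each event to the query keys it should invalidate.
function keysToInvalidate(event: InvalidationEvent): unknown[][] {
  switch (event.type) {
    case "todo.updated":
      return [["todo", "getById", event.todoId], ["todo", "list"], ["stats"]];
    case "todo.completed":
      return [["task", "list"], ["todo", "getById", event.todoId]];
    case "user.assigned":
      return [["user", "getById", event.userId]];
    default: {
      const _exhaustive: never = event; // compile-time exhaustiveness check
      return _exhaustive;
    }
  }
}
```

Inside the hook, each returned key would feed `queryClient.invalidateQueries({ queryKey })`.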

Production considerations

Graceful shutdown

Order matters. Stop accepting connections first, close SSE streams, then close Redis and the database. If you’re running BullMQ workers, let active jobs finish before closing.

shutdown()
→ server.close()
→ close all SSE connections
→ redis.quit()
→ db.disconnect()
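The ordering constraint can be encoded directly: each resource closes only after the previous one finishes. The `Closeable` interface is an assumption used to keep the sketch generic (ioredis's `quit()` and your DB client's disconnect would sit behind it):

```typescript
interface Closeable {
  close(): Promise<void>;
}

// Strictly ordered shutdown: stop intake first, drain streams, then close
// shared resources.
async function shutdown(resources: {
  server: Closeable;     // stop accepting new connections
  sseStreams: Closeable; // end all open SSE responses
  redis: Closeable;      // quit() in ioredis terms
  db: Closeable;
}): Promise<void> {
  await resources.server.close();
  await resources.sseStreams.close();
  await resources.redis.close();
  await resources.db.close();
}
```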

Error handling

tRPC’s SSE link will auto-reconnect on connection drops. Log errors to your monitoring service (Sentry, etc.) for visibility, but the client recovers automatically.

Horizontal scaling

Each server instance maintains its own set of SSE connections and Redis subscriptions. Redis Pub/Sub ensures that an event published on any instance reaches all subscribers across all instances. No sticky sessions required.

[Diagram: horizontal scaling with Redis Pub/Sub — three server instances, each holding its own SSE connections to Clients A, B, and C, all wired to a single Redis. An event from any instance reaches every client via Redis Pub/Sub, with no sticky sessions needed.]

When to use this

Reach for this pattern when:

- you only need server → client push: cache invalidation, notifications, live status
- you already use tRPC and TanStack Query and want the events type-safe end-to-end
- you run multiple instances behind a load balancer and events must cross them

Reach for WebSockets instead when:

- the client needs to push messages too: chat, collaborative editing, gaming

Keep it simple with polling when:

Conclusion

This pattern gives you real-time UI updates without the complexity of WebSockets. tRPC subscriptions over SSE, Redis Pub/Sub for cross-instance distribution, and TanStack Query for client-side cache management. Each piece handles one concern well, and they compose cleanly.


Going further: async jobs with BullMQ

Some mutations trigger work that’s too heavy to run inline, like sending emails, generating reports, or recalculating denormalized data. Instead of blocking the mutation, offload it to a job queue and let it feed back into the invalidation system when it’s done.

[Interactive demo: BullMQ job queue — jobs move through waiting, active, and done or failed states.]

The pattern

mutation completeTodo(id)
→ db.update(todo)
→ enqueue("todo.recalculate", { todoId: id }) ← async, returns immediately
→ publish("todo.updated", { todoId: id }) ← instant invalidation
worker processes "todo.recalculate"
→ heavy computation...
→ publish("todo.updated", { todoId: id }) ← second invalidation with fresh data

The key: after completing a job, the worker publishes an invalidation event. This closes the loop. Background processing feeds back into the real-time system, and every connected client gets updated automatically.

Job queue essentials