When we deployed an SEO conference website in 2023, we thought we had the perfect stack: Next.js for performance and Sanity CMS for flexible content management. The site needed to handle frequent content updates during live events while keeping running costs low during the off-season.
The Challenge: Event-Driven Content With Budget Constraints
Our client runs annual SEO conferences and needed a website that could:
- Display frequently updated schedules, speaker information, and session details during events
- Serve the latest content without delays when editors made changes
- Minimize hosting costs during the 10+ months between events when updates were rare
- Handle the interconnected nature of conference data (speakers linked to sessions, sessions linked to tracks, etc.)
The economics were clear: during active conference periods, we were paying $132/month in Vercel hosting costs due to high server-side rendering (SSR) usage. During quiet months, this dropped to $60/month, but even that felt excessive for what was essentially a static site most of the time.
Attempt #1: ISR with GROQ-Powered Webhooks (Spoiler: It Failed)
Incremental Static Regeneration (ISR) seemed like the obvious solution. The plan was simple:
- Generate static pages at build time
- Use Sanity's GROQ-powered webhooks to trigger on-demand revalidation when content changed
- Only regenerate the specific pages that needed updates
Sanity's webhook system is configured through their management console (sanity.io/manage), where you can use GROQ to define sophisticated trigger conditions. We set up our webhooks like this:
Sanity Webhook Configuration:
Name: Revalidate on Content Change
URL: https://{BASE_URL}/api/revalidate
Dataset: YOUR_DATASET_NAME
Trigger on: ☑ Create ☑ Update ☑ Delete
Filter (GROQ):
_type in ["speaker", "session", "track", "venue"]
Projection (GROQ):
{
_id,
_type,
"slug": slug.current
}
Secret: WEBHOOK_SECRET
HTTP Method: POST
This configuration told Sanity: "When any speaker, session, track, or venue document is created, updated, or deleted, send the document's ID, type, and slug to our Next.js API route."
Then on the Next.js side, we built an API route to receive these webhooks and trigger revalidation:
// app/api/revalidate/route.ts
import { revalidatePath } from 'next/cache'
import { parseBody } from 'next-sanity/webhook'

interface ISanityRevalidatePayload {
  _id: string;
  _type: "speaker" | "session" | "track";
  slug: string;
}

export async function POST(req) {
  try {
    const { isValidSignature, body } =
      await parseBody<ISanityRevalidatePayload>(
        req,
        process.env.SANITY_WEBHOOK_SECRET // Sanity revalidation secret
      );

    if (!isValidSignature || !body) {
      return new Response("Invalid signature or missing body.", { status: 401 });
    }

    const { _type, slug } = body;

    // Revalidate the changed page (revalidatePath expects a route path, not a full URL)
    if (_type === 'speaker') {
      revalidatePath(`/speakers/${slug}`)
    } else if (_type === 'session') {
      revalidatePath(`/sessions/${slug}`)
    } else if (_type === 'track') {
      revalidatePath(`/tracks/${slug}`)
    }

    return new Response("Triggered revalidate successfully.");
  } catch (err) {
    return new Response(err instanceof Error ? err.message : "Unknown error", { status: 500 });
  }
}

This worked perfectly, for about a week.
Why GROQ Webhooks Fall Short with Interconnected Content
The problem emerged as soon as we started working with real conference data. Here's what we quickly discovered:
The Content Graph Problem
Consider this typical conference data structure:
// A session query that pulls in related data
*[_type == "session" && slug.current == $slug][0] {
title,
description,
startTime,
endTime,
track->{
name,
color
},
speakers[]->{
name,
bio,
company,
photo
},
venue->{
name,
capacity
}
}
When a speaker's bio changed, we could easily revalidate their speaker detail page. But what about:
- All session pages that featured that speaker?
- The main speakers listing page?
- The schedule page showing their sessions?
- Track pages that included their sessions?
The Tracking Problem
With GROQ webhooks, we knew what changed (e.g., "speaker document with ID xyz123 was updated"), but we had no reliable way to know what else referenced that content.
The power of GROQ filters in webhooks is impressive. You can create sophisticated triggers like:
// Only trigger when a speaker's bio changes
_type == "speaker" && before().bio != after().bio
// Only trigger for featured sessions
_type == "session" && featured == true
// Trigger when price decreases
_type == "product" && before().price > after().price
And you can shape the webhook payload with projections:
{
_id,
_type,
"slug": slug.current,
"status": "Bio updated from: " + before().bio + " to: " + after().bio
}
But here's the critical limitation: GROQ projections don't support sub-queries. You can't include queries like:
{
_id,
_type,
"slug": slug.current,
// ❌ This doesn't work in webhook projections
"relatedSessions": *[_type == "session" && references(^._id)]
}
To identify dependent pages, you would need to manually query Sanity from your Next.js API route to find all documents that reference the changed content, which becomes complex with deeply nested references. We tried implementing this:
// Attempting to find all sessions featuring a speaker
const dependentSessions = await sanityClient.fetch(
  `*[_type == "session" && $speakerId in speakers[]._ref] { "slug": slug.current }`,
  { speakerId: changedSpeakerId }
)
// Revalidate all dependent pages
for (const session of dependentSessions) {
  revalidatePath(`/sessions/${session.slug}`)
}
But this approach had critical flaws:
- Performance: Each webhook had to execute multiple queries to find dependencies
- Timeout risks: With serverless function time limits, complex dependency chains could fail
- Maintenance nightmare: Every new content relationship required updating the webhook logic
- Incomplete tracking: Nested references (speaker → session → track → schedule) were nearly impossible to track reliably
There are also serverless function limitations, including execution timeouts, that limit how many paths you can revalidate in a single webhook call.
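To make the scale of the problem concrete, here is roughly the kind of dependency walker this approach forces you to write (a hypothetical sketch, not our production code): given a changed document ID and some way to look up referencing documents, it collects every document that transitively needs revalidation. Each level of depth is another round of queries inside a time-limited serverless function.

```typescript
// Hypothetical sketch of a manual dependency walk. `RefLookup` stands in for
// a Sanity query like `*[references($id)]{ _id }`; every call is a network
// round trip, which is where the timeout risk comes from.
type RefLookup = (id: string) => Promise<string[]> // IDs of docs referencing `id`

async function collectDependents(
  changedId: string,
  findReferencing: RefLookup,
  maxDepth = 3 // guard against runaway chains inside a serverless time limit
): Promise<Set<string>> {
  const seen = new Set<string>()
  let frontier = [changedId]
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = []
    for (const id of frontier) {
      for (const ref of await findReferencing(id)) {
        if (!seen.has(ref)) {
          seen.add(ref)
          next.push(ref)
        }
      }
    }
    frontier = next
  }
  return seen
}
```

Even this simplified version has to pick an arbitrary `maxDepth`, and it still knows nothing about which listing or schedule pages render each document.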
What We Actually Experienced
Our webhook implementation became increasingly brittle:
- Editors would update a speaker's company affiliation, but session pages would show stale data
- Schedule pages wouldn't reflect session time changes until we manually triggered a rebuild
- We missed tracking some reference types entirely (like speakers referenced in blog posts about the conference)
- The webhook code became a sprawling mess of dependency-tracking logic
We could track a content change to a specific document and revalidate that page, but we could not reliably track all the other documents that referenced it or that it referenced and update them as well.
The Nuclear Option: Full SSR
Faced with unreliable cache invalidation, we made the pragmatic choice: convert everything to Server-Side Rendering (SSR). Every request would hit Sanity's API and generate a fresh page.
// app/sessions/[slug]/page.tsx - fresh data, every request
export const dynamic = 'force-dynamic' // opt the route out of static caching

export default async function SessionPage({ params }) {
  const session = await sanityClient.fetch(sessionQuery, { slug: params.slug })
  return <SessionDetails session={session} />
}
The good: Content was always fresh. Editors saw their changes immediately.
The bad:
- Vercel bills jumped to $132/month during events
- Sanity API requests skyrocketed to 2-4 million per month
- Page load times increased as each visit required API calls
- We were essentially paying for dynamic rendering of mostly static content
We needed a better solution.
Enter Sanity Live: The Game Changer
In late 2024, Sanity released their Live Content API with the next-sanity integration, promising "live by default" experiences without the manual webhook complexity. We were intrigued but cautious. This was a relatively new feature, and we couldn't afford more instability during event periods.
We waited two months after the release, monitoring Sanity's changelog and community feedback to ensure the feature was stable and production-ready. Once we felt confident it had matured, we decided to migrate.
How Sanity Live Actually Works
Sanity Live solves the dependency tracking problem at the infrastructure level rather than requiring developers to manually track references.
1. Automatic Content Graph Analysis
Sanity developed a fine-grained system to precisely map which queries need invalidation on content updates. Queries are tagged to track exactly what an update will affect, using opaque sync tags that identify content dependencies.
When you fetch data using Sanity's client, the API analyzes your GROQ query and automatically determines:
- Which documents the query accessed
- Which fields were used
- All references that were followed
- The complete dependency graph for that specific query
This analysis happens on Sanity's backend, creating sync tags that uniquely identify content dependencies.
2. Opaque Sync Tags
The system uses opaque tags generated by Sanity's backend, prefixed with sanity:, which are safe to expose without authentication. These tags precisely map content dependencies without exposing sensitive information.
// When you fetch data with Sanity Live
const posts = await sanityFetch({
query: postsQuery,
tags: ['posts'] // Your custom tag
})
// Behind the scenes, Sanity adds sync tags like:
// fetch.next.tags = [
// 'posts', // Your custom tag
// 'sanity:abc123', // Document dependencies
// 'sanity:def456', // Field dependencies
// 'sanity:ghi789' // Reference dependencies
// ]
These opaque tags encode the complete dependency graph without you having to track it manually.
3. Real-Time Event Streaming
The Live Content API provides an event stream that listens for content changes and emits targeted sync tags when specific content is updated, triggering revalidation only for queries that actually depend on the changed content.
The <SanityLive /> component establishes a Server-Sent Events connection:
// app/layout.tsx
import { SanityLive } from '@/sanity/lib/live'
export default function RootLayout({ children }) {
return (
<html>
<body>
{children}
<SanityLive /> {/* Establishes live connection */}
</body>
</html>
)
}
When content changes in Sanity:
- The Live Content API emits events with relevant sync tags
- The <SanityLive /> component receives these events in the browser
- It calls a server action that invokes revalidateTag() for the matching sync tags
- Next.js invalidates only the queries affected by that specific change
- The page refetches data and updates seamlessly
4. Smart Granularity
The system only refetches queries that are actually affected by the change. If a route uses three GROQ queries and only one gets invalidated, only that specific query refetches while the others remain cached.
This is dramatically more efficient than our webhook approach, which often invalidated entire pages even when only a small piece of data changed.
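The tag-matching behavior can be illustrated with a small sketch (a deliberate simplification of what Sanity and Next.js do internally, not their actual code): each cached query carries its sync tags, a change event emits tags, and only queries sharing a tag are refetched.

```typescript
// Simplified illustration of tag-based invalidation granularity.
// queryTags: cached query name -> the sync tags attached to it
// emittedTags: tags emitted by a content change event
function queriesToRefetch(
  queryTags: Record<string, string[]>,
  emittedTags: string[]
): string[] {
  const emitted = new Set(emittedTags)
  return Object.entries(queryTags)
    .filter(([, tags]) => tags.some((t) => emitted.has(t)))
    .map(([name]) => name)
}
```

With three queries on a route, a change touching only the venue document's tag refetches only the venue query; the session and track queries stay cached.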
Implementation: Surprisingly Simple
Here's what our implementation looked like:
// lib/sanity/client.ts
import { createClient } from '@sanity/client'
import { defineLive } from 'next-sanity'
const client = createClient({
projectId: process.env.NEXT_PUBLIC_SANITY_PROJECT_ID,
dataset: process.env.NEXT_PUBLIC_SANITY_DATASET,
apiVersion: '2024-01-01',
useCdn: true,
stega: { enabled: true, studioUrl: '/studio' }
})
export const { sanityFetch, SanityLive } = defineLive({
client,
serverToken: process.env.SANITY_API_TOKEN,
browserToken: process.env.NEXT_PUBLIC_SANITY_API_TOKEN
})
// app/sessions/[slug]/page.tsx
import { sanityFetch } from '@/lib/sanity/client'
export default async function SessionPage({ params }) {
const { data: session } = await sanityFetch({
query: `*[_type == "session" && slug.current == $slug][0] {
title,
description,
speakers[]->{
name,
bio,
company
},
track->{ name, color }
}`,
params: { slug: params.slug }
})
return <SessionDetails session={session} />
}
That's it. No webhook configuration. No manual dependency tracking. No complex revalidation logic.
The Results: 40% Cost Reduction
After migrating to Sanity Live, the numbers spoke for themselves:
Vercel Hosting Costs
- During events: $132 → $80/month (39% reduction)
- Off-season: $60 → $40/month (33% reduction)
Sanity API Requests
- Before: 2-4 million requests/month
- After: 700k-900k requests/month (65-78% reduction)
Why Such Dramatic Improvements?
- ISR is back: Pages remain statically generated and cached with ISR, only revalidating when actually necessary rather than on every request.
- Precise invalidation: Instead of over-invalidating (webhooks) or never caching (SSR), we now invalidate exactly what changed
- Deduplication: When multiple visitors trigger revalidation simultaneously, Vercel deduplicates requests and all clients listen for a single refetch, keeping API costs predictable.
- Visitor-triggered updates: As long as one visitor accesses your app after a content change, the cache is updated globally for all users, even for pages they didn't visit.
What We Learned
1. Graph-Based Content Needs Graph-Aware Caching
Traditional webhook-based cache invalidation assumes relatively flat data structures. The moment you have speaker -> session -> track -> schedule, manual dependency tracking becomes unsustainable.
Content relationships in modern CMSs create complex dependency graphs that require automated tracking rather than manual management to ensure cache invalidation happens correctly.
2. Developer Experience Matters for Maintenance
Our webhook implementation worked initially but became increasingly fragile as the content model evolved. Every new reference type required updating webhook logic. With Sanity Live, the system adapts automatically.
3. Infrastructure-Level Solutions Beat Application-Level Hacks
By handling dependency tracking at the infrastructure level (Sanity's API), we eliminated an entire category of bugs and maintenance burden from our application code.
4. ISR's Full Potential Requires Smart Invalidation
ISR is powerful, but only if you can reliably invalidate the right pages at the right time. Without precise cache invalidation, systems often resort to aggressive polling or serving stale content, neither of which is ideal.
When Should You Use Sanity Live?
Sanity Live is particularly valuable when:
- Your content has interconnected references: Speakers, sessions, venues, authors, categories, products with variants, etc.
- You need fresh content but want ISR benefits: The "live by default" approach gives you both
- Your traffic is variable: Event sites, seasonal businesses, or any scenario where you want to optimize costs during quiet periods
- You're tired of webhook complexity: Let Sanity handle dependency tracking instead of maintaining it yourself
When Might You Not Need It?
Sanity Live might be overkill if:
- Your content model is completely flat with no references
- You're comfortable with time-based revalidation (e.g., revalidate: 60)
- You have very predictable traffic patterns where SSR costs are acceptable
- Your content rarely changes and you can trigger full rebuilds when it does
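For comparison, time-based revalidation in the App Router is a one-line route segment config (a minimal sketch; the file path is illustrative):

```typescript
// app/speakers/page.tsx: time-based ISR as the simpler alternative.
// The page is regenerated in the background at most once every 60 seconds,
// whether or not the underlying content actually changed.
export const revalidate = 60
```

If your editors can tolerate up to a minute of staleness, this removes the need for any invalidation machinery at all.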
Migration Tips
If you're considering migrating from webhooks to Sanity Live:
- Start with non-critical pages: Test the integration on lower-traffic pages first
- Keep webhooks initially: Run both systems in parallel briefly to compare behavior
- Monitor API usage: Watch your Sanity API request patterns during the transition
- Update to latest packages: Sanity Live requires recent versions of @sanity/client and next-sanity
- Configure tokens properly: You'll need both server and browser tokens for the full live experience
Conclusion
The journey from webhook complexity to Sanity Live taught us an important lesson: sometimes the right solution isn't clever application code, but better infrastructure. By moving dependency tracking from our application to Sanity's backend, we:
- Eliminated an entire class of cache invalidation bugs
- Reduced hosting costs by 33-40%
- Decreased API usage by 65-78%
- Simplified our codebase significantly
- Improved the editor experience with truly live updates
For teams building content-heavy Next.js applications with Sanity CMS, especially those dealing with interconnected content models, Sanity Live represents a significant advancement in how we approach cache invalidation and content freshness.
The era of manually tracking content dependencies in webhook code is over. And our Vercel bills are much happier for it.
Tech Stack: Next.js 14 (App Router), Sanity CMS v3, next-sanity, Vercel hosting