Isomorphic Supabase in TanStack Start
TanStack Start introduces a powerful but sometimes confusing concept: isomorphic loaders. Unlike traditional server-side frameworks where loaders run exclusively on the server, TanStack Start loaders execute in both environments. They run on the server during the initial page load (SSR) and on the client during subsequent SPA navigations.
This creates an interesting challenge when integrating with Supabase. You need a Supabase client that works correctly regardless of where your code is executing. Fortunately, Supabase's @supabase/ssr package is designed for exactly this scenario.
This article presents a clean architecture pattern that embraces isomorphism rather than fighting it. By combining TanStack Start's createIsomorphicFn, a service layer with dependency injection, and TanStack Query for state management, you can build applications where your business logic is written once and runs correctly everywhere.
TanStack Start is implemented as a Vite plugin, making it server-agnostic. You can deploy to Cloudflare Workers, a Node.js server with Nitro, or any other runtime. The patterns in this article work regardless of your deployment target.
Understanding the Isomorphic Landscape
TanStack Start Loaders Are Isomorphic
This is the most important concept to internalize. When you write a loader in TanStack Start:
export const Route = createFileRoute('/posts')({
component: PostsPage,
loader: () => fetchPosts(),
})
That fetchPosts() call happens on the server when a user navigates directly to /posts (full page load). But when a user clicks a link from another page in your app, that same fetchPosts() runs in their browser.
This has significant implications:
- You cannot use server-only APIs (like process.env.SECRET_KEY) directly in loaders
- You cannot assume access to Node.js modules
- Your data fetching logic must work in both environments
The Supabase SSR Package
Supabase provides the @supabase/ssr package specifically for server-rendered applications. It exports two functions:
- createBrowserClient: Creates a client for browser environments. It automatically handles cookies via the document.cookie API and implements a singleton pattern to avoid creating multiple instances.
- createServerClient: Creates a client for server environments. It requires you to provide cookie accessors (getAll and setAll) because servers don't have access to document.cookie.
The naive approach to handling both environments looks like this:
// Don't do this - it gets messy fast
function getSupabaseClient() {
if (typeof window === 'undefined') {
// Server: need to get cookies somehow...
const cookies = /* ??? */
return createServerClient(url, key, { cookies })
} else {
return createBrowserClient(url, key)
}
}
This approach has problems. The cookie handling on the server depends on your framework's request context, and mixing these concerns makes the code fragile and hard to test. TanStack Start provides a better way.
Choosing Your Caching Strategy: Router vs Query
Before diving into implementation, you need to make an architectural decision about caching.
Two Caches, One Problem
TanStack Router has its own built-in cache with staleTime and gcTime options. TanStack Query also has a cache with similar concepts. When you use both, you have two layers of caching, and reasoning about invalidation becomes complex:
- When should you invalidate the Router cache?
- When should you invalidate the Query cache?
- What happens when they get out of sync?
The Recommendation: Let Query Own State
For most applications, the cleanest approach is to disable Router caching entirely and let TanStack Query handle all data state:
// app/router.tsx
import { QueryClient } from '@tanstack/react-query'
import { createRouter as createTanStackRouter } from '@tanstack/react-router'
import { routerWithQueryClient } from '@tanstack/react-router-with-query'
import { routeTree } from './routeTree.gen'

export function createRouter() {
const queryClient = new QueryClient()
return routerWithQueryClient(
createTanStackRouter({
routeTree,
context: { queryClient },
defaultPreload: 'intent',
defaultPreloadStaleTime: 0, // Disable router caching
}),
queryClient
)
}
With defaultPreloadStaleTime: 0, the Router always considers its cached data stale and defers to Query. This gives you:
- A single source of truth for cache state
- Fine-grained invalidation via Query's invalidateQueries
- Familiar patterns if you've used TanStack Query before
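For example, after a mutation you can invalidate exactly as much as you want using Query's partial key matching (queryClient and postId are stand-ins here, not code from the article's files):
// Invalidate every query whose key starts with ['posts'] (the list and all details)
queryClient.invalidateQueries({ queryKey: ['posts'] })

// Or target a single post's detail query
queryClient.invalidateQueries({ queryKey: ['posts', postId] })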
When This Trade-off Doesn't Make Sense
If your application has very simple data requirements (few entities, rare updates, no optimistic updates), the Router's built-in cache might be sufficient. The service layer pattern in this article still applies; you would just skip the Query integration.
Creating Supabase Clients
You'll need several Supabase client factories, each for a specific use case. Keeping them separate makes the codebase clearer and prevents accidentally using the wrong client in the wrong context.
Browser Client
The browser client is straightforward. It uses createBrowserClient from @supabase/ssr, which automatically handles cookies via the document.cookie API and implements a singleton pattern:
// lib/supabase/browser-client.ts
import { createBrowserClient } from '@supabase/ssr'
import type { Database } from './database.types'
export function createBrowserSupabaseClient() {
const supabaseUrl = import.meta.env.VITE_SUPABASE_URL
const supabasePublicKey = import.meta.env.VITE_SUPABASE_PUBLIC_KEY // Or VITE_SUPABASE_ANON_KEY for older projects
return createBrowserClient<Database>(supabaseUrl, supabasePublicKey)
}
Server Client
The server client needs cookie accessors. Since TanStack Start is server-agnostic (it's just a Vite plugin), the exact implementation may vary depending on your deployment target. Here's an example using TanStack Start's server-only wrapper and cookie helpers:
// lib/supabase/server-client.ts
import { createServerClient } from "@supabase/ssr"
import { createServerOnlyFn } from "@tanstack/react-start"
import { getCookies, setCookie } from "@tanstack/react-start/server"
import type { Database } from './database.types'
export const createServerSupabaseClient = createServerOnlyFn(() => {
// TODO: Validate these before using them
const supabaseUrl = import.meta.env.VITE_SUPABASE_URL
const supabasePublicKey = import.meta.env.VITE_SUPABASE_PUBLIC_KEY // Or VITE_SUPABASE_ANON_KEY for older projects
return createServerClient<Database>(
supabaseUrl,
supabasePublicKey,
{
cookies: {
getAll() {
return Object.entries(getCookies()).map(([name, value]) => ({
name,
value,
}))
},
setAll(cookies) {
cookies.forEach((cookie) => {
setCookie(cookie.name, cookie.value)
})
},
},
},
)
})
Isomorphic Client
The isomorphic client uses createIsomorphicFn to automatically pick the right implementation based on the runtime environment:
// lib/supabase/client.ts
import { createIsomorphicFn } from '@tanstack/react-start'
import { createBrowserSupabaseClient } from './browser-client'
import { createServerSupabaseClient } from './server-client'
export const createSupabaseClient = createIsomorphicFn()
.server(() => createServerSupabaseClient())
.client(() => createBrowserSupabaseClient())
export type SupabaseClient = ReturnType<typeof createSupabaseClient>
When you call createSupabaseClient():
- On the server: the .server() implementation runs, returning a server-configured client
- On the client: the .client() implementation runs, returning a browser-configured client
The caller doesn't need to know or care which environment they're in.
Admin Client
The admin client uses the secret key, which bypasses Row Level Security. This must only ever be used in server functions, never exposed to the client.
To keep the admin client and its secrets out of the client bundle, wrap the factory in createServerOnlyFn from TanStack Start. The wrapped implementation only runs on the server; if it's ever called from client code, it throws instead of executing:
// lib/supabase/admin-client.ts
import { createServerOnlyFn } from '@tanstack/react-start'
import { createClient } from '@supabase/supabase-js'
import type { Database } from './database.types'
export const createAdminSupabaseClient = createServerOnlyFn(() => {
const supabaseUrl = process.env.SUPABASE_URL!
const supabaseSecretKey = process.env.SUPABASE_SECRET_KEY! // Or SUPABASE_SERVICE_ROLE_KEY for older projects
return createClient<Database>(supabaseUrl, supabaseSecretKey, {
auth: {
autoRefreshToken: false,
persistSession: false,
},
})
})
Note that the admin client uses @supabase/supabase-js directly, not @supabase/ssr. It doesn't need cookie handling since it authenticates via the secret key (or service role key for older projects).
Type Safety
By passing your Database type (generated via supabase gen types typescript), you get full type safety for all your queries:
const supabase = createSupabaseClient()
const { data } = await supabase
.from('posts') // Autocomplete for table names
.select('id, title, author:users(name)') // Type-safe column selection
The Service Layer Pattern
This is the core of the architecture. A service layer abstracts your business logic away from framework concerns, making it reusable and testable.
Why a Service Layer?
Consider this code in a component:
function PostsList() {
const supabase = createSupabaseClient()
const { data } = useQuery({
queryKey: ['posts'],
queryFn: async () => {
const { data, error } = await supabase
.from('posts')
.select('*, author:users(name)')
.order('created_at', { ascending: false })
if (error) throw error
return data
},
})
return <ul>{data?.map((post) => <li key={post.id}>{post.title}</li>)}</ul>
}
This works, but it has problems:
- The query logic is coupled to the component
- You can't reuse this query elsewhere without copy-pasting
- Testing requires mocking React Query
- Business logic (like the ordering) is buried in UI code
Dependency Injection with Services
A service is a class (or set of functions) that encapsulates business logic. It receives its dependencies (like the Supabase client) as constructor arguments:
// posts/service.ts
import type { SupabaseClient } from '@/lib/supabase/client'
export class PostsService {
constructor(private supabase: SupabaseClient) {}
async list() {
const { data, error } = await this.supabase
.from('posts')
.select('*, author:users(name)')
.order('created_at', { ascending: false })
if (error) throw error
return data
}
async getById(id: string) {
const { data, error } = await this.supabase
.from('posts')
.select('*, author:users(name)')
.eq('id', id)
.single()
if (error) throw error
return data
}
async create(input: { title: string; content: string }) {
const { data: user } = await this.supabase.auth.getUser()
if (!user.user) throw new Error('Not authenticated')
const { data, error } = await this.supabase
.from('posts')
.insert({
title: input.title,
content: input.content,
author_id: user.user.id,
})
.select()
.single()
if (error) throw error
return data
}
async delete(id: string) {
const { error } = await this.supabase
.from('posts')
.delete()
.eq('id', id)
if (error) throw error
}
}
Services Are Environment-Agnostic
Notice that PostsService has no idea whether it's running on the server or client. It just uses the Supabase client it was given. This is the power of dependency injection:
- On the server: Pass in a server-configured client
- On the client: Pass in a browser-configured client
- In tests: Pass in a mock client
The service doesn't change. Only the injected dependency changes.
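To make the injection concrete, here's a sketch of constructing the same service with different clients (the mocked-test case appears later in this article):
import { createSupabaseClient } from '@/lib/supabase/client'
import { createServerSupabaseClient } from '@/lib/supabase/server-client'
import { PostsService } from '@/posts/service'

// In a loader, component, or query function: the isomorphic client
const posts = new PostsService(createSupabaseClient())

// In a server function: the cookie-aware server client
const serverPosts = new PostsService(createServerSupabaseClient())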
Thin Query/Mutation Wrappers
With services handling business logic, your TanStack Query code becomes minimal. Query functions just wire things together:
- Create the isomorphic Supabase client
- Instantiate the relevant service
- Call the service method(s)
Query Options
Define reusable query options using the queryOptions helper. A query key factory keeps keys consistent and type-safe:
// posts/queries.ts
import { queryOptions } from '@tanstack/react-query'
import { createSupabaseClient } from '@/lib/supabase/client'
import { PostsService } from '@/posts/service'
// Query key factory for consistent, type-safe keys
export const postsQueryKeys = {
all: () => ['posts'] as const,
detail: (id: string) => ['posts', id] as const,
}
export const postsQueries = {
all: () =>
queryOptions({
queryKey: postsQueryKeys.all(),
queryFn: () => {
const supabase = createSupabaseClient()
const service = new PostsService(supabase)
return service.list()
},
}),
detail: (id: string) =>
queryOptions({
queryKey: postsQueryKeys.detail(id),
queryFn: () => {
const supabase = createSupabaseClient()
const service = new PostsService(supabase)
return service.getById(id)
},
}),
}
Using Query Options
These query options can be used anywhere:
// In a component
const { data: posts } = useQuery(postsQueries.all())
// In a loader - fire and forget to warm the cache
queryClient.prefetchQuery(postsQueries.all())
// In a loader - when you need the data in the loader itself
const posts = await queryClient.ensureQueryData(postsQueries.all())
Mutations
Mutations follow the same pattern, using the mutationOptions helper. Cache invalidation can be handled automatically via mutation metadata (see TKDodo's article and the template repository for this approach):
// posts/mutations.ts
import { mutationOptions } from '@tanstack/react-query'
import { createSupabaseClient } from '@/lib/supabase/client'
import { postsQueryKeys } from '@/posts/queries'
import { PostsService } from '@/posts/service'
export const postsMutations = {
create: () =>
mutationOptions({
mutationFn: (input: { title: string; content: string }) => {
const supabase = createSupabaseClient()
return new PostsService(supabase).create(input)
},
meta: {
invalidates: [postsQueryKeys.all()],
},
}),
delete: (id: string) =>
mutationOptions({
mutationFn: () => {
const supabase = createSupabaseClient()
return new PostsService(supabase).delete(id)
},
meta: {
invalidates: [postsQueryKeys.all(), postsQueryKeys.detail(id)],
},
}),
}
Components use mutations directly without wrapper hooks:
const createPost = useMutation(postsMutations.create())
const deletePost = useMutation(postsMutations.delete(postId))
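The meta.invalidates field above is a convention, not built-in TanStack Query behavior: something has to read it. Here's a minimal sketch of that wiring at the MutationCache level, in the spirit of TKDodo's automatic-invalidation pattern (the template repository's implementation differs in the details):
// app/query-client.ts (sketch)
import { MutationCache, QueryClient, type QueryKey } from '@tanstack/react-query'

export function createQueryClient() {
  const queryClient = new QueryClient({
    mutationCache: new MutationCache({
      // After any successful mutation, invalidate the query keys it declared in meta
      onSuccess: (_data, _variables, _context, mutation) => {
        const invalidates = (mutation.meta?.invalidates ?? []) as QueryKey[]
        return Promise.all(
          invalidates.map((queryKey) => queryClient.invalidateQueries({ queryKey })),
        )
      },
    }),
  })
  return queryClient
}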
Putting It Together: Route Loaders + Query
Now for the critical integration point: connecting TanStack Start's loaders with TanStack Query.
Warming the Query Cache
In loaders, use prefetchQuery to kick off data fetching. This is a fire-and-forget approach that warms the query cache without needing to handle the promise.
// routes/posts.tsx
import { createFileRoute, Link } from '@tanstack/react-router'
import { useSuspenseQuery } from '@tanstack/react-query'
import { postsQueries } from '@/posts/queries'
export const Route = createFileRoute('/posts')({
component: PostsPage,
loader: ({ context }) => {
// Fire and forget - warms the cache, no need to handle the promise
context.queryClient.prefetchQuery(postsQueries.all())
},
})
function PostsPage() {
// This will suspend until the data is ready
const { data: posts } = useSuspenseQuery(postsQueries.all())
return (
<div>
<h1>Posts</h1>
<ul>
{posts.map((post) => (
<li key={post.id}>
<Link to="/posts/$id" params={{ id: post.id }}>
{post.title}
</Link>
</li>
))}
</ul>
</div>
)
}
Why prefetchQuery?
The prefetchQuery approach has one big advantage over ensureQueryData: there is no promise to manage, so you don't need to .catch() rejected promises to prevent unhandled rejection errors on the server. The resulting flow looks like this:
- The server starts fetching data immediately
- The HTML shell is sent to the client
- The promise (and eventually its resolved data) streams to the client
- The component suspends on the client until data arrives
- Once resolved, the component renders with data
This gives you streaming SSR with minimal complexity and cleaner loader code.
When to Use ensureQueryData
Use ensureQueryData when you need to await the response and use the data within the loader itself:
export const Route = createFileRoute('/posts/$id')({
component: PostPage,
loader: async ({ context, params }) => {
// When you need the data in the loader
const post = await context.queryClient.ensureQueryData(
postsQueries.detail(params.id)
)
// Use the data to make decisions or fetch related data
if (post.author_id) {
context.queryClient.prefetchQuery(usersQueries.detail(post.author_id))
}
return { post }
},
})
When using ensureQueryData without awaiting, you must attach a .catch(() => {}) to prevent unhandled rejection errors on the server. The actual error handling happens in the component via error boundaries or TanStack Query's error state. This is why prefetchQuery is preferred for simple cache warming.
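For completeness, a fire-and-forget version of the /posts loader built on ensureQueryData would look like this sketch:
export const Route = createFileRoute('/posts')({
  component: PostsPage,
  loader: ({ context }) => {
    // Not awaited: the no-op catch prevents an unhandled rejection on the server.
    // Errors still surface in the component through useSuspenseQuery or an error boundary.
    context.queryClient.ensureQueryData(postsQueries.all()).catch(() => {})
  },
})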
Parallel Data Loading
For pages that need multiple data sources, kick them all off in parallel:
export const Route = createFileRoute('/dashboard')({
component: Dashboard,
loader: ({ context }) => {
// All requests start simultaneously - fire and forget
context.queryClient.prefetchQuery(postsQueries.all())
context.queryClient.prefetchQuery(usersQueries.me())
context.queryClient.prefetchQuery(statsQueries.overview())
},
})
Server Functions for Trusted Operations
Sometimes you need code that runs exclusively on the server:
- Using the Supabase admin client (with the service role key)
- Accessing server-only environment variables
- Operations that must never be exposed to the client
Protecting Admin Operations with Middleware
Admin server functions should be protected by middleware that validates the caller is actually an admin. This prevents malicious users from invoking these endpoints directly:
// server/middleware/admin.ts
import { createMiddleware } from '@tanstack/react-start'
import { createServerSupabaseClient } from '@/lib/supabase/server-client'
export const adminMiddleware = createMiddleware().server(async ({ next }) => {
const supabase = createServerSupabaseClient()
const { data: { user }, error } = await supabase.auth.getUser()
if (error || !user) {
throw new Error('Unauthorized')
}
// Check if user has admin role (adjust based on your schema)
const { data: profile } = await supabase
.from('profiles')
.select('role')
.eq('id', user.id)
.single()
if (profile?.role !== 'admin') {
throw new Error('Forbidden: Admin access required')
}
return next({ context: { adminUser: user } })
})
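With the middleware in place, a trusted operation can compose it with the admin client. The following is a sketch rather than the template's exact code: deleteUserAsAdmin is a hypothetical operation, and the builder method names (.middleware(), .validator(), .handler()) may differ slightly between TanStack Start versions (newer releases call the validator inputValidator):
// server/admin-users.ts (sketch)
import { createServerFn } from '@tanstack/react-start'
import { adminMiddleware } from '@/server/middleware/admin'
import { createAdminSupabaseClient } from '@/lib/supabase/admin-client'

export const deleteUserAsAdmin = createServerFn({ method: 'POST' })
  .middleware([adminMiddleware])
  .validator((input: { userId: string }) => input)
  .handler(async ({ data, context }) => {
    // context.adminUser is provided by adminMiddleware
    if (context.adminUser.id === data.userId) {
      throw new Error('Admins cannot delete their own account')
    }

    // The secret-key client bypasses RLS, so it stays strictly server-side
    const admin = createAdminSupabaseClient()
    const { error } = await admin.auth.admin.deleteUser(data.userId)
    if (error) throw error

    return { success: true }
  })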
Complete Example: Template Repository
To see all these patterns working together in a production-ready-ish template, check out the example repository:
tanstack-start-isomorphic-supabase-template
The repository includes:
- Complete Supabase client architecture (browser, server, isomorphic, and admin clients)
- Service layer with dependency injection
- Query and mutation options with automatic invalidation
- Route loaders with streaming SSR
- Server functions with middleware for protected operations
- Authentication flow with Supabase Auth
- TypeScript throughout with generated database types
Benefits and Trade-offs
Benefits
Single source of truth for business logic. All your Supabase queries and data transformations live in services. When requirements change, you update one place.
Testable services. Since services receive the Supabase client as a dependency, you can easily mock it in tests:
describe('PostsService', () => {
it('should list posts', async () => {
const mockSupabase = {
from: () => ({
select: () => ({
order: () => Promise.resolve({ data: mockPosts, error: null }),
}),
}),
}
const service = new PostsService(mockSupabase as any)
const posts = await service.list()
expect(posts).toEqual(mockPosts)
})
})
Clear separation of concerns. Each layer has a single responsibility:
- Supabase client factory: handle environment differences
- Services: business logic
- Query options: caching configuration
- Components: UI
Seamless SSR and client navigation. The same code runs everywhere. No special handling for different environments.
Streaming support. By using prefetchQuery as a fire-and-forget cache warmer, you get streaming SSR with minimal configuration.
Lower costs and better performance. When the isomorphic client runs in the browser, it talks directly to Supabase. Your server doesn't need to proxy these requests, which means:
- Reduced server load and compute costs
- Lower latency by eliminating an extra network hop
- Better scalability since browser clients handle their own connections
Trade-offs
Slight indirection. You're adding a service layer between your components and Supabase. For very simple apps, this might feel like overhead.
Cache invalidation requires thought. You need to remember to invalidate the right queries after mutations. TanStack Query's invalidateQueries with partial matching helps, but it's still something you need to consider. For a more automated approach using mutation metadata, see TKDodo's article on Automatic Query Invalidation after Mutations — the template repository implements this pattern.
Cookie handling varies by runtime. The server client needs cookie accessors, and the exact implementation depends on your deployment target (Cloudflare Workers, Node.js, etc.). If you switch runtimes, this code may need updating.
Conclusion
The pattern presented here embraces the isomorphic nature of TanStack Start rather than fighting it. By creating an isomorphic Supabase client with createIsomorphicFn, abstracting business logic into environment-agnostic services, and letting TanStack Query own all data state, you get an architecture that is clean, testable, and works seamlessly across server and client.
The key pieces are:
- Separate client factories for browser, server, isomorphic, and admin clients
- Service layer with dependency injection for business logic
- Thin query/mutation wrappers using queryOptions and mutationOptions
- Loaders using prefetchQuery (fire and forget) for streaming SSR, or ensureQueryData when you need the data in the loader
- Server functions with middleware for protected admin operations
- Zod validation for type-safe server function inputs
This pattern also has practical benefits: your server costs stay lower because browser clients talk directly to Supabase without proxying, and users get faster responses by eliminating unnecessary network hops.
The architecture scales well from small applications to large ones. Start simple, and it supports growing complexity without major refactoring.