# Recipe: Vercel AI SDK + Generative DOM
Render `streamText()` output and `useChat()` messages through Generative DOM instead of a string-based markdown component.
## Why this recipe
The Vercel AI SDK (`ai` package) gives you two useful primitives: `streamText()` on the server and `useChat()` on the client. Both deliver model output as a stream of text deltas — but the default rendering path is "assign `message.content` to a `<ReactMarkdown>` and let it re-render on every token." That approach is what Generative DOM exists to replace. Point `useChat()`'s incremental output at `useGenerativeDom().push` instead and you get incremental DOM patches for free.
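For contrast, this is the default path being replaced — a minimal sketch, assuming `react-markdown` as the string-based renderer. Every streamed token triggers a full re-parse and re-render of the accumulated message:

```tsx
import ReactMarkdown from 'react-markdown';

// The string-based path: `content` grows on every token, and the whole
// document is re-parsed and re-rendered each time it changes.
function AssistantMessageDefault({ content }: { content: string }) {
  return <ReactMarkdown>{content}</ReactMarkdown>;
}
```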
## What you need
- `ai` v3+ and the provider adapter you're using (e.g. `@ai-sdk/openai`, `@ai-sdk/anthropic`)
- `@generative-dom/core` plus plugins
- `@generative-dom/react` for the hook example
- React 18 or 19 for `useChat()`
```sh
pnpm add ai @ai-sdk/openai @generative-dom/core @generative-dom/react \
  @generative-dom/plugin-markdown-base @generative-dom/plugin-markdown-inline \
  @generative-dom/plugin-markdown-heading @generative-dom/plugin-markdown-code \
  @generative-dom/plugin-markdown-list
```

## Server route
The server side is unchanged from standard AI SDK usage — you stream text over the AI SDK's data stream protocol. The client is where Generative DOM plugs in.
```ts
// app/api/chat/route.ts (Next.js App Router)
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: openai('gpt-4o-mini'),
    messages,
    // see official SDK docs for full options
  });
  return result.toDataStreamResponse();
}
```

## Vanilla example: `streamText()` async iterator
When you call `streamText()` directly (not through `useChat`), the result exposes a `textStream` async iterator of string deltas. Feed it to Generative DOM the same way you would any other iterator.
```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { GenerativeDom } from '@generative-dom/core';
import { markdownBase } from '@generative-dom/plugin-markdown-base';
import { markdownInline } from '@generative-dom/plugin-markdown-inline';
import { markdownHeading } from '@generative-dom/plugin-markdown-heading';
import { markdownCode } from '@generative-dom/plugin-markdown-code';
import { markdownList } from '@generative-dom/plugin-markdown-list';

const md = new GenerativeDom({
  container: document.getElementById('out')!,
  plugins: [
    markdownBase(),
    markdownInline(),
    markdownHeading(),
    markdownCode(),
    markdownList(),
  ],
});

export async function run(prompt: string): Promise<void> {
  md.reset();
  const { textStream } = await streamText({
    model: openai('gpt-4o-mini'),
    prompt,
  });
  for await (const delta of textStream) md.push(delta);
  md.flush();
}
```

This form is server-side or edge-runtime friendly — the `streamText()` call belongs wherever your API key lives. In a browser-only app, fetch from your backend and use the SSE or fetch-streams recipe.
## React example: useChat + useGenerativeDom
`useChat()` exposes `messages`, each with a growing `content` string. Re-rendering that string through a markdown component is the thing Generative DOM eliminates. Pair the two so that only incremental deltas reach the renderer.
```tsx
import { useChat } from 'ai/react';
import { useEffect, useMemo, useRef } from 'react';
import { useGenerativeDom } from '@generative-dom/react';
import { markdownBase } from '@generative-dom/plugin-markdown-base';
import { markdownInline } from '@generative-dom/plugin-markdown-inline';
import { markdownHeading } from '@generative-dom/plugin-markdown-heading';
import { markdownCode } from '@generative-dom/plugin-markdown-code';
import { markdownList } from '@generative-dom/plugin-markdown-list';

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({ api: '/api/chat' });
  return (
    <div>
      {messages.map((m) => (
        <div key={m.id} className={m.role}>
          {m.role === 'assistant' ? <AssistantMessage content={m.content} /> : m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}

function AssistantMessage({ content }: { content: string }) {
  const plugins = useMemo(
    () => [
      markdownBase(),
      markdownInline(),
      markdownHeading(),
      markdownCode(),
      markdownList(),
    ],
    [],
  );
  const { ref, push, flush, reset } = useGenerativeDom({ plugins });
  const lastLength = useRef(0);
  useEffect(() => {
    // useChat re-renders with a growing `content` string. Only push the new tail.
    if (content.length < lastLength.current) {
      // Rare: content shrank (e.g. retry). Reset and re-push.
      reset();
      lastLength.current = 0;
    }
    const delta = content.slice(lastLength.current);
    if (delta) push(delta);
    lastLength.current = content.length;
    // useChat doesn't expose a per-message "done" signal; flushing on every tick is cheap.
    flush();
  }, [content, push, flush, reset]);
  return <div ref={ref} className="prose" />;
}
```

The trick is `lastLength.current`. `useChat` gives you the full accumulated string, not deltas — so slice off only what's new before calling `push`. Generative DOM then diffs the added tokens into the DOM.
## What this gets you
- No full-document re-parse between tokens, even though `useChat` re-renders the entire `content` string on every update
- Works with any AI SDK provider — `@ai-sdk/openai`, `@ai-sdk/anthropic`, `@ai-sdk/google`, etc.
- Keeps the AI SDK's useful bits (tool calls, state management, retries) while replacing only the render layer
- Survives Next.js streaming SSR — `useGenerativeDom` is client-only, mount it inside a `'use client'` boundary
## Common pitfalls
- Pushing the full `content` every render — you must track the previous length and push only the delta. Otherwise Generative DOM sees duplicated text.
- Forgetting `'use client'` — the hook touches `document`. Mount it only in client components.
- Sharing one Generative DOM instance across all messages — give each assistant message its own `<AssistantMessage>` so their DOM is isolated and `reset()` only clears one message.
- Tool-use deltas — `streamText()` can emit tool call deltas in the data stream. `useChat`'s `content` field is text-only, so this is usually fine, but if you switch to the raw `result.fullStream` iterator, filter for `type === 'text-delta'` before pushing (see the sketch after this list).
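A minimal sketch of that filter, reusing the `md` instance from the vanilla example. The part shape (`type === 'text-delta'` with a `textDelta` string) follows AI SDK v3/v4; check your installed version's types if the property names differ:

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// fullStream interleaves text deltas with tool-call and metadata parts;
// only the text parts should reach Generative DOM.
export async function runFullStream(prompt: string): Promise<void> {
  md.reset();
  const { fullStream } = await streamText({
    model: openai('gpt-4o-mini'),
    prompt,
  });
  for await (const part of fullStream) {
    if (part.type === 'text-delta') md.push(part.textDelta);
  }
  md.flush();
}
```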
## Related
- OpenAI recipe — direct SDK alternative
- fetch + ReadableStream — the underlying transport
- Streaming guide