Recipes
Copy-paste integrations between Generative DOM and the streaming sources LLM clients actually use. Each recipe is a working tutorial — the code compiles, the prose explains why each piece exists.
When to use which
| Recipe | Use when |
|---|---|
| OpenAI | You're calling OpenAI Chat Completions with the official `openai` SDK. |
| Anthropic | You're calling Claude through `@anthropic-ai/sdk`. |
| Vercel AI SDK | You use `ai` / `@ai-sdk/*` with `streamText()` or `useChat()`. |
| Server-Sent Events | Your backend pushes tokens over SSE (`EventSource`) — SDK-agnostic. |
| fetch + ReadableStream | You want the lowest-level primitive. Works anywhere `fetch` works. |
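For orientation, the Server-Sent Events row ultimately boils down to extracting `data:` payloads from the event-stream text before they ever reach `push()`. A minimal, hypothetical parser (the `parseSseData` name and the simplified framing rules are illustrative, not part of Generative DOM):

```typescript
// Hypothetical SSE parser: events in text/event-stream are separated by a
// blank line, and each "data:" field contributes one line of the payload.
function parseSseData(raw: string): string[] {
  const payloads: string[] = [];
  for (const event of raw.split('\n\n')) {
    const data = event
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice('data:'.length).trimStart())
      .join('\n');
    if (data) payloads.push(data);
  }
  return payloads;
}
```

Each returned payload is one delta to forward into `push()`; the real SSE recipe handles retries and event IDs on top of this.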
The pattern, in one paragraph
Every streaming LLM API gives you a sequence of small string deltas. The naive approach — concatenate into a growing buffer and re-render the whole thing on every delta — re-parses previous tokens, flickers, and drops the user's scroll position. Generative DOM turns that sequence into a stream of DOM patches: `push(delta)` appends the chunk to the parser's buffer, and the internal scheduler diffs only the new AST nodes into the container. You do not maintain the accumulated string. You do not pick a re-render strategy. You wire `push(chunk)` into whichever iterator your SDK returns and call `flush()` at the end.
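Concretely, that wiring is the same loop regardless of SDK. A minimal sketch, assuming only the `push()`/`flush()` surface described above (the `DeltaRenderer` interface and `pipe` helper are illustrative names, not exports of the library):

```typescript
// Any renderer exposing the push()/flush() surface described above.
interface DeltaRenderer {
  push(delta: string): void;
  flush(): void;
}

// Forward every delta from an async iterator (the shape most SDKs return)
// into the renderer, then flush once the stream ends.
async function pipe(
  deltas: AsyncIterable<string>,
  renderer: DeltaRenderer,
): Promise<void> {
  for await (const delta of deltas) {
    renderer.push(delta); // only the new chunk, never the accumulated string
  }
  renderer.flush(); // finalize any partially parsed trailing tokens
}
```

Each recipe below is essentially this loop with a different `deltas` source plugged in.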
Shared setup
All recipes assume you have:
```sh
pnpm add @generative-dom/core \
  @generative-dom/plugin-markdown-base \
  @generative-dom/plugin-markdown-inline \
  @generative-dom/plugin-markdown-heading \
  @generative-dom/plugin-markdown-code \
  @generative-dom/plugin-markdown-list \
  @generative-dom/plugin-markdown-link
```

For React examples, also install `@generative-dom/react` and make sure your bundler's peer deps point at React 18 or 19.
Pick the plugin set that matches the markdown your LLM actually produces. For Claude and GPT-4-class models, the six plugins above are usually enough. Add `@generative-dom/plugin-markdown-quote`, `@generative-dom/plugin-markdown-table`, and `@generative-dom/plugin-highlight` if your prompts produce blockquotes, tables, or syntax-highlighted code blocks.
Shared plumbing
The vanilla shape is the same across every recipe:
```ts
import { GenerativeDom } from '@generative-dom/core';
import { markdownBase } from '@generative-dom/plugin-markdown-base';
import { markdownInline } from '@generative-dom/plugin-markdown-inline';
import { markdownHeading } from '@generative-dom/plugin-markdown-heading';
import { markdownCode } from '@generative-dom/plugin-markdown-code';
import { markdownList } from '@generative-dom/plugin-markdown-list';
import { markdownLink } from '@generative-dom/plugin-markdown-link';

export function createRenderer(container: HTMLElement): GenerativeDom {
  return new GenerativeDom({
    container,
    plugins: [
      markdownBase(),
      markdownInline(),
      markdownHeading(),
      markdownCode(),
      markdownList(),
      markdownLink(),
    ],
  });
}
```

Every recipe imports that `createRenderer` and plugs a different streaming source into `.push()`. If you're following along, extract it into `src/renderer.ts` once and reuse it.
The React shape is equally uniform:
```tsx
import { useMemo } from 'react';
import { useGenerativeDom } from '@generative-dom/react';
import { markdownBase } from '@generative-dom/plugin-markdown-base';
// ...other plugin imports

export function useChatRenderer() {
  const plugins = useMemo(
    () => [markdownBase(), /* ...same list */],
    [],
  );
  return useGenerativeDom({ plugins });
}
```

Memoize the plugin array. Identity changes recreate the Generative DOM instance and throw away your rendered DOM.
What every recipe covers
- Vanilla — plain TypeScript, no framework
- React — using `@generative-dom/react` with `useGenerativeDom()`
- Common pitfalls — the mistakes we see most often
Start with the recipe that matches your SDK. If your setup is unusual, fetch + ReadableStream is the lowest common denominator.
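For that lowest-level case, the main subtlety is decoding bytes to text without splitting multi-byte characters across chunk boundaries. A hedged sketch: the `pump` helper is an illustrative name, and the real work is done by `TextDecoder`'s `stream` option:

```typescript
// Read a byte stream (e.g. response.body from fetch), decode it
// incrementally, and forward each text chunk into a push()-style sink.
async function pump(
  stream: ReadableStream<Uint8Array>,
  push: (chunk: string) => void,
): Promise<void> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true buffers incomplete multi-byte sequences between calls
    push(decoder.decode(value, { stream: true }));
  }
  push(decoder.decode()); // emit any buffered trailing bytes
}
```

Without `stream: true`, a UTF-8 character split across two network chunks decodes as replacement characters, which then flicker through the rendered output.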
Related
- Streaming guide — how the buffer, scheduler, and diff work
- Performance — tuning `debounceMs` and pool reuse
- Playground — try the full stack in-browser