# Recipe: Server-Sent Events + Generative DOM
Turn an `EventSource` feed of markdown tokens into a live-rendered document with three lines of glue.
## Why this recipe
Server-Sent Events (SSE) is the web-platform way to push text from a server to a browser over a single HTTP connection. Most LLM proxy endpoints expose their token stream this way because it's the simplest thing that works everywhere: no WebSocket upgrade, no polling, no SDK required. Generative DOM pairs with it naturally: every SSE `message` event carries a string payload that you hand to `push()`.
## What you need
- A browser runtime (SSE uses the built-in `EventSource`)
- An endpoint that returns `text/event-stream` with markdown tokens in the `data:` field
- `@generative-dom/core` plus plugins
- Optional: `@generative-dom/react`
```sh
pnpm add @generative-dom/core @generative-dom/plugin-markdown-base \
  @generative-dom/plugin-markdown-inline @generative-dom/plugin-markdown-heading \
  @generative-dom/plugin-markdown-code @generative-dom/plugin-markdown-list
```

## Server payload format
SSE payloads are line-based. Each token is sent as one event:
```
data: # Hello

data: , world

data: !

event: done
data:
```

The server writes each chunk with the `data:` prefix and terminates each event with a blank line. When the stream is finished, emit a custom `done` event so the client can call `flush()` and close.
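To make the framing concrete, here is a sketch of an endpoint that emits this format using only Node's built-in `http` types. The handler shape and the `tokens` iterable are assumptions standing in for a real model stream; it also assumes tokens contain no raw newlines (see the newline gotcha below):

```ts
import type { IncomingMessage, ServerResponse } from 'node:http';

// Sketch: stream an async iterable of markdown tokens as SSE events,
// then emit the custom `done` event the client listens for.
export async function sseHandler(
  req: IncomingMessage,
  res: ServerResponse,
  tokens: AsyncIterable<string>, // stand-in for a real LLM token stream
): Promise<void> {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
  });
  for await (const token of tokens) {
    res.write(`data: ${token}\n\n`); // one event per token; blank line ends it
  }
  res.write('event: done\ndata:\n\n'); // signals the client to flush and close
  res.end();
}
```

Any server framework can do the same; the only contract is the `data:` framing and the final `done` event.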
## Vanilla example
```ts
import { GenerativeDom } from '@generative-dom/core';
import { markdownBase } from '@generative-dom/plugin-markdown-base';
import { markdownInline } from '@generative-dom/plugin-markdown-inline';
import { markdownHeading } from '@generative-dom/plugin-markdown-heading';
import { markdownCode } from '@generative-dom/plugin-markdown-code';
import { markdownList } from '@generative-dom/plugin-markdown-list';

export function streamFromSSE(url: string, container: HTMLElement): () => void {
  const md = new GenerativeDom({
    container,
    plugins: [
      markdownBase(),
      markdownInline(),
      markdownHeading(),
      markdownCode(),
      markdownList(),
    ],
  });

  const source = new EventSource(url);

  source.onmessage = (event) => {
    md.push(event.data);
  };

  source.addEventListener('done', () => {
    md.flush();
    source.close();
  });

  source.onerror = () => {
    md.flush();
    source.close();
  };

  return () => {
    source.close();
    md.destroy();
  };
}
```

Three integration points:
- `onmessage` fires once per default-event payload. The payload is already a string; pass it straight to `push()`.
- `done` is a server-defined custom event. Listen with `addEventListener` because custom events do not fire `onmessage`.
- `onerror` fires on network errors and on normal EOF without an explicit close. Flush buffered content before cleanup so the user sees the full tail.
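One subtlety: `onerror` also fires before each automatic reconnect attempt, and closing the source there disables the built-in reconnection. If you want to keep reconnection, treat an error as fatal only once the source reports `CLOSED`. A sketch; the helper name is mine, not part of any package:

```ts
// EventSource.readyState is CONNECTING (0) while the browser retries after a
// dropped connection, and CLOSED (2) once it has given up for good.
export function isFatalSseError(readyState: number): boolean {
  return readyState === 2; // EventSource.CLOSED
}

// Variant of the vanilla example's error handler that preserves auto-reconnect:
// source.onerror = () => {
//   if (isFatalSseError(source.readyState)) {
//     md.flush();   // render whatever arrived
//     source.close();
//   }
//   // otherwise EventSource reconnects on its own; keep the buffered tail
// };
```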
## React example
```tsx
import { useEffect, useMemo } from 'react';
import { useGenerativeDom } from '@generative-dom/react';
import { markdownBase } from '@generative-dom/plugin-markdown-base';
import { markdownInline } from '@generative-dom/plugin-markdown-inline';
import { markdownHeading } from '@generative-dom/plugin-markdown-heading';
import { markdownCode } from '@generative-dom/plugin-markdown-code';
import { markdownList } from '@generative-dom/plugin-markdown-list';

export function StreamedDoc({ url }: { url: string }) {
  const plugins = useMemo(
    () => [
      markdownBase(),
      markdownInline(),
      markdownHeading(),
      markdownCode(),
      markdownList(),
    ],
    [],
  );

  const { ref, push, flush, reset } = useGenerativeDom({ plugins });

  useEffect(() => {
    reset();
    const source = new EventSource(url);
    const onMessage = (e: MessageEvent) => push(e.data);
    const onDone = () => {
      flush();
      source.close();
    };
    const onError = () => {
      flush();
      source.close();
    };
    source.addEventListener('message', onMessage);
    source.addEventListener('done', onDone);
    source.addEventListener('error', onError);
    return () => {
      source.removeEventListener('message', onMessage);
      source.removeEventListener('done', onDone);
      source.removeEventListener('error', onError);
      source.close();
    };
  }, [url, push, flush, reset]);

  return <div ref={ref} className="prose" />;
}
```

The `useEffect` owns the `EventSource` lifecycle. When `url` changes the effect cleans up the previous connection, calls `reset()` on Generative DOM, and opens a new one, so switching between two streams never bleeds rendered content.
## Newline encoding gotcha
SSE is line-based and uses `\n` as the field terminator. If a markdown token itself contains a newline (e.g. the boundary between a paragraph and a heading), the server must encode it as multiple `data:` lines in the same event:
```
data: # Title
data:
data: Next paragraph
```

The browser reassembles these into a single `event.data` with `\n` separators. On the server side, the usual recipe is:
```ts
// Pseudo-code on the server
function sendSSE(payload: string): string {
  return payload.split('\n').map((line) => `data: ${line}`).join('\n') + '\n\n';
}
```

If you skip that step, the raw newline either terminates the event early or leaves a line with no `data:` prefix, which the browser's parser silently ignores, so part of the token never reaches the client.
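To see why the encoding round-trips, here is a sketch of the reassembly the browser performs on receipt. `parseEventData` is a hypothetical helper, a simplified version of the spec's parsing rules that handles only `data:` lines:

```ts
// Sketch: reassemble one SSE event block the way the browser does — collect
// every `data:` line's value and join with '\n'. The real parser also tracks
// `event:`, `id:`, and `retry:` fields.
export function parseEventData(eventBlock: string): string {
  return eventBlock
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice(5).replace(/^ /, '')) // spec strips one leading space
    .join('\n');
}

// Round trip with the server-side recipe above:
// parseEventData('data: # Title\ndata:\ndata: Next paragraph')
//   === '# Title\n\nNext paragraph'
```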
## What this gets you
- Zero-SDK integration: works with any backend that can write `text/event-stream`
- Automatic reconnection on transient network errors (SSE's built-in behavior)
- Clean separation between transport (SSE) and rendering (Generative DOM); neither knows about the other
- Works in Safari, Firefox, Chrome, and all modern mobile browsers without polyfills
## Common pitfalls
- Forgetting the trailing blank line between events: without it, the browser buffers everything until the connection closes.
- Using `fetch` + `text/event-stream` by hand: if you don't need the auto-reconnect behavior of `EventSource`, you're better off with the fetch-streams recipe, which gives you finer control.
- Leaking connections on unmount: always call `source.close()` in cleanup. Browsers limit the number of concurrent SSE connections per origin.
- Not flushing on error: if the network hiccups mid-stream, whatever made it through should still render. Call `md.flush()` in `onerror`.
## Related
- fetch + ReadableStream — lower-level alternative
- OpenAI recipe — server uses the OpenAI SDK; client uses this recipe
- Anthropic recipe — same idea with Claude