# Stream-First Architecture
Generative DOM is a stream-first markdown parser: push input chunk by chunk, even mid-word or mid-syntax, and it renders incrementally without waiting for complete input. That makes it a natural fit for LLM output, WebSocket feeds, and other real-time content.
Traditional parsers re-parse everything on each update. Generative DOM processes only new chunks.
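For contrast, here is a minimal sketch of the batch pattern this replaces, using marked as the batch parser; `stream` and `output` are placeholders for your chunk source and container. Every chunk forces a full re-parse and re-render of the accumulated buffer, so work grows quadratically over the life of the stream.

```js
// Batch pattern: the whole buffer is re-parsed on every chunk.
import { marked } from 'marked';

let buffer = '';
for await (const chunk of stream) {
  buffer += chunk;                          // accumulate everything seen so far
  output.innerHTML = marked.parse(buffer);  // full re-parse + full re-render
}
```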
```js
import { GenerativeDom } from '@generative-dom/core';
import { markdownBase, markdownInline, markdownHeading } from '@generative-dom/plugins';

const md = new GenerativeDom({
  container: document.getElementById('output'),
  plugins: [markdownBase(), markdownInline(), markdownHeading()],
});

// Stream from any source — LLM, WebSocket, file reader
for await (const chunk of stream) {
  md.push(chunk); // Renders incrementally
}
md.flush();
```

Three lines of integration code. The rest is your streaming source.
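To make the mid-syntax claim concrete: a chunk boundary can fall anywhere, including inside a heading marker or an emphasis delimiter. A sketch, assuming the plugins registered above; the output shown is the expected result, not captured library output.

```js
// Chunk boundaries may split tokens; the parser resumes where it left off.
md.push('# He');       // partial heading
md.push('llo **wor');  // heading text continues; bold opens mid-chunk
md.push('ld**');       // bold closes
md.flush();            // finalize any still-open constructs
// Expected result: <h1>Hello <strong>world</strong></h1>
```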
| Feature | Generative DOM | marked | markdown-it | remark |
|---|---|---|---|---|
| Streaming | Native | Batch only | Batch only | Batch only |
| Plugin System | All syntax | Limited | Good | Rich |
| Bundle Size | < 5 KB | ~ 7 KB | ~ 12 KB | Large (multi-pkg) |
| Dependencies | Zero | Zero | Zero | Many |
| XSS Safety | Architectural (DOM API) | Sanitizer required | Sanitizer required | Sanitizer required |
| SSR | No (browser only) | Yes | Yes | Yes |
Generative DOM is not a general-purpose replacement for batch parsers. It is the tool for the job they were not designed for: rendering markdown that arrives piece by piece.
- **LLM Client Developers** — Build ChatGPT-like UIs with smooth token-by-token rendering. No flickering, no re-parsing, no scroll-position loss.
- **Real-Time App Developers** — WebSocket feeds, collaborative editors, live previews. Generative DOM handles arbitrary chunk boundaries without breaking syntax.
- **Security-Conscious Teams** — Zero innerHTML means zero XSS vectors from markdown content. Security is structural, not a filter bolted on after the fact (see the sketch after this list).
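Why "structural": output built through DOM APIs has no HTML-parsing sink for hostile input to exploit. A minimal illustration of the difference, using plain DOM code rather than Generative DOM internals:

```js
const payload = '<img src=x onerror=alert(1)>';

// DOM-API path: the payload becomes literal text inside a text node.
const safe = document.createElement('p');
safe.textContent = payload; // displayed as-is, never parsed as markup

// innerHTML path: the payload is parsed as HTML and onerror fires.
const unsafe = document.createElement('p');
unsafe.innerHTML = payload; // this sink is why batch parsers need a sanitizer
```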
Experience Generative DOM with all 11 plugins, streaming controls, chunk-size tuning, and event monitoring.
Copy-paste integrations for the streaming sources LLM clients actually use:
- `openai` SDK
- `@anthropic-ai/sdk`
- `streamText()` and `useChat()` (Vercel AI SDK)
- `EventSource` transport

Or see the full recipe index.
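As an example of the first recipe, a minimal sketch wiring the openai Node SDK's streaming API to the `md` instance from the quick-start above; the model name and prompt are placeholders.

```js
import OpenAI from 'openai';

const client = new OpenAI();
const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini', // placeholder model
  messages: [{ role: 'user', content: 'Explain streaming parsers' }],
  stream: true,
});

// Each event carries a token delta; push it straight into the renderer.
for await (const event of stream) {
  md.push(event.choices[0]?.delta?.content ?? '');
}
md.flush();
```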