Philosophy

Generative DOM exists because of a set of beliefs about how a streaming markdown renderer should work. This page explains the design decisions behind the project.

Why from Scratch?

Existing markdown parsers -- marked, markdown-it, remark, and others -- are excellent batch processors. They accept a complete markdown string and return HTML or an AST. But they are not designed for streaming.

Retrofitting streaming onto a batch parser produces a suboptimal user experience. The parser must either:

  1. Re-parse everything on each chunk, which is O(total) per chunk instead of O(new content). For a 10,000-line document streamed in 500 chunks, that means re-parsing the entire document 500 times.

  2. Buffer until a "safe" boundary, which introduces latency. Users see chunks of rendered output appearing in bursts rather than a smooth, character-by-character flow.

Generative DOM is stream-first by design. The tokenizer maintains a cursor and only processes new content. The differ only updates changed DOM nodes. The scheduler batches renders to animation frames. These are not afterthoughts bolted onto a batch parser -- they are the core architecture.
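The cursor idea can be sketched as follows. This is an illustrative sketch, not the actual Generative DOM API; the class and method names (`StreamBuffer`, `push`, `consume`) are hypothetical.

```typescript
// Hypothetical sketch: a buffer that tracks a cursor so each push()
// only exposes newly appended text for tokenization, never the whole
// document. This is what makes streaming O(new content).
class StreamBuffer {
  private text = "";
  private cursor = 0; // index of the first character not yet tokenized

  push(chunk: string): void {
    this.text += chunk;
  }

  // Returns the unprocessed tail and advances the cursor past it.
  consume(): string {
    const pending = this.text.slice(this.cursor);
    this.cursor = this.text.length;
    return pending;
  }
}

const buf = new StreamBuffer();
buf.push("# Hel");
console.log(buf.consume()); // "# Hel"
buf.push("lo\n");
console.log(buf.consume()); // "lo\n" -- only the new content
```

A real tokenizer would hold the cursor back when a token is still incomplete (e.g. an unclosed code fence), but the principle is the same: work is proportional to new input.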

Why Plugins for Everything?

In most markdown libraries, syntax support is baked into the parser. Adding custom syntax means monkey-patching internals or writing complex visitor transforms.

Generative DOM takes a different approach: the core knows nothing about markdown. It is a plugin host that provides:

  • A buffer with cursor tracking
  • A priority-based tokenization loop
  • An AST differ
  • A DOM renderer with object pooling
  • An event system

Every markdown feature -- headings, bold, lists, code blocks -- is a plugin. This means:

  • Separation of concerns. Each syntax rule is isolated in its own module. A bug in the list plugin cannot affect heading rendering.
  • Testability. Each plugin can be tested independently with focused test cases.
  • Replaceability. Dissatisfied with how bold rendering works? Replace the markdown-inline plugin with your own implementation.
  • Composability. Users include only the plugins they need. A project that only renders headings and paragraphs does not pay the cost of list parsing.
  • Extensibility. Custom syntax is a first-class citizen. The stress test plugins (custom elements, events, interactive elements) prove this works for non-markdown syntax too.
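The plugin model described above might be sketched like this. The `Plugin` interface and the plugin definitions are illustrative assumptions, not the real Generative DOM API.

```typescript
// Hypothetical sketch of a plugin host: the core knows nothing about
// markdown. Each plugin declares a priority and a match function, and
// the host tries plugins in priority order.
interface Token { type: string; text: string }

interface Plugin {
  name: string;
  priority: number; // higher-priority plugins are tried first
  match(input: string): Token | null;
}

const headingPlugin: Plugin = {
  name: "heading",
  priority: 100,
  match: (input) => {
    const m = /^(#{1,6}) (.+)$/m.exec(input);
    return m ? { type: `h${m[1].length}`, text: m[2] } : null;
  },
};

const paragraphPlugin: Plugin = {
  name: "paragraph",
  priority: 0, // fallback: matches any non-empty input
  match: (input) => (input ? { type: "p", text: input } : null),
};

function tokenize(input: string, plugins: Plugin[]): Token | null {
  const ordered = [...plugins].sort((a, b) => b.priority - a.priority);
  for (const p of ordered) {
    const token = p.match(input);
    if (token) return token;
  }
  return null;
}

console.log(tokenize("## Title", [paragraphPlugin, headingPlugin]));
// → { type: "h2", text: "Title" }
```

Because the host only sees the `Plugin` interface, swapping or omitting a syntax rule is a one-line change to the plugin list.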

Why Web Components?

Generative DOM uses Custom Elements (Web Components) for its stress test plugins. The choice is deliberate:

  • Standards-based. Custom Elements are a browser-native API, supported in all modern browsers. They do not require a framework, a build step, or a runtime library.
  • No framework lock-in. A Web Component works in vanilla JS, React, Vue, Svelte, Angular, or any other framework. Generative DOM users are not forced into a specific ecosystem.
  • Encapsulated styling. Shadow DOM allows components to style themselves without affecting the host page. A <md-plot> component's CSS does not leak into the user's application.
  • Lifecycle hooks. connectedCallback and disconnectedCallback provide clean setup and teardown. When a <md-clock> is removed from the DOM, its interval is automatically cleared.
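The teardown behavior described above follows the standard Custom Elements lifecycle. The sketch below is illustrative, not the actual <md-clock> source, and is written as a plain class so it runs outside a browser; a real component would extend HTMLElement and be registered with customElements.define("md-clock", MdClock).

```typescript
// Illustrative lifecycle pattern for a clock-like element. In a
// browser, connectedCallback/disconnectedCallback are invoked by the
// runtime when the element enters and leaves the DOM.
class MdClock {
  private timer: ReturnType<typeof setInterval> | null = null;
  ticks = 0;

  // Invoked when the element is attached: start the interval.
  connectedCallback(): void {
    this.timer = setInterval(() => { this.ticks += 1; }, 1000);
  }

  // Invoked when the element is removed: clear the interval, so a
  // discarded clock never leaks a running timer.
  disconnectedCallback(): void {
    if (this.timer !== null) {
      clearInterval(this.timer);
      this.timer = null;
    }
  }
}
```

The pairing matters for a streaming renderer: elements are constantly added and removed as the document grows and re-renders, and symmetric setup/teardown hooks keep that churn leak-free.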

Why No Dependencies?

The Generative DOM core package ships with zero production dependencies. This is a security decision.

Generative DOM renders user-provided content into the DOM. It sits at the trust boundary between untrusted input and the user's browser. A supply chain attack on any dependency could introduce XSS or data exfiltration.

Zero dependencies means:

  • Zero supply chain risk for the runtime. The only code that runs is code in this repository.
  • Full auditability. The entire codebase can be read and reviewed by a single person.
  • No version conflicts. Generative DOM never breaks because a transitive dependency shipped a breaking change.
  • Smaller bundle. No unused code from general-purpose utility libraries.

Development dependencies (Vitest, VitePress, TypeScript) are used for testing and documentation but do not ship in the production bundle.

Performance Philosophy

Generative DOM's performance strategy can be summarized as: do not optimize prematurely, but design the architecture so optimization is possible.

The core includes:

  • Object pooling -- DOM elements are recycled instead of created and destroyed. This is an architectural decision, not a micro-optimization. It changes how the renderer thinks about elements.
  • Incremental parsing -- The tokenizer resumes from a cursor, not from the beginning. This is an architectural decision that makes streaming O(new content) instead of O(total content).
  • rAF scheduling -- Renders are batched to animation frames. This is an architectural decision that prevents DOM thrashing during rapid push() calls.
  • Debouncing -- A configurable minimum interval between renders. This is a tuning knob, not a hack.

These features are built into the core because they are hard to add later. Object pooling requires every plugin to use ctx.createElement instead of document.createElement. Incremental parsing requires the buffer to track a cursor. rAF scheduling requires the renderer to be decoupled from push(). These are architectural constraints that must be present from the start.
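The pooling idea can be sketched as follows. This is an illustrative sketch with hypothetical names; it uses plain records in place of DOM elements so it runs anywhere, but in the renderer the pooled values would be the elements handed out by ctx.createElement.

```typescript
// Illustrative object pool: released items are reset and handed out
// again instead of being allocated fresh. Recycling avoids both
// allocation cost and garbage-collection pressure during rapid renders.
class Pool<T> {
  private free: T[] = [];
  constructor(
    private create: () => T,
    private reset: (item: T) => void,
  ) {}

  acquire(): T {
    const item = this.free.pop();
    if (item) {
      this.reset(item); // recycle: wipe stale state before reuse
      return item;
    }
    return this.create(); // pool empty: allocate a new one
  }

  release(item: T): void {
    this.free.push(item);
  }
}

const pool = new Pool(
  () => ({ tag: "", text: "" }),
  (el) => { el.tag = ""; el.text = ""; },
);

const first = pool.acquire();
first.tag = "h1";
pool.release(first);
const second = pool.acquire(); // same object, recycled and reset
console.log(first === second, second.tag); // true ""
```

This is why pooling has to be architectural: it only works if every plugin acquires elements through the pool, which is exactly the constraint ctx.createElement imposes.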

What Generative DOM does not do:

  • Virtualize the DOM (render only visible elements). This could be added as a plugin but is not in the core.
  • Use Web Workers for parsing. The synchronous plugin dispatch model does not support this without a major redesign.
  • Pre-compile plugins. All matching is done at runtime with standard regex and string operations.

The philosophy is: build the architecture right, and performance follows. The highest-leverage optimizations (pooling, incremental parsing, scheduling) are in the foundation. The rest is left for the future, when profiling shows where it is needed.