
From Static to Reactive: Rethinking AI Writing

Last updated: November 29, 2025

How the "stale context" problem in AI editors led me to build a document system where text blocks behave like spreadsheet cells.


The Spark

It started with a frustration I couldn't shake.

I was using AI writing tools (Claude, ChatGPT, Gemini, Notion AI), and they all had the same fundamental problem: once the AI generates text, it's dead. You edit the paragraph before it, changing the context completely, and the AI-generated analysis below becomes irrelevant. You have to regenerate everything manually.

It felt wrong. If AI is supposed to understand context, why doesn't it stay in sync with the document as it evolves?

I wanted a document editor where AI blocks were alive—where they could "watch" other parts of the document and automatically update when their source context changed. Like a spreadsheet, but for rich text.

When I saw a tweet from Vercel's CEO about reactive documents, it clicked. The idea wasn't just interesting; it was the future of AI writing. The scope for features was immense: charts that update based on text, summaries that regenerate when content changes, analyses that stay fresh as the document evolves.

I built Flowdocs to solve this.


The Vision

Documents that Think.

I wanted to build an editor where:

  1. AI blocks subscribe to context - Text and chart blocks can depend on other sections
  2. Automatic re-evaluation - When you edit the source, dependent blocks update themselves
  3. Non-destructive suggestions - AI edits appear as suggestions (like Google Docs), not replacements
  4. Fluid typing experience - 60fps editing even while complex AI tasks run in the background

The Stack

**Next.js** • **React 19** • **TipTap** • **Zustand** • **Google Gemini** • **PostgreSQL** • **Better-Auth**


Engineering Highlights

1. The Reactive Engine (LLM Orchestrator)

This is the core innovation. Integrating AI into a text editor usually feels "bolted on": you generate text once, and it's static. Keeping AI outputs in sync with a changing document context is hard.

The Solution: I built a client-side orchestration engine that runs alongside the TipTap editor instance.

  • Dependency Tracker: The orchestrator tracks "dependency scopes" for custom nodes: which blocks depend on which sections of the document
  • Smart Re-evaluation: Uses content hashing to detect when significant changes occur in a scope, triggering re-generation only when necessary
  • Debouncing & Queuing: Manages a queue of update requests to prevent API thrashing and race conditions
  • Result: The UI remains responsive while the AI works in the background. No stuttering, no blocking—just smooth, reactive updates.
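
The orchestration loop described above can be sketched in TypeScript. This is a minimal, illustrative version: the names `DependencyScope`, `findStaleBlocks`, and `createReevalQueue` are assumptions for the sketch, not Flowdocs' actual API.

```typescript
import { createHash } from "crypto";

type BlockId = string;

interface DependencyScope {
  blockId: BlockId;   // the AI block that consumes this scope
  sourceText: string; // current text of the watched section
  lastHash: string;   // hash of the text at the last generation
}

const hash = (text: string): string =>
  createHash("sha256").update(text).digest("hex");

// Content hashing: a block is stale only if its watched text actually changed.
function findStaleBlocks(scopes: DependencyScope[]): BlockId[] {
  return scopes
    .filter((s) => hash(s.sourceText) !== s.lastHash)
    .map((s) => s.blockId);
}

// Debounced queue: collect stale block ids and flush once typing pauses,
// so rapid edits trigger one regeneration batch instead of many API calls.
function createReevalQueue(flush: (ids: BlockId[]) => void, delayMs = 800) {
  const pending = new Set<BlockId>();
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (ids: BlockId[]) => {
    ids.forEach((id) => pending.add(id));
    clearTimeout(timer);
    timer = setTimeout(() => {
      flush([...pending]);
      pending.clear();
    }, delayMs);
  };
}
```

The hash comparison is what makes re-evaluation "smart": editing whitespace elsewhere in the document never touches a block whose scope is unchanged.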

2. Custom "Smart" Nodes (ProseMirror Extensions)

Instead of treating the document as a giant string of HTML, I created custom ProseMirror/TipTap nodes that are React components rendered inside the editor.

The Implementation:

  • TextNode & ChartNode: These aren't just HTML—they hold their own state (loading status, error states, prompt configuration)
  • Communication Layer: Nodes communicate back to the orchestrator, requesting re-generation when their dependencies change
  • Data Visualization: The ChartNode takes raw JSON data generated by the LLM and renders interactive charts (Bar, Pie) on the fly using Recharts
  • Result: The editor becomes a canvas for intelligent, self-updating components rather than just static text.
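
The communication layer between smart nodes and the orchestrator can be sketched independently of TipTap. The event names, `NodeBus`, and `SmartTextNode` below are illustrative stand-ins, not the real implementation.

```typescript
type NodeEvent =
  | { kind: "requestRegeneration"; nodeId: string; prompt: string }
  | { kind: "statusChanged"; nodeId: string; status: "idle" | "loading" | "error" };

type Listener = (event: NodeEvent) => void;

// A tiny event bus the orchestrator listens on.
class NodeBus {
  private listeners: Listener[] = [];
  subscribe(fn: Listener): () => void {
    this.listeners.push(fn);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }
  emit(event: NodeEvent): void {
    this.listeners.forEach((l) => l(event));
  }
}

// A smart node owns its own state (loading/error) and asks the
// orchestrator for a re-generation when its dependencies change.
class SmartTextNode {
  status: "idle" | "loading" | "error" = "idle";
  constructor(public id: string, private bus: NodeBus) {}

  onDependencyChanged(prompt: string): void {
    this.status = "loading";
    this.bus.emit({ kind: "statusChanged", nodeId: this.id, status: "loading" });
    this.bus.emit({ kind: "requestRegeneration", nodeId: this.id, prompt });
  }
}
```

In the real editor, each node is a React component rendered inside ProseMirror via TipTap's node-view mechanism; the bus pattern above just shows how state and regeneration requests flow back to the orchestrator.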

3. Non-Destructive "Suggested Edits" (Diff Extension)

The biggest UX challenge: how do you show AI changes without destroying the user's work?

The Approach: I wrote a custom extension using ProseMirror Decorations.

  • Instead of overwriting text, it overlays AI suggestions visually on top of the existing content
  • Users get a "Google Docs Suggestion Mode" experience—Accept or Reject changes granularly
  • The document history remains clean, with AI generations tracked separately
  • Result: Users maintain full control. The AI suggests, but never overwrites.
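
The suggestion model underneath the decorations can be sketched as plain data: a suggestion records a span and a proposed replacement, and only an explicit "accept" merges it into the document. The `Suggestion` shape here is an assumption for illustration; Flowdocs renders these via ProseMirror Decorations rather than string slicing.

```typescript
interface Suggestion {
  id: string;
  from: number;        // character offsets into the document
  to: number;
  replacement: string; // the AI's proposed text for [from, to)
}

// Accepting merges the replacement into the document text.
function acceptSuggestion(doc: string, s: Suggestion): string {
  return doc.slice(0, s.from) + s.replacement + doc.slice(s.to);
}

// Rejecting just drops the suggestion; the document is never touched.
function rejectSuggestion(suggestions: Suggestion[], id: string): Suggestion[] {
  return suggestions.filter((s) => s.id !== id);
}
```

Because suggestions live outside the document until accepted, the undo history stays clean and the user can triage AI edits one at a time.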

4. High-Performance Syncing

Managing state across the editor, orchestrator, and database without killing performance was crucial.

The Solution:

  • Optimistic UI: Interface updates immediately while data syncs in the background
  • Debounced Saves: Custom hook that auto-saves only after the user pauses typing
  • Decoupled AI State: AI generation state is separate from document history, allowing non-blocking updates
  • Result: Fluid 60fps typing experience even while complex AI tasks run in the background.
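
The core of the debounced-save hook, stripped of React, is a few lines. `createDebouncedSave` and the 1-second delay are placeholders for the sketch.

```typescript
// Schedules saveFn only after the caller has been quiet for delayMs;
// every new call cancels the previously scheduled save.
function createDebouncedSave<T>(saveFn: (doc: T) => void, delayMs = 1000) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (doc: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => saveFn(doc), delayMs);
  };
}
```

Combined with optimistic UI, the user sees their keystrokes instantly while the persistence layer only hears about the document once they pause.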

Database Architecture

The schema supports both ephemeral and permanent workflows:

  • History Tracking: The History model links specifically to prompt/content pairs, enabling "undo/redo" flows for AI generations distinct from standard text undo
  • Rate Limiting: Native UserDailyLimit tracking to manage AI API costs per user tier
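
A per-user daily limit check is simple to express. The field names on `UserDailyLimit` below are assumptions sketched from the description above, not the actual schema.

```typescript
interface UserDailyLimit {
  userId: string;
  date: string; // e.g. "2025-11-29"
  used: number; // generations consumed today
  cap: number;  // per-tier daily allowance
}

// A user may generate if the counter is from a previous day
// (it resets implicitly) or is still under their tier's cap.
function canGenerate(limit: UserDailyLimit, today: string): boolean {
  if (limit.date !== today) return true;
  return limit.used < limit.cap;
}

function recordGeneration(limit: UserDailyLimit, today: string): UserDailyLimit {
  return limit.date === today
    ? { ...limit, used: limit.used + 1 }
    : { ...limit, date: today, used: 1 };
}
```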

What I Learned

The hardest part of building wasn't the AI integration. It was interpreting context intelligently: keeping dependent blocks in the loop and reactive enough that nothing ever felt stale, without regenerating on every word added. The system had to feel smart while keeping API costs low.

I also learned that constraint breeds creativity. By limiting AI blocks to specific dependency scopes rather than "the whole document," I made the system both more performant and more predictable.


Features that can be added

Real-time Collaboration - Since state syncing is already built, adding WebSockets (via Tiptap Collab) is the natural next step.

Vector Search Context (RAG) - Instead of just "previous paragraph" context, use Retrieval Augmented Generation to let the AI "see" the whole document library.

Advanced Chart Types - Expand beyond Bar/Pie to Line, Scatter, and custom data visualizations.

More Integrations - Expand beyond text and chart blocks into images and other media types.

Plugin Marketplace - Allow users to create and share custom reactive node types.