Last updated: November 29, 2025
How the "stale context" problem in AI editors led me to build a document system where text blocks behave like spreadsheet cells.
It started with a frustration I couldn't shake.
I was using AI writing tools (Claude, ChatGPT, Gemini, Notion AI), and they all had the same fundamental problem: once the AI generates text, it's dead. You edit the paragraph before it, the context changes completely, and the AI-generated analysis below becomes irrelevant. You have to manually regenerate everything.
It felt wrong. If AI is supposed to understand context, why doesn't it stay in sync with the document as it evolves?
I wanted a document editor where AI blocks were alive—where they could "watch" other parts of the document and automatically update when their source context changed. Like a spreadsheet, but for rich text.
When I saw a tweet from Vercel's CEO about reactive documents, it clicked. The idea wasn't just interesting; it felt like the future of AI writing. The scope for features was immense: charts that update based on text, summaries that regenerate when content changes, analyses that stay fresh as the document evolves.
I built Flowdocs to solve this.
Documents that Think.
I wanted to build an editor where AI blocks stay alive: they watch other parts of the document and regenerate themselves when their source context changes. The stack:
**Next.js** • React 19 • **TipTap** • Zustand • Google Gemini • PostgreSQL • Better-Auth
This is the core innovation. Integrating AI into a text editor usually feels "bolted on": you generate text once, and it's static. Keeping AI outputs in sync with changing document context is hard.
The Solution: I built a client-side orchestration engine that runs alongside the TipTap editor instance.
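To make the idea concrete, here is a minimal sketch of what such an orchestrator can look like. It's illustrative only, the class, method, and field names are my assumptions rather than the actual Flowdocs code: each AI block declares which blocks it watches, and the engine re-runs only the blocks whose sources actually changed.

```ts
// Illustrative sketch of the orchestration idea (not the actual Flowdocs code).
// Each AI block declares which block IDs it watches; when a watched block's
// content changes, the orchestrator re-runs just that AI block.

type BlockId = string;

interface AiBlock {
  id: BlockId;
  watches: BlockId[]; // dependency scope, e.g. "the two blocks above"
  regenerate: (context: string) => Promise<string>;
}

export class Orchestrator {
  private contentHashes = new Map<BlockId, string>();
  private aiBlocks = new Map<BlockId, AiBlock>();

  register(block: AiBlock) {
    this.aiBlocks.set(block.id, block);
  }

  // Called on (debounced) editor updates with the latest plain text of every block.
  async onDocumentChange(blocks: Map<BlockId, string>) {
    const changed = new Set<BlockId>();
    for (const [id, text] of blocks) {
      const hash = `${text.length}:${text.slice(0, 32)}`; // cheap stand-in for a real hash
      if (this.contentHashes.get(id) !== hash) {
        this.contentHashes.set(id, hash);
        changed.add(id);
      }
    }
    // Re-run only the AI blocks whose watched sources actually changed.
    for (const ai of this.aiBlocks.values()) {
      if (ai.watches.some((dep) => changed.has(dep))) {
        const context = ai.watches.map((dep) => blocks.get(dep) ?? "").join("\n\n");
        await ai.regenerate(context);
      }
    }
  }
}
```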
Instead of treating the document as a giant string of HTML, I created custom ProseMirror/TipTap nodes that are React components rendered inside the editor.
The Implementation:
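As a rough sketch of that approach (the component and attribute names here are mine, not necessarily what Flowdocs uses), a custom TipTap node can hand its rendering off to a React component via ReactNodeViewRenderer, which is what lets an AI block show loading, stale, and generated states inline:

```tsx
// Sketch of a reactive AI block as a custom TipTap node with a React node view.
import { Node, mergeAttributes } from "@tiptap/core";
import { ReactNodeViewRenderer, NodeViewWrapper, NodeViewProps } from "@tiptap/react";

function AiBlockView({ node }: NodeViewProps) {
  // The node view is plain React, so it can render spinners, previews, charts, etc.
  return (
    <NodeViewWrapper className="ai-block">
      <div data-status={node.attrs.status}>{node.attrs.output || "Generating…"}</div>
    </NodeViewWrapper>
  );
}

export const AiBlock = Node.create({
  name: "aiBlock",
  group: "block",
  atom: true, // treated as a single unit; its content is managed by the app, not typed by the user

  addAttributes() {
    return {
      output: { default: "" },
      status: { default: "idle" }, // idle | stale | generating
      watches: { default: [] },    // block IDs this node depends on
    };
  },

  parseHTML() {
    return [{ tag: "ai-block" }];
  },

  renderHTML({ HTMLAttributes }) {
    return ["ai-block", mergeAttributes(HTMLAttributes)];
  },

  addNodeView() {
    return ReactNodeViewRenderer(AiBlockView);
  },
});
```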
The biggest UX challenge: how do you show AI changes without destroying the user's work?
The Approach: I wrote a custom extension using ProseMirror Decorations.
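Here's a hedged sketch of how that can look with TipTap and ProseMirror decorations; the plugin key, CSS class, and metadata shape are assumptions for illustration. The point is that proposed AI text is highlighted as an overlay rather than written into the document, so the user's work stays intact until they accept the change:

```ts
// Sketch of a non-destructive preview layer built on ProseMirror decorations.
import { Extension } from "@tiptap/core";
import { Plugin, PluginKey } from "@tiptap/pm/state";
import { Decoration, DecorationSet } from "@tiptap/pm/view";

const previewKey = new PluginKey("aiPreview");

export const AiPreview = Extension.create({
  name: "aiPreview",

  addProseMirrorPlugins() {
    return [
      new Plugin({
        key: previewKey,
        state: {
          init: () => DecorationSet.empty,
          apply(tr, old) {
            // A transaction can carry a proposed range to highlight instead of replacing text.
            const meta = tr.getMeta(previewKey) as { from: number; to: number } | undefined;
            if (meta) {
              return DecorationSet.create(tr.doc, [
                Decoration.inline(meta.from, meta.to, { class: "ai-preview" }),
              ]);
            }
            // Keep existing highlights positioned correctly as the document changes.
            return old.map(tr.mapping, tr.doc);
          },
        },
        props: {
          decorations(state) {
            return previewKey.getState(state);
          },
        },
      }),
    ];
  },
});
```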
Managing state across the editor, orchestrator, and database without killing performance was crucial.
The Solution:
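One way this can be wired, and this is a sketch under my own assumptions rather than the actual implementation, is a single Zustand store acting as the shared source of truth for block status and output, so the editor UI and the orchestrator read and write the same state while database persistence happens separately:

```ts
// Illustrative Zustand store shared by the editor UI and the orchestrator.
import { create } from "zustand";

type BlockStatus = "idle" | "stale" | "generating" | "error";

interface FlowState {
  statuses: Record<string, BlockStatus>;
  outputs: Record<string, string>;
  setStatus: (id: string, status: BlockStatus) => void;
  setOutput: (id: string, output: string) => void;
}

export const useFlowStore = create<FlowState>((set) => ({
  statuses: {},
  outputs: {},
  setStatus: (id, status) =>
    set((s) => ({ statuses: { ...s.statuses, [id]: status } })),
  setOutput: (id, output) =>
    set((s) => ({ outputs: { ...s.outputs, [id]: output } })),
}));
```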
The schema supports both ephemeral and permanent workflows:
UserDailyLimit tracking to manage AI API costs per user tier

The hardest part of building wasn't the AI integration. It was making sure context is interpreted smartly and kept in the loop: reactive enough that blocks never feel stale, but not regenerating on every word typed, so the editor feels smart while the costs stay low.
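A rough sketch of that update policy is below; the quota check and the orchestrator re-run are hypothetical hooks here, not real Flowdocs APIs. Regeneration is debounced so blocks refresh when the user pauses typing rather than on every keystroke, and the per-user daily limit is consulted before each AI call:

```ts
// Sketch of the "don't update on every word" policy with a cost guard.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

export function makeChangeHandler(opts: {
  hasRemainingDailyQuota: () => Promise<boolean>; // e.g. backed by the UserDailyLimit table
  regenerateStaleBlocks: () => Promise<void>;     // e.g. the orchestrator re-run
  delayMs?: number;
}) {
  // Wait for a pause in typing before touching the AI, keeping costs predictable.
  return debounce(async () => {
    if (await opts.hasRemainingDailyQuota()) {
      await opts.regenerateStaleBlocks();
    }
  }, opts.delayMs ?? 800);
}
```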
I also learned that constraint breeds creativity. By limiting AI blocks to specific dependency scopes rather than "the whole document," I made the system both more performant and more predictable.
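One illustrative way to encode that constraint (again, an assumption, not the actual schema) is a closed union of scopes, so a block can never silently depend on the entire document:

```ts
// A block watches the previous block, an explicit set of blocks, or a section,
// but never "everything" by default.
type DependencyScope =
  | { kind: "previous-block" }
  | { kind: "blocks"; ids: string[] }
  | { kind: "section"; headingId: string };
```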
Real-time Collaboration - Since state syncing is already built, adding WebSockets (via Tiptap Collab) is the natural next step.
Vector Search Context (RAG) - Instead of just "previous paragraph" context, use Retrieval Augmented Generation to let the AI "see" the whole document library.
Advanced Chart Types - Expand beyond Bar/Pie to Line, Scatter, and custom data visualizations.
More integrations - Expand beyond text and charts into images and other media types.
Plugin Marketplace - Allow users to create and share custom reactive node types.