ChatGPT, Claude, Gemini — they can all review a passage of text. They’ll spot weak dialogue, suggest a stronger opening, recommend more sensory detail in descriptions. They do this well. But there is one task none of them is designed for: actively tracking what you’re building chapter by chapter.
What an LLM does with your text
Modern language models have enormous context windows — Claude and Gemini can “see” hundreds of pages of text at once. So technically: yes, a model can have all your chapters in front of it simultaneously. The problem isn’t memory. The problem is what the model does with that text.
An LLM reads text as a flat stream of words. It doesn’t build structure from it. It doesn’t extract relationships: that the unnamed character in chapter 1 is the same person who appears with a name in chapter 4. That a thread opened in chapter 3 was never closed. That the physical description of the protagonist changed between scenes.
The model sees words. It does not see the story those words create.
There’s an additional problem: with long texts, analysis quality drops. This is the well-known “lost in the middle” phenomenon — models retrieve and use information from the middle of their context less reliably than information near the beginning or end. The longer the novel, the more the middle chapters fall out of focus.
A novel is not a document — it’s a sequence
When you write a novel, you’re building something that has internal memory. The reader remembers the antagonist’s face described in chapter 1 when he appears nameless in chapter 6. They remember that the protagonist said something in chapter 3 that they shouldn’t have said. They remember a certain object — mentioned in passing at the start — that returns in the finale and takes on new meaning.
This continuity is the mechanism that makes a novel work. Feedback that ignores it may be technically correct — but it’s detached from what you actually wrote. Sometimes it will suggest adding something that’s already in the text, just three chapters earlier, hidden in a gesture or a line of dialogue.
What a novel analysis tool needs to do
To give meaningful feedback on chapter 8, a tool needs to actively track:
- how character X looked and behaved in chapters 1–7
- which threads have been opened and which have closed
- what the reader already knows — and what they don’t yet
- which elements recur, which evolve, which disappear
This isn’t a matter of “pasting all the chapters into one window.” It’s a matter of structuring knowledge about the novel as you read it — the way an attentive editor with a notebook does.
How Vellam approaches the problem
Vellam reads chapters one by one — the way we read novels. After each chapter it actively updates its databases: character profiles (appearance, behaviour, role in that specific scene), locations, open and closed threads. This isn’t just “seeing the text” — it’s structuring what is in the text.
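The read-then-update loop described here can be sketched in a few lines. This is a simplified outline of sequential reading with accumulated state, not Vellam’s actual code; `extract_facts`, `update_state`, and `review` are invented placeholders for the model-backed steps:

```python
def analyse_novel(chapters, extract_facts, update_state, review):
    """Sequential analysis: each chapter is reviewed with the
    accumulated, structured state of every previous chapter behind it.
    The three callables are hypothetical stand-ins for model calls."""
    state = {}       # structured knowledge: characters, locations, threads
    reviews = []
    for number, text in enumerate(chapters, start=1):
        facts = extract_facts(text)               # what this chapter adds
        state = update_state(state, number, facts)
        reviews.append(review(text, state))       # feedback grounded in history
    return reviews, state
```

The design choice that matters is the order of operations: the state is updated after each chapter, so the review of chapter 8 always runs against a catalogue built from chapters 1–7, never against raw concatenated text.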
When Vellam analyses chapter 8, it has seven chapters read and catalogued behind it. It can say things that flat text analysis cannot:
- “The physical description of this character slightly diverges from chapter 1 — worth checking for consistency.”
- “This thread appears here for the first time since chapter 4. If the gap is intentional it works well. If not — it might be worth briefly reminding the reader of the context.”
- “The character maintains full continuity with previous scenes here — their reaction to this situation is consistent with what we’ve known about them since chapter 2.”
- “The motif introduced in chapter 3 returns here in altered form. This is one of the stronger moments in this part of the manuscript.”
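A check like the second one — a thread resurfacing after a long gap — falls out of the tracked state almost for free. A hypothetical sketch, assuming threads are recorded as simple tuples; the shape and the `gap` threshold are invented for illustration:

```python
def stale_threads(threads, current_chapter, gap=3):
    """Flag open threads untouched for at least `gap` chapters —
    the 'first appearance since chapter 4' kind of note.
    `threads` holds (description, opened_in, last_touched, closed_in)
    tuples; this shape is an assumption for the example."""
    return [
        description
        for description, opened_in, last_touched, closed_in in threads
        if closed_in is None and current_chapter - last_touched >= gap
    ]
```

Flat text analysis would have to rediscover the gap from scratch on every pass; with per-chapter bookkeeping, it is a one-line filter.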
This isn’t generic feedback. This is feedback about this specific novel, based on its history — not just the text of the current chapter.
Who this matters for
If you’re writing a short story or another short-form piece, a language model will probably suffice: one text, one query. If you’re writing a novel — and you care about feedback that takes into account what you’ve actually built — you need a tool that tracks that story alongside you.