Proof Case Study

The Errors Became the Spec.

How a 400-page editorial design publication was transferred to the web — and how the process taught itself to be nearly automatic.

This case study is about a publication, a process, and a feedback loop that most AI-assisted workflows never create. It is about what happens when you instruct a system to document its own failures before those failures occur.

It ends with eight consecutive chapters requiring zero corrections.


The Project

A 400-page editorial design publication — rich with illustrations, complex typography, and a specific visual style developed over years — needed to exist on the web. Not as a PDF viewer. As a proper web publication, with the layout, hierarchy, and aesthetic of the original translated into HTML and CSS.

The conventional path would have been an InDesign plugin export, or a dedicated layout-to-web tool. Both exist. Both would have been faster in the first hour.

Neither was chosen. The reasons were cost and control. Plugin-based exports produce output the client cannot meaningfully maintain or extend without the original tool and the original operator. A HITM (Human-in-the-Middle) structured web transfer produces output the client owns completely — readable HTML, editable CSS, no proprietary dependencies. The client can hand it to any developer, in any future, and the work is legible.

The tradeoff: slower than a plugin for the first chapter. Faster than a plugin by the final eight chapters.


The Constraint That Changed Everything

Before the first chapter was touched, a single instruction was written into the project specification:

Document everything. Every error, every correction, every decision. The documentation is part of the deliverable.

This was not written because errors were expected. It was written because errors were inevitable — and because in a HITM build, an error that is documented becomes a constraint, and a constraint governs every session that follows. An undocumented error disappears into conversation history. A documented error becomes part of the spec.

That instruction, given before anything went wrong, determined how the entire project would develop.


The First Hours: What Went Wrong

The early sessions were imperfect. The LLM forgot images — entire illustrations simply absent from the transferred output. It inverted colors on certain elements, apparently misreading the source styling. It made formatting decisions that contradicted the visual logic of the original. Small errors, correctable individually, but representing real divergence from the source material.

Each error was corrected. And each correction was documented — not in a separate notes file created retrospectively, but in the living spec the LLM was maintaining as part of its own workflow. The document of learnings grew alongside the work itself. By the time the third chapter was complete, the spec contained not just the structural rules of the transfer but a precise record of what had failed and what the correct behavior was.

This is the architectural move that most AI-assisted workflows miss. The LLM's context window resets. Its memory of the previous session is either absent or unreliable. In a conversational workflow, every new session starts from near-zero, and the same errors recur because there is nowhere to put the corrections. In a HITM workflow, the corrections live in the spec. The spec travels into every session. The errors do not recur.
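The mechanism can be sketched in a few lines. This is an illustrative sketch only, not the project's actual tooling: the file names (spec.md, learnings.md) and function names are hypothetical, standing in for whatever form the living spec takes.

```python
# Sketch: the spec as external memory across stateless sessions.
# File names and functions are hypothetical, for illustration only.
from pathlib import Path

SPEC = Path("spec.md")            # structural rules of the transfer
LEARNINGS = Path("learnings.md")  # constraints derived from past errors


def build_session_prompt(chapter_source: str) -> str:
    """Every new session is assembled from the full spec, so documented
    corrections travel into each fresh context window."""
    spec = SPEC.read_text() if SPEC.exists() else ""
    learnings = LEARNINGS.read_text() if LEARNINGS.exists() else ""
    return "\n\n".join([spec, learnings, chapter_source])


def record_correction(error: str, correction: str, rule: str) -> None:
    """A documented error becomes a constraint: append it to the living
    learnings file instead of leaving it in conversation history."""
    with LEARNINGS.open("a", encoding="utf-8") as f:
        f.write(f"- Error: {error}\n"
                f"  Correction: {correction}\n"
                f"  Rule: {rule}\n")
```

The design point is the loop, not the code: corrections are written to a file the human controls, and that file is prepended to every future session, so the same error has nowhere to recur.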


The Arc: From Correction to Automatic

The improvement was not sudden. It was structural and cumulative.

Early chapters: errors caught, corrections made, learnings added to the spec. The process was collaborative and corrective — more human involvement, more review, more intervention.

Middle chapters: the accumulated constraints began to do their work. Fewer errors, more predictable output, less correction required per chapter. The LLM was executing within an increasingly precise boundary definition, and the output reflected that precision.

Final eight chapters: no corrections required. The process had become nearly automatic — not because the LLM had become more capable, but because the specification had become more complete. Every failure mode that had appeared in the early sessions was now a documented constraint. The system had learned, in the only way a stateless AI system can learn: through structured external memory maintained by the human operator.

Eight consecutive chapters. Zero corrections.


What This Demonstrates About HITM

The spec is not a document you write once before the build. It is a living artifact you maintain throughout.

This is a refinement of the basic HITM principle that is worth stating explicitly. The initial spec defines constraints before execution begins. But in a long-form project — one with dozens of chapters, or modules, or components — the spec must grow with the build. Every error is an undiscovered constraint. Every correction is new architectural knowledge. The discipline is to capture that knowledge in the spec immediately, not retrospectively.

The instruction "document everything" is a meta-constraint: a rule about how the system should respond to its own outputs. It costs almost nothing to write. It produces a compounding return across the entire project.

Statelessness is only a problem if you have no external memory.

LLMs do not remember previous sessions. This is frequently cited as a limitation of AI-assisted building. In a HITM workflow, it is not a limitation — it is a design condition to be architected around. The spec is the external memory. The handoff document is the external memory. The living learning document is the external memory. The human maintains the state that the AI cannot. The system remains coherent across sessions because the human's structural discipline holds it together, not because the AI remembers.

The output is cheaper and more controllable than the plugin alternative.

The plugin export would have been faster in the first session. It would also have produced output locked to a specific tool, with styling embedded in ways that are difficult to maintain, extend, or hand off. The HITM transfer produced clean HTML and CSS — readable, editable, tool-independent. The client owns the output completely. Any developer can work with it. Any future version of the publication can be built on top of it without returning to the original tool or the original operator.

Slower start. Much stronger finish. And a process that, by the end, required almost no human intervention per chapter.


The Learning Document as a Deliverable

The documentation produced during this project is itself a HITM artifact. It contains:

Every error encountered during the transfer, described precisely. The correction applied. The constraint derived from the correction. The rule added to the spec as a result.
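A single entry in that format might look like the following. The specific wording and layout are hypothetical; the error type (inverted colors) is one of the failure modes described above.

```markdown
## Learning 07 — Inverted colors
- Error: foreground and background colors swapped on certain elements.
- Correction: restored the palette exactly as defined in the source styling.
- Constraint: never infer colors from context; copy them from the source.
- Rule added to spec: "All color values are taken verbatim from the
  source styling, never reconstructed by visual inference."
```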

By the end of the project, this document was a complete operational guide for transferring editorial design publications to the web using AI-assisted structured building. Not a theoretical guide. An empirical one — derived from 400 pages of actual work, actual errors, and actual corrections.

This is what HITM produces that conversational AI-assisted building does not: knowledge that survives the project. The learning document can be used for the next publication transfer, by the same operator or a different one. The errors do not need to be made again. The constraints are already written.


What This Case Study Demonstrates

Documentation is a structural decision, not an administrative task. The instruction to document everything was written into the spec before the first error occurred. That timing is everything. Retrospective documentation captures some of what happened. Prospective documentation — writing the rule that errors must be captured before errors exist — creates a system that improves automatically.

The errors became the spec. Each failure in the early chapters added a constraint. Each constraint narrowed the space in which the AI could produce incorrect output. By the final chapters, the constraint space was tight enough that the AI produced correct output consistently, without correction. The failures were not wasted. They were structural input.

A slower tool can be a better tool. The plugin export was faster in hour one. The HITM transfer was better across 400 pages — cheaper, more controllable, and producing output the client fully owns. Speed at the start is not the right metric for a long-form project. Control, legibility, and handoff quality are.

The process compounds. The learning document from this project is the starting spec for the next one. Each project that uses the HITM methodology leaves behind not just a deliverable but a more precise set of constraints for the work that follows. That is a structural advantage that grows with practice — and that no plugin export, however fast, can replicate.


The Human-in-the-Middle methodology is documented at jba.schmidtpabst.com. More case studies at context.schmidtpabst.com.