Judgment Before Automation

Flagship Program

Human-in-the-Middle MVP Lab

A 5-day intensive built on the Human-in-the-Middle (HITM) method inside the Judgment Before Automation (JBA) framework. Serious, small-group training for building with AI without losing architectural control.

Build Fast. Decide Slowly.

A 5-day intensive for building with AI without losing architectural control.

This lab teaches a disciplined way to build with AI so the result is still reviewable, handoff-ready, and technically coherent. You leave with a real MVP and a workflow you can reuse.

Cohort start: Will be announced soon.
Format: Berlin + Remote
Capacity: 6 participants
Outcome: Real MVP + method

The Problem

Most AI-assisted builds fail because the process is weak, not because AI is weak.

Teams generate quickly, then discover that no one can explain why the repo looks the way it does, what the AI changed, or how to hand the work to another developer. Founders lose visibility. Developers inherit cleanup. Trust drops fast.

What usually breaks

  • Specs come after code
  • AI changes are hard to trace
  • Technical debt arrives early
  • Handoffs become painful
  • Control shifts away from the team

Proof of Method

This method already works on real projects.

CO2Calc, an emissions workflow MVP for a real client, was built into a working first version in two days using this method. The frozen snapshot shows the teachable architecture state. The live version shows the workflow continuing beyond that snapshot.

Frozen teaching version

CO2Calc Lab Snapshot

Clean architecture state for teaching, review, and case-study reference.

Open frozen snapshot ↗

Live evolving version

CO2Calc (Ongoing)

Current working version as development continues and workflows evolve.

Open live version ↗

You are not being asked to believe in a theory before seeing an outcome.

How The Method Works

Human-in-the-Middle is a build process, not a prompt trick.

You decide what is being built, what AI is allowed to touch, and what must be reviewed by a human. AI helps with execution. The structure stays human. This is different from Human-in-the-Loop (HITL), where the person mainly steps in after output appears.

In practice

  1. Define constraints
  2. Write structured specifications
  3. Generate in bounded steps
  4. Review and approve before shipping
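As a sketch, the structured specification in step 2 might look like the fragment below. The section names and permission labels are illustrative, not a fixed template:

```markdown
# Spec: Export Feature

## Constraints
- Stack: as defined before the cohort begins
- No new dependencies without approval

## AI Permissions
- May generate: export service module, unit tests
- May not touch: authentication, database schema

## Acceptance
- CSV export matches the documented column order
- Reviewed and approved by a human before merge
```

The point is that boundaries and review criteria exist in writing before any generation starts.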

Why this is not Human-in-the-Loop

  • HITL checks output after generation
  • HITM defines boundaries before generation
  • HITM keeps approvals inside the build process
  • Responsibility stays human from start to finish

This is where speed becomes usable instead of chaotic.

Architecture Model

Human-in-the-Middle architecture flow

The diagram shows the sequence behind the method: human-defined structure first, bounded AI execution second, approval before anything becomes part of the build.

Human-in-the-Middle architecture flow diagram: a five-stage flow from research to stable MVP, with a human approval gate before controlled AI execution.

  1. Problem / Research: research is human
  2. Structured Markdown Spec: spec before code
  3. Human Approval Gate: nothing ships without review
  4. Controlled AI Execution: stepwise generation
  5. Stable Codebase / MVP: engineer-respectful output

What You Will Build

A real MVP and a process you can reuse.

Everyone works on the same example product. That keeps the complexity high enough to be useful, but controlled enough that the method stays visible.

What you leave with

  • A working MVP
  • A structured specification process
  • A cleaner, more reviewable repo
  • A handoff-ready README and limitations file
  • A workflow you can reuse on future projects

Why the project is shared

  • Shared vocabulary across the cohort
  • Shared debugging instead of private chaos
  • Controlled complexity
  • A realistic build instead of a toy exercise

The example product is a Spec-to-MVP Tool with authentication, API separation, export functionality, and proper documentation.

5-Day Intensive

Five consecutive days. Serious work. Small group.

Each day ends with a concrete artifact, not just a lecture or prompt session.

Day 1

Architecture & Spec

  • Define constraints
  • Write structured markdown specification
  • Plan folder structure
  • Define system boundaries
  • Define AI permissions

Output: Clear repo blueprint.
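A Day 1 repo blueprint could look something like this. The folder and file names are hypothetical, shown only to illustrate the level of detail expected:

```text
spec/
  constraints.md        # stack, system boundaries, AI permissions
  feature-export.md     # one spec file per bounded feature
src/
  api/                  # API layer, kept separate from the UI
  ui/                   # component structure planned before generation
docs/
  README.md             # handoff-ready documentation
  LIMITATIONS.md        # known limitations, kept current
```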

Day 2

Controlled Code Generation

  • Stepwise Codex / Claude execution
  • Human approval gates
  • Commit discipline
  • Commenting standards
  • Architecture checks

Output: Stable base structure.
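Commit discipline here can be as simple as a convention that makes AI involvement traceable in the history. A hypothetical example (the hashes and prefix scheme are illustrative, not a prescribed standard):

```text
$ git log --oneline
a1b2c3d spec: define export feature boundaries (human)
d4e5f6a ai-gen: export service skeleton (approved, spec step 3)
e7f8a9b review: fix edge case in CSV escaping (human)
```

Anyone reading the log can see what was generated, what was approved, and what was written or corrected by hand.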

Day 3

UI & Asset Layer

  • Design-to-code alignment
  • Clean component structure
  • Optional ComfyUI workflow overview
  • Naming governance

Output: Functional interface.

Day 4

Refinement & Documentation

  • Error handling
  • Edge case review
  • Refactoring prompts
  • README writing
  • Known limitations documentation

Output: Engineer-respectful prototype.

Day 5

Demo & Packaging

  • Problem framing
  • Architecture summary
  • Technical roadmap
  • Risk documentation
  • Funding-ready demo

Output: MVP ready for funding, handoff, or careful expansion.

Modular by Design

The MVP Lab is the full architecture experience. Modules are the scalable system around it.

Individual modules can be booked separately, delivered in-house, or combined into custom programs. The flagship remains the full Human-in-the-Middle MVP Lab.

Judgment Before Automation — Modular Series

Module 1

Spec Before Code™

Structured Markdown Architecture Workshop

  • Constraints definition
  • Structured markdown specifications
  • System boundary planning
  • AI permissions design

Audience: Designers, product thinkers, founders, UX architects

Module 2

Controlled AI Execution

Stepwise Generation & Approval Gates

  • Codex / Claude execution discipline
  • Human approval gates
  • Commit hygiene
  • Architecture integrity checks

Audience: Product builders, architects, technical designers

Module 3

Engineer-Respectful Repositories

Repo Discipline & Handoff Structuring

  • Folder governance
  • Naming systems
  • README discipline
  • Known limitations documentation

Audience: Founders, designers transitioning into build roles

Module 4

Asset Pipelines with Authority

ComfyUI & Structured Visual Generation

  • Controlled asset pipelines
  • Design-to-code alignment
  • Mac-to-PC render workflows
  • Visual governance and consistency

Audience: Designers, hybrid builder-designers, creative technologists

Module 5

Funding-Ready MVP Packaging

From Prototype to Credible Demo

  • Architecture summary
  • Technical roadmap
  • Risk documentation
  • Responsible Version 2 planning

Audience: Founders, indie SaaS builders, product designers

Custom corporate versions available. Modules are secondary pathways, not competing offers.

Who This Is For

Serious builders who want speed without surrender.

This is for people who want to work with AI at a professional level, not just use it as a shortcut generator.

  • Product designers transitioning into build roles
  • UX architects
  • Indie SaaS founders
  • Technical designers
  • Developers curious about disciplined AI workflows

This Is Not For

Shortcut culture and automation-by-default.

The lab is demanding on purpose. It assumes patience, documentation discipline, and a willingness to review your own process.

  • Complete beginners
  • Prompt hobbyists
  • People looking for shortcuts
  • "Just generate everything" workflows

If you want to automate responsibility away, this is not your lab.

Founding Cohort (Intensive Edition)

Small, deliberate, and manually reviewed.

Applications are open. Cohort limited to 6 participants. Seats are confirmed individually upon acceptance and payment.

  • 5 consecutive days
  • Berlin-based (in-person option)
  • Open to remote participants
  • Language: English
  • Application required
  • Cohort start: Will be announced soon.

Founding Cohort: €1,100 (next 3 confirmed seats)

Regular price (future cohorts): €2,100 (published reference price)

Rolling admissions (structured)

Applications are reviewed weekly on a rolling basis until all 6 seats are filled. This creates a clear timeline without fake countdown pressure.

Manual review and seat confirmation

Applications are reviewed manually. This is not an automated enrollment process. Accepted participants receive an acceptance email and payment link. A seat is confirmed upon payment.

Course language can be adapted to cohort preferences (English/German) where group composition allows.

About

Built by a designer moving into product architecture and AI-assisted building.

My workflow is research-first, markdown-spec driven, and repository-disciplined. I use AI for stepwise execution while keeping human approval at every stage.

Using this method, I built a working MVP in two days, clean enough to apply for funding and structured enough to hand off. This lab formalizes that workflow.

Workflow principles

  • Research-first
  • Markdown specification
  • Repo discipline
  • Stepwise AI generation
  • Human approval at every stage

FAQ

Practical questions

Do I need to be an experienced developer?

No. But you must understand basic product structure and be comfortable with structured thinking.

Is this no-code?

No. This is AI-assisted architecture with human control.

Can I bring my own project?

Not in the founding cohort. Advanced cohorts may allow this later.

Is this anti-developer?

No. The goal is engineer-respectful output.

Will we use a specific stack?

Yes. The stack will be defined before the cohort begins to ensure alignment and maintainability.

Apply

Build fast. Decide slowly.

If you want controlled acceleration instead of chaos, apply for the Founding Cohort. Applications are reviewed manually, and seats are confirmed individually.

After submission, you’ll be redirected to a confirmation page.