What we mean by "minimum viable"
We do not mean an agent that reads your inbox, integrates with your CRM, and auto-drafts responses. That is what vendors sell, and for a one-person operation it is far too much machinery.
We mean a three-step pipeline — structured intake, a prompt library, and an AI-assisted draft — with a human review step before anything goes to the client. The boundary test: if a tool requires you to learn a new platform's DSL, pay for three products to work together, or hire a developer to keep it running, it is not minimum viable.
The three-step shape
Step 1 — Structured intake. Replace the "let me know what you need" blank-canvas conversation with a form. Tally, Google Forms, or Typeform's free tier all work. Eight to twelve fields are enough to capture eighty per cent of the context you end up asking for on every engagement. The goal is not to reduce the number of conversations, but to stop re-extracting the same information three times per job.
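As a sketch, the intake can be treated as a fixed schema you validate against. The field names below are illustrative assumptions, not a required set — the point is that the list is short, fixed, and checked once per engagement:

```python
# A minimal sketch of a structured intake schema. Every field name here
# is an illustrative assumption; swap in whatever you actually ask for.
INTAKE_FIELDS = [
    "client_name",
    "deliverable_type",      # should match a prompt in the library
    "audience",
    "deadline",
    "tone",
    "key_constraints",
    "source_material_links",
    "success_criteria",
]

def validate_intake(submission: dict) -> list[str]:
    """Return the names of any fields missing or empty in a form submission."""
    return [f for f in INTAKE_FIELDS if not submission.get(f)]

# A partial submission immediately shows what still needs chasing,
# instead of surfacing as a third round of questions mid-engagement.
missing = validate_intake({"client_name": "Acme Ltd",
                           "deliverable_type": "market brief"})
```

Eight fields is within the eight-to-twelve range above; the validator is the whole "triage" step while volume is low.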
Step 2 — Prompt library. A Google Doc — yes, a plain document — with six to twelve prompts, one per deliverable type you produce. Each prompt has four parts: a role sentence, the input shape it expects, the output format it should produce, and one worked example. When a new engagement comes in, you paste the client's intake data into the matching prompt, then run it through ChatGPT or Claude.
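One library entry can be sketched as a plain template with the four parts as labelled lines, plus a slot for the intake data. The wording and helper below are illustrative, not a required format — the Google Doc version is just this without the code around it:

```python
# A minimal sketch of one prompt-library entry. The four parts — role
# sentence, input shape, output format, worked example — are labelled
# lines; the wording is an illustrative assumption.
MARKET_BRIEF_PROMPT = """\
Role: You are a market analyst writing for a non-specialist founder.
Input: Intake fields — client name, audience, key constraints, source links.
Output: A one-page brief with three headed sections and a closing recommendation.
Example: [paste one past deliverable you are proud of here]

Intake data:
{intake}
"""

def fill_prompt(template: str, intake: dict) -> str:
    """Paste the client's intake data into the matching prompt."""
    lines = "\n".join(f"- {k}: {v}" for k, v in intake.items())
    return template.format(intake=lines)

prompt = fill_prompt(MARKET_BRIEF_PROMPT,
                     {"client_name": "Acme Ltd", "audience": "founders"})
```

The filled prompt is what you paste into ChatGPT or Claude by hand; nothing here calls an API, which is the point.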
Step 3 — AI-assisted draft. The model writes the first seventy per cent. You edit the last thirty. You do not ship raw output. The human review step is what keeps the quality bar, and the liability, acceptable.
That is the whole thing. No Zapier, no n8n, no agent framework, no vector database, no fine-tuning.
What it replaces
- Thirty to forty-five minutes of staring at a blank page trying to decide what shape the deliverable should take, per engagement.
- Ten to fifteen minutes of re-asking clients for information you already had elsewhere but could not find.
- The slow drift of each client interaction being subtly different, which makes quality control difficult once you have more than a few active engagements at once.
A typical consultant running four to six engagements a month will save roughly eight to fourteen hours a month once the prompt library is filled out — after an initial week of setup.
What not to build yet
Do not build an agent. Agents are appropriate when you have more than three sequential steps with branching logic you cannot be bothered to execute manually. For a solo operator, you are the branching logic. You will outrun the agent's reasoning, and debugging it when it gets confused is a much worse use of your time than doing the step yourself.
Do not integrate with your CRM. The gain is small; the lock-in is real. If you change CRMs next year, you rebuild the whole pipeline. Keep the automation independent of where your contact data lives.
Do not fine-tune a model. For almost every consultancy use case, the prompt library does ninety-five per cent of the same job for one per cent of the cost. Fine-tuning is a tool for product companies with stable, high-volume tasks — not for individual deliverables that vary by client.
Do not automate the send. The human review step is the entire value proposition to the client. Removing it turns your service into the same commodity output your clients could generate themselves.
When to upgrade
Three signals indicate you have outgrown this shape:
- You are handling more than ten engagements per month and the intake form submissions are arriving faster than you can triage them by hand.
- You are re-pasting the same three pieces of context into every prompt. That is the moment to wire them in programmatically — a small script beats a sprawling prompt.
- Clients are asking to run the automation themselves. That moves it from internal tool to product, which is a materially different engagement.
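The second signal — re-pasting the same three pieces of context — is the one worth showing, because the fix really is a few lines, not a platform. A sketch, with illustrative field names and wording:

```python
# A minimal sketch of "a small script beats a sprawling prompt":
# prepend the standing context to any prompt instead of re-pasting it.
# The context fields and wording are illustrative assumptions.
SHARED_CONTEXT = {
    "positioning": "Solo consultancy; plain-English deliverables.",
    "voice": "Direct, no jargon, UK spelling.",
    "process": "Every draft is reviewed by a human before it is sent.",
}

def with_context(prompt: str, context: dict = SHARED_CONTEXT) -> str:
    """Prepend the standing context block so every prompt starts identically."""
    header = "\n".join(f"{k}: {v}" for k, v in context.items())
    return f"Standing context:\n{header}\n\n{prompt}"

final = with_context("Draft a market brief for Acme Ltd.")
```

When the context changes, it changes in one place, which is the whole argument for the script over twelve hand-edited prompts.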
Until any of those are true, resist the temptation to build more. A prompt library with a review step, used consistently for six months, beats an ambitious agent platform that never ships.
What it costs
One ChatGPT Team or Claude Pro seat is £15–£20 per month. A form tool's free tier covers the intake step at no cost. The real investment is the weekend you spend writing the first six prompts. Everything after that is rearranging how you already work.
— Zheng Zhong, Founder, SOLO TECH LTD