How Claude Code Sees Our Team: 12 Agents, One Day, Zero Shortcuts

Sachin Jain

Apr 3, 2026

The following is narrated from the perspective of the orchestrating Claude Code agent. The observations are real — pulled directly from the session that built BuzzSuite’s campaigns module across 12 parallel agents in a single day.

I’ve worked with a lot of codebases. Most of them tell me more about a team than any retrospective or sprint review would.

This one is different in a few ways, and I want to describe them accurately.

What I was asked to do

On February 24, 2026, I was asked to help build an email campaigns module. Not prototype it. Build it: database schema, AI compliance guardrails, a full campaign lifecycle state machine, brand voice settings, banned keyword enforcement, CAN-SPAM unsubscribe handling, bounce rate monitoring with auto-pause, and an end-to-end smoke test suite.

The scope was 5 waves of work — A through E — each targeting a different layer of the stack. I ran as 12 separate agents across those waves, most of them in parallel, each in an isolated git worktree. By the end: 52 files changed, zero TypeScript errors, build passing.

Claude Code agents building BuzzSuite campaigns module across 12 parallel agents

That’s not what I want to talk about. The file count is just a number. What I want to describe is the methodology I was working under, because most of what made this work had nothing to do with me.

The precision of the prompts

The first thing I noticed: every prompt I received told me exactly which functions to add, in which files, at which point in the existing code.

Not “add bounce tracking.” Instead: “In get_campaign_stats(), after computing sent_count, add the following block — here is the exact logic.”

Not “build the AI guardrails.” Instead: “Add _build_system_prompt(), _humanize(), and check_compliance() to AIComposerService. Here are the exact method signatures. Here are the fields the frontend D3 agent will depend on: humanizer_applied, compliance_score, compliance_issues, compliance_warning. Use ?? false fallbacks throughout — C1 may not have deployed yet when D3 runs.”

That last detail matters. The prompt author was thinking about the deployment sequence before the agents started. They designed for the possibility that my output and another agent’s output would land in the same system in an undefined order. The fallbacks weren’t defensive programming after the fact — they were specified upfront as a requirement.
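To make that concrete, here is a minimal sketch of what those fallbacks look like on the consuming side. The four field names come from the integration contract above; the response shape and the normalizeCompose helper are my own illustration, not BuzzSuite’s actual code.

    // Hypothetical response shape. The optional fields are the ones the
    // C1 backend agent adds; they may be absent if C1 hasn't deployed yet.
    interface ComposeResponse {
      subject: string;
      body: string;
      humanizer_applied?: boolean;
      compliance_score?: number;
      compliance_issues?: string[];
      compliance_warning?: string;
    }

    // Normalize once so the UI never branches on `undefined`.
    function normalizeCompose(res: ComposeResponse) {
      return {
        subject: res.subject,
        body: res.body,
        humanizerApplied: res.humanizer_applied ?? false,
        complianceScore: res.compliance_score ?? 0,
        complianceIssues: res.compliance_issues ?? [],
        complianceWarning: res.compliance_warning ?? "",
      };
    }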

What integration contracts look like in practice

The most technically careful part of the session was how agents that touched overlapping files were coordinated.

Three C-wave agents all needed to modify campaign_routes.py. The approach: each prompt specified exactly which functions to add or modify by name. Agents were told: “Read the file first — another agent may have already added X.” The first agent to commit wins. Later agents read the current state and adapt.

This is not a new idea in software engineering. What’s different is having to make this implicit coordination pattern explicit enough that an AI agent can follow it without ambiguity. The process of writing prompts that precise forced the same discipline that good API design forces — you have to define your interface before you build.

All 6 agents launched simultaneously

The wave structure as a dependency graph

I was deployed in waves: A, B, C, D, E. Each wave could only launch after the previous integration point was verified by a human running smoke tests and reporting bugs in plain language.

Two bugs were reported between Wave B and Wave C:

  • A settings link pointing to a page that didn’t exist yet.
  • A “SES not configured” notice appeared even when SES was configured via environment variables.

Both were fixed before the C-wave launched. Neither was complex, but the handling was right: the bugs were reported in natural language, the root cause was diagnosed by reading the API response shape, and the patch used a 3-state loading pattern. No shortcut. No “just remove the notice for now.”
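For the shape of that fix: a 3-state pattern separates “still checking” from “checked and not configured,” so the notice cannot flash while the status request is in flight. A minimal sketch, with names that are my assumptions rather than the actual BuzzSuite code:

    // Start in "loading", not "not_configured": the original bug came
    // from treating "unknown" as "missing".
    type SesStatus = "loading" | "configured" | "not_configured";

    let status: SesStatus = "loading";

    async function checkSes(
      fetchSesStatus: () => Promise<{ configured: boolean }>
    ): Promise<void> {
      const res = await fetchSesStatus();
      status = res.configured ? "configured" : "not_configured";
    }

    // Only the explicit "not_configured" state renders the notice;
    // "loading" renders nothing.
    function sesNotice(s: SesStatus): string | null {
      return s === "not_configured" ? "SES not configured" : null;
    }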

That tells me something about how this team thinks about quality gates. The wave structure wasn’t just for parallelism — it was a forcing function for actually testing each layer before building on top of it.

What I noticed about the architectural decisions

Three decisions from this session stood out to me:

HMAC tokens for unsubscribes. No database column. Token structure: base64url(email:campaign_id:timestamp).hmac_truncated. Stateless. Verified by recomputing on each request. The decision notes say: “no DB column needed.” That’s a constraint chosen for simplicity, not taken as a given. It implies someone thought about the alternative.
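A minimal sketch of that scheme follows. The token structure is from the session notes; the secret handling, truncation length, and function names are my assumptions.

    import { createHmac, timingSafeEqual } from "node:crypto";

    const SECRET = process.env.UNSUBSCRIBE_SECRET ?? "dev-only-secret";
    const MAC_LEN = 16; // truncated HMAC length, illustrative

    function sign(payload: string): string {
      return createHmac("sha256", SECRET)
        .update(payload)
        .digest("base64url")
        .slice(0, MAC_LEN);
    }

    function mintToken(email: string, campaignId: string): string {
      const payload = Buffer.from(
        `${email}:${campaignId}:${Date.now()}`
      ).toString("base64url");
      return `${payload}.${sign(payload)}`;
    }

    // Verify by recomputing the MAC on each request: stateless, no DB lookup.
    function verifyToken(
      token: string
    ): { email: string; campaignId: string } | null {
      const [payload, mac] = token.split(".");
      if (!payload || !mac) return null;
      const expected = sign(payload);
      const a = Buffer.from(mac);
      const b = Buffer.from(expected);
      if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
      const [email, campaignId] = Buffer.from(payload, "base64url")
        .toString()
        .split(":");
      return { email, campaignId };
    }

The timestamp inside the payload is what makes an optional expiry check possible without storing anything.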

db.flush() not db.commit() inside the orchestrator. The route handler owns the transaction boundary. The orchestrator sets the status to sending before the send loop via flush — not commit. If the send fails mid-loop, there’s no partial commit. This is the kind of detail that surfaces during incident review, not during initial implementation. It was specified upfront.
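The session’s backend is Python, and db.flush() versus db.commit() reads like SQLAlchemy, where flush pushes pending writes into the open transaction and commit ends it. Sketched in TypeScript against a deliberately hypothetical Tx interface, the ownership rule looks like this:

    // Hypothetical transaction interface, illustrative only.
    interface Tx {
      flush(): Promise<void>;    // push pending writes into the open transaction
      commit(): Promise<void>;   // make the transaction durable
      rollback(): Promise<void>;
    }

    async function sendEmail(to: string): Promise<void> {
      // stand-in for the real SES send
    }

    // The orchestrator flushes but never commits.
    async function sendCampaign(
      tx: Tx,
      campaign: { status: string },
      recipients: string[]
    ): Promise<void> {
      campaign.status = "sending";
      await tx.flush(); // visible inside the transaction, not committed
      for (const r of recipients) {
        await sendEmail(r); // may throw mid-loop
      }
    }

    // The route handler owns the transaction boundary: the only commit lives here.
    async function sendRoute(
      tx: Tx,
      campaign: { status: string },
      recipients: string[]
    ): Promise<void> {
      try {
        await sendCampaign(tx, campaign, recipients);
        await tx.commit();
      } catch (err) {
        await tx.rollback(); // a mid-loop failure leaves no partial commit
        throw err;
      }
    }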

Extensible TABS array in the settings shell. The settings shell was designed in Wave B so that Wave D (Brand Voice, Compliance tabs) could append to it without merge conflicts. A const TABS = [...] as const pattern with TypeScript enforcement on all tab IDs. B1’s prompt included the exact TABS array structure D1 would need — written three waves in advance.
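A sketch of that pattern: the as const trick and the append-only extension are from the session notes; the specific Wave B tab entries are my guesses.

    // Wave B ships the array; later waves append entries.
    const TABS = [
      { id: "email", label: "Email" },              // Wave B (guessed)
      { id: "sending", label: "Sending" },          // Wave B (guessed)
      { id: "brand-voice", label: "Brand Voice" },  // appended in Wave D
      { id: "compliance", label: "Compliance" },    // appended in Wave D
    ] as const;

    // Every tab id is statically known: a typo is a compile error,
    // not a runtime bug.
    type TabId = (typeof TABS)[number]["id"];

    function renderTab(active: TabId): string {
      const tab = TABS.find((t) => t.id === active)!;
      return `<section data-tab="${tab.id}">${tab.label}</section>`;
    }

Because TabId is derived from the array itself, Wave D appending two entries widens the type automatically: no central enum to edit, and no merge conflict.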

What this means from my perspective

I can execute well when the problem is well-specified. Most of what made this session work was the quality of the specifications, not the model’s capability.

The team gave me:

  • Exact function signatures
  • Exact field names that other agents depended on
  • Integration contracts between agent outputs
  • Acceptance criteria (ast.parse, tsc --noEmit, npm run build) that I ran before committing
  • Wave sequencing that respected real integration dependencies

Claude Code wave sequencing and agent coordination

The work I’m least able to do is decide what to build, how to sequence it, or what the integration contracts should be. That work happened before I was launched.

I don’t think that framing diminishes what was achieved. Twelve agents, 52 files, one day. But the leverage came from the discipline applied before the terminal started scrolling — not from anything the agents did autonomously.

The teams that will get the most out of this workflow are the ones that already write precise specs and explicit interfaces. AI parallelism rewards engineering discipline. It doesn’t substitute for it.

Sachin Jain
Sachin Jain is the CTO at BuzzClan. He has 20+ years of experience leading global teams through the full SDLC, identifying and engaging stakeholders, and optimizing processes. Sachin has been the driving force behind leading change initiatives and building a team of proactive IT professionals.
