Parker Rex Daily, June 9, 2025

I Built a Multi-Agent AI Coding System This Weekend (Aider, Augment, Grok, Gemini, Docker)

I built a weekend multi-agent AI coding system (Aider, Augment, Grok, Gemini, Docker): orchestration, continuation prompts, and practical lessons.

Show Notes

I built a weekend multi-agent orchestration stack to connect the dots in the software lifecycle. Here are the key takeaways, what I built, and how you can apply this approach.

Why building a multi-agent orchestrator makes sense

  • There’s a gap between writing code and executing the full SDLC. Orchestrating steps can unlock speed and consistency.
  • Turning knowledge into repeatable workflows helps teams scale with agents doing the execution rather than individuals.

Continuation prompts and live workflow

  • Use continuation prompts after finishing a thread to drive the next agent.
  • Tie prompts to concrete outputs (e.g., conventional commits and branch names) to keep context and language consistent.
  • Scrum-inspired pacing with story points helps the system stay objective and predictable, even when humans aren’t involved in day-to-day execution.
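
The handoff described above can be sketched as a small helper. This is an illustrative assumption, not the system's actual prompt: the function name, fields, and wording are made up, but it shows how tying the prompt to a conventional commit and branch name keeps context concrete.

```python
# Hypothetical continuation-prompt builder: after one agent finishes a thread,
# generate the prompt that hands state to the next agent, anchored to the last
# concrete git outputs (conventional commit + branch) rather than vague memory.

def continuation_prompt(commit_msg: str, branch: str, story_points: int) -> str:
    """Build the next agent's opening prompt from the last concrete outputs."""
    # The conventional-commit prefix (feat/fix/chore...) carries intent forward.
    kind, _, summary = commit_msg.partition(": ")
    return (
        f"Previous thread finished on branch `{branch}` with a `{kind}` commit: "
        f"{summary}\n"
        f"Estimated effort remaining: {story_points} story points.\n"
        "Continue from this state. Keep commit messages in conventional-commit "
        "style and stay on the same branch unless the plan says otherwise."
    )

prompt = continuation_prompt("feat: add PRD watcher", "feat/idea-pipeline", 3)
print(prompt)
```

Because the prompt is derived from real artifacts (the commit and branch), two agents never disagree about where the work left off.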

PRDs, story splitting, and the SPIDR framework

  • Long, cross-cutting PRDs degrade when you scale with agents. Break them up.
  • Try SPIDR-style splitting: use spike-based discovery to decide how to split work.
  • The five SPIDR techniques (spikes, paths, interfaces, data, and business rules) help turn big ideas into manageable chunks.
  • When you have a spike, extract it to shrink the original story and inform the rest of the backlog.
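
The spike-extraction step above can be sketched in code. The `Story` shape and point arithmetic here are illustrative assumptions, not the author's actual backlog schema; the point is that extracting spikes first shrinks the original story.

```python
# Hedged sketch of SPIDR-style splitting: extract each open question as a
# timeboxed spike story, then shrink the remaining story to fit a point cap.
from dataclasses import dataclass, field


@dataclass
class Story:
    title: str
    points: int
    spikes: list[str] = field(default_factory=list)  # open questions to resolve


def split_story(story: Story, max_points: int = 5) -> list[Story]:
    """Extract spikes first; the leftover story shrinks and informs the backlog."""
    out = []
    for question in story.spikes:
        # Each spike becomes its own small discovery story.
        out.append(Story(title=f"Spike: {question}", points=1))
    remaining = max(story.points - len(story.spikes), 1)
    out.append(Story(title=story.title, points=min(remaining, max_points)))
    return out


backlog = split_story(Story("Multi-tenant auth", 8, spikes=["Which token store?"]))
```

Running this yields one spike story plus a smaller version of the original, which is exactly the shrink-and-inform pattern described above.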

Architecture plus docs: using Midday-style scaffolding

  • Architecture is more than a diagram: document data flows, middleware, and decision points.
  • Core components mentioned: Next.js, Supabase, tRPC, Hono, real-time/storage pieces, and how they connect.
  • Build a decision tree for data access, security, tokens, and cross-origin concerns to guide agents.
  • Create templates and diagrams that employees can reuse on every project; docs become part of the product.
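
A decision tree like the one described can be encoded as plain data that agents walk before touching anything. The questions and outcomes below are assumptions for illustration (loosely matching the Supabase/Hono stack mentioned), not the actual Midday-derived diagram.

```python
# Illustrative data-access decision tree: nested dicts with yes/no branches;
# a string leaf is the decision. Agents answer each question in order.

DECISION_TREE = {
    "question": "Is the request cross-origin?",
    "yes": {
        "question": "Is a service token present?",
        "yes": "allow via Hono edge route with CORS headers",
        "no": "reject: missing token",
    },
    "no": {
        "question": "Does the user session cover this row?",
        "yes": "allow via Supabase RLS policy",
        "no": "reject: row-level security",
    },
}


def decide(tree: dict, answers: list[bool]) -> str:
    """Walk the tree with yes/no answers until a leaf (a decision string) is hit."""
    node = tree
    for answer in answers:
        node = node["yes" if answer else "no"]
        if isinstance(node, str):
            return node
    return node["question"]  # ran out of answers before reaching a leaf


print(decide(DECISION_TREE, [False, True]))  # same-origin, session-covered read
```

Keeping the tree as data rather than prose means it can live in the repo, be versioned with the docs, and be handed to any agent verbatim.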

Idea processing pipeline: from idea to PRD to backlog

  • Drop ideas into an ideas directory; a watcher kicks off the pipeline.
  • Generate PRDs from templates, then run multiple agents to produce outputs in parallel.
  • Use XML-based PRDs for depth and performance, plus smart chunking to fit token budgets.
  • Context modules (MCPs, i.e. Model Context Protocol servers) and documentation references ensure agents have the right sources.
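
The ideas-directory trigger can be sketched with a stdlib polling loop. This is a minimal assumption-laden stand-in: the real system may use inotify or a framework watcher, and `handle` here is a placeholder for "render the PRD template and fan out to agents".

```python
# Minimal polling watcher for an ideas/ directory: each file not yet seen
# triggers the pipeline handler exactly once.
import time
from pathlib import Path


def watch_ideas(ideas_dir: Path, handle, seen=None,
                poll_seconds: float = 2.0, max_polls=None):
    """Call handle(path) for every file in ideas_dir not already in `seen`."""
    seen = set() if seen is None else set(seen)
    polls = 0
    while max_polls is None or polls < max_polls:
        for path in sorted(ideas_dir.glob("*")):
            if path not in seen:
                seen.add(path)
                handle(path)  # e.g. render PRD template, kick off agents
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)
    return seen
```

Passing `max_polls` makes the loop testable; in production it would run unbounded as a daemon next to the repo.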

Tools, prompts, and templates in use

  • PRD generation prompts map ideas to structured outputs; token budgeting is intentional.
  • Smart chunking uses available context modules (e.g., documentation and API refs) to keep outputs relevant.
  • Refs folder stores tool descriptions, API calls, and parameter guidance for consistency.
  • Runtime uses Docker and Aider to spin up ephemeral agents; scale up or down as needed.
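
The smart-chunking idea above can be sketched as greedy packing of whole PRD sections into a token budget. The 4-characters-per-token estimate is a rough heuristic, not the system's actual tokenizer.

```python
# Hedged sketch of "smart chunking": pack whole sections greedily into chunks
# that fit a token budget; never split a section mid-way, since a half section
# loses the context that made it useful.

def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 chars per token); swap in a real tokenizer."""
    return max(1, len(text) // 4)


def chunk_sections(sections: list[str], budget: int) -> list[list[str]]:
    """Greedily pack sections into chunks whose estimated tokens fit `budget`."""
    chunks, current, used = [], [], 0
    for section in sections:
        cost = estimate_tokens(section)
        if current and used + cost > budget:
            chunks.append(current)     # flush the full chunk
            current, used = [], 0
        current.append(section)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

A section larger than the budget still gets its own chunk rather than being dropped, which keeps the pipeline lossless even when the estimate is off.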

The agent roles and runtime flow

  • Roles: Planner, Builder, Reviewer, Fixer, Product Owner, Critic.
  • Stakeholder validation, backlog refinement, capacity planning, and sprint planning are encoded into the flow.
  • Guard rails: read-only paths for generated artifacts; prevent unintended writes.
  • Execution is fragmented into sprints, with activities tracked and cross-cutting concerns identified early.
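
The role chain and read-only guard rail can be sketched as a simple pipeline. The role functions below are stubs of my own invention; only the role names come from the notes. The guard rail is modeled with `MappingProxyType`, so downstream roles can read but not mutate a finished artifact.

```python
# Illustrative role pipeline: each role maps artifact -> artifact; finished
# artifacts are wrapped read-only so later roles cannot perform unintended writes.
from types import MappingProxyType


def planner(task):  return {**task, "plan": f"plan for {task['story']}"}
def builder(task):  return {**task, "diff": "patch applied"}   # writes code
def reviewer(task): return {**task, "review": "approved"}
def fixer(task):    return task                                # no-op when approved


PIPELINE = [planner, builder, reviewer, fixer]


def run(story: str):
    artifact = {"story": story}
    for role in PIPELINE:
        artifact = role(dict(artifact))        # each role works on its own copy
        artifact = MappingProxyType(artifact)  # guard rail: read-only downstream
    return artifact
```

A real system would add the Product Owner and Critic roles, retries, and the sprint loop around this, but the copy-then-freeze handoff is the core guard-rail idea.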

Status, future directions, and what’s next

  • Current mode: build-focused, with plans for debug and knowledge-management templates.
  • The plan is to generalize this on the AISDLC repo branch and keep building reusable templates for docs and diagrams.
  • Grok-based prompts outperformed older Gemini prompts in the latest pass; expect ongoing prompt tuning.

How to participate or follow along

  • Fork the AISDLC branch and contribute ideas, prompts, or templates.
  • Expect future content around build logs, with Q&A sprinkled in as the project evolves.
  • VI platform and community involvement are on the roadmap for broader collaboration.

Takeaways and practical tips

  • Treat architecture and documentation as code: codify decisions, flows, and data access models.
  • Keep PRDs modular; avoid single massive documents.
  • Use spikes to validate options before broadening scope.
  • Chain work with continuation prompts to keep the agent workflow cohesive.
  • Watch token budgets with smart chunking; chunk aggressively where possible.
  • Use ephemeral Docker-backed agents to experiment safely and cheaply.
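
The ephemeral Docker-backed agent can be sketched as a command builder. The image tag and mount layout are assumptions; `--model`, `--message-file`, and `--yes` are real Aider CLI flags, but check the Aider docs for your version before relying on them.

```python
# Sketch of launching an ephemeral Aider agent in Docker. `--rm` is what makes
# the agent ephemeral: the container (and any mess it made outside the mounted
# repo) disappears when the run finishes.
import shlex


def aider_agent_cmd(workdir: str, task_file: str, model: str = "gemini") -> list[str]:
    """Build a `docker run --rm` command for a one-shot Aider agent."""
    return [
        "docker", "run", "--rm",
        "-v", f"{workdir}:/app", "-w", "/app",  # mount the repo as the workdir
        "paulgauthier/aider",                   # community image (assumed tag)
        "--model", model,
        "--message-file", task_file,            # drive the agent from a prompt file
        "--yes",                                # non-interactive: auto-confirm
    ]


print(shlex.join(aider_agent_cmd("/repo", "prompts/build.md")))
```

Because the command is just a list, an orchestrator can spin up N of these in parallel (one per backlog story) and scale down by simply not launching more.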