Show Notes
I pushed through a dense, hands-on flow using Augment and VS Code to tackle around 70 coding tasks for about $1. Here’s the core setup, what actually worked, and how you can apply the same pattern.
Workflow snapshot
- Augment + enhanced prompt button = fast, context-aware prompts that pull in paths, dependencies, and file trees.
- Cursor + Task Master integration = a deterministic sprint system for AI-assisted work.
- The aim: a simple, repeatable flow that scales with teams and agents, not one-off hacks.
From idea to sprints
- Start with a real-world project (here, the VI2 private network) and map it to a clean task structure.
- Use a PRD-style plan inside the task system to keep what you’re building tangible and shippable.
- Break work into sprints with a max of 10 tasks each to keep momentum and avoid token bloat.
- Keep a lightweight backlog/doing/done pipeline, plus a deferred tag for mid-course pivots (sketched below).
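As a rough TypeScript illustration of that pipeline (field names are assumptions for this sketch, not Task Master's actual schema):

```typescript
// Hypothetical task shape for the backlog/doing/done pipeline above.
// "deferred" marks mid-course pivots without deleting the work.
type TaskStatus = "backlog" | "doing" | "done" | "deferred";

interface SprintTask {
  id: string;        // e.g. "sprint-3/task-07"
  name: string;      // short, action-oriented title
  sprint: number;    // which sprint the task belongs to
  status: TaskStatus;
  prdRef?: string;   // pointer back to the PRD section it implements
}

// A sprint is just a bounded slice of the backlog (max 10 tasks).
const sprint3: SprintTask[] = [
  { id: "sprint-3/task-01", name: "Scope events API routes", sprint: 3, status: "done" },
  { id: "sprint-3/task-02", name: "Write tests for members domain", sprint: 3, status: "doing" },
];
```

Capping each sprint at 10 entries is what keeps a single agent thread from drowning in context.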
Architecture and tool decisions
- Focus on long-term tool viability and LLM compatibility. Pick stacks and patterns the AI can navigate easily.
- Favor patterns that enable remote agents to own codebase parts and run autonomous sprints.
- Build around a "domain-based" structure with clear endpoints and tests so the AI can respect boundaries and stay productive (see the sketch after this list).
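A minimal sketch of what such a boundary can look like in TypeScript (every name here is an illustrative assumption, not the actual VI2 code):

```typescript
// Hypothetical domain layout, one folder per domain:
//
//   src/domains/
//     events/    routes.ts, service.ts, events.test.ts
//     members/   routes.ts, service.ts, members.test.ts
//
// A central registry keeps the boundaries explicit for the AI:
type RouteHandler = (req: Request) => Promise<Response>;

interface Domain {
  name: string;                          // e.g. "events"
  routes: Record<string, RouteHandler>;  // "METHOD /path" -> handler
}

const eventsDomain: Domain = {
  name: "events",
  routes: {
    "GET /api/events": async () => Response.json({ events: [] }), // stub
  },
};

// The app composes domains instead of reaching into their internals.
const domains: Domain[] = [eventsDomain];
```

Because each domain exposes only endpoints and tests, an agent assigned to one folder has a boundary it can actually respect.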
Implementation notes for VI2
- Domain map examples: admin, announcements, events, GitHub OAuth, feed, health, learning, members, projects, prompts, Stripe, Sentry, users, learning goals, notifications, onboarding, profile, etc.
- 35+ API routes were scoped and tested; most functionality ended up covered by tests.
- An architecture doc and Mermaid diagrams helped keep the AI aligned with the intended structure (a small example follows this list).
- Tests-first approach: prompt the AI to write tests first, then implement code to pass them.
- Integration points: the GitHub Models API for prompts (read-only in authorized repos), potential tRPC for a future mobile app, and CI/CD considerations.
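For scale, a Mermaid fragment at this level can stay very small; this one is a hypothetical reconstruction using domains from the map above, not a diagram from the actual architecture doc:

```mermaid
graph TD
  Client[Web client] --> API[API layer]
  API --> Events[events domain]
  API --> Members[members domain]
  Events --> DB[(data layer)]
  Members --> DB
```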
Testing and prompts workflow
- Write tests first, then implement code to pass them.
- Prompt strategy example: describe the desired test coverage, then let Augment generate the test suite (see the sketch after this list).
- Ensure terminal/stack compatibility (zsh is partially supported; lean on Docker-friendly setups).
- Use environment variables and code scaffolding prompts to keep the AI anchored to real-world constraints.
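A minimal sketch of that tests-first loop, assuming Bun's built-in runner (`slugify` is a hypothetical stand-in for whatever unit a task targets, not code from the project):

```typescript
// Step 1: ask the AI for tests that pin down behavior before any implementation.
import { describe, expect, it } from "bun:test";

// Step 2: the implementation is written only to make the tests pass.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")  // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");     // strip leading/trailing hyphens
}

describe("slugify", () => {
  it("lowercases and hyphenates", () => {
    expect(slugify("Learning Goals 101")).toBe("learning-goals-101");
  });

  it("drops stray punctuation", () => {
    expect(slugify("  Members & Projects! ")).toBe("members-projects");
  });
});
```

Run it with `bun test`; the implementation exists only to satisfy the assertions, which mirrors the prompting order.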
Results at a glance
- 13 concurrent threads, 7 sprints completed.
- Roughly 70 tasks tackled via Augment/Cursor-driven prompts.
- Cost: about $1 in Augment credits for the entire run.
- Clear progress signals (milestones, deferrals, and sprint boundaries) kept the work focused.
Q&A and community notes
- Where to learn more about folder structure and architecture decisions: dig into the architecture doc and the sprint/task structure examples.
- Augment’s context engine is highly valuable for large codebases; the quote “Augment’s context is uncontested” reflects that approach.
- Practical tips shared in comments: consider Claude Code or other cloud-backed options if you’re price-sensitive; the core pattern stays the same: clear tasks, good prompts, and strict sprint boundaries.
Next steps and future work
- On the main channel: a deeper dive into tool selection, patterns, and how to pick a stack that scales with AI-assisted development.
- Potential additions: remote agents owning codebase parts, wiring up tRPC for mobile later, and refining CI/CD around AI-driven work.
- The ongoing goal: maintain a lean, repeatable workflow that balances AI power with human oversight.
Actionable takeaways
- Leverage Augment’s enhanced prompt feature to auto-collect context (paths, libs, file trees) for better continuation prompts.
- Structure work as sprints with no more than 10 tasks; keep the PRD and task docs visible to the AI.
- Write tests first, then prompt the AI to implement code to pass them.
- Keep a clear Task Master object (name, sprint, status, environment data) to track progress across agents.
- Choose tools and stacks the LLM can learn and navigate; plan for remote agents to own code segments over time.
- Use lightweight, endpoint-focused architecture docs and Mermaid diagrams to anchor the AI’s understanding.
Links
- Augment
- Cursor
- Task Master AI (GitHub-based workflow)
- Midday project (reference for architecture prompts)
- GitHub Models (prompts and evaluation references)
- Bun (runtime)
- Supabase (optional data layer)
If you want a tighter breakdown of the exact folder structure and the sprint templates used, I’ll run through a focused teardown in a follow-up.