Show Notes
In this conversation, Guy Gur-Ari, co-founder of Augment and former Google research scientist, shares how Augment accelerates software development with AI-powered tools, along with his views on planning, testing, and the future of AI-driven coding.
Guy’s Background and Augment’s Mission
- Ex-Google research scientist focused on vision tasks, optimization, and scaling with generative models.
- Physics background (Weizmann Institute, Stanford, IAS) and a career centered on pushing AI research into real products.
- Co-founded Augment to bring AI-assisted coding to developers via chat, code completions, guided edits, and a suite of agents (auto, background, remote).
- Core belief: rapid experimentation and scaling AI in coding can redefine how software is built.
Augment: The Platform and How It Fits Into a Dev Workflow
- Core tools: chat, code completions, guided edits, and a family of agents (auto and background) to accelerate tasks.
- Agents help with both inner-loop coding and outer-loop software development lifecycle tasks (planning, reviews, triage, deployments).
- Practice tip: humans still supervise code; CI, unit tests, and code reviews remain essential even with AI agents.
Prompting Best Practices and Memory
- Context and precise instructions matter: more detailed, unambiguous prompts yield better results.
- Calibrate task size: too big a task can derail the model; break work into bite-sized pieces for reliability.
- Memory feature: agents learn from mistakes over time, reducing the need for constant guideline updates.
- If a recurring issue happens, update the system prompt to address it globally rather than micromanaging prompts.
Actionable takeaways:
- Start tasks by loading relevant code context and asking the agent to summarize before proceeding (a rough sketch of this loop follows below).
- Break problems into smaller chunks and iterate; reserve larger, end-to-end tasks for later.
- Leverage memory to minimize repetitive prompt tuning.
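The summarize-then-iterate loop above can look something like the following. This is an illustrative sketch only: the `Agent` class and its `add_context` and `ask` methods are hypothetical stand-ins, not Augment's actual API; the point is the shape of the workflow, not the interface.

```python
# Illustrative sketch only: "Agent" is a hypothetical stand-in for whatever
# chat/agent interface you use, not Augment's actual API.
class Agent:
    def add_context(self, files): print(f"[context loaded: {files}]")
    def ask(self, prompt): return f"[agent response to: {prompt}]"

def run_task(agent: Agent, files: list[str], subtasks: list[str]) -> None:
    # 1. Load the relevant code context up front.
    agent.add_context(files)
    # 2. Ask for a summary before any edits, to confirm shared understanding.
    print(agent.ask("Summarize these files and how they currently behave."))
    # 3. Feed small, unambiguous steps instead of one oversized prompt.
    for step in subtasks:
        print(agent.ask(f"Do only this step, then stop: {step}"))

run_task(Agent(), ["auth/session.py"], ["Add an expiry check", "Update the unit tests"])
```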
Guideline Evolution: Memories and System Prompts
- Guidelines are used less over time as memories improve accuracy.
- When you notice a widespread problem, updating the system prompt is often more effective than iterating individual prompts.
- The approach emphasizes building reliable, repeatable behavior rather than hand-crafting prompts for every case.
Actionable takeaways:
- Rely on memory to reduce friction; use system prompts to codify broad fixes for recurring issues.
- Periodically audit and refresh system prompts to reflect the current best practices and learnings.
Planning vs. Coding with Augment
- Start by familiarizing the agent with the codebase to load relevant context.
- Prefer implementing a solution directly with the agent to learn its approach; you’ll often get valuable implementation insights.
- Then switch to task lists to organize work into manageable PRs.
- Work in parallel with separate workspaces; map tasks to PR boundaries to avoid conflicts.
Actionable takeaways:
- Use the agent to implement a baseline, then organize the remainder with a task-list workflow.
- Maintain parallel workspaces for multiple PRs to scale throughput without constant context switching (one possible setup using git worktrees is sketched below).
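One lightweight way to keep a separate workspace per planned PR is git worktrees. The sketch below is an assumed setup rather than anything prescribed in the conversation: the task names, the `agent/` branch prefix, and the `main` base branch are placeholders, and the script expects to run from inside an existing git checkout.

```python
import subprocess
from pathlib import Path

# Hypothetical PR-sized tasks; each gets its own branch and working directory.
TASKS = ["refactor-auth", "add-rate-limit", "fix-flaky-tests"]

for task in TASKS:
    workdir = Path("..") / f"wt-{task}"
    # git worktree gives each task its own checkout, so parallel agent
    # sessions never edit the same files in the same working copy.
    subprocess.run(
        ["git", "worktree", "add", "-b", f"agent/{task}", str(workdir), "main"],
        check=True,
    )
    print(f"Workspace for '{task}': {workdir}")
```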
Day-to-Day at Google and the Research Mindset
- Google work centered on scientific discovery and rigorous experimentation, with metrics like loss and evaluation results guiding progress.
- Big question: how models reason, and whether reasoning can be improved by scaling up training.
- The “shut up and calculate” mindset from physics guided practical experimentation and rapid iteration.
Takeaway:
- In fast-moving AI R&D, focus on measurable experiments and scaling bets that yield actionable insights, even when the problem is abstract.
Shipping, Testing, and Trust in AI-Powered Coding
- Augment employs traditional software development quality controls: code reviews, CI, unit tests.
- AI-generated code is treated like human-written code—supervised, reviewed, and tested.
- The AI tools accelerate work, but do not eliminate the need for human oversight and robust workflows.
Takeaway:
- Build AI-assisted workflows that integrate with existing engineering practices rather than replacing them (a simple same-bar check is sketched below).
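As a concrete illustration of holding AI-generated code to the same bar, the sketch below runs a repository's existing checks on any branch, regardless of who or what wrote the code. The specific commands (`pytest`, `ruff`) are placeholders for whatever your CI already runs, not tools mentioned in the conversation.

```python
import subprocess
import sys

# Placeholder checks; substitute whatever your CI already runs.
CHECKS = [
    ["pytest", "-q"],          # unit tests
    ["ruff", "check", "."],    # linting
]

def gate() -> int:
    for cmd in CHECKS:
        # The same commands run whether the branch was written by a human
        # or produced by an agent; there is no separate "AI" fast path.
        if subprocess.run(cmd).returncode != 0:
            print(f"Check failed: {' '.join(cmd)}")
            return 1
    print("All checks passed; human code review is still required before merge.")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```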
Agent Organization: Tasks, Sessions, and Workspaces
- Agents are organized by task and PR; multiple agents can run in parallel across separate workspaces.
- History compression allows continuing work across sessions, but you can keep agents focused on discrete PRs.
- Remote agents can handle some tasks autonomously, while more complex or sensitive work remains supervised in IDEs.
Takeaway:
- Structure agent usage around PR boundaries and code ownership; use parallel workspaces to maximize throughput.
Outer Loop and the Future of AI in Coding
- The big frontier is the software development lifecycle outside the IDE: code reviews, ticket management, and production alert triage.
- Remote agents are positioned as both a feature and a platform for automating outer-loop tasks.
- The vision is to extend AI-driven coding beyond the editor into end-to-end lifecycle automation.
Takeaway:
- Expect AI to handle more outer-loop tasks; design tooling to act as a platform for automating lifecycle processes, not just code generation.
Three-Year Outlook: AI Coding and Multi-Agent Systems
- Strongly bullish: engineers and companies will increasingly rely on AI to automate substantial portions of engineering work.
- Multi-agent systems are expected to be the next big leap, enabling coordinated AI-driven workflows.
- Progress is rapid; the tooling and capabilities will outpace today’s expectations, opening up more creative possibilities.
Takeaway:
- Stay lean and experiment aggressively; plan for a future where AI handles many of your engineering tasks but still requires solid governance, reviews, and safety nets.
Final Takeaways
- AI is a practical tool that requires new workflows, not a magic replacement for engineers.
- The fastest path to value is to start implementing with agents, then organize and scale with task lists and proper supervision.
- The next few quarters will push AI further into the outer loop of software development, not just the code-writing inner loop.
Links
- Augment Code (AI-powered platform for software development)
- Modern CTO Podcast (reference for the "AI as a tool" mindset)
- Augment Documentation (planning and orchestration in workflows)