Parker Rex · May 28, 2025

How to Actually Set Up Your AI Coding Environment (VSCode, V0, Augment, GCP)

Set up a stable AI coding environment (VSCode, V0, Augment, GCP), avoid tool hopping, and lock in reusable prompts and markdown patterns.

Show Notes

Parker walks through his actual AI coding environment, why tool-hopping ruins momentum, and the primitives he uses to keep things portable, fast, and scalable across tools like VSCode, Augment, and GCP.

A portable, tool-agnostic workflow

  • The core idea: don’t chase the latest tool; anchor your workflow in portable patterns you can carry across environments.
  • Markdown-driven prompts and file structures let you switch underlying tools without changing how you think about problems.
  • The goal: be a “markdown god,” not a tool hopper—your patterns travel with you.
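
A rough sketch of what that can look like on disk (the filenames are illustrative, not Parker’s exact layout; the prompts themselves are detailed later in these notes):

```
ai/
├── prompts/
│   ├── idea.md                # steelman/challenge an idea
│   ├── prd-instructions.md    # idea -> product requirements outline
│   ├── prd-plus.md            # mental-model scrutiny, cross-context checks
│   ├── architecture.md        # patterns + language-specific guidance
│   └── tasks.md               # task instructions for implementation
├── doing.md                   # current work
└── backlog.md                 # everything queued behind it
```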

The core stack: VS Code, Augment, GCP

  • VS Code is the daily driver
    • No fork needed, performance is solid, and Microsoft’s open approach (open source, extensible APIs) matters for long-term viability.
    • Future-friendly: mid-June API/extension improvements planned; potential to build custom modes and tool calls inside VS Code.
  • Augment as the context engine
    • Best balance of performance and memory, with built-in memories.
    • Focuses on context rather than forcing you into a particular UI or vibe-coding setup.
  • Other tools in consideration
    • Cursor and Windsurf: not daily drivers for Parker anymore.
    • Cline (VS Code extension): still watched for innovation, but not relied on as a sole driver.
    • Neovim gets a mention but is less central than Augment in this setup.

Augment: context, memory, and background agents

  • Augment’s background agents are a standout: no obvious performance issues, can tailor the shell/script layer, and emphasize context retention.
  • The vibe is “agent-driven, not just editor-driven.” This aligns with Parker’s goal of scalable, memory-aware workflows.
  • The pattern: use Augment to keep long-running context across tasks, so you don’t lose state between steps.

The AI architecture: prompts, folders, and a CLI

  • Folder layout (the `ai` folder)
    • Prompts drive the workflow; they live as GitHub-style instruction files in the repo, organized for portability.
    • Key prompts:
      • Idea prompt: steelman or challenge an idea’s value; portable across tools.
      • PRD instructions: turn ideas into a practical product requirements outline.
      • PRD Plus: add mental-model scrutiny and cross-context checks.
      • Architecture instructions: architecture patterns, language-specific guidance (Python, TypeScript), and canonical design patterns (singleton, factory, facade, etc.).
    • Doing vs backlog: a bridge from high-level ideas to concrete tasks.
  • A self-contained CLI for memory and workflow (a minimal sketch follows this list)
    • Runs steps, tracks progress in a local lock file, and keeps your context cohesive across runs.
    • Designed to be tool-agnostic: you can swap underlying tooling while preserving the workflow.
  • Token awareness and management
    • Task instructions can run to ~16,000 tokens in a single piece; use a token-count extension to stay aware of limits and optimize prompts.
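
The notes don’t include Parker’s actual CLI, so here’s a minimal sketch of the pattern as described: run the prompt steps in order, persist progress in a local lock file, and keep a rough eye on token counts. The filenames, lock-file format, and the 4-characters-per-token heuristic are all assumptions:

```python
#!/usr/bin/env python3
"""Minimal workflow CLI: runs prompt steps in order, tracks progress in a lock file."""
import json
import sys
from pathlib import Path

LOCK_FILE = Path(".workflow.lock.json")  # hypothetical name
STEPS = [  # mirrors the prompt pipeline described in these notes
    "prompts/idea.md",
    "prompts/prd-instructions.md",
    "prompts/prd-plus.md",
    "prompts/architecture.md",
    "prompts/tasks.md",
]
TOKEN_BUDGET = 16_000  # stay aware of large single-piece instructions


def approx_tokens(text: str) -> int:
    # Rough heuristic (~4 chars/token for English); a token-count
    # extension or a real tokenizer gives exact numbers.
    return len(text) // 4


def load_lock() -> dict:
    return json.loads(LOCK_FILE.read_text()) if LOCK_FILE.exists() else {"done": []}


def save_lock(lock: dict) -> None:
    LOCK_FILE.write_text(json.dumps(lock, indent=2))


def main() -> None:
    lock = load_lock()
    for step in STEPS:
        if step in lock["done"]:
            continue  # already completed on a previous run
        prompt = Path(step).read_text()
        tokens = approx_tokens(prompt)
        if tokens > TOKEN_BUDGET:
            sys.exit(f"{step} is ~{tokens} tokens; trim it before running.")
        print(f"Next step: {step} (~{tokens} tokens). Paste into your tool of choice.")
        lock["done"].append(step)  # mark complete; context survives across runs
        save_lock(lock)
        return  # one step per invocation keeps runs reviewable

    print("All steps complete.")


if __name__ == "__main__":
    main()
```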

Example prompt structure (high level):

  • Idea prompt -> PRD instructions -> PRD Plus -> Architecture instructions -> Task instructions -> Implementation backlog

Prompt portability: patterns that survive tool changes

  • The idea is to build prompts that survive tool swaps:
    • System prompts, role definitions, inputs/outputs, clear examples.
    • File-tree awareness (when using Augment) to shape context around the current project state.
  • This approach reduces the need to fine-tune around a specific toolset (e.g., TanStack Start/Router) and favors well-documented, broadly adopted patterns.
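
One way to structure such a prompt file; the section names are illustrative, not Parker’s exact template:

```markdown
# Role
You are a senior product engineer who steelmans and challenges ideas.

# Inputs
- The idea, in one or two paragraphs
- Current project state (file tree, stack, constraints)

# Outputs
- The strongest case for the idea
- The strongest case against it
- A go/no-go recommendation with reasoning

# Example
Idea: "Add offline mode" -> For: resilience, travel use. Against: sync complexity. Verdict: defer.
```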

MCPs on Docker: repeatable automation

  • MCPs (Model Context Protocol servers) work best in Docker for consistency.
  • Parker’s current picks:
    • Sequential Thinking
    • GitHub MCP server
    • Playwright (paired with a prompt for navigation/testing)
  • Future candidate: Context7 (needs a Dockerfile; approach with caution given some docs quirks, but valuable if you can stabilize it)
  • Why Docker? Keeps environments predictable across machines and reduces “works on my machine” friction.
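
As a concrete example, the GitHub MCP server publishes an official container image, so launching it is a single docker run; `-i` keeps stdin open for the MCP stdio transport. (Image name current at the time of writing; check the project’s README for the exact invocation your client expects.)

```bash
docker run -i --rm \
  -e GITHUB_PERSONAL_ACCESS_TOKEN=<your-token> \
  ghcr.io/github/github-mcp-server
```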

Workflow decisions and tradeoffs

  • Taskmaster is used in some setups, but Parker finds it can degrade quality on larger projects when you over-index on pushing context into a single pass.
  • Claude Code is tempting but not Parker’s default; he’s leaned away from it unless the Augment-based flow starts underperforming.
  • Browser-based flows (e.g., Gemini) are a fallback path for quick data operations, but not Parker’s day-to-day coding engine.
  • When coding AI apps, Parker prefers reliable, well-documented stacks with strong integration into the Google stack (Vertex AI) and Next.js for web apps (a minimal Vertex AI call is sketched after this list).
  • For DevOps and infra, a beefy but cost-effective machine (Netcup, Debian-based) paired with PM2, Nginx, and Discord-based observability is the current setup.
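
As a rough illustration of the Vertex AI side in Python (the project ID and model name are placeholders, and the SDK surface evolves, so treat this as a sketch rather than canonical usage):

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: swap in your GCP project and preferred region/model.
vertexai.init(project="your-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Steelman this product idea: ...")
print(response.text)
```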

Infra and deployment notes

  • Cloud and compute choices
    • Prefer GCP Vertex AI and related AI/ML tools for AI workloads.
    • Next.js for web apps and remote agent integration; strong community support and ecosystem.
    • FastAPI when you need a rich AI/ML backend with robust Python tooling.
  • DevOps and observability
    • PM2 runners, Nginx, and Discord bots for operational alerts (a minimal webhook sketch follows this list).
    • Python-centric stacks align well with Google’s tooling and AI APIs.
  • Practical note: there’s a tradeoff between raw compute cost and the complexity of maintaining a custom stack. Parker’s current setup emphasizes cost-effective, maintainable components with strong docs.
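
The Discord side of that observability can be as small as a webhook post. A minimal sketch, assuming a `DISCORD_WEBHOOK_URL` you create in the channel’s integration settings:

```python
import os

import requests  # pip install requests


def alert(message: str) -> None:
    """Post an operational alert to a Discord channel via webhook."""
    webhook_url = os.environ["DISCORD_WEBHOOK_URL"]  # set per channel/environment
    resp = requests.post(webhook_url, json={"content": message}, timeout=10)
    resp.raise_for_status()


# e.g., wire this into PM2 restart hooks or health checks
alert("api server restarted on the Netcup box")
```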

Echoes of the setup in practice

  • The `ai` folder pattern and the prompts you maintain can drive most AI-assisted coding workflows, regardless of tool, as long as you keep the prompts portable.
  • Augment’s memory and background agents put contextual intelligence at the core, reducing the overhead of constantly reinventing the wheel for every project.
  • The goal is leverage, not glorified tooling. Pick the pieces that scale with you and your projects.

Takeaways you can apply

  • Build portable prompts and a consistent file structure you can carry to any tool.
  • Use Augment (or an equivalent context engine) to keep long-running context and memories, instead of trying to fine-tune everything per tool.
  • Favor well-documented, widely supported stacks (e.g., Next.js, FastAPI, GCP Vertex AI) over niche, less-documented ecosystems.
  • Run MCPs in Docker to ensure repeatability and reliability across environments.
  • Start with a solid core (VS Code + Augment + GCP) and iterate on sub-stacks (Playwright, GitHub MCP, etc.) as needed.

If you want deeper context on Parker’s broader take on LLMs driving tech choices, check out the daily channel for further thoughts and contrasts.