Show Notes
I built a free CLI tool in a couple of hours to make AI-assisted development more repeatable and less brittle. It scaffolds an SDLC-driven workflow using modular markdown docs and prompt chaining, so outputs don't break as models improve.
What problem this solves
- Keeps engineering quality in AI projects by treating task creation like a software SDLC, not a marketing bullet.
- Moves beyond the hype around "vibe coding" by focusing on concrete artifacts (PRDs, architecture, tasks, tests) that survive model updates.
- Bridges between fast AI prompts and durable project structure via modular, self-healing patterns.
How this AI SDLC CLI works
- Self-contained workflow that prompts you for each SDLC step and builds the project structure accordingly.
- Uses prompt chaining: each step feeds into the next, preserving context and decisions.
- Keeps outputs stable via a file-tree and markdown-like docs, while letting the model improve the underlying reasoning over time.
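A minimal sketch of the prompt-chaining idea described above: each step's output becomes context for the next step's prompt. Function names, step names, and the `ask_model` callable are illustrative assumptions, not the tool's actual API.

```python
# Sketch of prompt chaining: every SDLC step's artifact is carried
# forward as context for the next step's prompt. Names are illustrative.

STEPS = ["idea", "prd", "arch_and_patterns", "tasks"]

def build_prompt(step: str, prior_outputs: dict[str, str]) -> str:
    """Assemble a prompt that includes every earlier artifact as context."""
    context = "\n\n".join(
        f"## {name}\n{text}" for name, text in prior_outputs.items()
    )
    return f"{context}\n\nNow produce the `{step}` document."

def run_chain(idea_text: str, ask_model) -> dict[str, str]:
    """Run the chain from an initial idea; ask_model is any prompt->text callable."""
    outputs = {"idea": idea_text}
    for step in STEPS[1:]:
        prompt = build_prompt(step, outputs)
        outputs[step] = ask_model(prompt)  # pluggable model call
    return outputs
```

Because each prompt embeds all prior artifacts, decisions made in the PRD stay visible when the architecture and task documents are generated.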
Core concepts and flow
- Phases (order roughly followed in the tool):
  - Idea
  - PRD (Product Requirements Document)
  - PRD + Architecture / System patterns
  - Tasks (with placeholders for tests in the future)
  - Optional: Tests (removed from this initial version)
- Architecture patterns included (Python and TypeScript focus initially):
  - Singleton, Factory, Facade, Observer, Strategy, Decorator, Repository, Dependency Injection
  - Python-specific typing and tooling patterns
- System patterns:
  - File-tree awareness: filter out noise and propose a target structure
  - Self-healing: if a pattern already exists, adapt it rather than overwrite it
- Output artifacts:
  - Rules and decision rationales (why a pattern was chosen)
  - Task lists with checkpoints and code snippets
  - References to dependencies, paths, and example scaffolding
- Scope and tech:
  - CLI-based; macOS-only initial rollout
  - Python and TypeScript support planned
  - Free to use
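The "self-healing" rule above can be sketched as a small file-scaffolding check: create the artifact if it is missing, leave it alone if the pattern is already present, and otherwise adapt it in place rather than overwrite it. The function name and merge strategy here are assumptions for illustration, not the tool's real implementation.

```python
# Sketch of self-healing scaffolding: never blindly overwrite an
# existing pattern file; adapt it instead. Helper name is hypothetical.
from pathlib import Path

def scaffold(path: Path, content: str) -> str:
    """Create the file if missing; otherwise adapt it in place."""
    if not path.exists():
        path.write_text(content)
        return "created"
    existing = path.read_text()
    if content in existing:
        return "unchanged"  # pattern already present, leave it as-is
    # Adapt rather than replace: append the new guidance to what exists.
    path.write_text(existing + "\n" + content)
    return "updated"
```

The same check generalizes to the file-tree pattern: inspect what already exists before proposing a target structure.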
How to use (quickstart)
- Install and initialize
  - CLI installation:
    - pip install ai-sdlc
  - Initialize and start the workflow (example):
    - ai_sdlc init
    - ai_sdlc idea "Refactor authentication flow for better modularity"
- What happens next
  - The tool creates a project layout with:
    - doing
    - done
    - prompts
  - You then provide the idea, and the CLI generates the next set of files and prompts for each subsequent step
- Example file tree after init (illustrative):

  .
  ├── doing
  │   └── (current active work)
  ├── done
  └── prompts
      ├── prd.md
      ├── arch_and_patterns.md
      ├── tasks.md
      └── tests.md (optional)
- Example flow (high-level)
- Idea -> PRD (captures problem, success criteria) -> PRD + Architecture (system patterns, file-tree proposal) -> Tasks (with examples, dependencies, paths) -> Tests (omitted in this first version)
- Each prompt uses the previous step as input to provide continuity
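The high-level flow above can be sketched as a tiny pipeline that writes one markdown artifact per phase under prompts/, feeding each document into the next. Filenames match the example tree; the phase summaries and the way context is carried forward are illustrative assumptions.

```python
# Illustrative sketch of the end-to-end flow: Idea -> PRD ->
# Architecture -> Tasks, each phase writing a markdown artifact that
# the next phase builds on. Not the tool's actual internals.
from pathlib import Path

PHASES = [
    ("prd.md", "PRD: problem statement and success criteria"),
    ("arch_and_patterns.md", "Architecture: system patterns and file-tree proposal"),
    ("tasks.md", "Tasks: checkpoints, dependencies, paths"),
]

def run_flow(root: Path, idea: str) -> list[str]:
    """Write one artifact per phase, chaining each into the next."""
    (root / "prompts").mkdir(parents=True, exist_ok=True)
    previous = idea
    written = []
    for filename, summary in PHASES:
        doc = f"{summary}\n\nBuilt from:\n{previous}"  # carry context forward
        (root / "prompts" / filename).write_text(doc)
        previous = doc
        written.append(filename)
    return written
```

Tests are omitted from the phase list here, matching the first release.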
What’s included in this first release
- Self-healing rules:
- If a project already has a pattern, the tool updates it instead of replacing it
- Checks the current file tree to guide architecture decisions
- Pattern library:
- A set of common architectural patterns (listed above) that the tool can scaffold and reason about
- Language support in scope:
- Python and TypeScript examples and scaffolding
- CLI-only (for now):
- No GUI or other bells and whistles; the focus is on solid, repeatable scaffolds
Limitations and current scope
- Test generation is trimmed back in this initial version
- macOS-only in this iteration
- Not a finished product; a fresh build with user feedback will drive improvements
Takeaways and practical tips
- Treat AI-assisted development as a lifecycle, not a one-off prompt
- Use modular markdown-like docs to anchor long-term stability as models evolve
- Chain prompts across SDLC steps to preserve decisions and rationale
- Start with architecture patterns and a concrete file-tree scaffold to gain immediate traction
- Build the tool to adapt: prefer self-healing rules that adjust existing patterns over overwriting them
Next steps and ideas
- Expand tests to cover unit, integration, and behavioral tests within the prompts
- Extend language support beyond Python and TypeScript
- Add richer examples, more patterns, and deeper analysis of architectural trade-offs
- Consider adding a “move fast with safeguards” mode that surfaces riskier design choices for human review
Links
- Related tooling discussed as context for workflow integration: Jira, Notion, Linear
- Cursor Memory Bank (inspiration for the self-improving approach)