Show Notes
Parker Rex breaks down practical prompt engineering: how to talk to AI to get better output, the core anatomy of prompts, and the tooling and workflows that actually work in practice.
Prompt Architecture: Models, Context, and When to Use What
- Model families
- Chat models: cheap, fast, great for high-frequency tasks
- Chain-of-thought (thinking) models: longer, deeper outputs but more expensive
- Hybrid models (e.g., Google Gemini): powerful all-around, increasingly the default for many tasks
- Practical guidance
- Use hybrid for writing or coding tasks
- Use chat models for quick, repeatable prompts
Context is King
- Context defines how the AI interprets your request
- Without context, even a brilliant model can produce generic or off-mark results
- Build prompts with a consistent structure to maximize alignment from run to run
The 5-Part Prompt Template
Always structure prompts with these five elements:
- Role: Define the assistant’s persona (e.g., “You are an expert direct-response copywriter.”)
- Purpose: State what you want the assistant to accomplish
- Instructions: Give step-by-step, atomic tasks
- Rules: Include constraints and anti-rules (e.g., “use fifth-grade writing level,” “avoid fluff”)
- Output: Define the exact format and expectations (e.g., a template, JSON, or a short draft)
Concrete example (writing task):
- Role: You are an expert direct-response copywriter.
- Purpose: You will rewrite a draft into a more persuasive version.
- Instructions: 1) Read the draft. 2) Identify 3-5 improvements. 3) Produce a revised draft. 4) Provide rationale. 5) List changes.
- Rules: Write at an 8th-grade level. No fluff. Provide output in JSON only.
- Output: Deliver a JSON object with fields title, body, and cta: { "title": "", "body": "", "cta": "" }
Tip: end the prompt with the exact expected output format so the model returns structured results you can consume downstream.
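The five parts above can be sketched as a small helper that assembles them in a fixed order. This is a minimal illustration, not tied to any model API; the function name and example values just mirror the copywriting example.

```python
def build_prompt(role: str, purpose: str, instructions: list[str],
                 rules: list[str], output: str) -> str:
    """Assemble a prompt from the five parts in a fixed order."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(instructions, 1))
    constraints = "\n".join(f"- {rule}" for rule in rules)
    return (
        f"Role: {role}\n\n"
        f"Purpose: {purpose}\n\n"
        f"Instructions:\n{steps}\n\n"
        f"Rules:\n{constraints}\n\n"
        f"Output: {output}"
    )

prompt = build_prompt(
    role="You are an expert direct-response copywriter.",
    purpose="Rewrite a draft into a more persuasive version.",
    instructions=["Read the draft.", "Identify 3-5 improvements.",
                  "Produce a revised draft.", "Provide rationale.", "List changes."],
    rules=["Write at an 8th-grade level.", "No fluff.",
           'Respond only with JSON: {"title": "", "body": "", "cta": ""}'],
    output="A JSON object with fields title, body, and cta.",
)
print(prompt)
```

Keeping the parts as separate arguments makes each one easy to swap during iteration without disturbing the rest of the prompt.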
The Manual Prompt-Engineering Workflow
- Start with a solid draft
- Prompt the model, read the output, then evaluate
- Iterate by tweaking the prompt (not just the draft)
- Tools to help refine prompts:
- Anthropic: use the Generate Prompt button to improve your draft
- Google: use the “Help me write” feature to reshape prompts
- OpenAI: use the model’s output to refine further (a feedback loop)
- The core idea: prompts are artifacts; you improve them through deliberate, manual evaluation followed by re-prompting
Example workflow:
- Write a draft
- Feed it into the model with the 5-part template
- Copy the output back into a sheet or notes
- Manually assess quality, adjust the prompt, and re-run
- Repeat until you’re satisfied
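The workflow above is manual, but the record-keeping can be scripted. A sketch under stated assumptions: `call_model` is a hypothetical stand-in for whichever API or playground you use, and `runs.jsonl` is an arbitrary log file name. The point is the artifact trail: every prompt version and output is appended so iterations stay comparable.

```python
import json
from datetime import datetime, timezone

def call_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real API call or a
    # copy/paste from a playground.
    raise NotImplementedError

def log_run(prompt_version: int, prompt: str, output: str,
            path: str = "runs.jsonl") -> None:
    """Append one prompt/output pair as a JSON line for later review."""
    record = {
        "version": prompt_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

A JSON Lines file opens cleanly in a spreadsheet or notes tool, which matches the "copy the output back into a sheet" step.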
Token Efficiency and Structured Outputs
- Tokens are the unit of compute, not characters
- Dead space and verbosity waste tokens and money
- Prefer structured formats (JSON, XML) to minimize tokens and maximize parse-ability
- JSON is often easier to read and process; XML can be more verbose but sometimes more expressive
- Visualizing data
- Think of JSON as a flat table or spreadsheet: each object is a row, fields are columns
- Where to try prompts
- Anthropic Console (console.anthropic.com)
- OpenAI Playground (platform.openai.com/playground)
- Other model explorers and IDEs exist, but focus on the two above for practical testing
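The "JSON as a spreadsheet" idea above can be shown in a few lines: a list of uniform objects maps directly to rows and columns, which is what makes structured output easy to dump into a sheet for evaluation. The sample records are illustrative only.

```python
import csv
import io
import json

# Illustrative model output: a JSON array of uniform objects.
raw = json.dumps([
    {"title": "Hook A", "body": "Short pitch.", "cta": "Buy now"},
    {"title": "Hook B", "body": "Longer pitch.", "cta": "Learn more"},
])

rows = json.loads(raw)        # each object is one row
columns = list(rows[0])       # field names are the columns

# Write the rows out as CSV, ready to paste into a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```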
Tools, Environments, and Prompt Workflows
- Vercel AI Playground: explore multiple models side-by-side and compare outputs
- Google’s prompt tooling (e.g., “Help me write”)
- OpenAI Playground: experiment with different prompts and formats
- MIMO: notebook-style prompts for iterative, agent-like workflows
- Useful for building a sequence of steps (offers, headlines, hooks) and evaluating each stage
- Prompt management realities
- Prompts are artifacts to be stored and reused
- Plan for hotkeys and quick access; consider future tooling to manage expansions and compressions of prompts
Prompt Management and Future Plans
- Prompt artifacts and hotkeys
- Store and bind reusable prompts to shortcuts
- Expansion vs. compression prompts
- Expansion: take a small input and expand it (e.g., expand a headline into a full ad copy)
- Compression: condense long-form content into concise versions
- Vision for a centralized prompt tool
- Input modality (text, image, video, audio) → model outputs (text, code, media)
- Model selection pane shows the outputs per model
- Aims to streamline prompt creation and orchestration across media formats
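The expansion/compression split above can be sketched as two stored prompt artifacts bound to short names (the hotkey idea). The template wording is illustrative, not from the episode.

```python
# Reusable prompt artifacts keyed by short names, ready to bind to hotkeys.
PROMPTS = {
    "expand": (
        "Role: You are an expert copywriter.\n"
        "Purpose: Expand the input into full ad copy.\n"
        "Rules:\n- Keep the original message.\n- No fluff.\n"
        "Input: {text}"
    ),
    "compress": (
        "Role: You are an expert editor.\n"
        "Purpose: Condense the input to its core message.\n"
        "Rules:\n- Under 50 words.\n- Preserve the call to action.\n"
        "Input: {text}"
    ),
}

def render(name: str, text: str) -> str:
    """Fill a stored prompt artifact with the user's input."""
    return PROMPTS[name].format(text=text)

print(render("expand", "Save 20% this week only."))
```

Storing prompts as data rather than inline strings is what makes them reusable artifacts: they can be versioned, tagged, and bound to shortcuts.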
Practical Takeaways
- Start with a strong context using the 5-part prompt framework
- Optimize input length to maximize output quality without wasting tokens
- Use structured outputs (JSON/XML) to simplify downstream processing
- Refine prompts manually before automating or scaling
- Treat prompts as repeatable artifacts you store, tag, and bind to workflows
- Explore community tools and early-access prompt generators when available
Quick Wins to Try Today
- Write a 5-part prompt for your current task (role, purpose, instructions, rules, output)
- Use JSON as the output format and define the fields you need
- Run a few iterations: draft → improved draft via the prompt → evaluate outputs in a spreadsheet or notes
- Experiment with Anthropic’s Generate Prompt button and Google’s “Help me write” feature to see how prompts can be improved automatically
If you found this helpful, consider sharing a prompt you’re working on in the community to get feedback and accelerate your own improvements.