Show Notes
Parker walks through using OpenAI's o1-preview model inside Cursor and the OpenAI Playground, showing a practical workflow to test prompts, prep a repo for LLMs, and implement a calendar-delete feature with smart prompts.
Setup: Enable o1-preview in Cursor
- In Cursor, open Settings (top-right) > Models > Add Model.
- Select o1-mini or o1-preview (this walkthrough uses the o1-preview model).
- You’ll hit limits quickly. You have two options:
  - Set a new hard limit (this can cost about $0.40 per request); use it sparingly.
  - Use an OpenAI API key from platform.openai.com and paste it into Cursor (verify it to enable).
Actionable note:
- If you’re prototyping or testing, the API key route avoids the per-request costs and throttling that the hard-limit path incurs.
Code snippet (conceptual):

```text
# Enable o1-preview in Cursor
Settings -> Models -> Add Model -> o1-mini / o1-preview
```
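If you go the API-key route, it helps to sanity-check that the key actually has o1-preview access before wiring it into Cursor. A minimal sketch, assuming the `openai` npm package (v4+), Node 18+, and an `OPENAI_API_KEY` environment variable:

```typescript
import OpenAI from "openai";

// Picks up OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

async function checkO1Access() {
  // o1-preview does not accept a "system" role message,
  // so any instructions go straight into the user message.
  const response = await client.chat.completions.create({
    model: "o1-preview",
    messages: [{ role: "user", content: "Reply with OK if you can read this." }],
  });
  console.log(response.choices[0].message.content);
}

checkO1Access().catch(console.error);
```

If this fails with a model or access error, the account behind the key may not have o1-preview enabled yet.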
Prompt optimization in OpenAI Playground
- In Playground, switch to the Assistant area.
- Bring in your Cursor rules:
  - Copy the Cursor rules from your Cursor directory (the prompt rules you already use with Cursor).
  - Paste them into the System Instructions in the Playground.
- Click Create to generate a system prompt tailored for the o1-preview model.
- Reopen the chat in the Playground:
  - Go to the Beta chat flow and add the modified prompt as the system prompt.
- Prep the target repo with Repopack:
  - Use Repopack to package the entire repo, or selected directories/files, into an LLM-friendly format.
  - It outputs a structured text file (file summary, tree, repo files, etc.) at the repo root.
  - If you don’t want to dig through the docs, you can invoke it via Command-K in your environment and type the Repopack command to generate the file.
Code snippet (conceptual):

```text
# In the Playground
1) Copy Cursor rules from the Cursor directory
2) Paste into System Instructions
3) Click Create -> use the generated system prompt
```
Repo prep workflow with Repopack
- Run Repopack to generate a summarized, LLM-friendly view of the repo (file tree, relevant files, etc.).
- It writes a single summary file at the repo root. Drag that file into the Playground to give the model context on your codebase.
- Example workflow described:
  - Generate repo-summary.txt at the root
  - Paste its contents into the Playground to inform code tasks
  - Use the Playground to iterate on changes and get results back quickly
Simple outline of the flow (conceptual):
```text
$ repopack
# outputs: repo-summary.txt at the project root
```
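Once the prompt stabilizes, the drag-the-file-in step can also be reproduced over the API. A rough sketch, assuming the `openai` npm package and the example repo-summary.txt filename above (the askAboutRepo helper is hypothetical):

```typescript
import fs from "node:fs/promises";
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical helper: sends the Repopack output plus a task description to o1-preview.
async function askAboutRepo(task: string): Promise<string | null> {
  const repoSummary = await fs.readFile("repo-summary.txt", "utf8");

  const response = await client.chat.completions.create({
    model: "o1-preview",
    // o1-preview has no system role, so the task and the repo context
    // share a single user message.
    messages: [
      { role: "user", content: `${task}\n\n--- Repo context ---\n${repoSummary}` },
    ],
  });

  return response.choices[0].message.content;
}

askAboutRepo("Add a delete option to the calendar event dialog.").then(console.log);
```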
Concrete task: Delete a calendar event with recurrence handling
- Objective: Add the ability to delete a single calendar event, handling recurring events gracefully.
- UI/UX reference: model the dialog on Google Calendar’s (offer options for a single instance vs. all future occurrences); a rough sketch of this shape follows the list below.
- Files mentioned: a calendar-related dialog TSX file (SheetDialog.tsx or similar) with:
  - A recurrence check
  - Options: delete this instance, delete all, delete future events
- Prompts and linting:
  - Include linting errors you’ve encountered, and enforce strict typing to surface dead code.
  - Build the prompt to instruct the assistant to implement the new functionality accordingly.
- Execution flow:
  - Copy and paste the prompt suite (including the objective and lint cues) into the Playground.
  - Run the model to generate the implementation snippets, then iterate.
- Playground tip: use the quick actions to re-run smaller changes and compare results without rebuilding the whole prompt.
- Typical turnaround: about 2–5 minutes depending on prompt size and repo scope.
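A rough TSX sketch of the dialog shape the prompt is asking for (the component and prop names here are hypothetical, not taken from the actual SheetDialog.tsx):

```tsx
import * as React from "react";

// Hypothetical scope values mirroring the Google Calendar-style options.
type DeleteScope = "this" | "thisAndFollowing" | "all";

interface DeleteEventDialogProps {
  isRecurring: boolean;
  onConfirm: (scope: DeleteScope) => void;
  onCancel: () => void;
}

export function DeleteEventDialog({ isRecurring, onConfirm, onCancel }: DeleteEventDialogProps) {
  // Non-recurring events skip the scope question entirely.
  if (!isRecurring) {
    return (
      <div role="dialog">
        <p>Delete this event?</p>
        <button onClick={() => onConfirm("this")}>Delete</button>
        <button onClick={onCancel}>Cancel</button>
      </div>
    );
  }

  // Recurring events get the three Google Calendar-style choices.
  return (
    <div role="dialog">
      <p>Delete recurring event</p>
      <button onClick={() => onConfirm("this")}>This event</button>
      <button onClick={() => onConfirm("thisAndFollowing")}>This and following events</button>
      <button onClick={() => onConfirm("all")}>All events</button>
      <button onClick={onCancel}>Cancel</button>
    </div>
  );
}
```

Keeping the scope as an explicit union type gives the model (and the linter) an easy way to flag unhandled branches.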
Takeaway workflow:
- Use Cursor rules to shape prompts
- Use Playground for rapid iteration and testing
- Use Repopack to prep your repo for LLM-friendly prompts
- Iterate on a concrete feature (calendar delete), leveraging strict typing and lint feedback to refine
Quick tips and caveats
- o1-preview limits can bite; prefer an API key for a smoother workflow, or use the hard-limit route sparingly for testing.
- Copy and adapt Cursor rules rather than rewriting from scratch—consistency pays off.
- The Playground is great for rapid testing, prompt tuning, and validating integration prompts before writing code.
- Strict typing and linting help catch dead code early when you’re feeding code tasks to the model.
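For the strict-typing tip above, a minimal tsconfig sketch (these are standard TypeScript compiler options; adjust to your project):

```jsonc
{
  "compilerOptions": {
    "strict": true,             // full strict type checking
    "noUnusedLocals": true,     // flag dead local variables
    "noUnusedParameters": true  // flag unused function parameters
  }
}
```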
Takeaways
- Add o1-preview in Cursor, but manage limits intelligently (hard limit vs. API key).
- Move your Cursor rules into the Playground as a system prompt so you can experiment and refine without losing context.
- Use Repopack to produce a concise repo summary for LLMs, then feed that into the Playground to guide prompt-based code changes.
- For real features (like calendar deletion with recurrence logic), build a concrete prompt around known UX patterns (Google Calendar-style dialogs) and validate the output against your linting and strict-typing rules.