AI · January 18, 2026 · 2 min read

Comparing Compaction Prompts

Breaking down how different agentic coding tools handle context compaction when conversations get too long.

I'm using Open-Source-Snapshot to break down how agentic coding harnesses work under the hood. I also made a video walkthrough if you prefer that format. One of the most interesting patterns I've found is how different tools handle context compaction—the process of summarizing long conversations so the LLM can continue working without losing critical information.
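
The mechanics are broadly similar across harnesses: once the conversation approaches the model's context limit, the harness asks the model to summarize the transcript with a dedicated prompt, then continues with the summary in place of the old turns. Here's a minimal sketch of that loop; the Message type, token estimate, budget, and complete callback are all hypothetical, not taken from either codebase.

compaction-sketch.ts
// Illustrative only; not code from Codex or OpenCode.

interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate; real harnesses use a proper tokenizer.
const estimateTokens = (messages: Message[]): number =>
  messages.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

const TOKEN_BUDGET = 100_000; // hypothetical context limit

async function maybeCompact(
  history: Message[],
  compactionPrompt: string,
  complete: (messages: Message[]) => Promise<string>,
): Promise<Message[]> {
  if (estimateTokens(history) < TOKEN_BUDGET) return history;

  // Ask the model to summarize the conversation using the tool's
  // compaction prompt, then continue with the summary in place of
  // the old turns.
  const summary = await complete([
    { role: "system", content: compactionPrompt },
    ...history,
  ]);

  return [{ role: "user", content: `Context summary:\n${summary}` }];
}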

Here's how two popular tools approach this problem.

OpenAI Codex Compaction Prompt

The default Codex compaction prompt lives in src/coding-agents/codex/codex-rs/core/templates/compact/prompt.md (loaded via src/coding-agents/codex/codex-rs/core/src/compact.rs).

prompt.md
You are performing a CONTEXT CHECKPOINT COMPACTION. Create a handoff summary for another LLM that will resume the task.

Include:
- Current progress and key decisions made
- Important context, constraints, or user preferences
- What remains to be done (clear next steps)
- Any critical data, examples, or references needed to continue

Be concise, structured, and focused on helping the next LLM seamlessly continue the work.
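
I haven't traced every line of compact.rs, but the idea is straightforward: deliver the template to the model as the final instruction, so its reply becomes the checkpoint that a fresh context resumes from. A rough sketch of that shape, with hypothetical file paths and helpers:

codex-compaction-sketch.ts
// Hypothetical sketch of the idea; not Codex code.
import { readFileSync } from "node:fs";

type Message = { role: "system" | "user" | "assistant"; content: string };

// Load a file-based template like prompt.md.
const compactionTemplate = readFileSync(
  "templates/compact/prompt.md",
  "utf8",
);

// The handoff framing: append the template as the final instruction and
// let the model's reply serve as the checkpoint summary.
function buildCompactionRequest(history: Message[]): Message[] {
  return [...history, { role: "user", content: compactionTemplate }];
}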

OpenCode Compaction Prompt

The compaction prompt is defined in src/coding-agents/opencode/packages/opencode/src/agent/prompt/compaction.txt.

compaction.txt
You are a helpful AI assistant tasked with summarizing conversations.

When asked to summarize, provide a detailed but concise summary of the conversation.
Focus on information that would be helpful for continuing the conversation, including:
- What was done
- What is currently being worked on
- Which files are being modified
- What needs to be done next
- Key user requests, constraints, or preferences that should persist
- Important technical decisions and why they were made

Your summary should be comprehensive enough to provide context but concise enough to be quickly understood.
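
Applying the summary is the other half of the job. A plausible sketch, again with hypothetical names rather than OpenCode's actual implementation, is to splice the summary in ahead of a short tail of recent messages:

apply-summary.ts
// Illustrative; not OpenCode's implementation.

type Message = { role: "system" | "user" | "assistant"; content: string };

// Drop the old turns, keep a short tail of recent messages, and prepend
// the summary so the model retains files-in-progress, constraints, and
// next steps.
function applySummary(
  history: Message[],
  summary: string,
  keepRecent = 4,
): Message[] {
  const tail = history.slice(-keepRecent);
  return [
    { role: "user", content: `Summary of earlier conversation:\n${summary}` },
    ...tail,
  ];
}

The two prompts pull in the same direction but frame the task differently: Codex treats compaction as a checkpoint handoff to another LLM, while OpenCode asks the model to summarize its own conversation, with an explicit reminder to track which files are being modified.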