Context Engineering

Context engineering is the practice of deliberately structuring, curating, and managing the information an AI model receives across sessions — as opposed to relying on the raw conversation window.

The Core Problem: Context Rot

LLM output quality degrades as the context window fills. In long coding sessions, this manifests as:

  • Increasing contradiction and hallucination
  • Loss of earlier requirements or decisions
  • Slower, costlier inference
  • Drift from the original design intent

Context rot is the primary reason AI-assisted development feels unreliable over multi-hour or multi-session work.

Key Techniques

Externalised state files: Project context lives in structured files (PROJECT.md, ROADMAP.md, REQUIREMENTS.md, STATE.md) that any session can read on startup — decoupling the project's memory from any single context window.
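A minimal sketch of the startup step, assuming the file names above; load_project_context is a hypothetical helper that concatenates whichever state files exist into one block the session can read first:

```python
from pathlib import Path

# File names follow the convention described above.
STATE_FILES = ["PROJECT.md", "ROADMAP.md", "REQUIREMENTS.md", "STATE.md"]

def load_project_context(root: str) -> str:
    """Concatenate whichever state files exist into a single startup context."""
    sections = []
    for name in STATE_FILES:
        path = Path(root) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

Because the files are ordinary text on disk, any fresh session (or subagent) can rebuild its working memory without replaying the prior conversation.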

Window capping: Keep the active context window at 30–40% capacity by offloading heavy work to fresh subagent contexts. This prevents degradation even in long-running tasks.
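The capping rule can be sketched as a simple budget check. Everything here is illustrative: the window size is an assumed figure, and the token counter is a crude word-based stand-in for a real tokenizer:

```python
CONTEXT_LIMIT = 200_000   # assumed model window size, in tokens
CAP_RATIO = 0.4           # stay under ~40% of the window, per the rule above

def estimate_tokens(text: str) -> int:
    # Rough heuristic (~1.3 tokens per word); use a real tokenizer in practice.
    return int(len(text.split()) * 1.3)

def should_offload(transcript: str) -> bool:
    """True when the active window exceeds the cap and heavy work
    should move to a fresh subagent context."""
    return estimate_tokens(transcript) > CONTEXT_LIMIT * CAP_RATIO
```

When the check trips, the heavy task runs in a new context seeded from the externalised state files rather than from the bloated transcript.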

Structured phases: Break work into discrete stages (plan → execute → verify) so the AI never mixes concerns in the same context.
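One way to picture the phase separation, assuming a call_model(prompt) function (hypothetical) that runs each prompt in a fresh context; only each phase's output is carried forward, never the full transcript:

```python
def run_task(task: str, call_model) -> dict:
    """Run plan -> execute -> verify, each phase in its own context,
    passing forward only the previous phase's output."""
    plan = call_model(f"Plan the following task:\n{task}")
    result = call_model(f"Execute this plan:\n{plan}")
    report = call_model(f"Verify this result against the plan:\n{plan}\n\n{result}")
    return {"plan": plan, "result": result, "report": report}
```

Because verification sees only the plan and the result, it cannot be biased by (or confused with) the reasoning that produced them.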

Atomic commits: Each task step is committed independently, creating checkpoints that make partial failures recoverable.
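A sketch of the checkpointing step using plain git via subprocess; commit_step is a hypothetical helper that stages only the files a step touched:

```python
import subprocess

def commit_step(message: str, paths: list[str]) -> None:
    """Stage only the files touched by this step and commit them as one
    checkpoint, so a failed later step can be rolled back independently."""
    subprocess.run(["git", "add", *paths], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)

# After a partial failure, roll back to the last good checkpoint with:
#   git reset --hard HEAD~1
```

One commit per step means a botched step 4 never contaminates the verified work from steps 1–3.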

Relationship to Memory Systems

Context engineering is adjacent to llm-memory-systems but distinct:

  • Memory systems (Mem0, Letta, Dakera) persist semantic information across agents
  • Context engineering manages structural information — what the AI is doing, what was decided, what’s next
  • They’re complementary: memory for facts, context engineering for workflow state

Implementations

  • get-shit-done-gsd — the most complete open-source implementation of context engineering for coding workflows