Context engineering is the discipline of designing and managing the complete information payload sent to an LLM — system prompts, tools, examples, retrieval results, memory, and conversation history — to optimize agent behavior across multiple turns and long-horizon tasks.

# Definition

Where prompt engineering focuses on crafting discrete instructions, context engineering architects the entire information flow that shapes agent behavior. It asks: "What should the agent know at each step?" rather than "What should we say to the model?"

Context engineering encompasses:

- System prompts and their structure
- Tool definitions and descriptions
- Few-shot examples
- Retrieval results (RAG)
- Memory, both short- and long-term
- Conversation history management
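As a sketch, the complete information payload can be modeled as a structured object assembled fresh on each turn. The names and shape below are illustrative, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class ContextPayload:
    """Everything the model sees on a given turn (illustrative names)."""
    system_prompt: str
    tools: list[dict] = field(default_factory=list)      # tool definitions
    examples: list[dict] = field(default_factory=list)   # few-shot examples
    retrieved: list[str] = field(default_factory=list)   # retrieval results
    memory: list[str] = field(default_factory=list)      # persisted notes
    history: list[dict] = field(default_factory=list)    # conversation turns

    def render(self) -> list[dict]:
        # Flatten into the message list an LLM API would receive.
        messages = [{"role": "system", "content": self.system_prompt}]
        for doc in self.memory + self.retrieved:
            messages.append({"role": "system", "content": doc})
        messages += self.examples + self.history
        return messages
```

Framing the payload as one assembled object makes the central question explicit: every field is a decision about what the agent should know at this step.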

# Core Principles

## From Anthropic

Anthropic's guidance on effective context engineering emphasizes:

- Finding the smallest set of high-signal tokens that maximizes the likelihood of the desired outcome
- Calibrating system prompts to the right "altitude": specific enough to guide behavior, flexible enough to generalize
- Preferring just-in-time retrieval, loading context as it becomes relevant, over pre-loading everything
- Using compaction, structured note-taking, and sub-agent architectures to manage long-horizon tasks

Source: Effective context engineering for AI agents

## From Manus

Manus's practical approach highlights:

- Designing around the KV-cache: keep prompt prefixes stable and context append-only to maximize cache hit rates
- Masking tools rather than removing them mid-task, so the cached prefix stays valid
- Using the filesystem as externalized memory for content too large to keep in context
- Manipulating attention through recitation, e.g. maintaining a todo list that restates objectives
- Keeping errors in context so the model can learn from failed actions
- Avoiding uniform few-shot patterns that lead the model to overfit to repetition

Source: Context Engineering for AI Agents: Lessons from Building Manus
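One of these ideas, masking tools instead of removing them, can be sketched as follows. The state names and policy table are hypothetical; real implementations typically mask token logits during decoding rather than toggling a flag:

```python
def allowed_tools(all_tools: list[str], state: str) -> set[str]:
    # Hypothetical policy: which tools the agent may call in each state.
    policy = {
        "planning": {"read_file", "search"},
        "executing": {"read_file", "write_file", "run_tests"},
    }
    return policy.get(state, set(all_tools))

def mask_tools(all_tools: list[str], state: str) -> list[dict]:
    # All tool definitions stay in context in the same order, which
    # preserves the KV-cache; only the availability flag changes.
    allowed = allowed_tools(all_tools, state)
    return [{"name": t, "enabled": t in allowed} for t in all_tools]
```

Because the tool list never shrinks or reorders, the serialized prompt prefix is byte-identical across states, which is what keeps cache hit rates high.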

# Context Rot

A fundamental constraint in context engineering is context rot — the degradation of model attention as context length grows. As more tokens accumulate, earlier information receives less attention weight, leading to forgotten instructions or degraded performance.

Mitigation strategies:

- Compaction: summarize older turns and replace them with the summary
- Just-in-time retrieval: fetch information when it is needed rather than front-loading it
- Externalized memory: offload large artifacts to files and keep only references in context
- Recitation: periodically restate key instructions late in the context, where attention is strongest
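A minimal sketch of the compaction strategy, assuming a placeholder `summarize` function (in practice an LLM call) and a crude word-count proxy for tokens:

```python
def count_tokens(text: str) -> int:
    # Crude proxy: a real system would use the model's tokenizer.
    return len(text.split())

def summarize(messages: list[dict]) -> str:
    # Placeholder: in practice this would be an LLM summarization call.
    return f"Summary of {len(messages)} earlier messages."

def compact(history: list[dict], budget: int, keep_recent: int = 4) -> list[dict]:
    """Replace older messages with a summary once the token budget is exceeded."""
    total = sum(count_tokens(m["content"]) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent
```

Keeping the most recent turns verbatim while summarizing the rest preserves local coherence at the point in the context the model attends to most.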

# Relationship to Harness Engineering

Context engineering is a core component of harness engineering. While harness engineering builds the structural environment (CLAUDE.md, specs, directory layout), context engineering ensures that the right information flows to the agent at the right time within that structure.

A well-harnessed repository naturally supports good context engineering: clear file organization means agents retrieve relevant files efficiently, atomic commits mean history is parseable, and specs provide high-signal context for decision-making.
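For instance, a harness-aware agent loop might pull structural files into context before acting. The loop below is illustrative, and apart from CLAUDE.md the file paths are hypothetical conventions, not fixed names:

```python
from pathlib import Path

# Hypothetical harness conventions; only CLAUDE.md is named in the text above.
HARNESS_FILES = ["CLAUDE.md", "specs/current.md"]

def harness_context(repo: Path) -> list[str]:
    """Collect whichever high-signal harness files exist in the repository."""
    chunks = []
    for name in HARNESS_FILES:
        path = repo / name
        if path.is_file():
            chunks.append(f"## {name}\n{path.read_text()}")
    return chunks
```

Because the harness fixes where this information lives, the retrieval step is a cheap, deterministic file read rather than a search problem.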