Learn Claude Code

Build a nano Claude Code-like agent from 0 to 1, one mechanism at a time

The Core Pattern

Every AI coding agent shares the same loop: call the model, execute tools, feed results back. Production systems add policy, permissions, and lifecycle layers on top.

agent_loop.py
while True:
    response = client.messages.create(model=MODEL, max_tokens=4096,
                                      messages=messages, tools=tools)
    messages.append({"role": "assistant", "content": response.content})
    if response.stop_reason != "tool_use":
        break
    results = [
        {"type": "tool_result", "tool_use_id": block.id,
         "content": execute_tool(block.name, block.input)}
        for block in response.content if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": results})

Message Growth

Watch the messages array grow as the agent loop executes

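One round-trip can be traced by hand. A minimal sketch of how messages[] grows during a single tool call — the structures follow the Anthropic Messages API shape, but the IDs and content are illustrative:

```python
# Sketch: how messages[] grows across one tool round-trip.
messages = []                                                        # len = 0

messages.append({"role": "user", "content": "List files in /tmp"})  # len = 1

# The assistant replies with a tool_use block (normally from the API).
messages.append({
    "role": "assistant",
    "content": [{"type": "tool_use", "id": "toolu_01",
                 "name": "bash", "input": {"command": "ls /tmp"}}],
})                                                                   # len = 2

# The loop executes the tool and feeds the result back as a user turn.
messages.append({
    "role": "user",
    "content": [{"type": "tool_result", "tool_use_id": "toolu_01",
                 "content": "a.txt\nb.txt"}],
})                                                                   # len = 3
```

Every iteration of the loop appends one assistant turn and, when tools were used, one user turn of tool results — so the array grows until the model stops asking for tools.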

Learning Path

21 progressive sessions, from a simple loop to isolated autonomous execution

s01 · 98 LOC

The Agent Loop

The minimal agent kernel is a while loop + one tool

s02 · 154 LOC

Tools

The loop stays the same; new tools register into the dispatch map
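A dispatch map can be sketched in a few lines — the tool names and helper functions here are illustrative, not the session's exact code:

```python
# Sketch of a tool dispatch map: the loop never changes; adding a
# capability means registering one more entry here.
import subprocess

def run_bash(tool_input):
    return subprocess.run(tool_input["command"], shell=True,
                          capture_output=True, text=True).stdout

def read_file(tool_input):
    with open(tool_input["path"]) as f:
        return f.read()

TOOLS = {
    "bash": run_bash,
    "read_file": read_file,
}

def execute_tool(name, tool_input):
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](tool_input)
```

Because unknown tools return an error string instead of raising, the model sees its mistake as a tool result and can recover.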

s03 · 212 LOC

TodoWrite

An agent without a plan drifts; list the steps first, then execute

s04 · 206 LOC

Subagents

Subagents use independent messages[], keeping the main conversation clean
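The isolation boundary is just a second list. A minimal sketch, where call_model stands in for a real API call:

```python
# Sketch: a subagent runs over its own messages[] list; only its final
# answer returns to the parent conversation.
def call_model(messages):
    # Stand-in for a real model call that reads the whole sub-conversation.
    return f"done after {len(messages)} turn(s)"

def run_subagent(task):
    sub_messages = [{"role": "user", "content": task}]  # fresh, isolated context
    # ...the subagent's own tool loop would append turns to sub_messages...
    return call_model(sub_messages)

main_messages = [{"role": "user", "content": "Refactor the parser"}]
summary = run_subagent("Find every caller of parse()")
main_messages.append({"role": "user", "content": summary})  # only the summary lands here
```

However many tool turns the subagent burns, the parent conversation grows by exactly one message.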

s05 · 234 LOC

Skills

Inject knowledge via tool_result when needed, not upfront in the system prompt
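Lazy injection can be sketched as one more tool — the skill names and text below are invented for illustration:

```python
# Sketch: a load_skill tool returns knowledge as a tool_result only when
# the model asks for it, instead of packing it all into the system prompt.
SKILLS = {
    "git-bisect": "To find a bad commit: git bisect start; mark good/bad; ...",
    "pdf-extract": "Prefer text extraction; fall back to OCR for scans.",
}

def load_skill(tool_input):
    name = tool_input["name"]
    if name in SKILLS:
        return SKILLS[name]
    return f"No skill named {name!r}. Available: {', '.join(sorted(SKILLS))}"
```

The system prompt only needs to list skill names; the full text enters the context solely when it is actually useful.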

s06 · 290 LOC

Compact

Context will fill up; three-layer compression strategy enables infinite sessions

s07 · 251 LOC

Tasks

A file-based task graph with ordering, parallelism, and dependencies -- the coordination backbone for multi-agent work
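The backbone can be sketched as a JSON file plus one query — the file shape and helper names are assumptions, not the session's exact format:

```python
# Minimal sketch of a file-based task graph: the JSON file *is* the
# shared coordination state.
import json, os, tempfile

tasks = [
    {"id": "t1", "goal": "write tests", "deps": [],     "status": "done"},
    {"id": "t2", "goal": "implement",   "deps": ["t1"], "status": "pending"},
    {"id": "t3", "goal": "document",    "deps": ["t2"], "status": "pending"},
]

board = os.path.join(tempfile.mkdtemp(), "tasks.json")
with open(board, "w") as f:
    json.dump(tasks, f)

def ready(path):
    """Tasks whose dependencies are all done: safe to run, even in parallel."""
    graph = json.load(open(path))
    done = {t["id"] for t in graph if t["status"] == "done"}
    return [t["id"] for t in graph
            if t["status"] == "pending" and set(t["deps"]) <= done]
```

Ordering and parallelism both fall out of the same query: everything ready() returns can run at once, and dependents unblock as statuses flip to done.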

s08 · 230 LOC

Background Tasks

Run slow operations in the background; the agent keeps thinking ahead
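A start/check pair is enough to sketch the idea — names are illustrative:

```python
# Sketch: launch a slow command without blocking the loop, poll later.
import subprocess

jobs = {}

def start_background(job_id, command):
    jobs[job_id] = subprocess.Popen(command, shell=True,
                                    stdout=subprocess.PIPE, text=True)
    return f"started {job_id}"

def check_background(job_id):
    proc = jobs[job_id]
    if proc.poll() is None:
        return "still running"
    return proc.stdout.read()
```

Between start and check, the agent keeps taking other turns; the slow work costs it no context or wall-clock time.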

s09 · 386 LOC

Agent Teams

When one agent can't finish, delegate to persistent teammates via async mailboxes

s10 · 466 LOC

Team Protocols

One request-response pattern drives all team negotiation

s11 · 543 LOC

Autonomous Agents

Teammates scan the board and claim tasks themselves; no need for the lead to assign each one

s12 · 737 LOC

Worktree + Task Isolation

Each agent works in its own directory; tasks manage goals, worktrees manage directories, bound by ID

s13 · 256 LOC

Agent Evals

An agent that can't verify its own work is just guessing; verifiable output is the key to success

s14 · 335 LOC

Workflow Patterns

Don't build agents — build workflow patterns; start simple, add complexity only when needed

s15 · 291 LOC

Context Engineering

Context Engineering > Prompt Engineering; proactive budget allocation, not reactive compression

s16 · 267 LOC

Long-Running Harness

Separate the generator from the evaluator; fresh context per iteration solves context anxiety

s17 · 230 LOC

MCP

MCP is USB for AI tools — standardized discovery and invocation across any service

s18 · 234 LOC

Auto Mode

Safety should not be the enemy of efficiency; 95% auto-approve, 5% human oversight

s19 · 163 LOC

Think Tool

The simplest tool can be the most useful; giving the model permission to pause improves quality
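The whole tool fits in a schema and a no-op handler — the description text is illustrative:

```python
# Sketch of a "think" tool: a no-op whose only effect is giving the model
# a sanctioned place to reason before acting. Schema follows the
# Anthropic tool format.
think_tool = {
    "name": "think",
    "description": "Use this to pause and reason about the problem. "
                   "It changes nothing and fetches no new information.",
    "input_schema": {
        "type": "object",
        "properties": {"thought": {"type": "string"}},
        "required": ["thought"],
    },
}

def execute_think(tool_input):
    # Nothing happens: the thought simply lands in the transcript
    # as a tool turn the model can build on.
    return "Thought recorded."
```

The value is entirely in the transcript: the reasoning becomes a turn the model reads back before its next action.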

s20 · 264 LOC

Parallel Teams

The key to scaling is decoupling; task board + file locks = linear scalability
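The lock half of that equation can be sketched with one atomic syscall — the layout and names are assumptions:

```python
# Sketch: claiming a task via an atomic lock file, so parallel teammates
# never grab the same task. O_CREAT | O_EXCL makes creation atomic:
# exactly one claimant can succeed.
import os, tempfile

LOCK_DIR = tempfile.mkdtemp()

def claim(task_id, agent_id):
    path = os.path.join(LOCK_DIR, f"{task_id}.lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, agent_id.encode())
        os.close(fd)
        return True       # this agent now owns the task
    except FileExistsError:
        return False      # another agent claimed it first
```

No coordinator process is needed: the filesystem arbitrates, so adding teammates scales linearly until tasks run out.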

s21 · 280 LOC

Tool Design

Good tools are hard to misuse; tool design impacts agent quality more than prompt design

Architectural Layers

Five orthogonal concerns that compose into a complete agent