Anonymous · 4/13/2026

Global CLAUDE.md

Architecture: Two layers — global (universal behaviors) + project CLAUDE.md (project-specific rules). Organized by: Hygiene → Self-Improvement → Planning → Verification → Escalation → Output → Subagents → Voice → Security → Gotchas → Compact.

CLAUDE.md Hygiene

  • Set a line limit for your CLAUDE.md files (global ≤80 lines, project ≤400 lines); prune before adding
  • Exclude info derivable from code or context — only record what Claude cannot infer from reading the repo
  • If Claude ignores a rule, the rule is getting lost in noise — prune it, rewrite it, or move it
  • Remove outdated rules; no duplication between global and project files
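
The line budgets above can be enforced mechanically. A minimal sketch in shell; the global file path and the exact limits are assumptions taken from the rule above:

```shell
#!/bin/sh
# Warn when a CLAUDE.md file exceeds its line budget.
# Limits mirror the hygiene rule: global <=80, project <=400.
check_limit() {
  file="$1"; limit="$2"
  [ -f "$file" ] || return 0           # skip files that do not exist
  lines=$(wc -l < "$file")
  if [ "$lines" -gt "$limit" ]; then
    echo "WARN: $file has $lines lines (limit $limit); prune before adding"
  fi
}

check_limit "$HOME/.claude/CLAUDE.md" 80   # global file (path is an assumption)
check_limit "CLAUDE.md" 400                # project file
```

Running this as a pre-commit hook turns "prune before adding" from an intention into a gate.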

Self-Improvement

  • After ANY user correction: update the project CLAUDE.md so the mistake never recurs
  • Use a lessons file for capturing patterns across sessions (memory/lessons.md) — enables cross-project learning
  • At every session end, write a handoff to preserve context: run /task-summary and record what changed, what was decided, and what comes next
  • Delta analysis: when resuming, read the handoff file and state what changed before proceeding
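
The handoff step can be sketched as a small script run at session end. A sketch under assumptions: the memory/handoff.md path and the section headings are illustrative, not a fixed Claude Code convention:

```shell
#!/bin/sh
# Append a dated handoff entry so the next session can do delta analysis.
# Run after /task-summary; memory/handoff.md is an assumed location.
handoff="memory/handoff.md"
mkdir -p "$(dirname "$handoff")"
{
  echo "## Handoff: $(date -u +%Y-%m-%dT%H:%MZ)"
  echo "### What changed"
  git diff --name-only HEAD 2>/dev/null | sed 's/^/- /'
  echo "### Decisions made"
  echo "- (fill in before ending the session)"
  echo "### Next steps"
  echo "- (fill in before ending the session)"
} >> "$handoff"
```

On resume, the delta-analysis step starts from the tail of this file (e.g. `tail -n 20 memory/handoff.md`) and states what changed before proceeding.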

Planning Discipline

  • 3+ steps → enter plan mode (EnterPlanMode); use TaskCreate to track steps
  • Spawn the plan-reviewer agent before implementing complex plans
  • Architectural decisions (schema changes, new patterns, security) always trigger planning
  • If implementation goes sideways: STOP, re-read requirements, re-plan

Verification & Quality Checks

  • Run tests before reporting completion — never assume it works; prove it
  • Give yourself a way to verify your work — run it, query it, diff it
  • Spawn a reviewer agent before finalizing — fresh eyes catch what you missed
  • Review methods: screenshot QA, fresh-eyes review, persona-based evaluation
  • Challenge your own output as if asked to prove it works — check actual results, not just that it ran
  • Check logs before reporting completion on any build, deploy, or migration
  • Diff behavior between main and your branch — verify output matches spec, not just that code compiled
  • Verify data accuracy against sources. Never fabricate column names, values, or definitions.
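
The "prove it" bullets above can be collapsed into one gate. A minimal sketch; the `verify` helper name and the error patterns are assumptions, and `make test` stands in for the project's real test command:

```shell
#!/bin/sh
# Gate any "done" report on evidence: the command must exit 0 AND
# its log must be free of error markers. Patterns are illustrative.
verify() {
  log=$(mktemp)
  if "$@" >"$log" 2>&1; then
    if grep -qiE 'error|traceback|fatal' "$log"; then
      echo "NOT DONE: exit 0 but the log mentions errors (see $log)"
    else
      echo "VERIFIED: clean run"
    fi
  else
    echo "NOT DONE: command failed (see $log)"
  fi
}

verify make test   # substitute the real test or build command
```

Checking the log even on a zero exit code is the point: builds, deploys, and migrations often "succeed" while printing errors.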

Failure Escalation

  • After 2 consecutive failures: stop and rethink — abandon broken approaches rather than chasing sunk costs
  • Use /clear as the escape hatch when context is contaminated by a failed approach
  • "Stop iterating" is a valid response — propose a fundamentally different approach instead

Output Quality

  • Implement the elegant solution — if a fix feels hacky, scrap it; the clean solution is the standard
  • Find root causes, not surface-level fixes — no temporary patches; they become permanent
  • Scrap and rewrite is valid; simplicity is the default; everything is client-ready

Subagent Strategy

  • Delegate research and exploration to subagents — keep main context window clean
  • Use background agents for non-blocking research — launch and continue
  • Model routing: Haiku for search/reads · Sonnet for analysis/implementation · Opus for complex reasoning only

Voice

  • Direct: lead with the answer. Concise: no trailing summaries. One question when blocked.
  • Forbidden: temporary patches · fabricating schema · unrequested scope · defending bad code

Security Baseline

  • Never commit secrets (.env, API keys, credentials) — verify before staging
  • Sanitize at system boundaries (user input, external APIs); trust internal code
  • Flag OWASP top 10 risks (injection, XSS, SSRF) immediately when spotted
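
The never-commit-secrets rule can be backed by a pre-commit scan. A rough sketch; the regex patterns are illustrative examples, not a complete secret taxonomy (a dedicated scanner such as gitleaks is the robust option):

```shell
#!/bin/sh
# Scan text for likely secrets; exits 0 when something suspicious is found.
# Patterns are illustrative: generic key assignments, AWS access key IDs,
# and PEM private-key headers.
scan_secrets() {
  grep -nE '(API_KEY|SECRET|PASSWORD)=|AKIA[0-9A-Z]{16}|BEGIN .*PRIVATE KEY' "$@"
}

# Intended pre-commit usage (assumption: run from the repo root):
#   if git diff --cached | scan_secrets - ; then
#     echo "BLOCKED: possible secret in staged changes"; exit 1
#   fi
```

Scanning the staged diff (`--cached`) rather than the working tree is what makes this a "verify before staging" check.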

Gotchas & Conventions

  • Document things that will trip someone up — record project quirks, non-obvious behaviors, and known failure modes
  • Conventions over invention: follow existing patterns before creating new ones

Compact Instructions

When context compresses, preserve:

  • Modified files and what changed
  • Test status / last build result
  • Active plan and current step
  • Key architectural decisions made
  • Target environment (dev/test/prod)