CCGM: A Modular Configuration System for Claude Code

In February I wrote about building a multi-agent AI development system with Claude Code. That post described claude-dotfiles - a single repository containing my global instructions, hooks, slash commands, and workflow documentation. It worked, but it had problems. The entire configuration was a monolith. Everything was tightly coupled to my specific setup. If you wanted to use the git workflow rules but not the multi-agent system, you had to surgically extract them from a 600+ line CLAUDE.md file and hope nothing broke.

CCGM (Claude Code God Mode) is the modular system that replaced it. Every capability is a self-contained module you can install independently. Pick what you want, skip what you don't. Install with one command. It works across Claude Code CLI, VS Code, Cursor, the macOS Claude app, and any other editor with Claude Code support.

Why modular over monolithic? A monolithic config forces you to fork if you want to diverge, then you're maintaining your own version indefinitely. Conditional config (if statements in one file) mixes concerns and makes dependencies invisible. CCGM's approach: each module is a git-tracked directory with explicit dependencies declared in a manifest. Install only what you need. Read each module's README independently. Update CCGM and your personal additions stay intact.



The Problem with Monolithic Config

The original claude-dotfiles repo was a flat dump of everything into ~/.claude/. One giant CLAUDE.md with hundreds of lines of instructions. Hooks that assumed specific directory structures. Commands that referenced hardcoded paths. If you wanted to share it with someone, they had to fork the whole thing and strip out anything personal.

The deeper problem was coupling. The multi-agent coordination system depended on session logging, which depended on a specific log repo structure, which assumed you had multiple clones of every repository. A solo developer who just wanted better git workflow rules was pulling in an entire parallel agent infrastructure.

Each capability needed to be self-contained (installable without unrelated dependencies), documented (its own README explaining what it does and why), configurable (template variables for paths, usernames, and preferences), and composable (able to declare dependencies on other modules). CCGM is the result.


Architecture and Installation

CCGM is a Git repository with 35 modules organized into 5 categories. An interactive installer reads module manifests, resolves dependencies, expands templates, and places files into ~/.claude/.

ccgm/
├── start.sh                    # Interactive installer
├── update.sh                   # Pull latest and re-apply
├── uninstall.sh                # Remove only CCGM-installed files
├── presets/                    # Named module collections
├── lib/                        # Installer internals
├── modules/                    # 35 self-contained modules
├── docs/                       # 8 reference documents
└── tests/                      # Validation tests

The installer handles everything: prerequisite checks (Claude Code, jq, Python 3, gh CLI, gum), module selection via presets or individual checkboxes, dependency resolution via topological sort, configuration prompts for username and preferences, template expansion, file installation (copy or symlink), and settings merge. It records what it installed in a manifest so updates and uninstalls only touch CCGM-managed files.
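The dependency-resolution step is a standard topological sort. A minimal sketch, assuming a manifest map shaped like each module.json's "dependencies" field (the function name and manifest shape are illustrative, not CCGM's internals):

```python
def resolve_order(manifests):
    """Topologically sort modules so dependencies install first.

    `manifests` maps module name -> list of dependency names,
    mirroring the "dependencies" field in each module.json.
    """
    order, state = [], {}  # state: name -> "visiting" | "done"

    def visit(name):
        if state.get(name) == "done":
            return
        if state.get(name) == "visiting":
            raise ValueError(f"dependency cycle at {name!r}")
        state[name] = "visiting"
        for dep in manifests.get(name, []):
            visit(dep)  # install dependencies before the module itself
        state[name] = "done"
        order.append(name)

    for name in manifests:
        visit(name)
    return order
```

Selecting xplan with this scheme would yield session-logging and multi-agent before xplan, which is exactly the behavior described in the Presets section.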

git clone https://github.com/lucasmccomb/ccgm.git && cd ccgm && ./start.sh

Other install modes:

./start.sh --preset standard      # Skip the selection menu
./start.sh --scope project        # Install to .claude/ instead of ~/.claude/
./start.sh --link                 # Symlink instead of copy (for CCGM developers)

For AI agents installing programmatically:

CCGM_NON_INTERACTIVE=1 CCGM_USERNAME="github-user" ./start.sh --preset standard

CCGM places four types of files into ~/.claude/:

  • rules/*.md - behavior rules, loaded automatically at session start
  • commands/*.md - slash commands, available as /commit, /pr, etc.
  • hooks/*.py - workflow hooks, triggered on Claude Code events
  • settings.json - permissions controlling tool access and auto-approval

Rules shape how Claude thinks about code, handles git, debugs problems, and coordinates with other agents. Commands are explicit actions you invoke with /command-name. Hooks fire automatically on Claude Code events and can approve, deny, or modify behavior.


Presets

Four presets handle different needs. Start here to decide what to install, then read the module details that follow.

  • minimal (3 modules) - global-claude-md, autonomy, git-workflow. For trying CCGM for the first time.
  • standard (8) - minimal plus identity, settings, hooks, commands-core, commands-utility. For most individual developers.
  • team (10) - standard plus github-protocols, code-quality, systematic-debugging, verification. For teams with shared repos.
  • full (35) - all modules. For power users.

The installer resolves dependencies automatically. Select xplan and it pulls in multi-agent and session-logging without you needing to know they're required.


The Module System

Every module lives in modules/{name}/ and contains a module.json manifest, a README.md, and one or more content files. The manifest declares what to install, where to put it, and what configuration to prompt for.

{
  "name": "autonomy",
  "displayName": "Autonomy",
  "description": "Claude as a fully autonomous engineer",
  "category": "core",
  "scope": ["global", "project"],
  "dependencies": [],
  "files": {
    "rules/autonomy.md": {
      "target": "rules/autonomy.md",
      "type": "rule",
      "template": false
    }
  },
  "tags": ["autonomy", "core"],
  "configPrompts": []
}

File types: rule (loaded at session start), command (becomes a /slash-command), hook (triggered by events), config (merged into settings.json), doc (reference material, not auto-loaded).

Some modules use template variables (__HOME__, __USERNAME__, __CODE_DIR__) that are expanded during installation. Rule files never use templates because they should work for anyone without substitution.
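Template expansion is a straightforward substitution pass. A sketch, assuming the variable set named above (the function itself is illustrative, not the installer's actual code):

```python
import os

def expand_template(text, username, code_dir):
    """Expand CCGM-style template variables in a file's contents.

    The variable names match the post; how the installer sources
    the values (prompts, env vars) is not modeled here.
    """
    replacements = {
        "__HOME__": os.path.expanduser("~"),
        "__USERNAME__": username,
        "__CODE_DIR__": code_dir,
    }
    for var, value in replacements.items():
        text = text.replace(var, value)
    return text
```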


Core Modules

Six core modules form the foundation. Rules tell Claude what to do; hooks (covered later) make sure it actually does it.

global-claude-md

Installs a slim ~/.claude/CLAUDE.md that serves as the root configuration reference. Instead of packing hundreds of lines of instructions into one file, it points to the actual rule files, commands, hooks, and settings where behavior is defined. This prevents the context waste of loading the same instructions twice (once from CLAUDE.md, once from the rule file) and eliminates maintenance drift between duplicated rules.

identity

Two foundational context files that give Claude a persistent identity layer surviving across sessions and context resets:

  • soul.md defines the AI's personality, philosophy, reasoning principles, communication style, and boundaries
  • human-context.md defines who you are - your background, goals, domain expertise, working style, and life intentions

Together they transform generic AI sessions into a working relationship with a consistent, aligned collaborator. The installer includes an interactive personalization step where you define both files during setup. Design principles: concision beats comprehensiveness (1-3 pages per file), declarative values beat procedural rules ("I value simplicity" works better than "never use complex abstractions"), and stable identity over current tasks (the memory system handles evolving details).

autonomy

The philosophical core. Configures Claude as a fully autonomous Staff-level engineer who executes tasks end-to-end instead of describing steps for you to follow.

The rule establishes one principle: do it, don't describe it. If Claude can accomplish something from the command line, it should do it immediately. Run npm install yourself. Fix failing builds yourself. Debug fully yourself.

It also defines clear boundaries for when to ask: credentials Claude doesn't have, third-party dashboard actions requiring a browser session, ambiguous product decisions, and destructive actions on shared systems. Everything else, just do it.

git-workflow

Six rules extracted from real mistakes:

  1. No AI attribution - never add Co-Authored-By trailers or "Generated with Claude Code" footers. The human is the author and the one accountable for the code; AI is a tool.
  2. PR template detection - check the repo root, .github/, and the org's .github repo for PR templates before creating PRs.
  3. Sync before history changes - always git fetch before rebase, filter-branch, or reset. Running history-altering commands on a stale branch and force-pushing overwrites the remote.
  4. Rebase by default - use rebase instead of merge for feature branches. Linear history, clean diffs.
  5. Never stash - commit instead. Stashes are invisible and easy to forget.
  6. Return to main after merge - checkout main and pull after PRs are merged.

settings

A base settings.json with 900+ pre-configured tool permission entries. The allow list covers safe operations (git status, npm commands, file operations in safe paths). The deny list blocks dangerous commands (force push to main, rm -rf /, dropping databases). A configurable default mode lets you choose between ask (confirm before risky tools) and dontAsk (auto-approve everything not denied).
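The precedence logic can be sketched in a few lines. This is a simplified model using shell-style globs; Claude Code's real permission rules use a richer syntax (e.g. Bash(git push:*)), so treat the pattern format here as illustrative:

```python
from fnmatch import fnmatch

def check_command(cmd, allow, deny, default="ask"):
    """Decide a command's fate: deny wins, then allow, then the default mode.

    `allow` and `deny` are glob patterns; `default` models the
    configurable ask/dontAsk behavior described above.
    """
    if any(fnmatch(cmd, pat) for pat in deny):
        return "deny"
    if any(fnmatch(cmd, pat) for pat in allow):
        return "allow"
    return default
```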

hooks

Ten Python hook scripts that enforce the rules, plus an orphaned process detector. Each hook is covered in The Hook System below.


Commands and Skills

CCGM installs up to 30 slash commands and skills across 15 modules. Here are the ones that matter most, grouped by what they do.

Daily Workflow (commands-core)

These keep you in the terminal without context-switching to GitHub's UI for routine operations.

/commit - Stage, verify (lint, type-check, tests, build), and commit. Issue number extracted from branch name automatically.

/pr - Verify, rebase, push, detect PR templates, create PR with Closes #{issue}.

/cpm - The full cycle in one command: commit, PR, squash-merge, close issue, return to main.

/gs - Git status dashboard: branch, sync state, working directory, open PRs, recommended next action.

/ghi - Create a GitHub issue with type labels and structured body.

Research and Ideation

These commands handle the upfront discovery phase before you write code. /research is fast and requires no setup; /deepresearch produces deterministic, reproducible results but requires local infrastructure (Docker, Ollama, ~40GB model). Use /research for quick exploration, /deepresearch when consistency matters, and /ideate when the idea itself isn't clear yet.

/research - Zero-dependency research using parallel Claude agents with WebSearch, WebFetch, GitHub CLI, and Reddit. Spawns up to 7 agents that each investigate from a different angle. Works out of the box.

/ideate - Structured ideation framework. Takes a half-formed idea and runs a Socratic interview to reach 95% clarity across 7 dimensions. Can delegate to /deepresearch for market validation mid-interview, then hand off to /xplan for execution planning.

/deepresearch - Local-first pipeline that replaces parallel subagents with Ollama + SearXNG + a single Sonnet API call. Covered in its own section.

Quality and Review

These analyze code and writing after the work is done, catching issues before they ship.

/audit - Codebase audit across 8 categories (security, dependencies, code quality, architecture, TypeScript/React, testing, documentation, performance) using parallel agents. Supports --fix for auto-remediation.

/editorial-critique - 8-pass editorial review of long-form writing: prose craft, AI-tell detection, argument architecture, conciseness, data accuracy, structure and pacing, impact, and grammar. Produces a scored report (80 points max) with a prioritized findings list. --fix applies changes automatically.

/design-review - 6-pass visual design review. Takes screenshots at desktop, tablet, and mobile, extracts DOM structure and computed styles, then analyzes spacing, typography, responsive behavior, visual hierarchy, accessibility (WCAG AA), and component consistency. Produces a scored report with exact CSS selectors and property values. --fix applies the CSS changes.

/debug - Structured root-cause debugging: reproduce, hypothesize, instrument, diagnose, fix, verify. Runs on Opus. Invoked automatically when you ask Claude to fix a bug.

Brand and Documentation

/brand - Full naming pipeline: word exploration via 4 parallel agents (Datamuse, ConceptNet, Big Huge Thesaurus, etymological sources), 150-250 candidate generation across 6 categories, verification through domain availability, USPTO/WIPO trademark screening, app store searches, and social handle checks.

/brand-check - Deep verification of a single name across 10+ TLDs, trademarks, app stores, and social handles.

/docupdate - Documentation audit: README accuracy, TOC vs headings, onboarding flow vs prerequisites, package lists vs installed dependencies. Works in any project type.

Orchestration

These coordinate multi-step and multi-agent work. /startup initializes each session (pulls logs, checks git status, shows the tracking dashboard). /xplan handles the full lifecycle of a new feature or project. /mawf handles ad-hoc multi-agent work without the full planning overhead.

/xplan - Interactive planning and execution framework. Built around three human gates: after discovery, you confirm the concept; after research and planning, you approve the technical direction; after peer review by security, architecture, and business logic agents, you launch execution across parallel agent clones. Supports --light to skip the interactive phases.

/mawf - Multi-Agent Workflow. Takes unstructured input ("Fix the login bug, add dark mode, update the API docs"), parses into issues, plans dependency waves, spawns parallel agents, monitors progress, merges results.

/startup - Session initialization. Derives agent identity, pulls session logs, reads cross-agent activity, checks git status, queries the tracking dashboard, and presents a session dashboard. Runs automatically at session start.

Other Commands

  • /pwv - Playwright visual verification
  • /walkthrough - step-by-step guided mode
  • /promote-rule - promote repo rules to global
  • /cws-submit - Chrome Web Store submission walkthrough
  • /ccgm-sync - sync local config changes back to CCGM repo
  • /user-test - browser-based user testing
  • /onremote - run commands on a configured remote server
  • /workspace-setup - create multi-agent workspace directories
  • /reflect - run the self-improving reflection checklist
  • /consolidate - memory maintenance pass (deduplicate, clean stale entries)
  • /xplan-status - check progress on a running xplan
  • /xplan-resume - resume an interrupted xplan
  • /log-init - lightweight session log initialization

The Hook System

Rules are instructions in markdown files that shape Claude's thinking. Hooks are Python scripts that intercept events and enforce constraints at runtime: blocking dangerous operations, prompting for required context, or automating approval for safe commands. Together they create guardrails that work even when Claude deviates from the rules.

Claude Code supports several hook events; CCGM's hooks use five of them:

  • PreToolUse - before a tool call; can block it
  • PostToolUse - after a tool call; cannot block
  • UserPromptSubmit - when the user submits a message; cannot block, but injects context
  • SessionStart - when a session begins; injects context
  • PreCompact - before context compaction; injects context

CCGM installs 13 hooks across 3 modules (10 from hooks, 2 from self-improving, 1 from session-logging):

Git Safety

enforce-git-workflow.py (PreToolUse:Bash) - The most critical hook. Blocks commits directly to protected branches (main, master, develop, staging, production, and custom branches). Blocks commits without the #N: issue prefix. Blocks pushes to protected branches. Claude will happily commit directly to main if you don't stop it; in a multi-agent setup, an unprotected main branch is a disaster. Escape hatches exist for non-issue commits (sync: prefix) and emergencies (ALLOW_MAIN_COMMIT=1).
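The core check reduces to a small pure function. A simplified sketch (the real hook also validates issue prefixes, pushes, and custom branch lists; the function name is illustrative):

```python
PROTECTED = {"main", "master", "develop", "staging", "production"}

def block_reason(command, branch):
    """Return a message explaining why a commit should be blocked, else None."""
    if "git commit" not in command:
        return None  # only commit commands are checked here
    if "ALLOW_MAIN_COMMIT=1" in command:
        return None  # the emergency escape hatch described above
    if branch in PROTECTED:
        return f"Refusing to commit directly to protected branch {branch!r}."
    return None
```

In the real hook, Claude Code delivers a JSON payload (tool name and input) on stdin; per the hook protocol, exiting with code 2 blocks the tool call and feeds the stderr message back to Claude.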

check-migration-timestamps.py (PreToolUse) - Validates Supabase migration file timestamps before commits. Duplicate timestamps break supabase db push because the CLI can't distinguish the files. Catching this before commit prevents hard-to-debug migration state issues.

Workflow Adherence

enforce-issue-workflow.py (UserPromptSubmit) - Detects implementation requests (keywords like "update", "fix", "add", "create") and injects a reminder: check for an existing issue, create a branch, commit with issue prefix, create a PR.

auto-startup.py (SessionStart) - Triggers /startup at the beginning of each new session. Only fires on fresh starts, not resume or context compaction.

Permissions

auto-approve-bash.py (PreToolUse:Bash) - Enforces Bash command permissions from settings.json. Ensures consistent permission behavior across all Claude Code environments (CLI, VS Code, Cursor).

auto-approve-file-ops.py (PreToolUse) - Enforces path-based read/edit/write permissions using glob pattern matching.

Multi-Agent Coordination

agent-tracking-pre.py (PreToolUse:Bash) - Warns when a branch creation command is about to claim an issue already claimed by another agent.

agent-tracking-post.py (PostToolUse:Bash) - The engine of multi-agent issue tracking. Records branch creation (claim), first commit (in-progress), PR creation (pr-created), merge (merged), and issue close (closed) in a tracking CSV. Uses git commit + pull --rebase + push for concurrency; different-row edits auto-resolve since each agent modifies only its own rows.
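The status transitions can be sketched as a simple state machine over the CSV. The column names and flow below are illustrative, inferred from the statuses named above, not the hook's actual schema:

```python
import csv
import io

STATUS_FLOW = ["claimed", "in-progress", "pr-created", "merged"]

def advance_status(csv_text, agent, issue):
    """Advance this agent's row for `issue` to its next status.

    Each agent touches only its own rows, which is why concurrent
    edits from different agents rebase cleanly.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        if row["agent"] == agent and row["issue"] == issue:
            i = STATUS_FLOW.index(row["status"])
            row["status"] = STATUS_FLOW[min(i + 1, len(STATUS_FLOW) - 1)]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["agent", "issue", "status"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```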

Meta-Learning

reflection-trigger.py (PostToolUse:Bash) - Fires after gh pr merge and gh issue close commands, injecting a reminder to run the self-improving reflection checklist before moving to the next task. This is the integration point between the self-improving module and the development workflow.

precompact-reflection.py (PreCompact) - Fires before context compaction, prompting Claude to capture any unwritten patterns from the session before context is compressed and observations are lost.

Advisory

ccgm-update-check.py (PreToolUse) - Checks once per day whether CCGM has upstream updates.

port-check.py (PreToolUse:Bash) - Warns about dev server port conflicts. Reads port assignments from .env.clone, checks lsof. Advisory only, never blocks.

orphan-process-check.py (PreToolUse) - Detects orphaned test worker processes left behind by crashed test runs. Warns before they accumulate and consume resources.


Workflow Modules

Where core modules and commands handle individual tasks, workflow modules coordinate work across sessions, agents, and time.

session-logging

Structured logging system for tracking work across sessions. Each agent gets a unique ID derived from its directory name (e.g., lem-fyi-0 becomes agent-0). Logs are markdown files stored in a dedicated git repo, updated at mandatory trigger points: after commits, PR creation, PR merge, issue close, and before context compaction. The /startup command pulls logs, reads other agents' activity for cross-agent awareness, queries the tracking dashboard, and presents a session dashboard. This is what lets an agent pick up where it (or another agent) left off in a previous session.

multi-agent

Enables parallel development with multiple Claude Code instances on the same repo. Two organization models: the workspace model groups clones (four per workspace) under a workspace parent with a coordinator agent that only sees its own clones; the flat clone model puts all clones as siblings in one directory - simpler, but every agent can see every clone.

Port allocation gives each clone unique dev server ports via port-registry.json and .env.clone, preventing collisions when multiple agents run servers simultaneously. The tracking CSV records which agent claimed which issue, with automatic status transitions: claimed -> in-progress -> pr-created -> merged / closed. The tracking hooks handle all CSV writes automatically.
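Port allocation amounts to handing each clone a reserved block. A sketch, assuming a registry map like port-registry.json's clone-to-port mapping (the base/width scheme here is illustrative, not CCGM's actual layout):

```python
def allocate_port_block(registry, clone, base=3000, width=10):
    """Assign `clone` a base port for its dev servers, reusing any prior grant.

    `registry` maps clone name -> base port; allocations are idempotent
    so re-running setup never reshuffles ports.
    """
    if clone in registry:
        return registry[clone]  # existing grant wins
    used = set(registry.values())
    port = base
    while port in used:
        port += width  # skip to the next free block
    registry[clone] = port
    return port
```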

xplan

The planning and execution framework, bridging "I have an idea" to "I have a running codebase." It enforces three human gates: after discovery you confirm the concept is worth building, after research and planning you approve the technical approach, and after peer review by security, architecture, and business logic agents you launch execution. The full phase list covers discovery interview, deep research, research synthesis, optional naming, tech stack validation, scope negotiation, dependency wave planning, peer review, and parallel execution across clones. Each gate prevents the system from over-investing before you've validated the direction. The --light flag skips the interactive phases for automated execution.

remote-server

SSH access to a configured remote machine. The /onremote command lets Claude run health checks, view logs, restart services, and execute maintenance tasks on the remote server without interactive shell sessions.

self-improving

Meta-learning system with automated triggers. After completing significant tasks, Claude reflects on what went well, what surprised it, and what it would do differently, then writes reusable patterns to memory files that persist across sessions.

Unlike the earlier version (which was just a passive rule), this module is now integrated into the development workflow through hooks and cross-module references. The reflection-trigger.py hook fires after PR merges and issue closes, injecting a reminder to run the reflection checklist. The precompact-reflection.py hook captures unwritten patterns before context compaction. The session-logging module's mandatory triggers include a reflection step after every PR merge. The systematic-debugging module feeds debugging patterns to memory after three-strike situations. And common-mistakes is a living document - the reflection loop can add new anti-patterns when it identifies ones that caused significant wasted time.

Two commands: /reflect runs the full reflection checklist inline, and /consolidate runs a memory maintenance pass (deduplication, contradiction resolution, stale entry cleanup).

subagent-patterns

Methodology for decomposing tasks and delegating to subagents. Covers when to use subagents, how to write specs (objective, context, constraints, deliverable), dispatch patterns, and a two-stage review process (spec compliance, then code quality).


Pattern Modules

Workflow modules coordinate agents. Pattern modules shape how each agent thinks about problems, regardless of tech stack.

code-quality

Covers dependency minimization (prefer built-in, then library, then framework), migration validation (quote PostgreSQL reserved keywords, use idempotent patterns), build verification (pre-push only, not after every change), and living document maintenance (update README.md after merges that change capabilities).

The core principle is change-philosophy.md: when modifying an existing system, don't patch it. Redesign it into the solution that would have existed if the change had been a foundational assumption from the start. This prevents technical debt from accumulating as special cases. The result should look like it was always designed this way.

systematic-debugging

Four-phase root cause investigation: investigate (read the error, reproduce consistently), analyze (find patterns, check recent changes), hypothesize (form testable theories), implement (fix the root cause, not symptoms). Includes a three-strike rule: if three approaches fail, step back and reassess your understanding rather than continuing to guess.

test-driven-development

Strict red-green-refactor TDD. Write a failing test first, make it pass with the simplest code, then refactor. The module rejects common rationalizations: "This is too simple to test" (simple code has simple tests), "I'll add tests after" (tests written after implementation prove nothing about correctness), "The types guarantee correctness" (types catch type errors, not logic errors).
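A minimal red-green pair, assuming pytest-style asserts (the example function is hypothetical, chosen only to show the shape of the loop):

```python
# Red: this test is written first and fails because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Green: the simplest implementation that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")
```

Refactoring comes third, with the test as a safety net; "the types guarantee correctness" would not have caught a wrong separator here.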

verification

Evidence-before-claims. Never assert that something works without fresh proof. Plan the verification command, execute it fresh, read the full output including exit codes, evaluate whether the output supports the claim, report honestly.

common-mistakes

Eight anti-patterns extracted from real mistakes: shallow directory exploration in monorepos, dependency blindness (branching without checking open PRs), ESLint Fast Refresh violations, suggesting already-tried solutions, premature solutions without full context, git multi-clone confusion, Cloudflare Pages vs Workers confusion, and CF Pages without Git integration.

frontend-design

Principles for distinctive interfaces. Intentional typography (not just Inter by default), cohesive color systems with semantic tokens, consistent spacing scales, purposeful animation. What to avoid: purple-to-blue gradients, overly rounded cards, default framework styles.

browser-automation

Tool selection hierarchy: CLI tools first (curl, gh, wrangler), then MCP servers, then API calls, then WebMCP, then browser automation last. Reserve browser automation for things that genuinely require visual verification.


Tech-Specific Modules

Each of these captures best practices for a common stack, preventing mistakes specific to that ecosystem.

cloudflare - Pages vs Workers selection, Git integration requirements. Prevents the mistake of creating a Workers project for a static site.

supabase - Current API key terminology (publishable key, not "anon key"; secret key, not "service_role key"), circuit breaker prevention for the connection pooler, migration workflow.

mcp-development - Building MCP servers: TypeScript recommended, stdio for local transport, {service}_{action}_{resource} naming, input schema design with Zod, testing with MCP Inspector.

shadcn - shadcn/ui patterns: composition over custom, semantic theming (bg-primary not bg-blue-500), form architecture, accessibility.

tailwind - Tailwind CSS v4: CSS-first configuration with @theme, OKLCH color system, CVA for variants, dark mode with @custom-variant. Includes the v4 cursor: pointer gotcha where buttons lose their pointer cursor because v4's preflight dropped the override.


The Statusline

CCGM includes a statusline script that displays live session metrics at the bottom of your terminal:

🧠 O-4.6 | code main | ctx:8% | 5h:62% ███░░ 2h26m | 7d:79% ████░ 3d8h

The segments: model with tier emoji (🧠 Opus, 🐢 Sonnet, ⚠️ Haiku) and abbreviation; current directory and git branch; context window usage; the 5-hour rate limit with a bar and reset countdown; and the 7-day rate limit with a bar and reset countdown. Bars are color-coded: green under 60%, yellow under 85%, red at 85% and above.
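The bar and color logic is simple enough to sketch (function names are illustrative; the real statusline is a shell/Python script reading Claude Code's status JSON):

```python
def usage_color(pct):
    """Map a usage percentage to the statusline's color bands."""
    if pct < 60:
        return "green"
    if pct < 85:
        return "yellow"
    return "red"

def usage_bar(pct, width=5):
    """Render a block bar like the statusline's, e.g. 62% -> '███░░'."""
    filled = round(pct / 100 * width)
    return "█" * filled + "░" * (width - filled)
```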


lem-deepresearch: The Companion Pipeline

lem-deepresearch provides the /deepresearch command as a companion to CCGM. It was extracted from CCGM into its own repo because it requires local infrastructure (Docker, Ollama with a ~40GB model, a Python venv) that not every CCGM user will want.

Why a Local Pipeline?

CCGM already includes /research, which spawns parallel Claude subagents and requires zero setup. It's fast and works everywhere. But each agent consumes a full context window, results vary between runs, and agents researching independently sometimes produce conflicting conclusions because they work from different sources with no reconciliation step.

lem-deepresearch trades setup complexity for consistency. It uses a deterministic local-first pipeline: local tools handle the cheap, parallelizable work (query generation, web search, fact extraction), and a single Sonnet API call handles the expensive step (synthesis). Same input produces the same output every time. The tradeoff is infrastructure: Docker, Ollama with a 72B parameter model (~40GB), and a Python venv.

The Pipeline

Topic
  -> Ollama (qwen2.5:72b, local): Generate 3-7 diverse search queries
  -> SearXNG (Docker): Run parallel searches across Google, Bing, DuckDuckGo
  -> Ollama (local): Extract facts from search results
  -> Anthropic Sonnet (single API call): Synthesize into research.md

Query generation uses a 72B parameter local model with temperature 0.7 for diversity, with a fallback to simple topic variations if generation fails. Parallel web search runs all queries concurrently via httpx.AsyncClient, getting up to 5 results per query from 3 engines. SSRF protection validates every URL against blocked networks (RFC 1918, loopback, link-local) and fails closed on DNS errors. HTML is stripped before any content reaches a model. Fact extraction filters by relevance and deduplicates across results. Synthesis produces a structured document with sections, citations, and source attribution.
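The SSRF check is the kind of guard worth showing concretely. A sketch of the fail-closed pattern described above, using the standard library (the function name is illustrative, not the pipeline's actual code):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_safe(url):
    """Reject URLs that resolve to private, loopback, or link-local addresses.

    Fails closed: a missing hostname or a DNS error counts as unsafe
    rather than being retried.
    """
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # fail closed on DNS errors
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

Checking every resolved address (not just the URL string) matters: a hostname can resolve to an RFC 1918 address even when the URL looks public.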

Three depth presets: Standard (5 queries, ~6 min), Full (7 queries, ~8 min), Lite (3 queries, ~4 min).

Integration

lem-deepresearch installs to ~/.claude/commands/deepresearch.md and ~/.claude/bin/deepresearch-cli.py. It integrates with /xplan (xplan delegates its research phase to /deepresearch via --plan-dir) and with /ccgm-sync (edits to the command locally sync back to the lem-deepresearch repo).

Installation:

git clone https://github.com/lucasmccomb/lem-deepresearch.git && cd lem-deepresearch && ./install.sh

The original problem was a monolithic config that forced you to take everything or nothing. CCGM solves that: 35 modules across 5 categories, each installable independently with explicit dependencies. Start with the minimal preset, add capabilities as you need them, and update without losing your personal additions. The README has the full module catalog, and every module has its own README with manual install instructions if you prefer to cherry-pick.

Both CCGM and lem-deepresearch are MIT licensed.

Enjoy this post?

Consider leaving a small donation to support the blog.
