FREE & OPEN SOURCE

Claude Setup Optimizer
Is your setup production-grade?

Scan your CLAUDE.md and rules files against 20 behavioral discipline checks derived from Anthropic's own production system prompts. Get an A–F grade plus copy-paste fixes for every gap, in about 30 seconds.

pip install claude-setup-optimizer

Then run claude-setup-optimizer --open in your project directory

Buy me a coffee

Fuel more free open-source AI tools

  • 20 behavioral checks
  • 5 scoring categories
  • A–F grade + copy-paste fixes
  • $0 cost

Before & after

The same setup, before and after applying the optimizer's recommendations.

BEFORE — UNOPTIMIZED SETUP

OAKEN AI — CLAUDE SETUP OPTIMIZER
Behavioral Discipline Report

Score: 42 / 100 · Grade D · Weak — several critical gaps
Passed: 8/20 · Critical: 7 · Warnings: 5

CATEGORY SCORES
  • Output Discipline: 60%
  • Routing Logic: 33%
  • Memory & Recall: 33%
  • Agent Safety: 40%
  • Security: 50%

AFTER — OPTIMIZED SETUP

OAKEN AI — CLAUDE SETUP OPTIMIZER
Behavioral Discipline Report

Score: 95 / 100 · Grade A · Production-grade setup
Passed: 19/20 · Critical: 0 · Warnings: 1

CATEGORY SCORES
  • Output Discipline: 100%
  • Routing Logic: 100%
  • Memory & Recall: 100%
  • Agent Safety: 100%
  • Security: 75%
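
For context on how numbers like these could be produced, here is a hypothetical sketch of pass-ratio category scoring and letter grading. Only the 90+ = A threshold is documented (see the FAQ); the function names and the lower grade cutoffs are assumptions chosen to be consistent with the two sample reports, not the tool's actual implementation.

```python
def category_score(results: dict[str, bool]) -> int:
    """Percentage of checks passed in one category (hypothetical scoring)."""
    return round(100 * sum(results.values()) / len(results))

def grade(overall: int) -> str:
    """Letter grade for the overall score. 90+ = A is documented;
    the lower cutoffs here are assumed for illustration only."""
    for cutoff, letter in ((90, "A"), (75, "B"), (60, "C"), (40, "D")):
        if overall >= cutoff:
            return letter
    return "F"
```

Under these assumptions, 3 of 5 Output Discipline checks passing yields 60%, and the sample overall scores map to D (42) and A (95).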

What a fix looks like

Every gap includes a ready-to-use snippet. Create the file, paste the content, re-run to confirm the check passes.

Git destructive operation confirmation  [CRITICAL · Agent Safety]

No rule requiring confirmation before destructive git commands (reset, checkout, clean, force-push). Claude will execute them silently, potentially destroying uncommitted work.

FIX: Create .claude/rules/git-safety.md
# Git & Deletion Safety Rules

**NEVER run destructive commands without explicit user confirmation.**

## Commands that require confirmation:
- `git checkout` (can overwrite uncommitted changes)
- `git reset` (can lose commits)
- `git clean` (deletes untracked files)
- `git push` (affects remote)
- `rm -rf` / `rm` (delete files)

## Before any state-modifying command:
1. Explain what the command will do
2. Ask: "Should I run this?"
3. Wait for explicit "yes" or approval
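
The rule above asks Claude to pause before destructive commands; for the riskiest one, git itself offers a preview. A minimal demo of the safe pattern in a throwaway repo (paths created on the fly, nothing real is touched):

```bash
# Demo: preview what `git clean` would delete before running it for real.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
touch scratch.txt      # an untracked file that `git clean` would delete
git clean -n           # dry run: lists "Would remove scratch.txt" without deleting
# Only after explicit user approval would the destructive form run:
# git clean -f
```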

Recall before implementation  [WARNING · Memory & Recall]

No rule requiring Claude to check memory for past solutions before starting implementation. Leads to repeated work and contradicts decisions already made.

FIX: Add to .claude/rules/dynamic-recall.md
# Dynamic Recall — Check Memory Before Implementing

Before starting implementation tasks, run:

```bash
cd $CLAUDE_OPC_DIR && uv run python scripts/core/recall_learnings.py \
  --query "<task keywords>" --k 3 --text-only
```

This is especially useful when:
- Implementing features similar to past work
- Debugging errors that may have been solved before
- Making architectural decisions

All 20 checks

Output Discipline

(5 checks)
  • When-not-to-create-files rule
  • Code length threshold enforcement
  • Output folder routing rule
  • No-truncation mandate
  • Output decision checklist

Routing Logic

(3 checks)
  • MCP-first rule
  • No narration / no routing commentary
  • Skill loading before specific output types

Memory & Recall

(3 checks)
  • Recall before implementation
  • Verify before claiming existence
  • Memory update rule

Agent Safety

(5 checks)
  • Elicitation trigger for heavy tasks
  • Destructive operation confirmation
  • Anti-churn rule
  • Revenue guard rule
  • Session sync rule

Security

(4 checks)
  • Client confidentiality rule
  • API key management
  • Git safety rule
  • No hardcoded secrets

How it works

1. Install (from anywhere)

   pip install claude-setup-optimizer

2. Scan your project

   claude-setup-optimizer /path/to/project --open

3. Read the report

   An HTML report opens in your browser with your A–F grade, scores across 5 categories, and every failed check with a copy-paste fix.

4. Apply the fixes

   Each gap shows the exact file to create and content to paste. Most setups go from D to A in under an hour.

Works with any Claude model

The checks analyze your CLAUDE.md and rules files — the instructions you give Claude — not the model itself. Whether you're running Haiku, Sonnet, Opus, or a self-hosted model, the same behavioral gaps create the same problems. The patterns were derived from Opus 4.7 production internals, but the discipline applies to any setup.

Where the patterns come from

The 20 checks were reverse-engineered from the Claude Opus 4.7 internal system prompt — the actual instructions Anthropic uses in production. They represent how the team that built Claude instructs it to behave: when to elicit clarification, how to handle destructive operations, when to check memory before implementing, how to route to the right tool first.

Search-First Mandate

Always check memory and existing solutions before starting new implementation. Prevents repeated work and contradictory decisions.

Elicitation Before Heavy Tasks

Ask clarifying questions before large, ambiguous, or irreversible actions. Catches misalignment before it costs tokens or causes damage.

Destructive Op Confirmation

Any command that deletes, rewrites, or publishes requires explicit user confirmation. Non-negotiable in production agentic systems.

Routing Discipline

Use the right tool first. MCP before bash, skill before improvising, structured recall before guessing. No narrating the routing decision.

Context Management

Session sync, memory updates, output folder routing. Keeping the workspace clean is a behavioral discipline problem, not a tooling one.

Security Defaults

API keys from vault, no hardcoded secrets, client confidentiality enforced at the rule level — not assumed from training.

Requirements

  • Python 3.10+ (check with `python --version`)
  • Zero external dependencies — pure Python stdlib
  • Does NOT need to run inside your Claude Code workspace
  • Read-only — never modifies your files
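
To make "read-only, stdlib-only" concrete, here is a minimal sketch of what one such check might look like. The function name, file layout, and pattern are illustrative assumptions, not the tool's actual code:

```python
import re
from pathlib import Path

def check_git_safety(project: Path) -> bool:
    """Illustrative check (not the tool's real implementation): does
    CLAUDE.md or any .claude/rules/*.md file mention the destructive
    git commands, suggesting a safety rule exists? Read-only: files
    are only ever read, never modified."""
    files = [project / "CLAUDE.md"]
    rules_dir = project / ".claude" / "rules"
    if rules_dir.is_dir():
        files += sorted(rules_dir.glob("*.md"))
    pattern = re.compile(r"git\s+(reset|clean|checkout)|rm\s+-rf", re.IGNORECASE)
    return any(
        f.is_file() and pattern.search(f.read_text(encoding="utf-8"))
        for f in files
    )
```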

Frequently asked questions

Does this send my data anywhere?
No. The tool runs entirely on your machine. No API calls, no data collection, no telemetry. Your CLAUDE.md and rules never leave your computer.
Does this only work with Opus 4.7?
No. The checks analyze your own rules files, not the model. The patterns were derived from the Opus 4.7 system prompt, but they apply to any Claude Code setup — Haiku, Sonnet, Opus, or future models.
Can it automatically fix the issues?
The optimizer generates the report — it doesn't modify your files. But you can feed the report to your Claude Code instance and ask it to implement the fixes. The fix snippets are copy-paste ready.
What counts as a passing score?
90+ is production-grade (A). Most new setups score 40–65. After applying the recommended fixes, 85–95 is typical. Each category scores independently so you can see exactly where the gaps are.
Where do the patterns come from?
They were derived from the Claude Opus 4.7 internal system prompt, leaked and published by @elder_plinius. The patterns reflect how Anthropic instructs Claude in production.

Share with your team

Know someone running Claude Code without behavioral rules? This takes 30 seconds and shows them exactly what's missing.

pip install claude-setup-optimizer

MIT license. Open source. View on GitHub

CREDITS

The behavioral patterns powering this tool's checks were derived from the Claude Opus 4.7 internal system prompt, leaked and published by @elder_plinius. Without that research, this tool wouldn't exist.

ALSO FROM OAKEN AI

Optimize your workspace too

The Setup Optimizer checks behavioral discipline in your rules. The Workspace Optimizer checks context bloat, invisible memory, and rule load efficiency — a different layer of the same problem.

Disclaimer: This tool is provided as-is with no warranty. Oaken AI and its contributors accept zero responsibility for any changes made to your setup based on this tool's output. The report contains recommendations, not instructions. Always review changes before applying them.


BUILT BY OAKEN AI

Need more than a scan?

Oaken AI builds AI automation systems for businesses. From behavioral rule architecture to full production pipelines, we help teams get more from their AI tools.