Ship AI Code That Actually Follows the Blueprint.
AI agents write code fast — but speed without structure creates debt. Rulebound enforces your engineering standards at every layer: AST, gateway, and commit.
$ rulebound watch src/
Watching src/ for changes...
[07:38:12] src/api/auth.ts changed
[ERROR] session-mgmt:14 — JWT expiry exceeds 30min team policy (found: 7d)
[WARN] token-storage:23 — localStorage forbidden for tokens, use httpOnly cookies
[07:38:15] src/api/auth.ts saved
All rules passed. Clean.
Problem
Fast Code Without Guard Rails is Just Fast Debt.
Your AI agents don't read your architecture docs; they ignore your naming conventions and skip your security policies. Every PR becomes a review marathon.
Rulebound fixes this.
Codify
Turn your engineering standards into structured, versioned rules. Markdown files, git-tracked, inherited across projects.
Intercept
Rulebound analyzes code at the AST level and intercepts LLM responses through a gateway proxy — before violations reach your repo.
Enforce
Pre-commit hooks, CI checks, and real-time IDE diagnostics. Choose advisory, moderate, or strict — block or warn, your call.
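As an illustration only (these field names are hypothetical, not Rulebound's actual schema), a per-project config choosing a mode and per-layer behavior might look like:

```json
{
  "mode": "moderate",
  "enforcement": {
    "preCommit": "block",
    "ci": "warn",
    "ide": "warn"
  }
}
```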
Getting Started
How Rulebound Works
Write Rules as Blueprints
Plain Markdown with frontmatter. Categorize by domain, assign severity, tag by stack. Version-controlled alongside your code.
# security/input-validation.md
---
category: security
severity: error
tags: [api, typescript]
---
## Rule: Validate All User Input
- Use Zod schemas for every API endpoint
- Never trust client-side validation alone
- Sanitize HTML output to prevent XSS
- Log validation failures for monitoring
Connect Your Stack
One install — CLI, MCP server, or gateway proxy. Works with Claude Code, Cursor, Copilot, and any OpenAI-compatible API.
$ npm install -g @rulebound/cli
$ rulebound init
Initializing Rulebound...
Created .rulebound/config.json
Created .rulebound/rules/
Installed pre-commit hook.
Rulebound initialized. Ready.
Rules Find the Code
Rulebound matches rules to tasks by stack, category, and semantic relevance. No manual selection — the right rules surface automatically.
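Semantic relevance scores like these are typically cosine similarities between embedding vectors. A minimal, dependency-free sketch of that scoring step (the rule shape and 0.8 cutoff are assumptions for illustration, not Rulebound internals):

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank rules by similarity to the task embedding; keep scores above a cutoff.
function matchRules(
  task: number[],
  rules: { name: string; embedding: number[] }[],
  minScore = 0.8,
): { name: string; score: number }[] {
  return rules
    .map((r) => ({ name: r.name, score: cosine(task, r.embedding) }))
    .filter((r) => r.score >= minScore)
    .sort((x, y) => y.score - x.score);
}
```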
$ rulebound find-rules --task "build checkout API"
Matching rules to task context...
api/rest-conventions.md (0.94)
security/input-validation.md (0.91)
testing/api-integration.md (0.87)
3 rules injected into agent context.
Enforce at Every Layer
AST analysis catches structural anti-patterns. The gateway scans LLM responses. Pre-commit hooks gate your repo. CI annotations flag PRs.
$ rulebound validate --plan task-plan.json
Checking 3 rules against plan...
rest-conventions PASS
input-validation PASS
api-integration PASS
All rules passed. Ready to ship.
Architecture
End-to-End Rule Enforcement
From the moment a developer prompts an AI agent to the final compliance report — every step is governed by your rules.
1. Developer writes a task prompt for the AI agent
2. AI agent (Claude Code, Cursor, Copilot, etc.) sends the request
3. Gateway intercepts the request and injects rules into the system prompt
4. LLM provider (OpenAI, Anthropic, Google) generates code
5. Analysis pipeline runs keyword, semantic, LLM, and AST checks
6. Dashboard shows compliance scores, audit log, and trend charts
7. Notifications go out via Slack, Teams, Discord, and PagerDuty alerts
Comparison
From Scattered Docs to Structural Enforcement
Without Rulebound
- CLAUDE.md and .cursorrules copy-pasted across repos
- Rules exist as tribal knowledge in Slack threads
- AI agents pass code review by luck, not compliance
- New hires spend weeks learning unwritten standards
- Violations caught in PR review, days after writing
- No visibility into which rules are followed or ignored
With Rulebound
- One rule hub, inherited per project and stack
- AST + semantic analysis catches violations at write-time
- Enforcement modes from advisory to strict blocking
- Rules are code — versioned, reviewed, auditable
- Real-time IDE diagnostics via LSP
- Compliance scores and audit trail per project
Features
Engineered to Enforce
AST Code Analysis
Tree-sitter WASM parser detects structural anti-patterns across 10 languages. Built-in structural queries — no regex, real AST matching.
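To show what structural matching means in practice, here is a toy model of an AST query (not the Tree-sitter API Rulebound uses): walk the syntax tree and match nodes by type and shape rather than by text patterns.

```typescript
// Toy AST node; real parsers like Tree-sitter produce a similar shape.
interface AstNode {
  type: string;
  text?: string;
  children: AstNode[];
}

// Depth-first walk collecting every node the predicate matches.
function query(root: AstNode, match: (n: AstNode) => boolean): AstNode[] {
  const hits: AstNode[] = [];
  const stack: AstNode[] = [root];
  while (stack.length > 0) {
    const n = stack.pop()!;
    if (match(n)) hits.push(n);
    stack.push(...n.children);
  }
  return hits;
}

// Example: flag member accesses on localStorage (the token-storage policy).
const tree: AstNode = {
  type: "program",
  children: [{
    type: "member_expression",
    children: [
      { type: "identifier", text: "localStorage", children: [] },
      { type: "property_identifier", text: "setItem", children: [] },
    ],
  }],
};

const violations = query(tree, (n) =>
  n.type === "member_expression" &&
  n.children.some((c) => c.type === "identifier" && c.text === "localStorage"),
);
```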
LLM Gateway
Transparent proxy between AI tools and LLM APIs. Injects rules into prompts, scans responses for violations in real-time.
Semantic Rule Matching
Rulebound analyzes each task and selects only the relevant rules. No context window bloat — just the rules that matter.
MCP Server
AI agents query and validate against rules in real-time via Model Context Protocol. Auto-detects project stack and filters rules.
Enforcement Modes
Choose advisory, moderate, or strict enforcement. Control when violations block commits and CI pipelines with configurable score thresholds.
Rule Registry
Store all your engineering rules in one place. Organize by domain, team, or project. Version-controlled and always in sync.
Open Source
No Vendor Lock-in. No SaaS Tax.
Rulebound is MIT-licensed and self-hostable. Run it on your infrastructure, audit every line, and own your enforcement pipeline.
Real-Time
Enforcement That Never Sleeps
Rulebound doesn't wait for commit time. It actively monitors, intercepts, and enforces at every stage of your AI-assisted workflow — from the LLM response stream to your IDE gutter.
Gateway Proxy
Intercepts every LLM API call. Buffers streaming responses, scans completed code blocks with AST analysis. Strict mode blocks violations before they reach your editor.
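A simplified sketch of that buffering step (the class and logic are illustrative assumptions, not Rulebound's implementation): accumulate streamed text and emit each fenced code block only once its closing fence has arrived, so the scanner never sees partial code.

```typescript
// Three backticks, escaped so this sample stays fence-safe.
const FENCE = "\u0060".repeat(3);

// Accumulates streaming chunks; push() returns only completed code blocks.
class CodeBlockBuffer {
  private buf = "";

  push(chunk: string): string[] {
    this.buf += chunk;
    const blocks: string[] = [];
    for (;;) {
      const open = this.buf.indexOf(FENCE);
      if (open < 0) break;
      const newline = this.buf.indexOf("\n", open + FENCE.length);
      if (newline < 0) break; // info string still streaming
      const close = this.buf.indexOf(FENCE, newline + 1);
      if (close < 0) break; // block not finished yet; keep buffering
      blocks.push(this.buf.slice(newline + 1, close));
      this.buf = this.buf.slice(close + FENCE.length);
    }
    return blocks;
  }
}
```

Each block a `push` call returns can then be handed to the AST pipeline before the response is forwarded to the editor.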
LSP Diagnostics
Real-time inline warnings as you type. 300ms debounced AST + semantic analysis. Violations appear as underlines in your editor — same as TypeScript errors.
Watch CLI
Monitors your working directory for file changes. Every save triggers validation against your rules. Pretty terminal output or JSON for CI pipelines.
MCP Pre-Write Gate
AI agents must pass validate_before_write before creating any file. Unapproved code is blocked at the source — before it touches your repo.
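In spirit, the gate is a check that must pass before any write goes through. A hypothetical sketch (the name mirrors validate_before_write; the rule shape is an assumption for illustration):

```typescript
type Verdict = { allowed: boolean; violations: string[] };

interface Rule {
  id: string;
  // Returns a violation message, or null if the code passes.
  check: (code: string) => string | null;
}

// Gate: run every rule against the proposed file content; block on any hit.
function validateBeforeWrite(code: string, rules: Rule[]): Verdict {
  const violations = rules
    .map((r) => r.check(code))
    .filter((msg): msg is string => msg !== null);
  return { allowed: violations.length === 0, violations };
}

// Illustrative rule mirroring the token-storage policy shown earlier.
const tokenStorage: Rule = {
  id: "token-storage",
  check: (code) =>
    code.includes("localStorage") ? "localStorage forbidden for tokens" : null,
};
```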
Your Standards. Every Agent. Every Commit.
Open source, self-hostable, MIT licensed. Clone the repo, define your rules, and start enforcing in under 5 minutes.
MIT License · Claude Code · Copilot · Cursor