v0.5.1 · Open Source · MIT License

You wouldn't merge your own PR. Why does your AI agent?

So I built a template that enforces TDD, separates the coder from the reviewer, and never lets quality go down. It's opinionated, battle-tested on my own projects, and free.

copilot — PR #478
All 1145 tests pass (1 new test). Opening PR...
 
PR #478 created. Invoking Sentinel...
 
Sentinel is running for PR #478. This is a small,
  surgical fix — 2 commits in proper TDD order,
  ~25 lines changed. Waiting for the review.
 
CONDITIONAL — no 🔴 blockers. The 🟡 about
  listener cleanup is valid. Let me file the
  issue and merge.
 
  Pre-Merge Checklist:
  ☑ Report: SR-PR478-184643b-...
  ☑ Verdict: CONDITIONAL
  ☑ SHA == HEAD: 184643b
  ☑ Follow-up issue filed: #479
 
Merged!

The problem I kept running into.

My AI agents were productive but sloppy. They'd skip tests, review their own code, and forget what they learned last session. I needed guardrails.

Self-Review Bias

Agents reviewing their own code are like students grading their own exams. They miss bugs, skip edge cases, and rubber-stamp their own work.

No Quality Enforcement

Without gates, there's nothing preventing untested, insecure, or poorly structured code from reaching production.

Knowledge Erosion

Every agent session starts from scratch. Decisions, learnings, and architectural context vanish between runs.

What the template does.

Four interlocking systems that keep agents honest — from first test to final merge.

01

TDD Enforcement

"No code exists without a failing test first."

Agents must write tests before implementation. Red → Green → Refactor is mandatory, not aspirational. Every line of code earns its place by satisfying a test.
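
To make the cycle concrete, here is a minimal sketch of the red and green phases. It assumes a Vitest-style test runner, and slugify is a made-up helper used only for illustration; neither is part of the template.

  // slugify.test.ts (red phase): written before slugify() exists, so it fails first.
  import { describe, it, expect } from "vitest";
  import { slugify } from "./slugify";

  describe("slugify", () => {
    it("lowercases and hyphenates words", () => {
      expect(slugify("Hello World")).toBe("hello-world");
    });
    it("drops characters that are not URL-safe", () => {
      expect(slugify("Agents & Templates!")).toBe("agents-templates");
    });
  });

  // slugify.ts (green phase): the smallest implementation that makes both tests pass.
  export function slugify(input: string): string {
    return input
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, "-") // collapse any non-alphanumeric run into a hyphen
      .replace(/^-+|-+$/g, "");    // trim leading and trailing hyphens
  }

The refactor phase then tidies the passing code, with the tests acting as a safety net.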

02

Sentinel Review System

"The coder ≠ the reviewer. Always."

A separate agent with read-only access reviews every PR using 6 parallel sub-agents, one per review dimension. Review is required for ALL changes, even 1-line fixes. 🔴 findings block the merge, 🟡 findings are filed as follow-up issues, and an auto-fix loop runs for up to 3 cycles.
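
The gate logic described above can be pictured in a few lines. This is only an illustration; the types, names, and loop below are assumptions, not the template's actual API.

  // Hypothetical verdict and auto-fix loop, for illustration only.
  type Severity = "red" | "yellow";
  interface Finding { severity: Severity; summary: string; }
  type Verdict = "APPROVED" | "CONDITIONAL" | "BLOCKED";

  function verdictFor(findings: Finding[]): Verdict {
    if (findings.some(f => f.severity === "red")) return "BLOCKED";        // 🔴 blocks the merge
    if (findings.some(f => f.severity === "yellow")) return "CONDITIONAL"; // 🟡 merges, with follow-up issues
    return "APPROVED";
  }

  // At most 3 review cycles before a human steps in.
  async function reviewLoop(
    review: () => Promise<Finding[]>,
    fix: (blockers: Finding[]) => Promise<void>
  ): Promise<Verdict> {
    for (let cycle = 1; cycle <= 3; cycle++) {
      const findings = await review();
      const verdict = verdictFor(findings);
      if (verdict !== "BLOCKED") return verdict;
      await fix(findings.filter(f => f.severity === "red"));
    }
    return "BLOCKED"; // still blocked after 3 cycles: escalate
  }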

03

Quality Ratchet

"Coverage can only go up. Never down."

The quality baseline is a one-way valve: test coverage, lint scores, and other quality metrics can improve or hold steady, but never regress. Compounding violations are tracked in LEARNINGS.md.
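
A coverage ratchet of this kind fits in a short script. The sketch below assumes a Node project with an Istanbul-style coverage-summary.json and a baseline file named .quality-baseline.json; both paths are assumptions, not the template's actual layout.

  // ratchet.ts: fail the build if coverage drops below the recorded baseline,
  // and raise the baseline whenever coverage improves.
  import { readFileSync, writeFileSync } from "node:fs";

  const BASELINE_FILE = ".quality-baseline.json"; // hypothetical location
  const baseline = JSON.parse(readFileSync(BASELINE_FILE, "utf8")) as { coverage: number };

  const summary = JSON.parse(readFileSync("coverage/coverage-summary.json", "utf8"));
  const current: number = summary.total.lines.pct;

  if (current < baseline.coverage) {
    console.error(`Coverage dropped: ${current}% is below the baseline of ${baseline.coverage}%`);
    process.exit(1); // the one-way valve: a regression never merges
  }
  if (current > baseline.coverage) {
    writeFileSync(BASELINE_FILE, JSON.stringify({ coverage: current }, null, 2));
    console.log(`Baseline ratcheted up to ${current}%`);
  }

Run from CI before the merge step, this turns "coverage never goes down" into an enforced property rather than a convention.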

04

Companion Documents

"Institutional memory for your agents."

LEARNINGS.md, DECISIONS.md, CHANGELOG.md, ARCHITECTURE.md — structured knowledge that persists across sessions. Agents write here, not to AGENTS.md.
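
As an illustration of what an agent might leave behind, a LEARNINGS.md note for the listener-cleanup finding from the transcript above could look something like this; the format is made up, not the template's actual schema.

  ## Listener cleanup (PR #478)
  Finding: 🟡 event listener not removed on teardown. Merged with follow-up issue #479.
  Learning: every subscription added during setup needs a matching cleanup step.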

From plan to production, every step is gated.

📋
STEP 01

Plan

Agent creates a plan and waits for human approval before writing code.

🧪
STEP 02

Test First

Write failing tests that define the expected behavior. Red phase.

STEP 03

Implement

Write minimal code to make tests pass. Green phase. Then refactor.

🔍
STEP 04

Sentinel

Separate review agent spawns 6 parallel sub-agents to scan for bugs, security issues, and quality gaps.

🚀
STEP 05

Ship

Only Sentinel-approved code merges to main. No exceptions.

What it's caught so far.

Real results from gitnotate, Arbol, and Council. The Sentinel caught things I would've missed.

🛡️
XSS
Vulnerabilities caught before deploy
🔒
5 CVEs
Flagged in dependency audits
📈
45→98%
Test coverage improvement
🐛
Memory
Leaks caught by Sentinel review

Up and running in under 5 minutes.

Whether you're starting fresh, migrating from an existing config, or updating — there's a path for you.

Starting fresh

1

Copy the template

Download the template/ files into your project root, either manually or by letting your agent fetch them.

2

Auto-configure

Your agent scans package.json, tsconfig, pyproject.toml, etc. and fills in the placeholders. It asks you for anything it can't infer; the kind of inference involved is sketched after the steps below.

3

Start building

Your agent now follows TDD, creates worktree branches, and runs Sentinel review on every PR.
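
The auto-configure step (step 2 above) mostly amounts to reading the manifests that already exist. The snippet below is a rough illustration of that inference; the fields it looks for are assumptions, not a spec.

  // Illustrative only: infer basic project facts from existing manifests.
  import { readFileSync, existsSync } from "node:fs";

  function inferProjectFacts(): Record<string, string> {
    const facts: Record<string, string> = {};
    if (existsSync("package.json")) {
      const pkg = JSON.parse(readFileSync("package.json", "utf8"));
      facts.projectName = pkg.name ?? "(ask the human)";
      facts.testCommand = pkg.scripts?.test ?? "(ask the human)";
    }
    if (existsSync("tsconfig.json")) facts.language = "TypeScript";
    if (existsSync("pyproject.toml")) facts.language = "Python";
    return facts; // anything still unknown is asked interactively
  }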

Copy this prompt to your AI agent:
Fetch the agents-template from https://github.com/pedrofuentes/agents-template — download all files from the template/ directory into this project's root. Then read AGENTS.md and follow the First Run setup instructions. Scan my project files to auto-fill what you can, then ask me for anything you can't infer.

Migrating from an existing config

1

Backup existing config

Your agent finds and backs up all agent config files — AGENTS.md, CLAUDE.md, .cursorrules, copilot-instructions.md, and more.

2

Extract & merge

Project-specific rules, patterns, and conventions are extracted from your old config and merged into the template's structure.

3

Confirm & commit

Your agent shows a migration summary — what was extracted, what's new, what moved to companion docs. You approve before it commits.

Copy this prompt to your AI agent:
Fetch the agents-template from https://github.com/pedrofuentes/agents-template — first back up any existing agent config files (AGENTS.md, CLAUDE.md, .cursorrules, copilot-instructions.md, etc.) to .agent-backup/, then download all files from the template/ directory into this project's root. Read AGENTS.md and follow the Migration setup path. Extract all project-specific information from the backed-up files and use it to configure the template. Ask me to confirm before finalizing.

Updating to the latest template

1

Fetch latest template

Your agent pulls the latest template/ files from the repo and compares them with your current versions.

2

Smart diff & merge

It shows you what changed and applies updates while preserving your project-specific customizations: filled-in placeholders, custom rules, and code examples.

3

Approve changes

You review the proposed changes and approve before anything is written. Your customizations are never overwritten.

Copy this prompt to your AI agent:
Fetch the latest agents-template from https://github.com/pedrofuentes/agents-template — compare the template/ files with my current versions. Show me what changed, apply updates while preserving my project-specific configuration (filled-in placeholders, custom rules, code examples). Do NOT overwrite my customizations. Ask me to confirm before applying changes.

Works for me.
Might work for you too.

The template is free, MIT licensed, and ready to drop into any project. Give it a try.

Get Started on GitHub →