feat(adversarial-reviewer): audit fixes + full ecosystem integration
- Enriched SKILL.md frontmatter (tier, category, dependencies, author, version)
- Added Description, Features, Usage, Examples sections for validator compliance
- Fixed cross-reference path (engineering/code-reviewer → engineering-team/code-reviewer)
- Updated domain plugin.json count (30→31)
- Added mkdocs.yml nav entry
- Synced Codex, Gemini indexes
- Generated docs page

https://claude.ai/code/session_01XY4i7SR4BHLWJpdjwGnNLG
@@ -1,7 +1,7 @@
{
"version": "1.0.0",
"name": "gemini-cli-skills",
"total_skills": 263,
"total_skills": 264,
"skills": [
{
"name": "README",
@@ -438,6 +438,11 @@
"category": "engineering",
"description": "Accessibility audit skill for scanning, fixing, and verifying WCAG 2.2 Level A and AA compliance across React, Next.js, Vue, Angular, Svelte, and plain HTML codebases. Use when auditing accessibility, fixing a11y violations, checking color contrast, generating compliance reports, or integrating accessibility checks into CI/CD pipelines."
},
{
"name": "adversarial-reviewer",
"category": "engineering",
"description": "Adversarial code review that breaks the self-review monoculture. Use when you want a genuinely critical review of recent changes, before merging a PR, or when you suspect Claude is being too agreeable about code quality. Forces perspective shifts through hostile reviewer personas that catch blind spots the author's mental model shares with the reviewer."
},
{
"name": "aws-solution-architect",
"category": "engineering",
@@ -508,11 +513,6 @@
"category": "engineering",
"description": "Incident Commander Skill"
},
{
"name": "init",
"category": "engineering",
"description": ">-"
},
{
"name": "migrate",
"category": "engineering",
@@ -623,21 +623,26 @@
"category": "engineering",
"description": "Security engineering toolkit for threat modeling, vulnerability analysis, secure architecture, and penetration testing. Includes STRIDE analysis, OWASP guidance, cryptography patterns, and security scanning tools. Use when the user asks about security reviews, threat analysis, vulnerability assessments, secure coding practices, security audits, attack surface analysis, CVE remediation, or security best practices."
},
{
"name": "skills-init",
"category": "engineering",
"description": ">-"
},
{
"name": "skills-review",
"category": "engineering",
"description": ">-"
},
{
"name": "skills-status",
"category": "engineering",
"description": "Memory health dashboard showing line counts, topic files, capacity, stale entries, and recommendations."
},
{
"name": "snowflake-development",
"category": "engineering",
"description": "Use when writing Snowflake SQL, building data pipelines with Dynamic Tables or Streams/Tasks, using Cortex AI functions, creating Cortex Agents, writing Snowpark Python, configuring dbt for Snowflake, or troubleshooting Snowflake errors."
},
{
"name": "status",
"category": "engineering",
"description": "Memory health dashboard showing line counts, topic files, capacity, stale entries, and recommendations."
},
{
"name": "stripe-integration-expert",
"category": "engineering",
@@ -763,6 +768,11 @@
"category": "engineering-advanced",
"description": "Helm chart development agent skill and plugin for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw \u2014 chart scaffolding, values design, template patterns, dependency management, security hardening, and chart testing. Use when: user wants to create or improve Helm charts, design values.yaml files, implement template helpers, audit chart security (RBAC, network policies, pod security), manage subcharts, or run helm lint/test."
},
{
"name": "init",
"category": "engineering-advanced",
"description": "Create a new AgentHub collaboration session with task, agent count, and evaluation criteria."
},
{
"name": "interview-system-designer",
"category": "engineering-advanced",
@@ -826,7 +836,7 @@
{
"name": "run",
"category": "engineering-advanced",
"description": "One-shot lifecycle command that chains init \u2192 baseline \u2192 spawn \u2192 eval \u2192 merge in a single invocation."
"description": "Run a single experiment iteration. Edit the target file, evaluate, keep or discard."
},
{
"name": "runbook-generator",
@@ -858,26 +868,16 @@
"category": "engineering-advanced",
"description": "Skill Tester"
},
{
"name": "skills-init",
"category": "engineering-advanced",
"description": "Create a new AgentHub collaboration session with task, agent count, and evaluation criteria."
},
{
"name": "skills-run",
"category": "engineering-advanced",
"description": "Run a single experiment iteration. Edit the target file, evaluate, keep or discard."
"description": "One-shot lifecycle command that chains init \u2192 baseline \u2192 spawn \u2192 eval \u2192 merge in a single invocation."
},
{
"name": "skills-status",
"category": "engineering-advanced",
"description": "Show DAG state, agent progress, and branch status for an AgentHub session."
},
{
"name": "skills-status",
"category": "engineering-advanced",
"description": "Show experiment dashboard with results, active loops, and progress."
},
{
"name": "spawn",
"category": "engineering-advanced",
@@ -893,6 +893,11 @@
"category": "engineering-advanced",
"description": "Use when the user asks to write SQL queries, optimize database performance, generate migrations, explore database schemas, or work with ORMs like Prisma, Drizzle, TypeORM, or SQLAlchemy."
},
{
"name": "status",
"category": "engineering-advanced",
"description": "Show experiment dashboard with results, active loops, and progress."
},
{
"name": "tech-debt-tracker",
"category": "engineering-advanced",
@@ -1337,7 +1342,7 @@
"description": "Command resources"
},
"engineering": {
"count": 45,
"count": 46,
"description": "Engineering resources"
},
"engineering-advanced": {
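Since the commit bumps `total_skills` and the per-category `count` fields by hand, a small consistency check can catch future drift. This is a sketch, assuming only the index layout visible in the hunks above (a top-level `"total_skills"` integer and a `"skills"` array); the helper name is invented:

```python
def check_index(index: dict) -> list[str]:
    """Flag count fields that disagree with the skill list itself.

    Assumes the layout shown in the diff above: a top-level
    "total_skills" integer and a "skills" array of objects.
    """
    problems = []
    skills = index.get("skills", [])
    if index.get("total_skills") != len(skills):
        problems.append(
            f"total_skills is {index.get('total_skills')}, "
            f"but skills[] has {len(skills)} entries"
        )
    return problems


# A stale index like the pre-commit state: count lags the array by one.
stale = {"total_skills": 263, "skills": [{"name": "x"}] * 264}
print(check_index(stale))  # reports the 263 vs 264 mismatch
```

Per-category `count` fields (45→46 above) could be verified the same way against a tally of each entry's `"category"`.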
.gemini/skills/adversarial-reviewer/SKILL.md (symbolic link, 1 line)
@@ -0,0 +1 @@
../../../engineering-team/adversarial-reviewer/SKILL.md

@@ -1 +1 @@
../../../engineering-team/playwright-pro/skills/init/SKILL.md
../../../engineering/agenthub/skills/init/SKILL.md

@@ -1 +1 @@
../../../engineering/agenthub/skills/run/SKILL.md
../../../engineering/autoresearch-agent/skills/run/SKILL.md

@@ -1 +1 @@
../../../engineering/agenthub/skills/init/SKILL.md
../../../engineering-team/playwright-pro/skills/init/SKILL.md

@@ -1 +1 @@
../../../engineering/autoresearch-agent/skills/run/SKILL.md
../../../engineering/agenthub/skills/run/SKILL.md

@@ -1 +1 @@
../../../engineering/autoresearch-agent/skills/status/SKILL.md
../../../engineering/agenthub/skills/status/SKILL.md

@@ -1 +1 @@
../../../engineering-team/self-improving-agent/skills/status/SKILL.md
../../../engineering/autoresearch-agent/skills/status/SKILL.md
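Retargeted relative links like the ones above break silently if a source directory moves again. A minimal sketch for finding symlinks that no longer resolve (the helper and the root path are assumptions, not part of the commit):

```python
import os

def broken_symlinks(root: str) -> list[tuple[str, str]]:
    """Return (link_path, raw_target) pairs for symlinks that don't resolve."""
    broken = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # os.path.exists follows the link, so it is False for dangling links
            if os.path.islink(path) and not os.path.exists(path):
                broken.append((path, os.readlink(path)))
    return broken
```

Running it over `.gemini/skills/` (or any synced index tree) after a rename would have flagged stale `../../../engineering/...` targets before commit.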
docs/skills/engineering-team/adversarial-reviewer.md (new file, 252 lines)
@@ -0,0 +1,252 @@
---
title: "Adversarial Code Reviewer — Agent Skill & Codex Plugin"
description: "Adversarial code review that breaks the self-review monoculture. Use when you want a genuinely critical review of recent changes, before merging a PR. Agent skill for Claude Code, Codex CLI, Gemini CLI, OpenClaw."
---
# Adversarial Code Reviewer

<div class="page-meta" markdown>
<span class="meta-badge">:material-code-braces: Engineering - Core</span>
<span class="meta-badge">:material-identifier: `adversarial-reviewer`</span>
<span class="meta-badge">:material-github: <a href="https://github.com/alirezarezvani/claude-skills/tree/main/engineering-team/adversarial-reviewer/SKILL.md">Source</a></span>
</div>

<div class="install-banner" markdown>
<span class="install-label">Install:</span> <code>claude /plugin install engineering-skills</code>
</div>

## Description

Adversarial code review skill that forces genuine perspective shifts through three hostile reviewer personas (Saboteur, New Hire, Security Auditor). Each persona MUST find at least one issue — no "LGTM" escapes. Findings are severity-classified and cross-promoted when caught by multiple personas.

## Features

- **Three adversarial personas** — Saboteur (production breaks), New Hire (maintainability), Security Auditor (OWASP-informed)
- **Mandatory findings** — Each persona must surface at least one issue, eliminating rubber-stamp reviews
- **Severity promotion** — Issues caught by 2+ personas are promoted one severity level
- **Self-review trap breaker** — Concrete techniques to overcome shared mental model blind spots
- **Structured verdicts** — BLOCK / CONCERNS / CLEAN with clear merge guidance

## Usage

```
/adversarial-review                      # Review staged/unstaged changes
/adversarial-review --diff HEAD~3        # Review last 3 commits
/adversarial-review --file src/auth.ts   # Review a specific file
```
## Examples

### Example: Reviewing a PR Before Merge

```
/adversarial-review --diff main...HEAD
```

Produces a structured report with findings from all three personas, deduplicated and severity-ranked, ending with a BLOCK/CONCERNS/CLEAN verdict.

## Problem This Solves

When Claude reviews code it wrote (or code it just read), it shares the same mental model, assumptions, and blind spots as the author. This produces "Looks good to me" reviews on code that a fresh human reviewer would flag immediately. Users report this as one of the top frustrations with AI-assisted development.

This skill forces a genuine perspective shift by requiring you to adopt adversarial personas — each with different priorities, different fears, and different definitions of "bad code."

## Table of Contents

1. [Quick Start](#quick-start)
2. [Review Workflow](#review-workflow)
3. [The Three Personas](#the-three-personas)
4. [Severity Classification](#severity-classification)
5. [Output Format](#output-format)
6. [Anti-Patterns](#anti-patterns)
7. [When to Use This](#when-to-use-this)

## Quick Start

```
/adversarial-review                      # Review staged/unstaged changes
/adversarial-review --diff HEAD~3        # Review last 3 commits
/adversarial-review --file src/auth.ts   # Review a specific file
```

## Review Workflow

### Step 1: Gather the Changes

Determine what to review based on invocation:

- **No arguments:** Run `git diff` (unstaged) + `git diff --cached` (staged). If both empty, run `git diff HEAD~1` (last commit).
- **`--diff <ref>`:** Run `git diff <ref>`.
- **`--file <path>`:** Read the entire file. Focus review on the full file rather than just changes.

If no changes are found, stop and report: "Nothing to review."
### Step 2: Read the Full Context

For every file in the diff:
1. Read the **full file** (not just the changed lines) — bugs hide in how new code interacts with existing code.
2. Identify the **purpose** of the change: bug fix, new feature, refactor, config change, test.
3. Note any **project conventions** from CLAUDE.md, .editorconfig, linting configs, or existing patterns.

### Step 3: Run All Three Personas

Execute each persona sequentially. Each persona MUST produce at least one finding. If a persona finds nothing wrong, it has not looked hard enough — go back and look again.

**IMPORTANT:** Do not soften findings. Do not hedge. Do not say "this might be fine but..." — either it's a problem or it isn't. Be direct.

### Step 4: Deduplicate and Synthesize

After all three personas have reported:
1. Merge duplicate findings (same issue caught by multiple personas).
2. Promote findings caught by 2+ personas to the next severity level.
3. Produce the final structured output.
## The Three Personas

### Persona 1: The Saboteur

**Mindset:** "I am trying to break this code in production."

**Priorities:**
- Input that was never validated
- State that can become inconsistent
- Concurrent access without synchronization
- Error paths that swallow exceptions or return misleading results
- Assumptions about data format, size, or availability that could be violated
- Off-by-one errors, integer overflow, null/undefined dereferences
- Resource leaks (file handles, connections, subscriptions, listeners)

**Review Process:**
1. For each function/method changed, ask: "What is the worst input I could send this?"
2. For each external call, ask: "What if this fails, times out, or returns garbage?"
3. For each state mutation, ask: "What if this runs twice? Concurrently? Never?"
4. For each conditional, ask: "What if neither branch is correct?"

**You MUST find at least one issue. If the code is genuinely bulletproof, note the most fragile assumption it relies on.**

---
### Persona 2: The New Hire

**Mindset:** "I just joined this team. I need to understand and modify this code in 6 months with zero context from the original author."

**Priorities:**
- Names that don't communicate intent (what does `data` mean? what does `process()` do?)
- Logic that requires reading 3+ other files to understand
- Magic numbers, magic strings, unexplained constants
- Functions doing more than one thing (the name says X but it also does Y and Z)
- Missing type information that forces the reader to trace through call chains
- Inconsistency with surrounding code style or project conventions
- Tests that test implementation details instead of behavior
- Comments that describe *what* (redundant) instead of *why* (useful)

**Review Process:**
1. Read each changed function as if you've never seen the codebase. Can you understand what it does from the name, parameters, and body alone?
2. Trace one code path end-to-end. How many files do you need to open?
3. Check: would a new contributor know where to add a similar feature?
4. Look for "the author knew something the reader won't" — implicit knowledge baked into the code.

**You MUST find at least one issue. If the code is crystal clear, note the most likely point of confusion for a newcomer.**

---
### Persona 3: The Security Auditor

**Mindset:** "This code will be attacked. My job is to find the vulnerability before an attacker does."

**OWASP-Informed Checklist:**

| Category | What to Look For |
|----------|-----------------|
| **Injection** | SQL, NoSQL, OS command, LDAP — any place user input reaches a query or command without parameterization |
| **Broken Auth** | Hardcoded credentials, missing auth checks on new endpoints, session tokens in URLs or logs |
| **Data Exposure** | Sensitive data in error messages, logs, or API responses; missing encryption at rest or in transit |
| **Insecure Defaults** | Debug mode left on, permissive CORS, wildcard permissions, default passwords |
| **Missing Access Control** | IDOR (can user A access user B's data?), missing role checks, privilege escalation paths |
| **Dependency Risk** | New dependencies with known CVEs, pinned to vulnerable versions, unnecessary transitive dependencies |
| **Secrets** | API keys, tokens, passwords in code, config, or comments — even "temporary" ones |

**Review Process:**
1. Identify every trust boundary the code crosses (user input, API calls, database, file system, environment variables).
2. For each boundary: is input validated? Is output sanitized? Is the principle of least privilege followed?
3. Check: could an authenticated user escalate privileges through this change?
4. Check: does this change expose any new attack surface?

**You MUST find at least one issue. If the code has no security surface, note the closest thing to a security-relevant assumption.**
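The Injection row of the checklist is the easiest to make concrete. A minimal demonstration using the `sqlite3` standard library (the table and the crafted input are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"

# What the Security Auditor hunts for: user input concatenated straight
# into the query. The crafted input turns the WHERE clause into a tautology.
bad = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Parameterized version: the same input is treated as plain data.
good = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(bad)   # [('alice',)] — rows the input should never have matched
print(good)  # [] — the literal string "' OR '1'='1" matches no user
```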
## Severity Classification

| Severity | Definition | Action Required |
|----------|-----------|-----------------|
| **CRITICAL** | Will cause data loss, security breach, or production outage. Must fix before merge. | Block merge. |
| **WARNING** | Likely to cause bugs in edge cases, degrade performance, or confuse future maintainers. Should fix before merge. | Fix or explicitly accept risk with justification. |
| **NOTE** | Style issue, minor improvement opportunity, or documentation gap. Nice to fix. | Author's discretion. |

**Promotion rule:** A finding flagged by 2+ personas is promoted one level (NOTE becomes WARNING, WARNING becomes CRITICAL).
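The promotion rule, together with the deduplication in Step 4, can be sketched as follows. The `(persona, issue, severity)` tuple shape and the idea of an issue key are assumptions of this sketch, not something the skill prescribes:

```python
SEVERITIES = ["NOTE", "WARNING", "CRITICAL"]

def synthesize(findings: list[tuple[str, str, str]]) -> dict[str, str]:
    """Merge per-persona findings, then apply the promotion rule."""
    by_issue: dict[str, str] = {}
    seen_by: dict[str, set[str]] = {}
    for persona, issue, severity in findings:
        seen_by.setdefault(issue, set()).add(persona)
        cur = by_issue.get(issue)
        # Keep the highest severity any persona assigned to the issue.
        if cur is None or SEVERITIES.index(severity) > SEVERITIES.index(cur):
            by_issue[issue] = severity
    for issue, personas in seen_by.items():
        if len(personas) >= 2:  # caught by 2+ personas: promote one level
            idx = min(SEVERITIES.index(by_issue[issue]) + 1, len(SEVERITIES) - 1)
            by_issue[issue] = SEVERITIES[idx]
    return by_issue
```

A WARNING raised independently by the Saboteur and the Security Auditor thus surfaces as a single CRITICAL finding.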
## Output Format

Structure your review as follows:

```markdown
## Adversarial Review: [brief description of what was reviewed]

**Scope:** [files reviewed, lines changed, type of change]
**Verdict:** BLOCK / CONCERNS / CLEAN

### Critical Findings
[If any — these block the merge]

### Warnings
[Should-fix items]

### Notes
[Nice-to-fix items]

### Summary
[2-3 sentences: what's the overall risk profile? What's the single most important thing to fix?]
```

**Verdict definitions:**
- **BLOCK** — 1+ CRITICAL findings. Do not merge until resolved.
- **CONCERNS** — No criticals but 2+ warnings. Merge at your own risk.
- **CLEAN** — Only notes. Safe to merge.
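The verdict mapping can be written down directly. Note that the definitions above leave the single-warning case unassigned; this sketch groups it with CONCERNS rather than calling the review CLEAN:

```python
def verdict(criticals: int, warnings: int) -> str:
    """Map finding counts to BLOCK / CONCERNS / CLEAN per the definitions above."""
    if criticals >= 1:
        return "BLOCK"
    if warnings >= 1:  # assumption: one warning already rules out CLEAN
        return "CONCERNS"
    return "CLEAN"
```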
## Anti-Patterns

### What This Skill is NOT

| Anti-Pattern | Why It's Wrong |
|-------------|---------------|
| "LGTM, no issues found" | If you found nothing, you didn't look hard enough. Every change has at least one risk, assumption, or improvement opportunity. |
| Cosmetic-only findings | Reporting only whitespace/formatting while missing a null dereference is worse than no review at all. Substance first, style second. |
| Pulling punches | "This might possibly be a minor concern..." — No. Be direct. "This will throw a NullPointerException when `user` is undefined." |
| Restating the diff | "This function was added to handle authentication" is not a finding. What's WRONG with how it handles authentication? |
| Ignoring test gaps | New code without tests is a finding. Always. Tests are not optional. |
| Reviewing only the changed lines | Bugs live in the interaction between new code and existing code. Read the full file. |

### The Self-Review Trap

You are likely reviewing code you just wrote or just read. Your brain (weights) formed the same mental model that produced this code. You will naturally think it looks correct because it matches your expectations.

**To break this pattern:**
1. Read the code **bottom-up** (start from the last function, work backward).
2. For each function, state its contract **before** reading the body. Does the body match?
3. Assume every variable could be null/undefined until proven otherwise.
4. Assume every external call will fail.
5. Ask: "If I deleted this change entirely, what would break?" — if the answer is "nothing," the change might be unnecessary.

## When to Use This

- **Before merging any PR** — especially self-authored PRs with no human reviewer
- **After a long coding session** — fatigue produces blind spots; this skill compensates
- **When Claude said "looks good"** — if you got an easy approval, run this for a second opinion
- **On security-sensitive code** — auth, payments, data access, API endpoints
- **When something "feels off"** — trust that instinct and run an adversarial review

## Cross-References

- Related: `engineering-team/senior-security` — deep security analysis
- Related: `engineering/code-reviewer` — general code quality review
- Complementary: `ra-qm-team/` — quality management workflows
@@ -1,6 +1,6 @@
{
"name": "engineering-skills",
"description": "30 production-ready engineering skills: architecture, frontend, backend, fullstack, QA, DevOps, security, AI/ML, data engineering, Playwright (9 sub-skills), self-improving agent, Stripe integration, TDD guide, Google Workspace CLI, a11y audit (WCAG 2.2), Azure cloud architect, GCP cloud architect, security pen testing, Snowflake development, and more. Agent skill and plugin for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw.",
"description": "31 production-ready engineering skills: architecture, frontend, backend, fullstack, QA, DevOps, security, AI/ML, data engineering, Playwright (9 sub-skills), self-improving agent, Stripe integration, TDD guide, Google Workspace CLI, a11y audit (WCAG 2.2), Azure cloud architect, GCP cloud architect, security pen testing, Snowflake development, and more. Agent skill and plugin for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw.",
"version": "2.1.2",
"author": {
"name": "Alireza Rezvani",
@@ -1,10 +1,46 @@
---
name: "adversarial-reviewer"
description: "Adversarial code review that breaks the self-review monoculture. Use when you want a genuinely critical review of recent changes, before merging a PR, or when you suspect Claude is being too agreeable about code quality. Forces perspective shifts through hostile reviewer personas that catch blind spots the author's mental model shares with the reviewer."
tier: "STANDARD"
category: "Engineering / Code Quality"
dependencies: "None (prompt-only, no external tools required)"
author: "ekreloff"
version: "1.0.0"
license: "MIT"
---

# Adversarial Code Reviewer

## Description

Adversarial code review skill that forces genuine perspective shifts through three hostile reviewer personas (Saboteur, New Hire, Security Auditor). Each persona MUST find at least one issue — no "LGTM" escapes. Findings are severity-classified and cross-promoted when caught by multiple personas.

## Features

- **Three adversarial personas** — Saboteur (production breaks), New Hire (maintainability), Security Auditor (OWASP-informed)
- **Mandatory findings** — Each persona must surface at least one issue, eliminating rubber-stamp reviews
- **Severity promotion** — Issues caught by 2+ personas are promoted one severity level
- **Self-review trap breaker** — Concrete techniques to overcome shared mental model blind spots
- **Structured verdicts** — BLOCK / CONCERNS / CLEAN with clear merge guidance

## Usage

```
/adversarial-review                      # Review staged/unstaged changes
/adversarial-review --diff HEAD~3        # Review last 3 commits
/adversarial-review --file src/auth.ts   # Review a specific file
```

## Examples

### Example: Reviewing a PR Before Merge

```
/adversarial-review --diff main...HEAD
```

Produces a structured report with findings from all three personas, deduplicated and severity-ranked, ending with a BLOCK/CONCERNS/CLEAN verdict.

## Problem This Solves

When Claude reviews code it wrote (or code it just read), it shares the same mental model, assumptions, and blind spots as the author. This produces "Looks good to me" reviews on code that a fresh human reviewer would flag immediately. Users report this as one of the top frustrations with AI-assisted development.

@@ -207,5 +243,5 @@ You are likely reviewing code you just wrote or just read. Your brain (weights)
## Cross-References

- Related: `engineering-team/senior-security` — deep security analysis
- Related: `engineering/code-reviewer` — general code quality review
- Related: `engineering-team/code-reviewer` — general code quality review
- Complementary: `ra-qm-team/` — quality management workflows
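The frontmatter fields added above are exactly what a validator can check mechanically. A sketch, assuming the required-key list shown in this diff; the real validator's rules may differ:

```python
import re

REQUIRED = ["name", "description", "tier", "category", "dependencies",
            "author", "version", "license"]

def missing_frontmatter_keys(skill_md: str) -> list[str]:
    """Report required keys absent from a SKILL.md YAML frontmatter block."""
    m = re.match(r"---\n(.*?)\n---", skill_md, re.DOTALL)
    if not m:
        return REQUIRED[:]  # no frontmatter block at all
    keys = {line.split(":", 1)[0].strip()
            for line in m.group(1).splitlines() if ":" in line}
    return [k for k in REQUIRED if k not in keys]
```

Before this commit, the adversarial-reviewer SKILL.md would have reported `tier`, `category`, `dependencies`, `author`, `version`, and `license` as missing.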
@@ -121,6 +121,7 @@ nav:
- Engineering - Core:
    - Overview: skills/engineering-team/index.md
    - "A11y Audit": skills/engineering-team/a11y-audit.md
    - "Adversarial Code Reviewer": skills/engineering-team/adversarial-reviewer.md
    - "AWS Solution Architect": skills/engineering-team/aws-solution-architect.md
    - "Azure Cloud Architect": skills/engineering-team/azure-cloud-architect.md
    - "Code Reviewer": skills/engineering-team/code-reviewer.md