---
name: review
description: Review Playwright tests for quality. Use when user says "review tests", "check test quality", "audit tests", "improve tests", "test code review", or "playwright best practices check".
---
# Review Playwright Tests

Systematically review Playwright test files for anti-patterns, missed best practices, and coverage gaps.
## Input

`$ARGUMENTS` can be:

- A file path: review that specific test file
- A directory: review all test files in the directory
- Empty: review all tests in the project's `testDir`
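For example, if the plugin exposes this skill as a `/pw:review` command (an assumption about the invocation prefix, not defined in this file), the three forms might look like:

```text
/pw:review tests/auth/login.spec.ts   # review a single file
/pw:review tests/checkout/            # review every test file in a directory
/pw:review                            # review everything under the configured testDir
```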
## Steps

### 1. Gather Context

- Read `playwright.config.ts` for project settings
- List all `*.spec.ts` / `*.spec.js` files in scope
- If reviewing a single file, also check related page objects and fixtures
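As a point of reference, the settings worth pulling out of the config look roughly like this (a minimal sketch using Playwright's standard `defineConfig` options; the paths and URL are illustrative):

```typescript
// playwright.config.ts: the fields this review step cares about (values are examples)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',                   // scope for "review all tests"
  use: {
    baseURL: 'http://localhost:3000',   // tests should navigate relative to this
  },
  projects: [
    { name: 'chromium', use: { browserName: 'chromium' } },
  ],
});
```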
### 2. Check Each File Against Anti-Patterns

Load `anti-patterns.md` from this skill directory. Check for all 20 anti-patterns.

**Critical (must fix):**

1. `waitForTimeout()` usage
2. Non-web-first assertions (`expect(await ...)`)
3. Hardcoded URLs instead of `baseURL`
4. CSS/XPath selectors where a role-based locator exists
5. Missing `await` on Playwright calls
6. Shared mutable state between tests
7. Test execution order dependencies
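A before/after sketch of what the first few critical items look like in practice (the page flow, locator names, and URL are illustrative, not taken from any real suite):

```typescript
import { test, expect } from '@playwright/test';

test('redirects to the dashboard after login', async ({ page }) => {
  // Anti-pattern (items 1-3): hardcoded URL, fixed sleep, assertion on an awaited value
  // await page.goto('https://staging.example.com/login');
  // await page.waitForTimeout(2000);
  // expect(await page.locator('.welcome').isVisible()).toBe(true);

  // Fix: relative navigation via baseURL, role-based locators, web-first assertions that retry
  await page.goto('/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('correct horse battery staple');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```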
**Warning (should fix):**

8. Tests longer than 50 lines (consider splitting)
9. Magic strings without named constants
10. Missing error/edge case tests
11. `page.evaluate()` for things locators can do
12. Nested `test.describe()` more than 2 levels deep
13. Generic test names ("should work", "test 1")
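For item 11, a small sketch of the difference (the selector and expected text are made up for illustration):

```typescript
import { test, expect } from '@playwright/test';

test('shows the cart total after adding an item', async ({ page }) => {
  await page.goto('/cart');

  // Anti-pattern (item 11): reach into the browser for something a locator can assert
  // const total = await page.evaluate(() => document.querySelector('.cart-total')?.textContent);
  // expect(total).toBe('$42.00');

  // Fix: a web-first assertion that retries until the text settles
  await expect(page.locator('.cart-total')).toHaveText('$42.00');
});
```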
**Info (consider):**

14. No page objects for pages with 5+ locators
15. Inline test data instead of a factory or fixture
16. Missing accessibility assertions
17. No visual regression tests for UI-heavy pages
18. No assertions on console errors
19. Network-idle waits instead of specific assertions
20. Missing `test.describe()` grouping
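Items 16, 17, and 20 can often be covered together; a sketch under the assumption that the project is willing to add the optional `@axe-core/playwright` dependency (the page path and snapshot name are illustrative):

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test.describe('dashboard', () => {
  test('has no detectable accessibility violations', async ({ page }) => {
    await page.goto('/dashboard');
    const results = await new AxeBuilder({ page }).analyze();
    expect(results.violations).toEqual([]);
  });

  test('matches the visual baseline', async ({ page }) => {
    await page.goto('/dashboard');
    await expect(page).toHaveScreenshot('dashboard.png');
  });
});
```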
### 3. Score Each File

Rate 1-10 based on:

- 9-10: Production-ready, follows all golden rules
- 7-8: Good, minor improvements possible
- 5-6: Functional but has anti-patterns
- 3-4: Significant issues, likely flaky
- 1-2: Needs rewrite
### 4. Generate Review Report

For each file:

```markdown
## <filename> — Score: X/10

### Critical
- Line 15: `waitForTimeout(2000)` → use `expect(locator).toBeVisible()`
- Line 28: CSS selector `.btn-submit` → `getByRole('button', { name: 'Submit' })`

### Warning
- Line 42: Test name "test login" → "should redirect to dashboard after login"

### Suggestions
- Consider adding error case: what happens with invalid credentials?
```
### 5. For Project-Wide Review

If reviewing an entire test suite:

- Spawn sub-agents per file for parallel review (up to 5 concurrent)
- Or use `/batch` for very large suites
- Aggregate results into a summary table
### 6. Offer Fixes

For each critical issue, provide the corrected code. Ask the user: "Apply these fixes? [Yes/No]"

If yes, apply all fixes using the Edit tool.
## Output

- File-by-file review with scores
- Summary: total files, average score, critical issue count
- Actionable fix list
- Coverage gaps identified (pages/features with no tests)