---
title: "Review Playwright Tests"
description: "Review Playwright Tests - Claude Code skill from the Engineering - Core domain."
---
# Review Playwright Tests
<div class="page-meta" markdown>
<span class="meta-badge">:material-code-braces: Engineering - Core</span>
<span class="meta-badge">:material-identifier: `review`</span>
<span class="meta-badge">:material-github: <a href="https://github.com/alirezarezvani/claude-skills/tree/main/engineering-team/playwright-pro/skills/review/SKILL.md">Source</a></span>
</div>
<div class="install-banner" markdown>
<span class="install-label">Install:</span> <code>claude /plugin install engineering-skills</code>
</div>
Systematically review Playwright test files for anti-patterns, missed best practices, and coverage gaps.
## Input
`$ARGUMENTS` can be:
- A file path: review that specific test file
- A directory: review all test files in the directory
- Empty: review all tests in the project's `testDir`
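For illustration, some hypothetical invocations (the exact slash-command prefix depends on how the plugin registers this skill):
```
/review tests/login.spec.ts   # one file
/review tests/checkout/       # a directory
/review                       # everything under testDir
```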
## Steps
### 1. Gather Context
- Read `playwright.config.ts` for project settings
- List all `*.spec.ts` / `*.spec.js` files in scope
- If reviewing a single file, also check related page objects and fixtures
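As a rough TypeScript sketch of the scope resolution (illustrative only: in practice the skill uses its own file tools, and the `glob` package plus the `tests` default are assumptions):
```ts
import path from 'node:path';
import { globSync } from 'glob'; // assumption: the glob package is available

// Hypothetical helper: expand $ARGUMENTS into the list of spec files to review.
function resolveScope(arg: string | undefined, testDir = 'tests'): string[] {
  if (!arg) {
    // Empty argument: review everything under the project's testDir.
    return globSync(path.join(testDir, '**/*.spec.{ts,js}'));
  }
  if (/\.spec\.(ts|js)$/.test(arg)) {
    return [arg]; // a single test file
  }
  // Otherwise treat the argument as a directory.
  return globSync(path.join(arg, '**/*.spec.{ts,js}'));
}
```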
### 2. Check Each File Against Anti-Patterns
Load `anti-patterns.md` from this skill directory. Check for all 20 anti-patterns.
**Critical (must fix):**
1. `waitForTimeout()` usage
2. Non-web-first assertions (`expect(await ...)`)
3. Hardcoded URLs instead of `baseURL`
4. CSS/XPath selectors when role-based exists
5. Missing `await` on Playwright calls
6. Shared mutable state between tests
7. Test execution order dependencies
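To make the first two concrete, a before/after sketch (the page and locator names are hypothetical):
```ts
import { test, expect } from '@playwright/test';

test('shows the dashboard after login', async ({ page }) => {
  // Anti-pattern 1: arbitrary sleep, slow and flaky.
  // await page.waitForTimeout(2000);

  // Anti-pattern 2: non-web-first assertion, evaluated once with no auto-retry.
  // expect(await page.locator('.welcome').isVisible()).toBe(true);

  // Web-first instead: Playwright retries the assertion until it passes or times out.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```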
**Warning (should fix):**
8. Tests longer than 50 lines (consider splitting)
9. Magic strings without named constants
10. Missing error/edge case tests
11. `page.evaluate()` for things locators can do
12. Nested `test.describe()` more than 2 levels deep
13. Generic test names ("should work", "test 1")
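For instance, items 9 and 13 fixed together (a sketch; the page and credentials are invented):
```ts
import { test, expect } from '@playwright/test';

// Named constant instead of a magic string repeated through the test body.
const WRONG_PASSWORD = 'not-the-real-password';

// Specific behavior in the name instead of "test login" or "test 1".
test('should show an error when the password is wrong', async ({ page }) => {
  await page.goto('/login'); // relative to baseURL
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill(WRONG_PASSWORD);
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByRole('alert')).toContainText('Invalid credentials');
});
```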
**Info (consider):**
14. No page objects for pages with 5+ locators
15. Inline test data instead of factory/fixture
16. Missing accessibility assertions
17. No visual regression tests for UI-heavy pages
18. No checks for console errors
19. Network idle waits instead of specific assertions
20. Missing `test.describe()` grouping
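Item 14 in practice: a minimal page object for a hypothetical login page, so selectors live in one place and tests stay declarative:
```ts
import type { Page, Locator } from '@playwright/test';

export class LoginPage {
  readonly email: Locator;
  readonly password: Locator;
  readonly submit: Locator;

  constructor(readonly page: Page) {
    this.email = page.getByLabel('Email');
    this.password = page.getByLabel('Password');
    this.submit = page.getByRole('button', { name: 'Sign in' });
  }

  async login(email: string, password: string) {
    await this.page.goto('/login');
    await this.email.fill(email);
    await this.password.fill(password);
    await this.submit.click();
  }
}
```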
### 3. Score Each File
Rate each file 1-10 on this scale:
- **9-10**: Production-ready, follows all golden rules
- **7-8**: Good, minor improvements possible
- **5-6**: Functional but has anti-patterns
- **3-4**: Significant issues, likely flaky
- **1-2**: Needs rewrite
### 4. Generate Review Report
For each file:
```
## <filename> — Score: X/10
### Critical
- Line 15: `waitForTimeout(2000)` → use `expect(locator).toBeVisible()`
- Line 28: CSS selector `.btn-submit` → `getByRole('button', { name: 'submit' })`
### Warning
- Line 42: Test name "test login" → "should redirect to dashboard after login"
### Suggestions
- Consider adding an error case: what happens with invalid credentials?
```
### 5. For Project-Wide Review
If reviewing an entire test suite:
- Spawn a sub-agent per file for parallel review (up to 5 concurrent)
- Or use `/batch` for very large suites
- Aggregate results into a summary table
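The summary table might look like this (file names and scores are purely illustrative):
```
| File             | Score | Critical | Warning | Info |
|------------------|-------|----------|---------|------|
| login.spec.ts    | 6/10  | 2        | 1       | 0    |
| checkout.spec.ts | 8/10  | 0        | 2       | 1    |
```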
### 6. Offer Fixes
For each critical issue, provide the corrected code. Ask the user: "Apply these fixes? [Yes/No]"
If yes, apply all fixes using the `Edit` tool.
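A proposed fix might look like this (anti-pattern 5; the locator is invented):
```ts
// Before: missing await, so the click races whatever the test does next.
// page.getByRole('button', { name: 'Save' }).click();

// After:
await page.getByRole('button', { name: 'Save' }).click();
```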
## Output
- File-by-file review with scores
- Summary: total files, average score, critical issue count
- Actionable fix list
- Coverage gaps identified (pages/features with no tests)