
---
title: Review Playwright Tests — Agent Skill & Codex Plugin
description: Agent skill for Claude Code, Codex CLI, Gemini CLI, OpenClaw.
---

# Review Playwright Tests

:material-code-braces: Engineering - Core · :material-identifier: `review` · :material-github: Source

Install: `claude /plugin install engineering-skills`

Systematically review Playwright test files for anti-patterns, missed best practices, and coverage gaps.

## Input

`$ARGUMENTS` can be:

- A file path: review that specific test file
- A directory: review all test files in the directory
- Empty: review all tests in the project's `testDir`

## Steps

### 1. Gather Context

- Read `playwright.config.ts` for project settings
- List all `*.spec.ts` / `*.spec.js` files in scope
- If reviewing a single file, also check related page objects and fixtures

### 2. Check Each File Against Anti-Patterns

Load `anti-patterns.md` from this skill directory. Check for all 20 anti-patterns.

Critical (must fix):

  1. waitForTimeout() usage
  2. Non-web-first assertions (expect(await ...))
  3. Hardcoded URLs instead of baseURL
  4. CSS/XPath selectors when role-based exists
  5. Missing await on Playwright calls
  6. Shared mutable state between tests
  7. Test execution order dependencies

Warning (should fix):

8. Tests longer than 50 lines (consider splitting)
9. Magic strings without named constants
10. Missing error/edge case tests
11. `page.evaluate()` for things locators can do
12. Nested `test.describe()` more than 2 levels deep
13. Generic test names ("should work", "test 1")

Info (consider):

14. No page objects for pages with 5+ locators
15. Inline test data instead of factory/fixture
16. Missing accessibility assertions
17. No visual regression tests for UI-heavy pages
18. Console error assertions not checked
19. Network idle waits instead of specific assertions
20. Missing `test.describe()` grouping
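As one illustration of how a few of the critical checks could be automated, here is a regex-based sketch. The pattern set and function name are my own assumptions, not the skill's implementation, and regexes like these only approximate what a proper AST-based check would catch:

```python
import re

# Each critical anti-pattern gets a regex; matches are reported with 1-based line numbers.
CRITICAL_PATTERNS = {
    "waitForTimeout() usage": re.compile(r"\bwaitForTimeout\s*\("),
    "non-web-first assertion": re.compile(r"expect\s*\(\s*await\b"),
    "hardcoded URL": re.compile(r"goto\s*\(\s*['\"]https?://"),
}

def find_critical_issues(source: str) -> list[tuple[int, str]]:
    """Return (line_number, issue_name) pairs for each matched anti-pattern."""
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in CRITICAL_PATTERNS.items():
            if pattern.search(line):
                issues.append((lineno, name))
    return issues
```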

### 3. Score Each File

Rate 1-10 based on:

- 9-10: Production-ready, follows all golden rules
- 7-8: Good, minor improvements possible
- 5-6: Functional but has anti-patterns
- 3-4: Significant issues, likely flaky
- 1-2: Needs rewrite
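One mechanical way to approximate this rubric is a weighted deduction from a perfect score. The weights below are an illustrative assumption, not a formula the skill defines:

```python
def score_file(critical: int, warning: int, info: int) -> int:
    """Deduct from 10 per finding, weighted by severity (illustrative weights)."""
    score = 10 - 3 * critical - 1 * warning - 0.5 * info
    return max(1, round(score))  # clamp to the 1-10 scale
```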

### 4. Generate Review Report

For each file:

```markdown
## <filename> — Score: X/10

### Critical
- Line 15: `waitForTimeout(2000)` → use `expect(locator).toBeVisible()`
- Line 28: CSS selector `.btn-submit` → `getByRole('button', { name: 'submit' })`

### Warning
- Line 42: Test name "test login" → "should redirect to dashboard after login"

### Suggestions
- Consider adding error case: what happens with invalid credentials?
```
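A per-file section in that format could be rendered from structured findings roughly like this. The function and field names are assumptions for illustration:

```python
def render_report(filename: str, score: int,
                  critical: list[str], warning: list[str],
                  suggestions: list[str]) -> str:
    """Render one file's findings in the report format shown above."""
    lines = [f"## {filename} — Score: {score}/10"]
    for heading, items in (("Critical", critical),
                           ("Warning", warning),
                           ("Suggestions", suggestions)):
        if items:                       # omit sections with no findings
            lines.append(f"\n### {heading}")
            lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```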

### 5. For Project-Wide Review

If reviewing an entire test suite:

- Spawn sub-agents per file for parallel review (up to 5 concurrent)
- Or use `/batch` for very large suites
- Aggregate results into a summary table
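The aggregation step could produce a Markdown summary table like the sketch below; the column choice and result-dict keys are assumptions:

```python
def summarize(results: list[dict]) -> str:
    """Build a Markdown summary table plus totals from per-file results.

    Each result dict is expected to carry 'file', 'score', and 'critical' keys.
    """
    rows = ["| File | Score | Critical |", "| --- | --- | --- |"]
    rows += [f"| {r['file']} | {r['score']}/10 | {r['critical']} |" for r in results]
    total = len(results)
    avg = sum(r["score"] for r in results) / total if total else 0
    crit = sum(r["critical"] for r in results)
    rows.append(f"\nTotal files: {total} · Average score: {avg:.1f} · Critical issues: {crit}")
    return "\n".join(rows)
```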

### 6. Offer Fixes

For each critical issue, provide the corrected code, then ask the user: "Apply these fixes? [Yes/No]"

If yes, apply all fixes using the Edit tool.

## Output

- File-by-file review with scores
- Summary: total files, average score, critical issue count
- Actionable fix list
- Coverage gaps identified (pages/features with no tests)