Release v1.10.0: Add qa-expert skill and improve SOP
## New Skill: qa-expert (v1.0.0)

Comprehensive QA testing infrastructure with autonomous LLM execution:

- One-command QA project initialization with complete templates
- Google Testing Standards (AAA pattern, 90% coverage targets)
- Autonomous LLM-driven test execution via master prompts (100x speedup)
- OWASP Top 10 security testing (90% coverage target)
- Bug tracking with P0-P4 severity classification
- Quality gates enforcement (100% execution, ≥80% pass rate, 0 P0 bugs)
- Ground Truth Principle for preventing doc/CSV sync issues
- Day 1 onboarding guide (5-hour timeline)
- 30+ ready-to-use LLM prompts for QA tasks
- Bundled scripts: init_qa_project.py, calculate_metrics.py

## Documentation Updates

- Updated marketplace to v1.10.0 (16 → 17 skills)
- Updated CHANGELOG.md with v1.10.0 entry
- Updated README.md (EN) with qa-expert skill section
- Updated README.zh-CN.md (ZH) with skills 11-16 and qa-expert
- Updated CLAUDE.md with qa-expert in available skills list
- Updated marketplace.json with qa-expert plugin entry

## SOP Improvements

Enhanced "Adding a New Skill to Marketplace" workflow:

- Added mandatory Step 7: Update README.zh-CN.md
- Added 6 new Chinese documentation checklist items
- Added Chinese documentation to Common Mistakes (#2, #3, #4, #5, #7, #8)
- Updated File Update Summary Template (7 files including zh-CN)
- Added verification commands for EN/ZH sync
- Made Chinese documentation updates MANDATORY

Total: 17 production-ready skills

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
qa-expert/references/day1_onboarding.md (new file, 322 lines)
@@ -0,0 +1,322 @@
# Day 1 Onboarding Checklist

**Purpose**: Complete 5-hour onboarding guide for new QA engineers joining a software testing project.

**Time**: 5 hours (with breaks)

---

## Hour 1: Environment Setup (60 min)

### 1.1 Clone Repository & Install Dependencies
```bash
git clone <repository-url>
cd <project-dir>
pnpm install  # or npm install
```

### 1.2 Start Local Database (if using Supabase/PostgreSQL)
```bash
npx supabase start        # Wait 2-3 minutes for all containers
docker ps | grep supabase # Verify 8-11 containers running
```

### 1.3 Configure Environment Variables
```bash
cp .env.example .env
# Edit .env with local development URLs
```

### 1.4 Apply Database Migrations
```bash
# Apply all migrations in order
for file in database/migrations/*.sql; do
  docker exec -i <db-container-name> psql -U postgres -d postgres < "$file"
done
```

### 1.5 Verify Database Seeded
```bash
docker exec <db-container-name> psql -U postgres -d postgres -c "SELECT COUNT(*) FROM <main-table>;"
# Should return expected row count
```

### 1.6 Start Development Server
```bash
pnpm dev
# Verify: http://localhost:8080 (or configured port)
```

**Checkpoint**: ✅ Environment running, database seeded, website loads correctly.

---

## Hour 2: Documentation Review (60 min)

### 2.1 Read Quick Start Guide (30 min)
- Understand project scope (total test cases, timeline)
- Identify test categories (CLI, Web, API, Security)
- Memorize quality gates (pass rate target, P0 bug policy)
- Review execution schedule (Week 1-5 plan)

### 2.2 Review Test Strategy (30 min)
- Understand AAA pattern (Arrange-Act-Assert)
- Learn bug classification (P0-P4 severity levels)
- Study test case format (TC-XXX-YYY numbering)
- Review OWASP security coverage target

**Checkpoint**: ✅ Strategy understood, test case format memorized.

---

## Hour 3: Test Data Setup (60 min)

### 3.1 Create Test Users (20 min)
**Via UI** (if auth page available):
1. Navigate to `/auth` or `/signup`
2. Create 5 regular test users
3. Create 1 admin user
4. Create 1 moderator user

**Via SQL** (assign roles):
```sql
INSERT INTO user_roles (user_id, role)
SELECT id, 'admin'
FROM auth.users
WHERE email = 'admin@test.com';
```

### 3.2 Install CLI for Testing (20 min)
```bash
# Global installation (for testing `ccpm` command directly)
cd packages/cli
pnpm link --global

# Verify
ccpm --version
ccpm --help
```

### 3.3 Configure Browser DevTools (20 min)
- Install React Developer Tools extension
- Set up network throttling presets (Slow 3G, Fast 3G, Fast 4G)
- Configure responsive design mode (Mobile, Tablet, Desktop viewports)
- Test viewport switching

**Checkpoint**: ✅ Test users created, CLI installed, DevTools configured.

---

## Hour 4: Execute First Test Case (60 min)

### 4.1 Open Test Execution Tracking Spreadsheet (5 min)
- File: `tests/docs/templates/TEST-EXECUTION-TRACKING.csv`
- Open in Google Sheets, Excel, or LibreOffice Calc
- Find first test case: `TC-CLI-001` or equivalent

### 4.2 Read Full Test Case Documentation (10 min)
- Locate test case in documentation (e.g., `02-CLI-TEST-CASES.md`)
- Read: Prerequisites, Test Steps, Expected Result, Pass/Fail Criteria

### 4.3 Execute TC-001 (20 min)
**Example (CLI install command)**:
```bash
# Step 1: Clear previous installations
rm -rf ~/.claude/skills/<skill-name>

# Step 2: Run install command
ccpm install <skill-name>

# Step 3: Verify installation
ls ~/.claude/skills/<skill-name>
cat ~/.claude/skills/<skill-name>/package.json
```

### 4.4 Document Test Results (15 min)
Update `TEST-EXECUTION-TRACKING.csv`:

| Field | Value |
|-------|-------|
| **Status** | Completed |
| **Result** | ✅ PASS or ❌ FAIL |
| **Bug ID** | (leave blank if passed) |
| **Execution Date** | 2025-11-XX |
| **Executed By** | [Your Name] |
| **Notes** | Brief description (e.g., "Skill installed in 3.2s, all files present") |

**If test failed**:
1. Open `BUG-TRACKING-TEMPLATE.csv`
2. Create new bug entry (Bug ID: BUG-001, BUG-002, etc.)
3. Fill in: Title, Severity (P0-P4), Steps to Reproduce, Environment
4. Link bug to test case in tracking CSV
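
The tracking-CSV update can also be scripted rather than edited by hand. A minimal Python sketch using only the standard library — the column names are assumed from the table above, so adjust `FIELDS` to match your actual CSV header:

```python
import csv

# Column names assumed from the table above; adjust to your actual CSV header.
FIELDS = ["Test Case ID", "Status", "Result", "Bug ID",
          "Execution Date", "Executed By", "Notes"]

def record_result(csv_path, case_id, result, bug_id="", notes=""):
    """Mark one test case as executed in the tracking CSV."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        if row["Test Case ID"] == case_id:
            row["Status"] = "Completed"
            row["Result"] = result
            row["Bug ID"] = bug_id
            row["Notes"] = notes
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```

Updating the CSV after each test, instead of batching updates at the end of the day, keeps the tracking file trustworthy.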

### 4.5 Celebrate! (10 min)
✅ First test executed successfully!

**Checkpoint**: ✅ First test case executed and documented.

---

## Hour 5: Team Onboarding & Planning (60 min)

### 5.1 Meet the Team (20 min)
**Scheduled meeting with**:
- QA Lead (your manager)
- QA Engineers (peers)
- Engineering Lead (answers technical questions)
- DevOps Lead (handles infrastructure)

**Agenda**:
1. Introductions
2. Project overview and goals
3. Your role and responsibilities
4. Q&A and troubleshooting

### 5.2 Review Week 1 Plan (20 min)
**With QA Lead**: Review the weekly execution schedule.

**Example Week 1: CLI Testing (93 test cases)**

| Day | Test Cases | Hours | Deliverables |
|-----|------------|-------|--------------|
| Monday | TC-CLI-001 to TC-CLI-015 | 5h | 15 test cases executed |
| Tuesday | TC-CLI-016 to TC-CLI-030 | 5.5h | 15 test cases executed |
| Wednesday | TC-CLI-031 to TC-CLI-045 | 5.5h | 15 test cases executed |
| Thursday | TC-CLI-046 to TC-CLI-060 | 5.5h | 15 test cases executed |
| Friday | TC-CLI-061 to TC-CLI-093 | 6.5h | 33 test cases + weekly report |

**Discuss**:
- Any blockers from today's setup?
- Are you confident with the tools and documentation?
- Are any adjustments needed?

### 5.3 Bookmark Critical Resources (10 min)
**Create browser bookmarks folder**: "Project QA Resources"

**Essential links**:
- Local website (http://localhost:8080 or configured port)
- Database admin UI (Supabase Studio, phpMyAdmin, etc.)
- GitHub repository
- Test case documents
- Tracking spreadsheets

### 5.4 Final Q&A (10 min)
**Common questions**:

**Q: What if I find a critical bug (P0) on Day 1?**
A: Immediately notify the QA Lead and document the bug in the tracker. P0 bugs block release and must be fixed within 24 hours.

**Q: What if I can't complete all test cases in a day?**
A: Prioritize P0 tests first and update the QA Lead by end of day. The schedule can be adjusted.

**Q: Can I run tests in a different order?**
A: Yes, but follow priority order (P0 → P1 → P2 → P3) and keep the tracking spreadsheet updated.

**Q: What if a test case is unclear?**
A: Ask in the team Slack/chat and note the question in the tracking spreadsheet for future improvement.

**Checkpoint**: ✅ Team met, Week 1 plan reviewed, all questions answered.

---

## Day 1 Completion Checklist

Before leaving for the day, verify all setup is complete:

### Environment
- [ ] Repository cloned and dependencies installed
- [ ] Database running (Docker containers or hosted instance)
- [ ] `.env` file configured with correct URLs/keys
- [ ] Database migrations applied and data seeded
- [ ] Development server running and website loads

### Tools
- [ ] CLI installed (if applicable) - global and/or local
- [ ] Browser DevTools configured with extensions
- [ ] Network throttling presets added
- [ ] Responsive design mode tested

### Test Data
- [ ] Regular test users created (5+ users)
- [ ] Admin user created with role assigned
- [ ] Moderator user created with role assigned (if applicable)

### Documentation
- [ ] Quick Start Guide read (understand scope, timeline)
- [ ] Test Strategy reviewed (understand AAA pattern, quality gates)
- [ ] First test case executed successfully
- [ ] Test results documented in tracking spreadsheet

### Team
- [ ] Team introductions completed
- [ ] Week 1 plan reviewed with QA Lead
- [ ] Critical resources bookmarked
- [ ] Communication channels joined (Slack, Teams, etc.)

---

## Next Steps: Week 1 Testing Begins

**Monday Morning Kickoff**:
1. Join team standup (15 min)
2. Review any blockers from Day 1 setup
3. Begin Week 1 test execution (follow documented schedule)

**Daily Routine**:
- **Morning**: Team standup (15 min)
- **Morning session**: Test execution (9:15 AM - 12:00 PM)
- **Lunch**: Break (12:00 PM - 1:00 PM)
- **Afternoon session**: Test execution (1:00 PM - 5:00 PM)
- **End of day**: Update tracking, file bugs, status report (5:00 PM - 5:30 PM)

**Weekly Deliverable**: Friday EOD - Submit weekly progress report to QA Lead.

---

## Troubleshooting Common Day 1 Issues

### Issue 1: Database Containers Won't Start
**Symptoms**: Database service fails or containers show "unhealthy"

**Fixes**:
1. Restart the database service (Docker Desktop, systemd, etc.)
2. Check logs: `docker logs <container-name>`
3. Verify ports are not in use: `lsof -i :<port-number>`
4. Prune old containers (⚠️ caution): `docker system prune`

### Issue 2: Website Shows "Failed to Fetch Data"
**Symptoms**: Homepage loads but data sections are empty

**Fixes**:
1. Verify the database has seeded data: `SELECT COUNT(*) FROM <table>;`
2. Check the API connection (Network tab in DevTools)
3. Verify the `.env` file has the correct database URL
4. Restart the dev server

### Issue 3: CLI Command Not Found After Installation
**Symptoms**: `<command>: command not found` after installation

**Fixes**:
1. Check the installation path: `which <command>` or `pnpm bin -g`
2. Add to PATH: `export PATH="$(pnpm bin -g):$PATH"`
3. Make it permanent: add the export line to `~/.bashrc` or `~/.zshrc`
4. Reload the shell: `source ~/.bashrc`

### Issue 4: Test Users Not Showing Roles
**Symptoms**: Admin user can't access admin-only features

**Fixes**:
1. Verify the role insert: `SELECT * FROM user_roles WHERE user_id = '<user-id>';`
2. Sign out and sign in again (roles are cached in the session)
3. Clear browser cookies and local storage (F12 → Application tab)

---

## Congratulations! 🎉

You've completed Day 1 onboarding. You're now ready to execute the full test plan.

**Questions or Blockers?**
- Slack/Teams: #qa-team channel
- Email: qa-lead@company.com
- Escalation: Engineering Lead (for critical bugs)

**See you Monday for Week 1 testing!** 🚀
qa-expert/references/google_testing_standards.md (new file, 275 lines)
@@ -0,0 +1,275 @@
# Google Testing Standards Reference

Comprehensive guide to Google's testing best practices and standards.

---

## AAA Pattern (Arrange-Act-Assert)

Every test should follow this structure:

### 1. Arrange (Setup)
```markdown
**Prerequisites**:
- System in known state
- Test data prepared
- Dependencies mocked/configured
```

### 2. Act (Execute)
```markdown
**Test Steps**:
1. Perform action
2. Trigger behavior
3. Execute operation
```

### 3. Assert (Verify)
```markdown
**Expected Result**:
✅ Verification criteria
✅ Observable outcomes
✅ System state validation
```
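
In code, the three phases read top-to-bottom inside each test. A minimal Python sketch — the `Cart` class is hypothetical, used purely to illustrate the pattern:

```python
import unittest

class Cart:
    """Hypothetical class under test, used only to illustrate AAA."""
    def __init__(self):
        self.items = []
    def add(self, name, qty):
        self.items.append((name, qty))
    def total_quantity(self):
        return sum(qty for _, qty in self.items)

class TestCartAAA(unittest.TestCase):
    def test_add_updates_total(self):
        # Arrange: system in a known state, test data prepared
        cart = Cart()
        # Act: trigger the behavior under test
        cart.add("widget", 3)
        # Assert: verify the observable outcome
        self.assertEqual(cart.total_quantity(), 3)
```

Keeping the three phases visually separated makes a failing test immediately readable: you can see at a glance what state was set up, what was done, and what was expected.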

---

## Test Case Design Principles

### 1. Test Case ID Convention
```
TC-[CATEGORY]-[NUMBER]

Examples:
- TC-CLI-001 (CLI tests)
- TC-WEB-042 (Web tests)
- TC-API-103 (API tests)
- TC-SEC-007 (Security tests)
```
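
The convention is easy to enforce mechanically. A small Python check — the regex mirrors the `TC-[CATEGORY]-[NUMBER]` format with a three-digit number, as in the examples above:

```python
import re

# Uppercase category, three-digit number, as in TC-CLI-001 / TC-SEC-007.
TC_ID_PATTERN = re.compile(r"^TC-[A-Z]+-\d{3}$")

def is_valid_test_id(test_id: str) -> bool:
    """True if the ID matches the TC-[CATEGORY]-[NUMBER] convention."""
    return bool(TC_ID_PATTERN.match(test_id))
```

Running a check like this in CI or before a test cycle catches malformed IDs before they leak into tracking files.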

### 2. Priority Classification

**P0 (Blocker)** - Must fix before release
- Prevents core functionality
- Security vulnerabilities (SQL injection, XSS)
- Data corruption/loss
- System crashes

**P1 (Critical)** - Fix within 2 weeks
- Major feature broken with workaround
- Significant UX degradation
- Performance issues

**P2 (High)** - Fix within 4 weeks
- Minor feature issues
- Edge cases
- Non-critical bugs

**P3 (Medium)** - Fix when possible
- Cosmetic issues
- Rare edge cases
- Nice-to-have improvements

**P4 (Low)** - Optional
- Documentation typos
- Minor UI alignment

### 3. Test Types

**Unit Tests**:
- Test individual functions/methods
- No external dependencies
- Fast execution (<100ms)
- Coverage: ≥80% statements, ≥75% branches

**Integration Tests**:
- Test component interactions
- Real dependencies (database, APIs)
- Moderate execution time
- Coverage: Critical user journeys

**E2E Tests**:
- Test complete user workflows
- Real browser/environment
- Slow execution
- Coverage: Happy paths + critical failures

**Security Tests**:
- OWASP Top 10 coverage
- Input validation
- Authentication/authorization
- Data protection

---

## Coverage Thresholds

### Code Coverage Targets
- ✅ **Statements**: ≥80%
- ✅ **Branches**: ≥75%
- ✅ **Functions**: ≥85%
- ✅ **Lines**: ≥80%

### Test Distribution (Recommended)
- Unit Tests: 70%
- Integration Tests: 20%
- E2E Tests: 10%
---

## Test Isolation

### Mandatory Principles

1. **No Shared State**
```
❌ BAD: Tests share global variables
✅ GOOD: Each test has independent data
```

2. **Fresh Data Per Test**
```typescript
beforeEach(() => {
  database.seed(freshData);
});
```

3. **Cleanup After Tests**
```typescript
afterEach(() => {
  database.cleanup();
  mockServer.reset();
});
```
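
The same three principles apply in any framework. A Python `unittest` sketch with a fresh in-memory SQLite database per test — the table schema is illustrative only:

```python
import sqlite3
import unittest

class TestUserTable(unittest.TestCase):
    def setUp(self):
        # Fresh data per test: a brand-new in-memory database every time
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

    def tearDown(self):
        # Cleanup after tests: release the connection
        self.conn.close()

    def test_insert_is_isolated(self):
        self.conn.execute("INSERT INTO users VALUES (1, 'alice')")
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)  # no rows leak in from other tests
```

Because each test builds and tears down its own database, tests can run in any order, or in parallel, without interfering with each other.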

---

## Fail-Fast Validation

### Critical Security Pattern

```typescript
// ❌ BAD: Fallback to mock data
if (error) {
  return getMockData(); // WRONG - hides issues
}

// ✅ GOOD: Fail immediately
if (error || !data) {
  throw new Error(error?.message || 'Operation failed');
}
```

### Input Validation
```typescript
// Validate BEFORE any operations
function processSkillName(input: string) {
  // Security checks first
  if (input.includes('..')) {
    throw new ValidationError('Path traversal detected');
  }

  if (input.startsWith('/')) {
    throw new ValidationError('Absolute paths not allowed');
  }

  // Then business logic
  return performOperation(input);
}
```
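
The same validate-before-work ordering translates directly to other languages. A minimal Python sketch of the check:

```python
def validate_skill_name(name: str) -> str:
    """Fail fast: reject suspicious input before any filesystem work happens."""
    if ".." in name:
        raise ValueError("Path traversal detected")
    if name.startswith("/"):
        raise ValueError("Absolute paths not allowed")
    return name  # only now is it safe to hand off to business logic
```

Raising immediately, rather than sanitizing or substituting a default, makes the failure visible at the point of attack instead of hiding it downstream.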

---

## Test Documentation Standards

### Test Case Template
```markdown
### TC-XXX-YYY: Descriptive Title

**Priority**: P0/P1/P2/P3/P4
**Type**: Unit/Integration/E2E/Security
**Estimated Time**: X minutes

**Prerequisites**:
- Specific, verifiable conditions

**Test Steps**:
1. Exact command or action
2. Specific input data
3. Verification step

**Expected Result**:
✅ Measurable outcome
✅ Specific verification criteria

**Pass/Fail Criteria**:
- ✅ PASS: All verification steps succeed
- ❌ FAIL: Any error or deviation

**Potential Bugs**:
- Known edge cases
- Security concerns
```
---

## Quality Gates

### Release Criteria

| Gate | Threshold | Blocker |
|------|-----------|---------|
| Test Execution | 100% | Yes |
| Pass Rate | ≥80% | Yes |
| P0 Bugs | 0 | Yes |
| P1 Bugs | ≤5 | Yes |
| Code Coverage | ≥80% | Yes |
| Security | 90% OWASP | Yes |
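
These gates can be checked mechanically at release time. A minimal sketch — the metric names and dictionary structure are illustrative, not a shipped script:

```python
# Release-gate thresholds from the table above; every gate is a blocker.
GATES = {
    "execution_rate": lambda v: v >= 100,  # 100% of tests executed
    "pass_rate":      lambda v: v >= 80,   # ≥80% pass rate
    "p0_bugs":        lambda v: v == 0,    # zero P0 bugs
    "p1_bugs":        lambda v: v <= 5,    # at most five P1 bugs
    "code_coverage":  lambda v: v >= 80,   # ≥80% coverage
    "owasp_coverage": lambda v: v >= 90,   # ≥90% OWASP coverage
}

def evaluate_gates(metrics: dict) -> list:
    """Return the names of all failing gates (empty list = releasable)."""
    return [name for name, ok in GATES.items() if not ok(metrics[name])]
```

Returning the full list of failing gates, rather than a single boolean, gives the Friday report something concrete to act on.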

### Daily Checkpoints

**Morning Standup**:
- Yesterday's progress
- Today's plan
- Blockers

**End-of-Day**:
- Tests executed
- Pass rate
- Bugs filed
- Tomorrow's plan

### Weekly Review

**Friday Report**:
- Week summary
- Baseline comparison
- Quality gates status
- Next week plan

---

## Best Practices Summary

### DO:
- ✅ Write reproducible test cases
- ✅ Update tracking after EACH test
- ✅ File bugs immediately on failure
- ✅ Follow AAA pattern
- ✅ Maintain test isolation
- ✅ Document environment details

### DON'T:
- ❌ Skip test documentation
- ❌ Batch CSV updates
- ❌ Ignore security tests
- ❌ Use production data in tests
- ❌ Skip cleanup
- ❌ Hard-code test data

---

**References**:
- [Google Testing Blog](https://testing.googleblog.com/)
- [Google SWE Book - Testing](https://abseil.io/resources/swe-book)
- [Test Pyramid Concept](https://martinfowler.com/bliki/TestPyramid.html)
qa-expert/references/ground_truth_principle.md (new file, 353 lines)
@@ -0,0 +1,353 @@
# Ground Truth Principle - Preventing Documentation Sync Issues

**Purpose**: Prevent test suite integrity problems caused by documentation/tracking file mismatches.

**Lesson Learned**: The CCPM project discovered a 3.2% consistency rate between CSV and documentation (only 3 out of 93 test IDs matched correctly).

---

## The Problem

### Common Anti-Pattern
Projects often have multiple sources of truth:
- Test case documentation (e.g., `02-CLI-TEST-CASES.md`)
- Execution tracking CSV (e.g., `TEST-EXECUTION-TRACKING.csv`)
- Bug tracking spreadsheet
- Test automation code

**What goes wrong**:
1. Documentation updated → CSV not updated
2. CSV auto-generated from old test list → docs finalized separately
3. Tests executed based on CSV → wrong test steps followed
4. Bug reports reference CSV IDs → cannot trace back to correct test

### Real Example from CCPM

**CSV TC-CLI-012**: "Install Non-Existent Skill"
- Steps: Run `ccpm install this-skill-does-not-exist-12345`
- Expected: Clear error message

**Doc TC-CLI-012**: "Install Skill Already Installed"
- Steps: Run `ccpm install cloudflare-troubleshooting` (already installed)
- Expected: Warning message with --force hint

**Result**: Completely different tests! A QA engineer might execute the wrong test and report incorrect results.
---

## The Ground Truth Principle

### Rule #1: Single Source of Truth

**Declare one file as authoritative** for test specifications:

```
✅ CORRECT:
Ground Truth: 02-CLI-TEST-CASES.md (detailed test specifications)
Supporting: TEST-EXECUTION-TRACKING.csv (execution status only)

❌ WRONG:
CSV and docs both contain test steps (divergence inevitable)
```

### Rule #2: Clear Role Separation

| File Type | Purpose | Contains | Updated When |
|-----------|---------|----------|--------------|
| **Test Case Docs** | Specification | Prerequisites, Steps, Expected Results, Pass/Fail Criteria | When test design changes |
| **Tracking CSV** | Execution tracking | Status, Result, Bug ID, Execution Date, Notes | After each test execution |
| **Bug Reports** | Failure documentation | Repro steps, Environment, Severity, Resolution | When test fails |

### Rule #3: Explicit References

Always specify which file to use in instructions:

**Good**:
```markdown
Execute test case TC-CLI-042:
1. Read full test specification from 02-CLI-TEST-CASES.md (pages 15-16)
2. Follow steps exactly as documented
3. Update TEST-EXECUTION-TRACKING.csv row TC-CLI-042 with result
```

**Bad**:
```markdown
Execute test case TC-CLI-042 (no reference to source document)
```
---

## Prevention Strategies

### Strategy 1: Automated ID Validation

**Script**: `validate_test_ids.py` (generate this in your project)

```python
#!/usr/bin/env python3
"""Validate test IDs between documentation and CSV."""

import csv
import re
import sys


def extract_doc_ids(doc_path):
    """Extract all TC-XXX-YYY IDs from markdown documentation."""
    with open(doc_path, 'r') as f:
        content = f.read()
    pattern = r'TC-[A-Z]+-\d{3}'
    return set(re.findall(pattern, content))


def extract_csv_ids(csv_path):
    """Extract all Test Case IDs from the CSV."""
    with open(csv_path, 'r') as f:
        reader = csv.DictReader(f)
        return set(row['Test Case ID'] for row in reader if row['Test Case ID'])


def validate_sync(doc_path, csv_path):
    """Check consistency between doc and CSV."""
    doc_ids = extract_doc_ids(doc_path)
    csv_ids = extract_csv_ids(csv_path)

    matching = doc_ids & csv_ids
    csv_only = csv_ids - doc_ids
    doc_only = doc_ids - csv_ids

    consistency_rate = len(matching) / len(csv_ids) * 100 if csv_ids else 0

    print(f"\n{'=' * 60}")
    print("Test ID Validation Report")
    print(f"{'=' * 60}\n")
    print(f"✅ Matching IDs: {len(matching)}")
    print(f"⚠️ CSV-only IDs: {len(csv_only)}")
    print(f"⚠️ Doc-only IDs: {len(doc_only)}")
    print(f"\n📊 Consistency Rate: {consistency_rate:.1f}%\n")

    if consistency_rate < 100:
        print("❌ SYNC ISSUE DETECTED!\n")
        if csv_only:
            print(f"CSV IDs not in documentation: {sorted(csv_only)[:5]}")
        if doc_only:
            print(f"Doc IDs not in CSV: {sorted(doc_only)[:5]}")
    else:
        print("✅ Perfect sync!\n")

    return consistency_rate >= 95


if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("Usage: python validate_test_ids.py <doc-path> <csv-path>")
        sys.exit(1)

    doc_path = sys.argv[1]
    csv_path = sys.argv[2]

    valid = validate_sync(doc_path, csv_path)
    sys.exit(0 if valid else 1)
```

**Usage**:
```bash
python scripts/validate_test_ids.py \
  tests/docs/02-CLI-TEST-CASES.md \
  tests/docs/templates/TEST-EXECUTION-TRACKING.csv

# Output:
# ============================================================
# Test ID Validation Report
# ============================================================
#
# ✅ Matching IDs: 3
# ⚠️ CSV-only IDs: 90
# ⚠️ Doc-only IDs: 0
#
# 📊 Consistency Rate: 3.2%
#
# ❌ SYNC ISSUE DETECTED!
```

### Strategy 2: ID Mapping Document

When a mismatch is detected, create a bridge document:

**File**: `tests/docs/TEST-ID-MAPPING.md`

**Contents**:
```markdown
# Test ID Mapping - CSV vs. Documentation

## Ground Truth
**Official Source**: 02-CLI-TEST-CASES.md
**Tracking File**: TEST-EXECUTION-TRACKING.csv (execution tracking only)

## ID Mapping Table
| CSV ID | Doc ID | Test Name | Match Status |
|--------|--------|-----------|--------------|
| TC-CLI-001 | TC-CLI-001 | Install Skill by Name | ✅ Match |
| TC-CLI-012 | TC-CLI-008 | Install Non-Existent Skill | ❌ Mismatch |
```

### Strategy 3: CSV Usage Guide

Create explicit instructions for QA engineers:

**File**: `tests/docs/templates/CSV-USAGE-GUIDE.md`

**Contents**:
```markdown
# TEST-EXECUTION-TRACKING.csv Usage Guide

## ✅ Correct Usage

1. **ALWAYS use test case documentation** as the authoritative source for:
   - Test steps
   - Expected results
   - Prerequisites

2. **Use this CSV ONLY for**:
   - Tracking execution status
   - Recording results (PASSED/FAILED)
   - Linking to bug reports

## ❌ Don't Trust CSV for Test Specifications
```

---

## Recovery Workflow

When you discover a sync issue:

### Step 1: Assess Severity
```bash
# Run the ID validation script
python scripts/validate_test_ids.py <doc> <csv>

# Consistency Rate:
# 100%:   ✅ No action needed
# 90-99%: ⚠️ Minor fixes needed
# 50-89%: 🔴 Major sync required
# <50%:   🚨 CRITICAL - regenerate CSV
```

### Step 2: Create Bridge Documents
If consistency < 100%, create:
1. `TEST-ID-MAPPING.md` (maps CSV → Doc IDs)
2. `CSV-USAGE-GUIDE.md` (instructs QA engineers)

### Step 3: Notify Team
```markdown
Subject: [URGENT] Test Suite Sync Issue - Read Before Testing

Team,

We discovered a test ID mismatch between CSV and documentation:
- Consistency Rate: 3.2% (only 3 out of 93 tests match)
- Impact: Tests executed based on CSV may use wrong steps
- Action Required: Read CSV-USAGE-GUIDE.md before continuing

Ground Truth: 02-CLI-TEST-CASES.md (always trust this)
Tracking Only: TEST-EXECUTION-TRACKING.csv

Bridge: TEST-ID-MAPPING.md (maps IDs)
```

### Step 4: Re-validate Executed Tests
```markdown
Tests executed before the fix may need re-verification:
- TC-CLI-001~003: ✅ Correct (IDs matched)
- TC-CLI-029: ⚠️ Verify against Doc TC-CLI-029
- TC-CLI-037: ⚠️ Verify against Doc TC-CLI-037
```

### Step 5: Long-Term Fix
**Option A**: Maintain separation (recommended during active testing)
- CSV = execution tracking only
- Doc = test specifications
- Mapping doc bridges the gap

**Option B**: Regenerate CSV from docs (post-testing)
- Risk: Loss of execution history
- Benefit: Perfect sync
- Timeline: After current test cycle
---

## Best Practices

### DO ✅

1. **Declare ground truth upfront** in the project README
2. **Separate concerns**: Specs vs. tracking vs. bugs
3. **Validate IDs regularly** (weekly or before major milestones)
4. **Document deviations** in the mapping file
5. **Train the QA team** on the ground truth principle

### DON'T ❌

1. ❌ Duplicate test steps in multiple files
2. ❌ Auto-generate tracking files without validation
3. ❌ Execute tests based on the CSV alone
4. ❌ Assume "it's just tracking" - IDs matter!
5. ❌ Ignore small mismatches (3% drifts to 50% quickly)

---

## Checklist for QA Project Setup

When using `init_qa_project.py`, ensure:

- [ ] Ground truth declared in README
- [ ] CSV contains ID + tracking fields only (no detailed steps)
- [ ] Test case docs are complete before CSV generation
- [ ] ID validation script added to project
- [ ] CSV usage guide included in templates/
- [ ] QA engineers trained on which file to trust
---
|
||||
|
||||
## Integration with qa-expert Skill
|
||||
|
||||
When initializing a project with `qa-expert`:
|
||||
|
||||
```bash
|
||||
python scripts/init_qa_project.py my-app ./
|
||||
|
||||
# This creates:
|
||||
tests/docs/
|
||||
├── README.md (declares ground truth)
|
||||
├── 02-CLI-TEST-CASES.md (authoritative specs)
|
||||
├── TEST-ID-MAPPING.md (if needed)
|
||||
└── templates/
|
||||
├── TEST-EXECUTION-TRACKING.csv (tracking only)
|
||||
├── CSV-USAGE-GUIDE.md (usage instructions)
|
||||
└── validate_test_ids.py (validation script)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Success Criteria
|
||||
|
||||
**Your test suite has good integrity when**:
|
||||
- ✅ ID consistency rate ≥ 95%
|
||||
- ✅ QA engineers know which file to trust
|
||||
- ✅ Tracking CSV contains status only (no steps)
|
||||
- ✅ Validation script runs weekly
|
||||
- ✅ Team trained on ground truth principle
**Red flags**:

- 🚩 Multiple files contain test steps
- 🚩 CSV test names differ from docs
- 🚩 QA engineers "prefer" CSV over docs
- 🚩 No one knows which file is authoritative
- 🚩 Test IDs diverge over time

---

**Document Version**: 1.0
**Created**: 2025-11-10
**Based On**: CCPM test suite integrity incident (3.2% consistency rate)
**Priority**: 🔴 P0 (Critical for test suite quality)

523
qa-expert/references/llm_prompts_library.md
Normal file
@@ -0,0 +1,523 @@
# LLM QA Testing Prompts Library

**Purpose**: Ready-to-use prompts for directing LLM assistants to execute specific QA tasks.

**Last Updated**: 2025-11-09

---

## Quick Navigation

1. [Day 1 Onboarding](#day-1-onboarding)
2. [Weekly Execution](#weekly-execution)
3. [Daily Progress](#daily-progress)
4. [Bug Investigation](#bug-investigation)
5. [Weekly Reporting](#weekly-reporting)
6. [Emergency Escalation](#emergency-escalation)

---

## Day 1 Onboarding

### Initial Setup

```
You are a senior QA engineer with 20+ years of experience at Google. Help me set up the QA testing environment.

CRITICAL: Follow the Day 1 onboarding checklist exactly as documented.

Read and execute: tests/docs/templates/DAY-1-ONBOARDING-CHECKLIST.md

Start with Hour 1 (Environment Setup). Complete each hour sequentially. Do NOT skip any steps. After completing each hour, confirm what you did and ask if you should continue.

Report any blockers immediately.
```
### Verify Setup Completion

```
Verify that my Day 1 QA onboarding is complete by checking:

1. All database containers are healthy (docker ps)
2. Database has seeded data (SELECT COUNT(*) FROM <table>)
3. Test users created (regular, admin, moderator)
4. CLI installed (if applicable) - global and/or local
5. Dev server running at http://localhost:8080
6. First test case executed successfully

Read tests/docs/templates/DAY-1-ONBOARDING-CHECKLIST.md section "Day 1 Completion Checklist" and verify ALL items are checked.

If anything is missing, tell me what needs to be fixed and how to fix it.
```
---

## Weekly Execution

### Week 1: Start Testing

```
You are a senior QA engineer executing Week 1 of the QA test plan.

CRITICAL: Follow the test plan exactly as documented.

Read: tests/docs/02-CLI-TEST-CASES.md (or appropriate test category document)

Your task today (Monday, Week 1):
- Execute test cases TC-CLI-001 through TC-CLI-015 (15 tests)
- Update tests/docs/templates/TEST-EXECUTION-TRACKING.csv after EACH test
- File bugs in tests/docs/templates/BUG-TRACKING-TEMPLATE.csv for any failures
- Expected time: 5 hours

Execute tests in order. For each test:
1. Read the full test case specification
2. Execute all test steps exactly as documented
3. Record result in TEST-EXECUTION-TRACKING.csv immediately
4. If test fails, create bug report in BUG-TRACKING-TEMPLATE.csv before moving to next test

After completing all 15 tests, give me a summary:
- How many passed/failed/blocked
- Bug IDs filed (if any)
- Any blockers for tomorrow
```

### Daily Continuation

```
Continue Week [N] testing.

Read: tests/docs/[CATEGORY]-TEST-CASES.md

Today's test cases: TC-[CATEGORY]-[START] through TC-[CATEGORY]-[END] ([N] tests)

Follow the same process as yesterday:
1. Execute each test exactly as documented
2. Update TEST-EXECUTION-TRACKING.csv immediately after each test
3. File bugs for any failures
4. Give me end-of-day summary

Start now.
```

### Friday - Week Completion

```
Complete Week [N] testing and submit weekly progress report.

Tasks:
1. Execute remaining tests: TC-[CATEGORY]-[START] through TC-[CATEGORY]-[END] ([N] tests)
2. Update TEST-EXECUTION-TRACKING.csv for all completed tests
3. Generate weekly progress report using tests/docs/templates/WEEKLY-PROGRESS-REPORT.md

For the weekly report:
- Calculate pass rate for Week [N] (passed / total executed)
- Summarize all bugs filed this week (by severity: P0/P1/P2/P3)
- Compare against baseline: tests/docs/BASELINE-METRICS.md
- Assess quality gates: Are we on track for 80% pass rate?
- Plan for Week [N+1]

Submit the completed WEEKLY-PROGRESS-REPORT.md.
```
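The pass-rate calculation the Friday prompt asks for (passed / total executed) is simple but easy to get wrong when blocked or unexecuted rows are counted in the denominator. A minimal sketch, assuming tracking rows carry a `Result` column with `PASS`/`FAIL` values (column and value names are assumptions):

```python
def weekly_stats(rows):
    """Compute pass rate over executed tests only; blocked/pending rows are excluded."""
    executed = [r for r in rows if r.get("Result") in {"PASS", "FAIL"}]
    passed = sum(1 for r in executed if r["Result"] == "PASS")
    pass_rate = 100.0 * passed / len(executed) if executed else 0.0
    return {"executed": len(executed), "passed": passed, "pass_rate": round(pass_rate, 1)}
```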
---

## Daily Progress

### Morning Standup

```
Daily standup for QA testing.

Current status:
- Week: [1-5]
- Day: [Monday-Friday]
- Yesterday's progress: [X] tests executed, [Y] passed, [Z] failed
- Blockers: [None / List blockers]

Today's plan:
- Test cases: TC-[XXX]-[YYY] to TC-[XXX]-[ZZZ] ([N] tests)
- Expected time: [X] hours
- Prerequisites: [Any setup needed]

Read today's test cases from the appropriate document and confirm you're ready to start.
```

### Mid-Day Progress Check

```
Give me a mid-day progress update.

How many test cases have you completed so far today?
How many passed vs failed?
Any bugs filed? (provide Bug IDs)
Any blockers preventing you from continuing?
Are you on track to finish today's test cases?

Update: tests/docs/templates/TEST-EXECUTION-TRACKING.csv with latest results before answering.
```

### End-of-Day Summary

```
Provide end-of-day summary for QA testing.

Today's results:
- Test cases executed: [X] / [Y] planned
- Pass rate: [Z]%
- Bugs filed: [List Bug IDs with severity]
- Test execution tracking updated: Yes/No
- Bug reports filed: Yes/No

Tomorrow's plan:
- Test cases: TC-[XXX]-[YYY] to TC-[XXX]-[ZZZ]
- Prerequisites: [Any setup needed]
- Estimated time: [X] hours

Blockers:
- [None / List blockers]

If you didn't finish today's test cases, explain why and how you'll catch up.
```

---

## Bug Investigation

### Investigate Test Failure

```
A test case failed. I need you to investigate the root cause.

Test Case: TC-[CATEGORY]-[NUMBER]
Expected Result: [Copy from test case spec]
Actual Result: [What happened]

Your investigation:
1. Re-run the test case exactly as documented
2. Capture detailed logs, screenshots, network traces
3. Check if this is a test environment issue vs real bug
4. Determine severity: P0/P1/P2/P3/P4
5. Search for similar issues in existing bug reports

If confirmed as a bug:
- Create bug report in BUG-TRACKING-TEMPLATE.csv
- Assign unique Bug ID (BUG-XXX)
- Complete ALL fields (Steps to Reproduce, Environment, Screenshots, etc.)
- Update TEST-EXECUTION-TRACKING.csv with Bug ID reference

If NOT a bug (e.g., environment issue):
- Fix the environment issue
- Re-run the test
- Update TEST-EXECUTION-TRACKING.csv with PASS result

Report your findings.
```
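Step 4 of the investigation (determining severity) can be made more mechanical with a small triage helper. This is a rough sketch of one plausible P0-P4 decision rule; the exact thresholds here are assumptions, not the project's official severity policy, and P3 (between a minor-with-workaround P2 and a cosmetic P4) still needs human judgment:

```python
def classify_severity(blocks_release: bool, data_loss: bool, security_risk: bool,
                      has_workaround: bool, cosmetic_only: bool = False) -> str:
    """Rough first-pass P0-P4 triage (thresholds are illustrative assumptions)."""
    # Anything that blocks release, loses data, or exposes a security hole is P0.
    if blocks_release or data_loss or security_risk:
        return "P0"
    # Purely visual issues with no functional impact land at the bottom.
    if cosmetic_only:
        return "P4"
    # Functional bugs: workaround available → P2, otherwise P1.
    return "P2" if has_workaround else "P1"
```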
### Reproduce Bug from Report

```
I need you to reproduce a bug to verify it's still an issue.

Bug ID: BUG-[XXX]
Read: tests/docs/templates/BUG-TRACKING-TEMPLATE.csv (find Bug ID BUG-[XXX])

Steps:
1. Read the full bug report (Steps to Reproduce, Environment, etc.)
2. Set up the exact same environment
3. Execute the steps to reproduce exactly as documented
4. Verify you get the same Actual Result

If bug reproduces:
- Confirm "Yes, bug still exists"
- Add verification note to bug report

If bug does NOT reproduce:
- Explain what's different (environment, data, timing, etc.)
- Mark bug as "Cannot Reproduce" or "Fixed"

Report your findings.
```

### Root Cause Analysis

```
Perform root cause analysis for a critical bug.

Bug ID: BUG-[XXX] (P0 or P1 severity)

Your analysis:
1. Understand the symptom (what the user sees)
2. Trace the data flow (where does the failure occur?)
3. Identify the root cause (what line of code / configuration is wrong?)
4. Assess impact (how many users affected? data loss? security risk?)
5. Propose fix (what needs to change to resolve this?)
6. Estimate fix complexity (hours/days to implement)

Read the relevant codebase files.
Check database state.
Review logs.

Document your findings in the bug report under "Root Cause Analysis" section.
```
---

## Weekly Reporting

### Generate Weekly Progress Report

```
Generate the weekly progress report for Week [1-5].

Read: tests/docs/templates/WEEKLY-PROGRESS-REPORT.md (use this template)

Data sources:
- tests/docs/templates/TEST-EXECUTION-TRACKING.csv (for test execution stats)
- tests/docs/templates/BUG-TRACKING-TEMPLATE.csv (for bug stats)
- tests/docs/BASELINE-METRICS.md (for comparison)

Fill in ALL sections:
1. Executive Summary (tests executed, pass rate, bugs found, blockers, on track status)
2. Test Execution Progress (table by category)
3. Bugs Filed This Week (P0/P1 highlights + summary table)
4. Test Execution Highlights (what went well, challenges, findings)
5. Quality Metrics (pass rate trend, bug discovery rate, test velocity)
6. Environment & Infrastructure (any issues?)
7. Next Week Plan (objectives, deliverables, risks)
8. Resource Needs (blockers, questions)
9. Release Readiness Assessment (quality gates status)

Calculate all metrics from actual data. Do NOT make up numbers.

Save the report as: tests/docs/reports/WEEK-[N]-PROGRESS-REPORT-2025-11-[DD].md
```

### Compare Against Baseline

```
Compare current QA progress against the pre-QA baseline.

Read:
- tests/docs/BASELINE-METRICS.md (pre-QA state)
- tests/docs/templates/TEST-EXECUTION-TRACKING.csv (current state)
- tests/docs/templates/BUG-TRACKING-TEMPLATE.csv (bugs found)

Analysis:
1. Test execution: Baseline had [X] unit tests passing. How many total tests do we have now?
2. Pass rate: What's our QA test pass rate vs baseline pass rate?
3. Bugs discovered: Baseline started with [X] P0 bugs. How many P0/P1/P2/P3 bugs have we found?
4. Quality gates: Are we on track to meet 80% pass rate, zero P0 bugs policy?
5. Security: Have we maintained OWASP coverage?

Provide a comparison table showing:
- Metric | Baseline (YYYY-MM-DD) | Current (YYYY-MM-DD) | Delta | Status

Are we improving or regressing? What actions are needed?
```
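The Metric / Baseline / Current / Delta rows requested above can be produced mechanically once both snapshots are loaded as dicts. A minimal sketch (metric names are placeholders; note that whether a positive delta is good depends on the metric, e.g. bugs down is good, pass rate up is good, so status interpretation is left to the reader):

```python
def compare_metrics(baseline: dict, current: dict) -> list:
    """Return (metric, baseline_value, current_value, delta) rows for shared numeric metrics."""
    rows = []
    for name in sorted(set(baseline) & set(current)):
        rows.append((name, baseline[name], current[name], current[name] - baseline[name]))
    return rows
```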
---

## Emergency Escalation

### Escalate Critical Bug (P0)

```
URGENT: A P0 (Blocker) bug has been discovered.

Test Case: TC-[CATEGORY]-[NUMBER]
Bug ID: BUG-[XXX]
Severity: P0 (Blocks release, requires 24-hour fix)

Issue: [Brief description]

Immediate actions:
1. Stop all other testing immediately
2. Create detailed bug report in BUG-TRACKING-TEMPLATE.csv
3. Include:
   - Detailed steps to reproduce
   - Screenshots/videos of the issue
   - Full error logs
   - Environment details (OS, browser, Node version, etc.)
   - Impact assessment (how many users affected?)
   - Proposed workaround (if any)
4. Mark test case as "Blocked" in TEST-EXECUTION-TRACKING.csv
5. Notify:
   - QA Lead
   - Engineering Lead
   - Product Manager

Draft escalation email:

Subject: [P0 BLOCKER] [Brief description]

Body:
- What: [Issue description]
- When: [When discovered]
- Impact: [Severity and user impact]
- Test Case: TC-[XXX]-[YYY]
- Bug ID: BUG-[XXX]
- Next Steps: [What needs to happen to fix]
- ETA: [Expected fix time - must be within 24 hours]

Generate the bug report and escalation email now.
```

### Resolve Blocker

```
A blocker has been resolved. I need you to verify the fix.

Bug ID: BUG-[XXX] (previously P0 blocker)
Status: Engineering reports "Fixed"
Test Case: TC-[CATEGORY]-[NUMBER] (originally failed)

Verification steps:
1. Read the bug report in BUG-TRACKING-TEMPLATE.csv
2. Understand what was fixed (check git commit if available)
3. Re-run the original test case exactly as documented
4. Verify the Expected Result now matches Actual Result

If fix is verified:
- Update BUG-TRACKING-TEMPLATE.csv:
  - Status: "Closed"
  - Resolution: "Fixed - Verified"
  - Resolved Date: [Today's date]
  - Verified By: [Your name/ID]
  - Verification Date: [Today's date]
- Update TEST-EXECUTION-TRACKING.csv:
  - Result: "PASS"
  - Notes: "Re-tested after BUG-[XXX] fix, now passing"

If fix is NOT verified (bug still exists):
- Update bug status: "Reopened"
- Add comment: "Fix verification failed - bug still reproduces"
- Re-escalate to Engineering Lead

Report verification results.
```

### Environment Issues

```
The test environment is broken. I need you to diagnose and fix it.

Symptoms: [Describe what's not working]

Diagnostic steps:
1. Check database containers: docker ps | grep <db-name>
   - Are all containers running and healthy?
2. Check database connection: docker exec <container-name> psql -U postgres -d postgres -c "SELECT 1;"
   - Can you connect to the database?
3. Check data: docker exec <container-name> psql -U postgres -d postgres -c "SELECT COUNT(*) FROM <table>;"
   - Does the database still have seeded data?
4. Check dev server: curl http://localhost:8080
   - Is the dev server responding?
5. Check CLI (if applicable): ccpm --version
   - Is the CLI installed and working?

Refer to: tests/docs/templates/DAY-1-ONBOARDING-CHECKLIST.md section "Troubleshooting Common Day 1 Issues"

If you can fix the issue:
- Execute the fix
- Document what was broken and how you fixed it
- Verify the environment is fully operational
- Resume testing

If you cannot fix the issue:
- Document all diagnostic findings
- Escalate to Environment Engineer
- Mark affected tests as "Blocked" in TEST-EXECUTION-TRACKING.csv

Start diagnostics now.
```
---

## Best Practices for Using These Prompts

### 1. Always Provide Context

Include relevant context before the prompt:
- Current week/day
- Previous day's results
- Known blockers
- Environment status

### 2. Be Specific

Replace all template variables with actual values:
- `[CATEGORY]`: CLI / WEB / API / SEC
- `[NUMBER]`: Test case number (e.g., 001, 015)
- `[N]`: Number of tests
- `[XXX]`: Bug ID number
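Filling these variables by hand is error-prone; a tiny helper can substitute them and refuse to emit a prompt that still contains an unfilled placeholder. A minimal sketch (the bracket-token convention matches this library; the function itself is hypothetical):

```python
import re

def fill_prompt(template: str, values: dict) -> str:
    """Substitute [PLACEHOLDER] tokens and fail loudly if any remain unfilled."""
    out = template
    for key, val in values.items():
        out = out.replace(f"[{key}]", str(val))
    leftover = re.findall(r"\[[A-Z]+\]", out)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return out
```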
### 3. Reference Documentation

Always point the LLM to specific documentation files:
- Test case specs: `tests/docs/02-CLI-TEST-CASES.md`
- Test strategy: `tests/docs/01-TEST-STRATEGY.md`
- Tracking: `tests/docs/templates/TEST-EXECUTION-TRACKING.csv`

### 4. Enforce Tracking

Always require the LLM to update tracking templates **immediately** after each test. Don't allow batch updates at end of day.

### 5. Verify Results

Ask the LLM to show you the updated CSV/Markdown files after making changes. Verify the data is correct.

### 6. Escalate Blockers

If the LLM reports a P0 bug or blocker, stop all other work and focus on that issue first.

---

## Common Mistakes to Avoid

### ❌ Mistake 1: Vague Prompts

**Bad**: "Do some QA testing for me"

**Good**: "Execute CLI test cases TC-CLI-001 through TC-CLI-015 following tests/docs/02-CLI-TEST-CASES.md. Update TEST-EXECUTION-TRACKING.csv after each test."

### ❌ Mistake 2: Skipping Tracking

**Bad**: "Run all the CLI tests and tell me the results"

**Good**: "Execute TC-CLI-001. Update TEST-EXECUTION-TRACKING.csv with result. Then execute TC-CLI-002. Update CSV. Repeat for all tests."

### ❌ Mistake 3: Not Specifying Documentation

**Bad**: "Test the install command"

**Good**: "Execute test case TC-CLI-001 from tests/docs/02-CLI-TEST-CASES.md. Follow the exact steps documented."

### ❌ Mistake 4: Allowing Deviations

**Bad**: "Test the CLI however you think is best"

**Good**: "Execute ONLY the test cases documented in tests/docs/02-CLI-TEST-CASES.md. Do NOT add your own test cases. Do NOT skip test cases."

### ❌ Mistake 5: Batching Updates

**Bad**: "Run 15 tests and then update the tracking CSV"

**Good**: "Execute TC-CLI-001. Update TEST-EXECUTION-TRACKING.csv immediately. Then execute TC-CLI-002. Update CSV immediately. Repeat."

---

## Troubleshooting

### LLM Not Following Test Plan

```
CRITICAL: You are deviating from the documented test plan.

STOP all current work.

Re-read: tests/docs/README.md

You MUST:
1. Follow the exact test case specifications
2. Execute test steps in the documented order
3. Update TEST-EXECUTION-TRACKING.csv after EACH test (not in batches)
4. File bugs in BUG-TRACKING-TEMPLATE.csv for any failures
5. NOT add your own test cases
6. NOT skip test cases
7. NOT modify test case priorities without approval

Acknowledge that you understand these requirements and will follow the documented test plan exactly.

Then resume testing from where you left off.
```

### LLM Providing Incorrect Results

```
The test results you reported do not match my manual verification.

Test Case: TC-[CATEGORY]-[NUMBER]
Your Result: [PASS / FAIL]
My Result: [PASS / FAIL]

Re-execute this test case step-by-step:
1. Read the full test case spec from tests/docs/[DOCUMENT].md
2. Show me each test step as you execute it
3. Show me the actual output/result after each step
4. Compare the actual result to the expected result
5. Determine PASS/FAIL based on documented criteria (not your assumptions)

Be precise. Use exact command outputs, exact HTTP responses, exact UI text. Do NOT paraphrase or summarize.
```

---

**Document Version**: 1.0
**Last Updated**: 2025-11-09
**Feedback**: If you create new useful prompts, add them to this document.
423
qa-expert/references/master_qa_prompt.md
Normal file
@@ -0,0 +1,423 @@
# Master QA Prompt - One Command for Autonomous Execution

**Purpose**: Single copy-paste prompt that directs the LLM to execute the entire QA test plan autonomously.

**Innovation**: 100x speedup vs manual testing + zero human error in tracking + auto-resume capability.

---

## ⭐ The Master Prompt

Copy this prompt and paste it into your LLM conversation. The LLM will handle everything automatically.

```
You are a senior QA engineer with 20+ years of experience at Google. Execute the QA test plan.

CRITICAL INSTRUCTIONS:
1. Read tests/docs/QA-HANDOVER-INSTRUCTIONS.md - Master handover guide
2. Read tests/docs/BASELINE-METRICS.md - Understand pre-QA baseline
3. Read tests/docs/templates/TEST-EXECUTION-TRACKING.csv - Check current progress
4. Determine current state:
   - If no tests executed yet → Start Day 1 onboarding (tests/docs/templates/DAY-1-ONBOARDING-CHECKLIST.md)
   - If Day 1 complete → Determine current week/day from TEST-EXECUTION-TRACKING.csv
   - If mid-week → Continue from last completed test case
5. Execute today's test cases:
   - Week 1: CLI tests (tests/docs/02-CLI-TEST-CASES.md)
   - Week 2: Web tests (tests/docs/03-WEB-TEST-CASES.md)
   - Week 3: API tests (tests/docs/04-API-TEST-CASES.md)
   - Week 4: Security tests (tests/docs/05-SECURITY-TEST-CASES.md)
   - Week 5: Regression tests (re-run all P0 tests)
6. For EACH test case:
   - Read full test case specification
   - Execute all steps exactly as documented
   - Update TEST-EXECUTION-TRACKING.csv immediately with result
   - If test fails → File bug in BUG-TRACKING-TEMPLATE.csv before continuing
7. End of day:
   - Provide summary (tests executed, pass rate, bugs filed, blockers)
   - If Friday → Generate WEEKLY-PROGRESS-REPORT.md
8. Compare progress against BASELINE-METRICS.md weekly

MANDATORY RULES:
- DO NOT skip tests
- DO NOT deviate from documented test cases
- DO NOT batch CSV updates (update after EACH test)
- DO NOT modify test priorities without approval
- STOP immediately if P0 bug discovered (escalate)

Ground truth: tests/docs/ directory
Test strategy: tests/docs/01-TEST-STRATEGY.md
Quality gates: ≥80% pass rate, zero P0 bugs, ≥80% coverage

Start now. Tell me what state you're in and what you're going to do today.
```
---

## How It Works

### **Step 1: Assess Current State** (2 minutes)

```
✅ Reading QA-HANDOVER-INSTRUCTIONS.md...
✅ Reading BASELINE-METRICS.md...
✅ Reading TEST-EXECUTION-TRACKING.csv...

Current State Detected:
- Day 1 onboarding: NOT COMPLETE
- Tests executed: 0 / 342
- Current week: N/A (need to complete Day 1 first)

Decision: Start Day 1 onboarding
```

### **Step 2: Execute Day 1 Onboarding** (if not done yet)

```
Reading: tests/docs/templates/DAY-1-ONBOARDING-CHECKLIST.md

Starting Hour 1: Environment Setup
Task 1.1: Clone repository
  Executing: git clone https://github.com/your-org/project.git
  Status: ✅ Complete

Task 1.2: Install dependencies
  Executing: pnpm install
  Status: ✅ Complete

[... continues through all 5 hours ...]

Day 1 Onboarding: ✅ COMPLETE
Ready to start Week 1 testing tomorrow.
```

### **Step 3: Execute Daily Tests** (if Day 1 done)

```
Current State Detected:
- Day 1 onboarding: ✅ COMPLETE
- Last completed test: TC-CLI-015
- Current week: Week 1 (CLI Testing)
- Current day: Tuesday
- Today's tests: TC-CLI-016 through TC-CLI-030 (15 tests)

Reading: tests/docs/02-CLI-TEST-CASES.md

Executing TC-CLI-016: Install Skill with --project Flag
  Step 1: Create test project directory
    Executing: mkdir /tmp/test-project && cd /tmp/test-project
    Status: ✅ Complete
  Step 2: Run ccpm install --project cli-demo-generator
    Executing: ccpm install --project cli-demo-generator
    Output: ✓ Skill installed to /tmp/test-project/.claude/skills/cli-demo-generator
    Status: ✅ PASS

Updating TEST-EXECUTION-TRACKING.csv:
  Row: TC-CLI-016
  Status: Completed
  Result: PASS
  Execution Date: 2025-11-08
  Executed By: QA Engineer
  Notes: Skill installed to project directory successfully
✅ CSV Updated

Executing TC-CLI-017: Install Skill with Invalid Name
[... continues for all 15 tests ...]

End of Day Summary:
✅ Tests executed: 15 / 15 (100%)
✅ Pass rate: 93% (14 passed, 1 failed)
🐛 Bugs filed: BUG-002 (P2 - Error message unclear for invalid skill name)
📊 TEST-EXECUTION-TRACKING.csv: Updated
📝 BUG-TRACKING-TEMPLATE.csv: Updated

Tomorrow (Wednesday):
- Tests: TC-CLI-031 through TC-CLI-045 (15 tests)
- Estimated time: 5.5 hours
```

### **Step 4: Weekly Reporting** (if Friday)

```
Current State Detected:
- Current day: Friday (end of Week 1)
- Week 1 tests: 93 / 93 executed (100%)
- Pass rate: 91% (85 passed, 8 failed)

Generating WEEKLY-PROGRESS-REPORT.md:
  Reading template: tests/docs/templates/WEEKLY-PROGRESS-REPORT.md

Executive Summary:
- Tests Executed: 93 / 342 (27% complete)
- Pass Rate: 91%
- Bugs Found: 8 total (P0: 0, P1: 2, P2: 4, P3: 2)
- Blockers: None
- On Track: Yes

[... fills in all sections ...]

✅ Report saved: tests/docs/reports/WEEK-1-PROGRESS-REPORT-2025-11-08.md

Comparing against BASELINE-METRICS.md:
  Baseline (2025-11-08):
  - Unit tests: 261 passing (100%)
  - P0 bugs: 0
  - Database: 18 skills, 32 tables

  Current (2025-11-08):
  - Total tests: 261 unit + 93 QA CLI = 354 tests
  - Overall pass rate: 96% (340 passed, 14 failed)
  - P0 bugs: 0 (maintaining baseline) ✅
  - Quality gates: On track for ≥80% target ✅
```

---

## Auto-Resume Capability

If the LLM conversation gets interrupted or you need to pause, just paste the master prompt again.

The LLM will:
1. ✅ Read TEST-EXECUTION-TRACKING.csv
2. ✅ Find the last completed test case
3. ✅ Resume from the next test case
4. ✅ Continue exactly where you left off

**Example**:
```
Current State Detected:
- Last completed test: TC-CLI-042
- Current week: Week 1 (CLI Testing)
- Current day: Wednesday
- Remaining today: TC-CLI-043 through TC-CLI-045 (3 tests)

Resuming from TC-CLI-043...
```
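The "find the last completed test case" step above amounts to scanning the tracking rows for the highest ID with a recorded result. A minimal sketch, assuming `TC-<PREFIX>-NNN` IDs and a `Result` column (both are assumptions about the CSV layout):

```python
def next_test_case(rows, prefix="TC-CLI-"):
    """Return the ID of the first test after the last one with a recorded result."""
    done = [
        int(r["Test Case ID"].rsplit("-", 1)[1])  # numeric NNN suffix
        for r in rows
        if r.get("Result") and r.get("Test Case ID", "").startswith(prefix)
    ]
    nxt = max(done) + 1 if done else 1
    return f"{prefix}{nxt:03d}"
```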
---

## Quality Gates Monitoring

The LLM automatically checks quality gates and alerts you if off track:

### **Pass Rate Check** (Daily)

```
Quality Gate: Pass Rate ≥80%
Current: 91% ✅ PASSING
Status: On track
```

### **P0 Bugs Check** (Daily)

```
Quality Gate: Zero P0 Bugs
Current: 0 P0 bugs ✅ PASSING
Status: Maintained
```

### **Code Coverage Check** (Weekly)

```
Quality Gate: ≥80% Coverage
Current: 76% ⚠️ BELOW TARGET
Status: Need 4% improvement
Action: Add unit tests for uncovered modules
```

### **Release Readiness** (End of Week 5)

```
RELEASE READINESS ASSESSMENT

Quality Gates Status:
✅ Test Execution: 100% (342/342 tests executed)
✅ Pass Rate: 87% (≥80% required) - PASS
✅ P0 Bugs: 0 (zero required) - PASS
⚠️ P1 Bugs: 7 (≤5 required) - FAIL (2 over limit)
✅ Code Coverage: 82% (≥80% required) - PASS
✅ Security: 90% OWASP coverage - PASS

Overall: ⚠️ AT RISK
Blocker: 2 excess P1 bugs must be fixed before release
ETA: 2 weeks (1 sprint)

RECOMMENDATION: Do NOT release until P1 bug count ≤5
```
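These gate checks are simple threshold comparisons and can be sketched as one function using the thresholds shown above (≥80% pass rate, zero P0 bugs, ≤5 P1 bugs, ≥80% coverage); the function name and return shape are illustrative:

```python
def evaluate_gates(pass_rate: float, p0_bugs: int, p1_bugs: int, coverage: float) -> dict:
    """Evaluate each quality gate; release_ready is True only when every gate passes."""
    gates = {
        "pass_rate": pass_rate >= 80.0,   # ≥80% pass rate
        "p0_bugs": p0_bugs == 0,          # zero P0 bugs
        "p1_bugs": p1_bugs <= 5,          # at most 5 open P1 bugs
        "coverage": coverage >= 80.0,     # ≥80% code coverage
    }
    gates["release_ready"] = all(gates.values())
    return gates
```

With the "AT RISK" numbers from the assessment above (87% pass, 0 P0, 7 P1, 82% coverage), only the P1 gate fails, which is exactly what blocks the release.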
---

## Emergency Scenarios

### **P0 Bug Discovered**

If the LLM discovers a P0 bug, it will:
```
🚨 CRITICAL: P0 Bug Discovered - STOPPING ALL TESTING

Bug ID: BUG-005
Test Case: TC-CLI-023
Severity: P0 (Blocker - Requires 24-hour fix)
Issue: Path traversal vulnerability allows reading /etc/passwd

Immediate Actions Taken:
✅ Bug report filed in BUG-TRACKING-TEMPLATE.csv (all fields complete)
✅ Test case marked as "Blocked" in TEST-EXECUTION-TRACKING.csv
✅ All further testing STOPPED

ESCALATION REQUIRED:
- Notify: QA Lead, Engineering Lead, Product Manager
- Required Fix Time: 24 hours
- Impact: Security vulnerability, release blocker

Escalation email drafted (see below).

Awaiting fix before resuming testing.
```

### **Environment Broken**

If the environment breaks, the LLM will:
```
🔧 ENVIRONMENT ISSUE DETECTED

Issue: Supabase containers not responding
Diagnostic Results:
✅ Docker running: Yes
❌ Supabase containers: 3 unhealthy
❌ Database connection: Failed

Troubleshooting Steps Taken:
1. Checked docker ps → 3 containers unhealthy
2. Checked logs → Error: port 54321 already in use
3. Attempted restart → Failed (port conflict)

RECOMMENDATION:
1. Kill conflicting process: lsof -ti:54321 | xargs kill -9
2. Restart Supabase: npx supabase start
3. Verify: docker ps (all containers healthy)
4. Resume testing from TC-CLI-042

Marking tests as "Blocked" until environment fixed.
```

---

## Files Automatically Updated

When you run the master prompt, these files get updated automatically:

| File | Update Frequency | What Gets Updated |
|------|------------------|-------------------|
| **TEST-EXECUTION-TRACKING.csv** | After EACH test | Status, Result, Bug ID, Execution Date, Notes |
| **BUG-TRACKING-TEMPLATE.csv** | When test fails | New bug entry (Bug ID, Severity, Steps to Reproduce, etc.) |
| **WEEKLY-PROGRESS-REPORT.md** | Every Friday | Weekly summary, metrics, quality gates, next week plan |
| **tests/docs/reports/** | Every Friday | New weekly report file created |

**You don't need to manually update anything!** The LLM handles all tracking automatically.
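The "update after EACH test" rule in the table boils down to appending one row to the tracking CSV the moment a test finishes. A minimal sketch (the column order shown is an assumption about the tracking template, not its actual schema):

```python
import csv
from datetime import date

def record_result(csv_path, test_id, result, bug_id="", notes=""):
    """Append one tracking row immediately after a test runs (no batching)."""
    with open(csv_path, "a", newline="") as f:
        # Assumed columns: Test Case ID, Status, Result, Bug ID, Execution Date, Notes
        csv.writer(f).writerow(
            [test_id, "Completed", result, bug_id, date.today().isoformat(), notes]
        )
```

Calling this once per test keeps the CSV the single, always-current record the auto-resume logic depends on.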
|
||||
|
||||
---

## Success Criteria

After 5 weeks of using this master prompt, you should have:

✅ **All 342 test cases executed** (100% completion)
✅ **TEST-EXECUTION-TRACKING.csv fully populated** (all 342 rows with results)
✅ **Bugs documented** in BUG-TRACKING-TEMPLATE.csv (complete with P0-P4 classification)
✅ **5 weekly progress reports** (one per week in `tests/docs/reports/`)
✅ **Quality gates met** (≥80% pass rate, zero P0 bugs, ≥80% coverage)
✅ **Release readiness assessment** (go/no-go decision based on objective criteria)

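The quality gates in this list can be checked mechanically from the tracking data. A minimal sketch, independent of the bundled `calculate_metrics.py` (thresholds taken from this document; function and key names are illustrative):

```python
def quality_gates(results, bugs, total_cases=342, pass_threshold=0.80):
    """Evaluate the release gates described above.

    `results` is one "Pass"/"Fail" string per executed test case;
    `bugs` is a list of open-bug severity labels ("P0".."P4").
    Returns (go, gates): the go/no-go decision plus per-gate detail.
    """
    executed = len(results)
    passed = sum(1 for r in results if r == "Pass")
    pass_rate = passed / executed if executed else 0.0
    gates = {
        "execution_100pct": executed == total_cases,
        "pass_rate_80pct": pass_rate >= pass_threshold,
        "zero_p0_bugs": not any(b == "P0" for b in bugs),
    }
    return all(gates.values()), gates
```

A `go` of `False` with `gates` in hand tells you exactly which criterion blocks the release readiness assessment.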
---

## Daily Workflow Example

### **Day 1 (First Time)**
```
You: [Paste master prompt]

LLM:
Current State: No testing done yet
Action: Starting Day 1 onboarding
[Executes 5-hour onboarding checklist]
Result: ✅ Day 1 complete, ready for Week 1 Monday
```

### **Day 2 (Monday, Week 1)**
```
You: [Paste master prompt]

LLM:
Current State: Day 1 complete, Week 1 not started
Action: Execute TC-CLI-001 through TC-CLI-015 (15 tests)
[Executes 15 CLI tests, updates CSV after each]
Result: 14 passed, 1 failed, 1 bug filed
```

### **Day 3 (Tuesday, Week 1)**
```
You: [Paste master prompt]

LLM:
Current State: Last completed TC-CLI-015
Action: Execute TC-CLI-016 through TC-CLI-030 (15 tests)
[Continues testing from where it left off]
Result: 15 passed, 0 failed
```

### **Day 6 (Friday, Week 1)**
```
You: [Paste master prompt]

LLM:
Current State: Last completed TC-CLI-077
Action: Execute TC-CLI-078 through TC-CLI-093 (16 tests) + Weekly report
[Finishes Week 1 tests, generates weekly report]
Result: Week 1 complete (93/93 tests), 91% pass rate, 8 bugs filed
```

### **Day 8 (Monday, Week 2)**
```
You: [Paste master prompt]

LLM:
Current State: Week 1 complete, Week 2 not started
Action: Execute TC-WEB-001 through TC-WEB-015 (15 tests)
[Switches to Web testing automatically]
Result: 13 passed, 2 failed, 2 bugs filed
```

**This continues for 5 weeks until all 342 test cases are executed!**

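The "Current State" detection in these transcripts (e.g. "Last completed TC-CLI-015") reduces to scanning the tracking CSV for the first not-yet-executed case. A minimal sketch (column names assumed from the bundled template; the 15-per-day batch size mirrors the schedule above):

```python
def next_batch(rows, batch_size=15):
    """Pick the next test cases to run from the tracking rows.

    `rows` is a list of dicts with "Test ID" and "Status" keys, in
    suite order. Resumes from the first case not yet marked Executed,
    whether that is mid-suite (TC-CLI-016) or the start of the next
    suite (TC-WEB-001).
    """
    pending = [r["Test ID"] for r in rows if r["Status"] != "Executed"]
    return pending[:batch_size]
```

An empty return value means the full suite is done and the only remaining work is the weekly report and release readiness assessment.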
---

## Customizations (Optional)

### **Skip Day 1 Onboarding**
Add this line to the prompt:
```
ASSUMPTION: Day 1 onboarding is already complete. Skip to test execution.
```

### **Execute Specific Tests**
Add this line to the prompt:
```
TODAY ONLY: Execute test cases TC-CLI-020 through TC-CLI-035 (ignore normal schedule).
```

### **Focus on Bug Investigation**
Add this line to the prompt:
```
PRIORITY: Investigate and reproduce Bug ID BUG-003 before continuing test execution.
```

### **Generate Weekly Report Only**
Replace the master prompt with this shorter version:
```
You are a senior QA engineer. Generate the weekly progress report for Week [N].

Read:
- tests/docs/templates/WEEKLY-PROGRESS-REPORT.md (template)
- tests/docs/templates/TEST-EXECUTION-TRACKING.csv (test results)
- tests/docs/templates/BUG-TRACKING-TEMPLATE.csv (bug data)
- tests/docs/BASELINE-METRICS.md (baseline comparison)

Fill in ALL sections with actual data. Save report as:
tests/docs/reports/WEEK-[N]-PROGRESS-REPORT-2025-11-[DD].md

Start now.
```

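Keeping the report filename consistent with the `WEEK-[N]-PROGRESS-REPORT-YYYY-MM-DD.md` scheme above is easy to script. A small sketch (the directory and naming pattern come from this document; nothing else is assumed):

```python
import datetime

def report_path(week, date=None):
    """Build the weekly report path in the naming scheme shown above."""
    date = date or datetime.date.today()
    return f"tests/docs/reports/WEEK-{week}-PROGRESS-REPORT-{date.isoformat()}.md"
```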
---

**Pro Tip**: Bookmark this page and copy-paste the master prompt every morning. The LLM will handle the rest! 🚀