Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287.
CHANGELOG.md
@@ -5,6 +5,46 @@ All notable changes to the Claude Skills Library will be documented in this file
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.1.1] - 2026-03-07

### Changed — Tessl Quality Optimization (#287)

18 skills optimized from 66-83% to 85-100% via `tessl skill review --optimize`:

| Skill | Before | After |
|-------|--------|-------|
| `project-management/confluence-expert` | 66% | 94% |
| `project-management/jira-expert` | 77% | 97% |
| `product-team/product-strategist` | 76% | 85%+ |
| `marketing-skill/campaign-analytics` | 70% | 85%+ |
| `business-growth/customer-success-manager` | 70% | 85%+ |
| `business-growth/revenue-operations` | 70% | 85%+ |
| `finance/financial-analyst` | 70% | 85%+ |
| `engineering-team/senior-secops` | 75% | 94% |
| `marketing-skill/prompt-engineer-toolkit` | 79% | 90% |
| `ra-qm-team/quality-manager-qms-iso13485` | 76% | 85%+ |
| `engineering-team/senior-security` | 80% | 93% |
| `engineering-team/playwright-pro` | 82% | 100% |
| `engineering-team/senior-backend` | 83% | 100% |
| `engineering-team/senior-qa` | 83% | 100% |
| `engineering-team/senior-ml-engineer` | 82% | 99% |
| `engineering-team/ms365-tenant-manager` | 83% | 100% |
| `engineering-team/aws-solution-architect` | 83% | 94% |
| `c-level-advisor/cto-advisor` | 82% | 99% |
| `marketing-skill/marketing-demand-acquisition` | 72% | 99% |

### Fixed

- Created missing `finance/financial-analyst/references/industry-adaptations.md` (the reference was declared but the file didn't exist)
- Removed dead `project-management/packaged-skills/` folder (the zip files were redundant)

### Added

- `SKILL_PIPELINE.md` — Mandatory 9-phase production pipeline for all skill work

### Verified

- Claude Code compliance: 18/18 pass (after fix)
- All YAML frontmatter valid
- All file references resolve
- All SKILL.md files under 500 lines

## [Unreleased]

### Added

SKILL_PIPELINE.md (new file)
@@ -0,0 +1,425 @@

# Skill Production Pipeline — claude-skills

> **Effective: 2026-03-07** | Applies to ALL new skills, improvements, and deployments.
> **Owner:** Leo (orchestrator) + Reza (final approval)

---

## Mandatory Pipeline

Every skill MUST go through this pipeline. No exceptions.

```
Intent → Research → Draft → Eval → Iterate → Compliance → Package → Deploy → Verify → Rollback-Ready
```

### Tool: Anthropic Skill Creator (v2025-03+)
**Location:** `~/.openclaw/workspace/skills/skill-creator/`
**Components:** SKILL.md, 3 agents (grader, comparator, analyzer), 10 scripts, eval-viewer, schemas

### Dependencies
| Tool | Version | Install | Fallback |
|------|---------|---------|----------|
| Tessl CLI | v0.70.0 | `tessl login` (auth: rezarezvani) | Manual 8-point compliance check |
| ClawHub CLI | latest | `npm i -g @openclaw/clawhub` | Skip OpenClaw publish, do manually later |
| Claude Code | 2.1+ | Already installed | Required, no fallback |
| Python | 3.10+ | System | Required for scripts |

### Iteration Limits
- **Max 5 iterations** per skill before escalation
- **Max 3 hours** per skill in eval loop
- If stuck → log the issue, move to the next skill, revisit in the next batch

---

## Phase 1: Intent & Research

1. **Capture intent** — What should this skill enable? When should it trigger? Expected output format?
2. **Interview** — Edge cases, input/output formats, success criteria, dependencies
3. **Research** — Check competing skills, market gaps, related domain standards
4. **Define domain expertise level** — Skills must be POWERFUL tier (expert-level, not generic)

## Phase 2: Draft SKILL.md

Using Anthropic's skill-creator workflow:

### Required Structure
```
skill-name/
├── SKILL.md              # Core instructions (YAML frontmatter required)
│   ├── name: (kebab-case)
│   ├── description: (pushy triggers, when-to-use)
│   └── Body (<500 lines ideal)
├── scripts/              # Python CLI tools (no ML/LLM calls, stdlib only)
├── references/           # Expert knowledge bases (loaded on demand)
├── assets/               # Templates, sample data, expected outputs
├── agents/               # Sub-agent definitions (if applicable)
├── commands/             # Slash commands (if applicable)
└── evals/
    └── evals.json        # Test cases + assertions
```

### SKILL.md Rules
- YAML frontmatter: `name` + `description` required
- Description must be "pushy" — include trigger phrases, edge cases, competing contexts
- Under 500 lines; overflow → reference files with clear pointers
- Explain WHY, not just WHAT — theory of mind over rigid MUSTs
- Include examples with Input/Output patterns
- Define output format explicitly
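
The rules above are mechanically checkable. A minimal sketch (a hypothetical helper, not part of the skill-creator toolkit) that flags frontmatter and length violations:

```python
import re

def check_skill_md(text: str) -> list[str]:
    """Return a list of rule violations for a SKILL.md document."""
    problems = []
    # YAML frontmatter must open and close with '---' fences
    m = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not m:
        return ["missing YAML frontmatter"]
    frontmatter = m.group(1)
    for key in ("name:", "description:"):
        if not any(line.lstrip().startswith(key) for line in frontmatter.splitlines()):
            problems.append(f"frontmatter missing required key '{key.rstrip(':')}'")
    # name should be kebab-case
    name_line = next((l for l in frontmatter.splitlines() if l.lstrip().startswith("name:")), "")
    name = name_line.split(":", 1)[1].strip().strip('"') if ":" in name_line else ""
    if name and not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        problems.append(f"name '{name}' is not kebab-case")
    # body should stay under the 500-line budget
    if len(text.splitlines()) >= 500:
        problems.append("SKILL.md is 500+ lines; move overflow to references/")
    return problems

doc = "---\nname: my-skill\ndescription: Does things. Use when...\n---\n# My Skill\n"
print(check_skill_md(doc))  # → []
```

A CI hook could run this over every `SKILL.md` in the repo and fail the build on any non-empty result.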

## Phase 3: Eval & Benchmark

### 3a. Create Test Cases
- 2-3 realistic test prompts (what real users would actually say)
- Save to `evals/evals.json` (schema: `references/schemas.md`)
- Include `files` for file-dependent skills

### 3b. Run Evals
- Spawn with-skill AND baseline (without-skill) runs in parallel
- Save to `<skill>-workspace/iteration-N/eval-<ID>/`
- Capture `timing.json` from completion notifications
- Grade using `agents/grader.md` → `grading.json`

### 3c. Aggregate & Review
```bash
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
python <skill-creator>/eval-viewer/generate_review.py <workspace>/iteration-N \
  --skill-name "<name>" --benchmark <workspace>/iteration-N/benchmark.json --static <output.html>
```
- Analyst pass (`agents/analyzer.md`): non-discriminating assertions, variance, tradeoffs
- User reviews outputs + benchmark in viewer
- Read `feedback.json` → improve → repeat

### 3d. Quality Gate
- **Pass rate ≥ 85%** with-skill
- **Delta vs baseline ≥ +30%** on key assertions
- No flaky evals (variance < 20%)
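
The three gate thresholds reduce to one pure function over per-eval pass rates. A sketch (the real gate would read `grading.json` files; this assumes pass rates are already extracted as 0-1 floats):

```python
def quality_gate(with_skill: list[float], baseline: list[float],
                 min_pass: float = 0.85, min_delta: float = 0.30,
                 max_spread: float = 0.20) -> dict:
    """Apply the Phase 3d quality gate to per-eval pass rates (0.0-1.0)."""
    def mean(xs): return sum(xs) / len(xs)
    ws, bl = mean(with_skill), mean(baseline)
    checks = {
        "pass_rate_ok": ws >= min_pass,                              # >= 85% with-skill
        "delta_ok": (ws - bl) >= min_delta,                          # >= +30% vs baseline
        "stable": max(with_skill) - min(with_skill) < max_spread,    # no flaky evals
    }
    checks["ship"] = all(checks.values())
    return checks

print(quality_gate([0.9, 0.85, 0.95], [0.5, 0.55, 0.6]))
# all three checks pass, so "ship" is True
```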

## Phase 4: Iterate Until Done
- Generalize from feedback (don't overfit to test cases)
- Keep the prompt lean — remove what doesn't pull its weight
- Bundle repeated helper scripts into `scripts/`
- Repeat the eval loop until the user is satisfied + metrics pass

## Phase 5: Description Optimization

After the skill is finalized:

1. Generate 20 trigger eval queries (10 should-trigger, 10 should-not)
2. User reviews via `assets/eval_review.html`
3. Run optimization loop:
   ```bash
   python -m scripts.run_loop \
     --eval-set <trigger-eval.json> --skill-path <path> \
     --model anthropic/claude-opus-4-6 --max-iterations 5 --verbose
   ```
4. Apply `best_description` to SKILL.md frontmatter

## Phase 6: Compliance Check (Claude Code)

**Mandatory.** Every skill is inspected by Claude Code before merge:

```bash
echo "Review this skill for Anthropic compliance:
1. No malware, exploit code, or security risks
2. No hardcoded secrets or credentials
3. Description is accurate (no surprise behavior)
4. Scripts are stdlib-only (no undeclared dependencies)
5. YAML frontmatter valid (name + description)
6. File references all resolve correctly
7. SKILL.md under 500 lines (or justified)
8. Assets include sample data + expected output" | claude --output-format text
```

Additionally run the Tessl quality check:
```bash
tessl skill review <skill-path>
```
**Minimum score: 85%**

## Phase 7: Package for All Platforms

### 7a. Claude Code Plugin
```
skill-name/
├── .claude-plugin/
│   └── plugin.json       # name, version, description, skills, commands, agents
├── SKILL.md
├── commands/             # /command-name.md definitions
├── agents/               # Agent definitions
└── (scripts, references, assets, evals)
```

**plugin.json format (STRICT):**
```json
{
  "name": "skill-name",
  "description": "One-line description",
  "version": "1.0.0",
  "author": "alirezarezvani",
  "homepage": "https://github.com/alirezarezvani/claude-skills",
  "repository": "https://github.com/alirezarezvani/claude-skills",
  "license": "MIT",
  "skills": "./"
}
```
**Only these fields. Nothing else.**

### 7b. Codex CLI Version
```
skill-name/
├── AGENTS.md             # Codex-compatible agent instructions
├── codex.md              # Codex CLI skill format
└── (same scripts, references, assets)
```
- Convert SKILL.md patterns to Codex-native format
- Test with `codex --full-auto "test prompt"`

### 7c. OpenClaw Skill
```
skill-name/
├── SKILL.md              # OpenClaw-compatible (same base)
├── openclaw.json         # OpenClaw skill metadata (optional)
└── (same scripts, references, assets)
```
- Ensure compatibility with OpenClaw's skill loading (YAML frontmatter triggers)
- Publish to ClawHub: `clawhub publish ./skill-name`

## Phase 8: Deploy

### Marketplace
```bash
# Claude Code marketplace (via plugin in repo)
# Users install with:
/plugin marketplace add alirezarezvani/claude-skills
/plugin install skill-name@claude-code-skills
```

### GitHub Release
- Feature branch from `dev` → PR to `dev` → merge → PR to `main`
- Conventional commits: `feat(category): add skill-name skill`
- Update category `plugin.json` skill count + version
- Update `marketplace.json` if new plugin entry

### ClawHub
```bash
clawhub publish ./category/skill-name
```

### Codex CLI Registry
```bash
# Users install with:
npx agent-skills-cli add alirezarezvani/claude-skills --skill skill-name
```

---

## Agent & Command Requirements

### Every Skill SHOULD Have:
- **Agent definition** (`agents/cs-<role>.md`) — persona, capabilities, workflows
- **Slash command** (`commands/<action>.md`) — simplified user entry point

### Agent Format:
```markdown
---
name: cs-<role-name>
description: <when to spawn this agent>
---
# cs-<role-name>

## Role & Expertise
## Core Workflows
## Tools & Scripts Available
## Output Standards
```

### Command Format:
```markdown
---
name: <command-name>
description: <what this command does>
---
# /<command-name>

## Usage
## Arguments
## Examples
```

---

## Phase 9: Real-World Verification (NEVER SKIP)

**Every skill must pass real-world testing before merge. No exceptions.**

### 9a. Marketplace Installation Test
```bash
# 1. Register marketplace (if not already)
# In Claude Code:
/plugin marketplace add alirezarezvani/claude-skills

# 2. Install the skill
/plugin install <skill-name>@claude-code-skills

# 3. Verify installation
/plugin list      # skill must appear

# 4. Load/reload test
/plugin reload    # must load without errors
```

### 9b. Trigger Test
- Send 3 realistic prompts that SHOULD trigger the skill
- Send 2 prompts that should NOT trigger it
- Verify correct trigger/no-trigger behavior
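
The 9b tally can be sketched as a small scorer; the result-record shape here is hypothetical, not the eval harness's actual schema:

```python
def trigger_score(results: list[dict]) -> dict:
    """Score trigger-test results.
    Each record (hypothetical shape): {"should_trigger": bool, "triggered": bool}.
    Per 9b, all 3 should-trigger and 2 should-not prompts must behave correctly.
    """
    correct = sum(r["should_trigger"] == r["triggered"] for r in results)
    false_pos = sum((not r["should_trigger"]) and r["triggered"] for r in results)
    false_neg = sum(r["should_trigger"] and (not r["triggered"]) for r in results)
    return {"correct": correct, "total": len(results),
            "false_positives": false_pos, "false_negatives": false_neg,
            "pass": correct == len(results)}

runs = [
    {"should_trigger": True,  "triggered": True},
    {"should_trigger": True,  "triggered": True},
    {"should_trigger": True,  "triggered": False},  # missed trigger
    {"should_trigger": False, "triggered": False},
    {"should_trigger": False, "triggered": False},
]
print(trigger_score(runs))  # 4/5 correct, one false negative, pass=False
```

A false negative sends the skill back to Phase 5 (description optimization); a false positive usually means the description is too pushy for competing contexts.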

### 9c. Functional Test
- Execute the skill's primary workflow end-to-end
- Run each script with sample data
- Verify output format matches spec
- Check all file references resolve correctly

### 9d. Bug Fix Protocol
- **Every bug found → fix immediately** (no "known issues" parking)
- Document the bug + fix in CHANGELOG.md
- Re-run the full eval suite after the fix
- Re-verify the marketplace install after the fix

### 9e. Cross-Platform Verify
- **Claude Code**: Install from marketplace, trigger, run workflow
- **Codex CLI**: Load AGENTS.md, run test prompt
- **OpenClaw**: Load skill, verify frontmatter triggers

---

## Documentation Requirements (Continuous)

**All changes MUST update these files. Every commit, every merge.**

### Per-Commit Updates
| File | What to update |
|------|----------------|
| `CHANGELOG.md` | Every change, every fix, every improvement |
| Category `README.md` | Skill list, descriptions, install commands |
| Category `CLAUDE.md` | Navigation, skill count, architecture notes |

### Per-Skill Updates
| File | What to update |
|------|----------------|
| `SKILL.md` | Frontmatter, body, references |
| `plugin.json` | Version, description |
| `evals/evals.json` | Test cases + assertions |

### Per-Release Updates
| File | What to update |
|------|----------------|
| Root `README.md` | Total skill count, category summary, install guide |
| Root `CLAUDE.md` | Navigation map, architecture, skill counts |
| `agents/CLAUDE.md` | Agent catalog |
| `marketplace.json` | Plugin entries |
| `docs/` (GitHub Pages) | Run `scripts/generate-docs.py` |
| `STORE.md` | Marketplace listing |

### GitHub Pages
After every batch merge — generate docs and deploy:
```bash
cd ~/workspace/projects/claude-skills
# NOTE: generate-docs.py and the static.yml workflow must be created first (Phase 0 task)
# If not yet available, manually update the docs/ folder
python scripts/generate-docs.py 2>/dev/null || echo "generate-docs.py not yet created — update docs manually"
```

---

## Versioning

### Semantic Versioning (STRICT)

| Change Type | Version Bump | Example |
|-------------|-------------|---------|
| **Existing skill improvement** (Tessl optimization, trigger fixes, content trim) | **2.1.x** (patch) | 2.1.0 → 2.1.1 |
| **Enhancement + new skills** (new skills, scripts, agents, commands) | **2.7.0** (minor) | 2.6.x → 2.7.0 |
| **Breaking changes** (restructure, removed skills, API changes) | **3.0.0** (major) | 2.x → 3.0.0 |

### Current Version Targets (update as releases ship)
- **v2.1.1** — Existing skill improvements (Tessl #285-#287, compliance fixes)
- **v2.7.0** — New skills + agents + commands + multi-platform packaging

### Rollback Protocol
If a deployed skill breaks:
1. **Immediate**: `git revert <commit>` on dev, fast-merge to main
2. **Marketplace**: Users re-install from updated main (auto-resolves)
3. **ClawHub**: `clawhub unpublish <skill-name>@<broken-version>` if published
4. **Notification**: Update CHANGELOG.md with a `### Reverted` section
5. **Post-mortem**: Document what broke and why in the skill's evals/

### CHANGELOG.md Format
```markdown
## [2.7.0] - YYYY-MM-DD
### Added
- New skill: `category/skill-name` — description
- Agent: `cs-role-name` — capabilities
- Command: `/command-name` — usage

### Changed
- `category/skill-name` — what changed (Tessl: X% → Y%)

### Fixed
- Bug description — root cause — fix applied

### Verified
- Marketplace install: ✅ all skills loadable
- Trigger tests: ✅ X/Y correct triggers
- Cross-platform: ✅ Claude Code / Codex / OpenClaw
```

---

## Quality Tiers

| Tier | Score | Criteria |
|------|-------|----------|
| **POWERFUL** ⭐ | 85%+ | Expert-level, scripts, refs, evals pass, real-world utility |
| **SOLID** | 70-84% | Good knowledge, some automation, useful |
| **GENERIC** | 55-69% | Too general, needs domain depth |
| **WEAK** | <55% | Reject or complete rewrite |

**We only ship POWERFUL. Everything else goes back to iteration.**

---

*This pipeline is non-negotiable for all claude-skills repo work.*

---

## Checklist (copy per skill)

### Required (blocks merge)
```
[ ] SKILL.md drafted (<500 lines, YAML frontmatter, pushy description)
[ ] Scripts: Python CLI tools (stdlib only) — or justified exception
[ ] References: expert knowledge bases
[ ] Evals: evals.json with 2-3+ test cases + assertions (must fail without skill)
[ ] Tessl: score ≥85% (or manual 8-point check if tessl unavailable)
[ ] Claude Code compliance: 8-point check passed
[ ] Plugin: plugin.json (strict format)
[ ] Marketplace install: /plugin install works, /plugin reload no errors
[ ] Trigger test: 3 should-trigger + 2 should-not
[ ] Functional test: end-to-end workflow verified
[ ] Bug fixes: all resolved, re-tested
[ ] CHANGELOG.md updated
[ ] PR created: dev branch, conventional commit
```

### Recommended (nice-to-have, don't block)
```
[ ] Agent: cs-<role>.md defined
[ ] Command: /<action>.md defined
[ ] Assets: templates, sample data, expected outputs
[ ] Benchmark: with-skill vs baseline, pass rate ≥85%, delta ≥30%
[ ] Description optimization: run_loop.py, 20 trigger queries
[ ] Codex: AGENTS.md / codex.md
[ ] OpenClaw: frontmatter triggers verified
[ ] README.md updated (category + root)
[ ] CLAUDE.md updated
[ ] docs/ regenerated
```
business-growth/customer-success-manager/SKILL.md
@@ -1,6 +1,6 @@
---
name: "customer-success-manager"
description: Monitors customer health, predicts churn risk, and identifies expansion opportunities using weighted scoring models for SaaS customer success. Use when analyzing customer accounts, reviewing retention metrics, scoring at-risk customers, or when the user mentions churn, customer health scores, upsell opportunities, expansion revenue, retention analysis, or customer analytics. Runs three Python CLI tools to produce deterministic health scores, churn risk tiers, and prioritized expansion recommendations across Enterprise, Mid-Market, and SMB segments.
license: MIT
metadata:
  version: 1.0.0
@@ -20,7 +20,6 @@ Production-grade customer success analytics with multi-dimensional health scorin

## Table of Contents

- [Capabilities](#capabilities)
- [Input Requirements](#input-requirements)
- [Output Formats](#output-formats)
- [How to Use](#how-to-use)

@@ -32,135 +31,21 @@ Production-grade customer success analytics with multi-dimensional health scorin

---

## Capabilities

- **Customer Health Scoring**: Multi-dimensional weighted scoring across usage, engagement, support, and relationship dimensions with Red/Yellow/Green classification
- **Churn Risk Analysis**: Behavioral signal detection with tier-based intervention playbooks and time-to-renewal urgency multipliers
- **Expansion Opportunity Scoring**: Adoption depth analysis, whitespace mapping, and revenue opportunity estimation with effort-vs-impact prioritization
- **Segment-Aware Benchmarking**: Configurable thresholds for Enterprise, Mid-Market, and SMB customer segments
- **Trend Analysis**: Period-over-period comparison to detect improving or declining trajectories
- **Executive Reporting**: QBR templates, success plans, and executive business review templates

---

## Input Requirements

All scripts accept a JSON file as the positional input argument. See `assets/sample_customer_data.json` for complete schema examples and sample data.

### Health Score Calculator

```json
{
  "customers": [
    {
      "customer_id": "CUST-001",
      "name": "Acme Corp",
      "segment": "enterprise",
      "arr": 120000,
      "usage": {
        "login_frequency": 85,
        "feature_adoption": 72,
        "dau_mau_ratio": 0.45
      },
      "engagement": {
        "support_ticket_volume": 3,
        "meeting_attendance": 90,
        "nps_score": 8,
        "csat_score": 4.2
      },
      "support": {
        "open_tickets": 2,
        "escalation_rate": 0.05,
        "avg_resolution_hours": 18
      },
      "relationship": {
        "executive_sponsor_engagement": 80,
        "multi_threading_depth": 4,
        "renewal_sentiment": "positive"
      },
      "previous_period": {
        "usage_score": 70,
        "engagement_score": 65,
        "support_score": 75,
        "relationship_score": 60
      }
    }
  ]
}
```
Required fields per customer object: `customer_id`, `name`, `segment`, `arr`, and nested objects `usage` (login_frequency, feature_adoption, dau_mau_ratio), `engagement` (support_ticket_volume, meeting_attendance, nps_score, csat_score), `support` (open_tickets, escalation_rate, avg_resolution_hours), `relationship` (executive_sponsor_engagement, multi_threading_depth, renewal_sentiment), and `previous_period` scores for trend analysis.
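
As a rough illustration of how the four dimensions combine into one score, here is a weighted-scoring sketch; the weights and Red/Yellow/Green cutoffs are illustrative, not the calculator's actual values:

```python
def health_score(usage: float, engagement: float, support: float,
                 relationship: float,
                 weights: tuple = (0.35, 0.25, 0.20, 0.20)) -> dict:
    """Weighted multi-dimensional health score with Red/Yellow/Green bands.
    Each dimension is assumed pre-normalized to a 0-100 sub-score.
    Weights and band cutoffs are illustrative only.
    """
    score = sum(d * w for d, w in zip((usage, engagement, support, relationship), weights))
    band = "Green" if score >= 75 else "Yellow" if score >= 50 else "Red"
    return {"score": round(score, 1), "band": band}

print(health_score(usage=80, engagement=70, support=85, relationship=60))
# → {'score': 74.5, 'band': 'Yellow'}
```

Note how a customer can sit just below the Green cutoff despite strong individual sub-scores, which is exactly the trend-watch case the Best Practices section flags.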

### Churn Risk Analyzer

```json
{
  "customers": [
    {
      "customer_id": "CUST-001",
      "name": "Acme Corp",
      "segment": "enterprise",
      "arr": 120000,
      "contract_end_date": "2026-06-30",
      "usage_decline": {
        "login_trend": -15,
        "feature_adoption_change": -10,
        "dau_mau_change": -0.08
      },
      "engagement_drop": {
        "meeting_cancellations": 2,
        "response_time_days": 5,
        "nps_change": -3
      },
      "support_issues": {
        "open_escalations": 1,
        "unresolved_critical": 0,
        "satisfaction_trend": "declining"
      },
      "relationship_signals": {
        "champion_left": false,
        "sponsor_change": false,
        "competitor_mentions": 1
      },
      "commercial_factors": {
        "contract_type": "annual",
        "pricing_complaints": false,
        "budget_cuts_mentioned": false
      }
    }
  ]
}
```
Required fields per customer object: `customer_id`, `name`, `segment`, `arr`, `contract_end_date`, and nested objects `usage_decline`, `engagement_drop`, `support_issues`, `relationship_signals`, and `commercial_factors`.
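
A sketch of how behavioral signals and the time-to-renewal urgency multiplier might combine into a risk tier; the point values, cutoffs, and multiplier bands are illustrative, not the analyzer's actual model:

```python
from datetime import date

def churn_risk(signals: dict, contract_end: str, today: date = date(2026, 3, 7)) -> dict:
    """Tier churn risk from behavioral signals with a renewal-urgency multiplier.
    Signal weights and tier cutoffs are illustrative only.
    """
    points = 0
    points += 3 if signals.get("champion_left") else 0          # strongest single signal
    points += 2 if signals.get("login_trend", 0) <= -10 else 0  # usage decline
    points += 2 * signals.get("open_escalations", 0)            # support escalations
    points += 1 * signals.get("competitor_mentions", 0)         # competitive pressure

    # Risk is amplified as the renewal date approaches
    days_to_renewal = (date.fromisoformat(contract_end) - today).days
    multiplier = 1.5 if days_to_renewal <= 90 else 1.2 if days_to_renewal <= 180 else 1.0
    score = points * multiplier
    tier = "High" if score >= 6 else "Medium" if score >= 3 else "Low"
    return {"score": score, "days_to_renewal": days_to_renewal, "tier": tier}

# Mirrors the CUST-001 sample above (renewal 2026-06-30, login trend -15,
# 1 open escalation, 1 competitor mention)
print(churn_risk({"login_trend": -15, "open_escalations": 1, "competitor_mentions": 1},
                 "2026-06-30"))
# → {'score': 6.0, 'days_to_renewal': 115, 'tier': 'High'}
```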

### Expansion Opportunity Scorer

```json
{
  "customers": [
    {
      "customer_id": "CUST-001",
      "name": "Acme Corp",
      "segment": "enterprise",
      "arr": 120000,
      "contract": {
        "licensed_seats": 100,
        "active_seats": 95,
        "plan_tier": "professional",
        "available_tiers": ["professional", "enterprise", "enterprise_plus"]
      },
      "product_usage": {
        "core_platform": {"adopted": true, "usage_pct": 85},
        "analytics_module": {"adopted": true, "usage_pct": 60},
        "integrations_module": {"adopted": false, "usage_pct": 0},
        "api_access": {"adopted": true, "usage_pct": 40},
        "advanced_reporting": {"adopted": false, "usage_pct": 0}
      },
      "departments": {
        "current": ["engineering", "product"],
        "potential": ["marketing", "sales", "support"]
      }
    }
  ]
}
```
Required fields per customer object: `customer_id`, `name`, `segment`, `arr`, and nested objects `contract` (licensed_seats, active_seats, plan_tier, available_tiers), `product_usage` (per-module adoption flags and usage percentages), and `departments` (current and potential).
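
A sketch of effort-vs-impact prioritization over seat whitespace, tier headroom, and unadopted modules; the heuristics and point values are illustrative, not the scorer's actual weighting:

```python
def expansion_opportunities(contract: dict, product_usage: dict, departments: dict) -> list[dict]:
    """Rank expansion plays by a simple impact-to-effort ratio.
    Impact/effort point values are illustrative only.
    """
    plays = []
    utilization = contract["active_seats"] / contract["licensed_seats"]
    if utilization >= 0.90:  # near seat capacity -> seat expansion is low-effort
        plays.append({"play": "add_seats", "impact": 3, "effort": 1})
    tiers = contract["available_tiers"]
    if tiers.index(contract["plan_tier"]) < len(tiers) - 1:  # headroom to upgrade
        plays.append({"play": "tier_upgrade", "impact": 4, "effort": 3})
    for module, m in product_usage.items():
        if not m["adopted"]:  # whitespace: unadopted modules are cross-sell targets
            plays.append({"play": f"cross_sell:{module}", "impact": 2, "effort": 2})
    if departments["potential"]:  # untouched departments -> multi-team expansion
        plays.append({"play": "department_expansion", "impact": 3, "effort": 3})
    # Highest impact-per-unit-effort first
    return sorted(plays, key=lambda p: p["impact"] / p["effort"], reverse=True)
```

Run against the CUST-001 sample above, `add_seats` ranks first: 95 of 100 seats are active, so a seat expansion is the cheapest high-impact play.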

---

@@ -196,17 +81,26 @@ python scripts/expansion_opportunity_scorer.py assets/sample_customer_data.json

```bash
# 1. Score customer health across portfolio
python scripts/health_score_calculator.py customer_portfolio.json --format json > health_results.json
# Verify: confirm health_results.json contains the expected number of customer records before continuing

# 2. Identify at-risk accounts
python scripts/churn_risk_analyzer.py customer_portfolio.json --format json > risk_results.json
# Verify: confirm risk_results.json is non-empty and risk tiers are present for each customer

# 3. Find expansion opportunities in healthy accounts
python scripts/expansion_opportunity_scorer.py customer_portfolio.json --format json > expansion_results.json
# Verify: confirm expansion_results.json lists opportunities ranked by priority

# 4. Prepare QBR using templates
# Reference: assets/qbr_template.md
```

**Error handling:** If a script exits with an error, check that:
- The input JSON matches the required schema for that script (see Input Requirements above)
- All required fields are present and correctly typed
- Python 3.7+ is being used (`python --version`)
- Output files from prior steps are non-empty before piping into subsequent steps

---

## Scripts

@@ -299,12 +193,10 @@ python scripts/expansion_opportunity_scorer.py customer_data.json --format json

## Best Practices

1. **Score regularly**: Run health scoring weekly for Enterprise, bi-weekly for Mid-Market, monthly for SMB
2. **Combine signals**: Use all three scripts together for a complete customer picture
3. **Act on trends, not snapshots**: A declining Green is more urgent than a stable Yellow
4. **Calibrate thresholds**: Adjust segment benchmarks based on your product and industry per `references/health-scoring-framework.md`
5. **Prepare with data**: Run scripts before every QBR and executive meeting; reference `references/cs-playbooks.md` for intervention guidance

---

business-growth/revenue-operations/SKILL.md
@@ -1,26 +1,13 @@
---
name: "revenue-operations"
description: Analyzes sales pipeline health, revenue forecasting accuracy, and go-to-market efficiency metrics for SaaS revenue optimization. Use when analyzing sales pipeline coverage, forecasting revenue, evaluating go-to-market performance, reviewing sales metrics, assessing pipeline analysis, tracking forecast accuracy with MAPE, calculating GTM efficiency, or measuring sales efficiency and unit economics for SaaS teams.
---

# Revenue Operations

Pipeline analysis, forecast accuracy tracking, and GTM efficiency measurement for SaaS revenue teams.

> **Output formats:** All scripts support `--format text` (human-readable) and `--format json` (dashboards/integrations).

---

@@ -51,11 +38,7 @@ Analyzes sales pipeline health including coverage ratios, stage conversion rates

**Usage:**

```bash
# Text report (human-readable)
python scripts/pipeline_analyzer.py --input pipeline.json --format text

# JSON output (for dashboards/integrations)
python scripts/pipeline_analyzer.py --input pipeline.json --format json
```

**Key Metrics Calculated:**

@@ -97,15 +80,11 @@ Tracks forecast accuracy over time using MAPE, detects systematic bias, analyzes

**Usage:**

```bash
# Track forecast accuracy
python scripts/forecast_accuracy_tracker.py forecast_data.json --format text

# JSON output for trend analysis
python scripts/forecast_accuracy_tracker.py forecast_data.json --format json
```

**Key Metrics Calculated:**
- **MAPE** -- mean(|actual - forecast| / |actual|) x 100
- **Forecast Bias** -- Over-forecasting (positive) vs under-forecasting (negative) tendency
- **Weighted Accuracy** -- MAPE weighted by deal value for materiality
- **Period Trends** -- Improving, stable, or declining accuracy over time
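
The metric definitions above compute directly from (forecast, actual) pairs. A sketch; the value-weighting scheme is an assumption about how the tracker weights by deal value:

```python
def forecast_accuracy(periods: list[dict]) -> dict:
    """MAPE, bias, and value-weighted MAPE from {"forecast": x, "actual": y} records."""
    errors = [abs(p["actual"] - p["forecast"]) / abs(p["actual"]) for p in periods]
    mape = 100 * sum(errors) / len(errors)
    # Positive bias = systematic over-forecasting, negative = under-forecasting
    bias = 100 * sum((p["forecast"] - p["actual"]) / p["actual"] for p in periods) / len(periods)
    # Weight each period's error by its actual revenue (materiality)
    total = sum(p["actual"] for p in periods)
    weighted_mape = 100 * sum(e * p["actual"] / total for e, p in zip(errors, periods))
    return {"mape": round(mape, 1), "bias": round(bias, 1),
            "weighted_mape": round(weighted_mape, 1)}

history = [
    {"forecast": 1_000_000, "actual": 900_000},  # over-forecast by ~11%
    {"forecast": 800_000, "actual": 880_000},    # under-forecast by ~9%
]
print(forecast_accuracy(history))
# → {'mape': 10.1, 'bias': 1.0, 'weighted_mape': 10.1}
```

The near-zero bias here shows why MAPE and bias are reported separately: large opposite-signed errors cancel in the bias figure while MAPE still exposes them.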

@@ -146,11 +125,7 @@ Calculates core SaaS GTM efficiency metrics with industry benchmarking, ratings,

**Usage:**

```bash
# Calculate all GTM efficiency metrics
python scripts/gtm_efficiency_calculator.py gtm_data.json --format text

# JSON output for dashboards
python scripts/gtm_efficiency_calculator.py gtm_data.json --format json
```

**Key Metrics Calculated:**
@@ -201,57 +176,69 @@ python scripts/gtm_efficiency_calculator.py gtm_data.json --format json

Use this workflow for your weekly pipeline inspection cadence.

1. **Verify input data:** Confirm the pipeline export is current and all required fields (stage, value, close_date, owner) are populated before proceeding.

2. **Generate pipeline report:**
   ```bash
   python scripts/pipeline_analyzer.py --input current_pipeline.json --format text
   ```

3. **Cross-check output totals** against your CRM source system to confirm data integrity.

4. **Review key indicators:**
   - Pipeline coverage ratio (is it above 3x quota?)
   - Deals aging beyond threshold (which deals need intervention?)
   - Concentration risk (are we over-reliant on a few large deals?)
   - Stage distribution (is there a healthy funnel shape?)

5. **Document using template:** Use `assets/pipeline_review_template.md`

6. **Action items:** Address aging deals, redistribute pipeline concentration, fill coverage gaps
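The coverage and concentration checks in the workflow above can be sketched in a few lines of Python (illustrative figures only; this is not the analyzer's actual logic, and the 3x target comes from the checklist above):

```python
# Quick pipeline health checks (sketch; see scripts/pipeline_analyzer.py for the real tool).
deals = [120_000, 80_000, 300_000, 45_000]  # open pipeline values (hypothetical)
quota = 150_000

coverage = sum(deals) / quota        # target: above 3x quota
top_share = max(deals) / sum(deals)  # concentration risk if one deal dominates

print(f"coverage {coverage:.1f}x, largest deal {top_share:.0%} of pipeline")
# coverage 3.6x, largest deal 55% of pipeline
```

Here coverage clears the 3x bar, but a single deal carrying 55% of the pipeline is exactly the concentration risk step 4 asks you to flag.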
### Forecast Accuracy Review

Use monthly or quarterly to evaluate and improve forecasting discipline.

1. **Verify input data:** Confirm all forecast periods have corresponding actuals and no periods are missing before running.

2. **Generate accuracy report:**
   ```bash
   python scripts/forecast_accuracy_tracker.py forecast_history.json --format text
   ```

3. **Cross-check actuals** against closed-won records in your CRM before drawing conclusions.

4. **Analyze patterns:**
   - Is MAPE trending down (improving)?
   - Which reps or segments have the highest error rates?
   - Is there systematic over- or under-forecasting?

5. **Document using template:** Use `assets/forecast_report_template.md`

6. **Improvement actions:** Coach high-bias reps, adjust methodology, improve data hygiene
### GTM Efficiency Audit

Use quarterly or during board prep to evaluate go-to-market efficiency.

1. **Verify input data:** Confirm revenue, cost, and customer figures reconcile with finance records before running.

2. **Calculate efficiency metrics:**
   ```bash
   python scripts/gtm_efficiency_calculator.py quarterly_data.json --format text
   ```

3. **Cross-check computed ARR and spend totals** against your finance system before sharing results.

4. **Benchmark against targets:**
   - Magic Number (>0.75) signals GTM spend efficiency
   - LTV:CAC (>3:1) validates unit economics
   - CAC Payback (<18 months) shows capital efficiency
   - Rule of 40 (>40%) balances growth and profitability

5. **Document using template:** Use `assets/gtm_dashboard_template.md`

6. **Strategic decisions:** Adjust spend allocation, optimize channels, improve retention
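The four benchmarks above can be computed from a handful of inputs. A sketch with made-up figures (the calculator script adds ratings and edge-case handling; variable names here are assumptions, not its actual schema):

```python
# GTM efficiency metrics against the benchmarks listed above (illustrative inputs).
new_arr, prior_q_sm_spend = 500_000, 400_000   # net new ARR vs prior-quarter S&M spend
ltv, cac = 45_000, 12_000
gross_margin, monthly_arpa = 0.80, 1_000
yoy_growth_pct, fcf_margin_pct = 35, 10

magic_number = new_arr / prior_q_sm_spend                 # > 0.75 is efficient
ltv_cac = ltv / cac                                       # > 3:1 validates unit economics
cac_payback_months = cac / (monthly_arpa * gross_margin)  # < 18 months
rule_of_40 = yoy_growth_pct + fcf_margin_pct              # > 40

print(magic_number, ltv_cac, cac_payback_months, rule_of_40)
# 1.25 3.75 15.0 45
```

With these inputs all four metrics clear their targets, which is the "pass" case for a board-prep audit.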

### Quarterly Business Review
@@ -29,19 +29,19 @@ python scripts/team_scaling_calculator.py # Model engineering team growth and c

## Core Responsibilities

### 1. Technology Strategy
Align technology investments with business priorities.

**Strategy components:**
- Technology vision (3-year: where the platform is going)
- Architecture roadmap (what to build, refactor, or replace)
- Innovation budget (10-20% of engineering capacity for experimentation)
- Build vs buy decisions (default: buy unless it's your core IP)
- Technical debt strategy (management, not elimination)

See `references/technology_evaluation_framework.md` for the full evaluation framework.

### 2. Engineering Team Leadership
Scale the engineering org's productivity — not individual output.

**Scaling engineering:**
- Hire for the next stage, not the current one

@@ -58,7 +58,7 @@ The CTO's job is to make the engineering org 10x more productive, not to write t

See `references/engineering_metrics.md` for DORA metrics and the engineering health dashboard.

### 3. Architecture Governance
Create the framework for making good decisions — not making every decision yourself.

**Architecture Decision Records (ADRs):**
- Every significant decision gets documented: context, options, decision, consequences

@@ -75,11 +75,96 @@ Every vendor is a dependency. Every dependency is a risk.

### 5. Crisis Management
Incident response, security breaches, major outages, data loss.

**Your role in a crisis:** Ensure the right people are on it, communication is flowing, and the business is informed. Post-crisis: blameless retrospective within 48 hours.
## Workflows

### Tech Debt Assessment Workflow

**Step 1 — Run the analyzer**
```bash
python scripts/tech_debt_analyzer.py --output report.json
```

**Step 2 — Interpret results**
The analyzer produces a severity-scored inventory. Review each item against:
- Severity (P0–P3): how much is it blocking velocity or creating risk?
- Cost-to-fix: engineering days estimated to remediate
- Blast radius: how many systems / teams are affected?

**Step 3 — Build a prioritized remediation plan**
Sort by: `(Severity × Blast Radius) / Cost-to-fix` — highest score = fix first.
Group items into: (a) immediate sprint, (b) next quarter, (c) tracked backlog.

**Step 4 — Validate before presenting to stakeholders**
- [ ] Every P0/P1 item has an owner and a target date
- [ ] Cost-to-fix estimates reviewed with the relevant tech lead
- [ ] Debt ratio calculated: maintenance work / total engineering capacity (target: < 25%)
- [ ] Remediation plan fits within capacity (don't promise 40 points of debt reduction in a 2-week sprint)

**Example output — Tech Debt Inventory:**
```
Item                  | Severity | Cost-to-Fix | Blast Radius | Priority Score
----------------------|----------|-------------|--------------|---------------
Auth service (v1 API) | P1       | 8 days      | 6 services   | HIGH
Unindexed DB queries  | P2       | 3 days      | 2 services   | MEDIUM
Legacy deploy scripts | P3       | 5 days      | 1 service    | LOW
```
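The prioritization formula above can be sketched directly in code. The numeric severity weights are an assumption (the formula as written needs severity mapped to a number; P0 heaviest):

```python
# Priority score = (Severity x Blast Radius) / Cost-to-Fix; higher = fix first.
SEVERITY_WEIGHT = {"P0": 4, "P1": 3, "P2": 2, "P3": 1}  # assumed mapping

items = [
    {"name": "Auth service (v1 API)", "severity": "P1", "cost_days": 8, "blast_radius": 6},
    {"name": "Unindexed DB queries",  "severity": "P2", "cost_days": 3, "blast_radius": 2},
    {"name": "Legacy deploy scripts", "severity": "P3", "cost_days": 5, "blast_radius": 1},
]

for item in items:
    item["score"] = SEVERITY_WEIGHT[item["severity"]] * item["blast_radius"] / item["cost_days"]

# Highest score first: Auth service (2.25), then DB queries (1.33), then deploy scripts (0.20)
for item in sorted(items, key=lambda i: i["score"], reverse=True):
    print(f'{item["name"]}: {item["score"]:.2f}')
```

The resulting ranking reproduces the HIGH / MEDIUM / LOW column of the inventory table above.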

---

### ADR Creation Workflow

**Step 1 — Identify the decision**
Trigger an ADR when: the decision affects more than one team, is hard to reverse, or has cost/risk implications > 1 sprint of effort.

**Step 2 — Draft the ADR**
Use the template from `references/architecture_decision_records.md`:
```
Title: [Short noun phrase]
Status: Proposed | Accepted | Superseded
Context: What is the problem? What constraints exist?
Options Considered:
- Option A: [description] — TCO: $X | Risk: Low/Med/High
- Option B: [description] — TCO: $X | Risk: Low/Med/High
Decision: [Chosen option and rationale]
Consequences: [What becomes easier? What becomes harder?]
```

**Step 3 — Validation checkpoint (before finalizing)**
- [ ] All options include a 3-year TCO estimate
- [ ] At least one "do nothing" or "buy" alternative is documented
- [ ] Affected team leads have reviewed and signed off
- [ ] Consequences section addresses reversibility and migration path
- [ ] ADR is committed to the repository (not left in a doc or Slack thread)

**Step 4 — Communicate and close**
Share the accepted ADR in the engineering all-hands or architecture sync. Link it from the relevant service's README.

---

### Build vs Buy Analysis Workflow

**Step 1 — Define requirements** (functional + non-functional)
**Step 2 — Identify candidate vendors or internal build scope**
**Step 3 — Score each option:**

```
Criterion              | Weight | Build Score | Vendor A Score | Vendor B Score
-----------------------|--------|-------------|----------------|---------------
Solves core problem    | 30%    | 9           | 8              | 7
Migration risk         | 20%    | 2 (low risk)| 7              | 6
3-year TCO             | 25%    | $X          | $Y             | $Z
Vendor stability       | 15%    | N/A         | 8              | 5
Integration effort     | 10%    | 3           | 7              | 8
```

**Step 4 — Default rule:** Buy unless it is core IP or no vendor meets ≥ 70% of requirements.
**Step 5 — Document the decision as an ADR** (see ADR workflow above).
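The weighted scoring in Step 3 reduces to a dot product. A sketch (illustrative; the dollar TCO row in the table would need normalizing to the same 0-10 scale before it can enter the sum, which the table elides):

```python
# Weighted build-vs-buy score: scores are 0-10, weights sum to 1.0.
weights = {"core": 0.30, "migration": 0.20, "tco": 0.25, "stability": 0.15, "integration": 0.10}

def weighted_score(scores):
    return sum(weights[k] * v for k, v in scores.items())

# Vendor A from the table above, with an assumed normalized TCO score of 6
vendor_a = {"core": 8, "migration": 7, "tco": 6, "stability": 8, "integration": 7}
print(round(weighted_score(vendor_a), 2))  # 7.2
```

Computing the same score for the build option and each vendor makes the Step 4 default rule a straight numeric comparison.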

## Key Questions a CTO Asks

- "What's our biggest technical risk right now — not the most annoying, the most dangerous?"
- "If we 10x our traffic tomorrow, what breaks first?"
- "How much of our engineering time goes to maintenance vs new features?"
- "What would a new engineer say about our codebase after their first week?"

@@ -105,16 +190,13 @@ Incident response, security breaches, major outages, data loss.

## Red Flags

- Tech debt ratio > 30% and growing faster than it's being paid down
- Deployment frequency declining over 4+ weeks
- No ADRs for the last 3 major decisions
- The CTO is the only person who can deploy to production
- Security audit hasn't happened in 12+ months
- Build times exceed 10 minutes
- No one can explain the system architecture to a new hire in 30 minutes
- Single points of failure on critical systems with no mitigation plan
- The team dreads on-call rotation

## Integration with C-Suite Roles
@@ -9,36 +9,6 @@ Design scalable, cost-effective AWS architectures for startups with infrastructu

---

## Table of Contents

- [Trigger Terms](#trigger-terms)
- [Workflow](#workflow)
- [Tools](#tools)
- [Quick Start](#quick-start)
- [Input Requirements](#input-requirements)
- [Output Formats](#output-formats)

---

## Trigger Terms

Use this skill when you encounter:

| Category | Terms |
|----------|-------|
| **Architecture Design** | serverless architecture, AWS architecture, cloud design, microservices, three-tier |
| **IaC Generation** | CloudFormation, CDK, Terraform, infrastructure as code, deploy template |
| **Serverless** | Lambda, API Gateway, DynamoDB, Step Functions, EventBridge, AppSync |
| **Containers** | ECS, Fargate, EKS, container orchestration, Docker on AWS |
| **Cost Optimization** | reduce AWS costs, optimize spending, right-sizing, Savings Plans |
| **Database** | Aurora, RDS, DynamoDB design, database migration, data modeling |
| **Security** | IAM policies, VPC design, encryption, Cognito, WAF |
| **CI/CD** | CodePipeline, CodeBuild, CodeDeploy, GitHub Actions AWS |
| **Monitoring** | CloudWatch, X-Ray, observability, alarms, dashboards |
| **Migration** | migrate to AWS, lift and shift, replatform, DMS |

---

## Workflow

### Step 1: Gather Requirements

@@ -62,6 +32,18 @@ Run the architecture designer to get pattern recommendations:

```bash
python scripts/architecture_designer.py --input requirements.json
```

**Example output:**

```json
{
  "recommended_pattern": "serverless_web",
  "service_stack": ["S3", "CloudFront", "API Gateway", "Lambda", "DynamoDB", "Cognito"],
  "estimated_monthly_cost_usd": 35,
  "pros": ["Low ops overhead", "Pay-per-use", "Auto-scaling"],
  "cons": ["Cold starts", "15-min Lambda limit", "Eventual consistency"]
}
```

Select from recommended patterns:
- **Serverless Web**: S3 + CloudFront + API Gateway + Lambda + DynamoDB
- **Event-Driven Microservices**: EventBridge + Lambda + SQS + Step Functions

@@ -70,6 +52,8 @@ Select from recommended patterns:

See `references/architecture_patterns.md` for detailed pattern specifications.

**Validation checkpoint:** Confirm the recommended pattern matches the team's operational maturity and compliance requirements before proceeding to Step 3.

### Step 3: Generate IaC Templates

Create infrastructure-as-code for the selected pattern:

@@ -77,8 +61,76 @@ Create infrastructure-as-code for the selected pattern:

```bash
# Serverless stack (CloudFormation)
python scripts/serverless_stack.py --app-name my-app --region us-east-1
```

**Example CloudFormation YAML output (core serverless resources):**

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  AppName:
    Type: String
    Default: my-app

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs20.x
      MemorySize: 512
      Timeout: 30
      Environment:
        Variables:
          TABLE_NAME: !Ref DataTable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref DataTable
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY

  DataTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
        - AttributeName: sk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
        - AttributeName: sk
          KeyType: RANGE
```

> Full templates including API Gateway, Cognito, IAM roles, and CloudWatch logging are generated by `serverless_stack.py` and also available in `references/architecture_patterns.md`.

**Example CDK TypeScript snippet (three-tier pattern):**

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

const vpc = new ec2.Vpc(this, 'AppVpc', { maxAzs: 2 });

const cluster = new ecs.Cluster(this, 'AppCluster', { vpc });

const db = new rds.ServerlessCluster(this, 'AppDb', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_15_2,
  }),
  vpc,
  scaling: { minCapacity: 0.5, maxCapacity: 4 },
});
```
### Step 4: Review Costs

@@ -89,6 +141,20 @@ Analyze estimated costs and optimization opportunities:

```bash
python scripts/cost_optimizer.py --resources current_setup.json --monthly-spend 2000
```

**Example output:**

```json
{
  "current_monthly_usd": 2000,
  "recommendations": [
    { "action": "Right-size RDS db.r5.2xlarge → db.r5.large", "savings_usd": 420, "priority": "high" },
    { "action": "Purchase 1-yr Compute Savings Plan at 40% utilization", "savings_usd": 310, "priority": "high" },
    { "action": "Move S3 objects >90 days to Glacier Instant Retrieval", "savings_usd": 85, "priority": "medium" }
  ],
  "total_potential_savings_usd": 815
}
```

Output includes:
- Monthly cost breakdown by service
- Right-sizing recommendations
@@ -113,7 +179,7 @@ cdk deploy

```bash
terraform init && terraform apply
```

### Step 6: Validate and Handle Failures

Verify deployment and set up monitoring:

@@ -125,6 +191,30 @@ aws cloudformation describe-stacks --stack-name my-app-stack

```bash
aws cloudwatch put-metric-alarm --alarm-name high-errors ...
```

**If stack creation fails:**

1. Check the failure reason:
   ```bash
   aws cloudformation describe-stack-events \
     --stack-name my-app-stack \
     --query 'StackEvents[?ResourceStatus==`CREATE_FAILED`]'
   ```
2. Review CloudWatch Logs for Lambda or ECS errors.
3. Fix the template or resource configuration.
4. Delete the failed stack before retrying:
   ```bash
   aws cloudformation delete-stack --stack-name my-app-stack
   # Wait for deletion
   aws cloudformation wait stack-delete-complete --stack-name my-app-stack
   # Redeploy
   aws cloudformation create-stack ...
   ```

**Common failure causes:**
- IAM permission errors → verify `--capabilities CAPABILITY_IAM` and role trust policies
- Resource limit exceeded → request quota increase via Service Quotas console
- Invalid template syntax → run `aws cloudformation validate-template --template-body file://template.yaml` before deploying

---

## Tools
@@ -267,10 +357,7 @@ Provide these details for architecture design:

- Pattern recommendation with rationale
- Service stack diagram (ASCII)
- Configuration specifications
- Monthly cost estimate and trade-offs

### IaC Templates

@@ -280,10 +367,8 @@ Provide these details for architecture design:

### Cost Analysis

- Current spend breakdown with optimization recommendations
- Priority action list (high/medium/low) and implementation checklist

---

@@ -294,13 +379,3 @@ Provide these details for architecture design:

| File | Contents |
|------|----------|
| `references/architecture_patterns.md` | 6 patterns: serverless, microservices, three-tier, data processing, GraphQL, multi-region |
| `references/service_selection.md` | Decision matrices for compute, database, storage, messaging |
| `references/best_practices.md` | Serverless design, cost optimization, security hardening, scalability |

---

## Limitations

- Lambda: 15-minute execution, 10GB memory max
- API Gateway: 29-second timeout, 10MB payload
- DynamoDB: 400KB item size, eventually consistent by default
- Regional availability varies by service
- Some services have AWS-specific lock-in
@@ -9,136 +9,38 @@ Expert guidance and automation for Microsoft 365 Global Administrators managing

---

## Table of Contents

- [Trigger Phrases](#trigger-phrases)
- [Quick Start](#quick-start)
- [Tools](#tools)
- [Workflows](#workflows)
- [Best Practices](#best-practices)
- [Reference Guides](#reference-guides)
- [Limitations](#limitations)

---

## Trigger Phrases

Use this skill when you hear:
- "set up Microsoft 365 tenant"
- "create Office 365 users"
- "configure Azure AD"
- "generate PowerShell script for M365"
- "set up Conditional Access"
- "bulk user provisioning"
- "M365 security audit"
- "license management"
- "Exchange Online configuration"
- "Teams administration"

---

## Quick Start

### Run a Security Audit

```powershell
Connect-MgGraph -Scopes "Directory.Read.All","Policy.Read.All","AuditLog.Read.All"
Get-MgSubscribedSku | Select-Object SkuPartNumber, ConsumedUnits, @{N="Total";E={$_.PrepaidUnits.Enabled}}
Get-MgPolicyAuthorizationPolicy | Select-Object AllowInvitesFrom, DefaultUserRolePermissions
```

### Bulk Provision Users from CSV

```powershell
# CSV columns: DisplayName, UserPrincipalName, Department, LicenseSku
Import-Csv .\new_users.csv | ForEach-Object {
    $passwordProfile = @{ Password = (New-Guid).ToString().Substring(0,16) + "!"; ForceChangePasswordNextSignIn = $true }
    New-MgUser -DisplayName $_.DisplayName -UserPrincipalName $_.UserPrincipalName `
        -Department $_.Department -AccountEnabled -PasswordProfile $passwordProfile
}
```

### Create a Conditional Access Policy (MFA for Admins)

```bash
python scripts/powershell_generator.py --action conditional-access --require-mfa --include-admins
```

---

## Tools

### powershell_generator.py

Generates ready-to-use PowerShell scripts for Microsoft 365 administration.

**Usage:**

```bash
# Generate security audit script
python scripts/powershell_generator.py --action audit

# Generate Conditional Access policy script
python scripts/powershell_generator.py --action conditional-access \
  --policy-name "Require MFA for Admins" \
  --require-mfa \
  --include-users "All"

# Generate bulk license assignment script
python scripts/powershell_generator.py --action license \
  --csv users.csv \
  --sku "ENTERPRISEPACK"
```

**Parameters:**

| Parameter | Required | Description |
|-----------|----------|-------------|
| `--action` | Yes | Script type: `audit`, `conditional-access`, `license`, `users` |
| `--policy-name` | No | Name for Conditional Access policy |
| `--require-mfa` | No | Require MFA in policy |
| `--include-users` | No | Users to include: `All` or specific UPNs |
| `--csv` | No | CSV file path for bulk operations |
| `--sku` | No | License SKU for assignment |
| `--output` | No | Output file path (default: stdout) |

**Output:** Complete PowerShell scripts with error handling, logging, and best practices.

### user_management.py

Automates user lifecycle operations and bulk provisioning.

**Usage:**

```bash
# Provision users from CSV
python scripts/user_management.py --action provision --csv new_users.csv

# Offboard user securely
python scripts/user_management.py --action offboard --user john.doe@company.com

# Generate inactive users report
python scripts/user_management.py --action report-inactive --days 90
```

**Parameters:**

| Parameter | Required | Description |
|-----------|----------|-------------|
| `--action` | Yes | Operation: `provision`, `offboard`, `report-inactive`, `sync` |
| `--csv` | No | CSV file for bulk operations |
| `--user` | No | Single user UPN |
| `--days` | No | Days for inactivity threshold (default: 90) |
| `--license` | No | License SKU to assign |

### tenant_setup.py

Initial tenant configuration and service provisioning automation.

**Usage:**

```bash
# Generate tenant setup checklist
python scripts/tenant_setup.py --action checklist --company "Acme Inc" --users 50

# Generate DNS records configuration
python scripts/tenant_setup.py --action dns --domain acme.com

# Generate security baseline script
python scripts/tenant_setup.py --action security-baseline
```

**Example generated script (Conditional Access, MFA for admins, report-only):**

```powershell
$adminRoles = (Get-MgDirectoryRole | Where-Object { $_.DisplayName -match "Admin" }).Id
$policy = @{
    DisplayName = "Require MFA for Admins"
    State = "enabledForReportingButNotEnforced"  # Start in report-only mode
    Conditions = @{ Users = @{ IncludeRoles = $adminRoles } }
    GrantControls = @{ Operator = "OR"; BuiltInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

---
@@ -149,69 +51,150 @@ python scripts/tenant_setup.py --action security-baseline

**Step 1: Generate Setup Checklist**

```bash
python scripts/tenant_setup.py --action checklist --company "Company Name" --users 100
```

Confirm prerequisites before provisioning:
- Global Admin account created and secured with MFA
- Custom domain purchased and accessible for DNS edits
- License SKUs confirmed (E3 vs E5 feature requirements noted)

**Step 2: Configure and Verify DNS Records**

```bash
python scripts/tenant_setup.py --action dns --domain company.com
```

```powershell
# After adding the domain in the M365 admin center, verify propagation before proceeding
$domain = "company.com"
Resolve-DnsName -Name "_msdcs.$domain" -Type NS -ErrorAction SilentlyContinue
# Also run from a shell prompt:
# nslookup -type=MX company.com
# nslookup -type=TXT company.com  # confirm SPF record
```

Wait for DNS propagation (up to 48 h) before bulk user creation.

**Step 3: Apply Security Baseline**

```powershell
# Disable legacy authentication (blocks Basic Auth protocols)
$policy = @{
    DisplayName = "Block Legacy Authentication"
    State = "enabled"
    Conditions = @{ ClientAppTypes = @("exchangeActiveSync","other") }
    GrantControls = @{ Operator = "OR"; BuiltInControls = @("block") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy

# Enable unified audit log
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
```

**Step 4: Provision Users**

```powershell
$licenseSku = (Get-MgSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }).SkuId

Import-Csv .\employees.csv | ForEach-Object {
    try {
        $user = New-MgUser -DisplayName $_.DisplayName -UserPrincipalName $_.UserPrincipalName `
            -AccountEnabled -PasswordProfile @{ Password = (New-Guid).ToString().Substring(0,12)+"!"; ForceChangePasswordNextSignIn = $true }
        Set-MgUserLicense -UserId $user.Id -AddLicenses @(@{ SkuId = $licenseSku }) -RemoveLicenses @()
        Write-Host "Provisioned: $($_.UserPrincipalName)"
    } catch {
        Write-Warning "Failed $($_.UserPrincipalName): $_"
    }
}
```

**Validation:** Spot-check 3–5 accounts in the M365 admin portal; confirm licenses show "Active."

---
### Workflow 2: Security Hardening

**Step 1: Run Security Audit**

```powershell
Connect-MgGraph -Scopes "Directory.Read.All","Policy.Read.All","AuditLog.Read.All","Reports.Read.All"

# Export Conditional Access policy inventory
Get-MgIdentityConditionalAccessPolicy | Select-Object DisplayName, State |
    Export-Csv .\ca_policies.csv -NoTypeInformation

# Find accounts without MFA registered
$report = Get-MgReportAuthenticationMethodUserRegistrationDetail
$report | Where-Object { -not $_.IsMfaRegistered } |
    Select-Object UserPrincipalName, IsMfaRegistered |
    Export-Csv .\no_mfa_users.csv -NoTypeInformation

Write-Host "Audit complete. Review ca_policies.csv and no_mfa_users.csv."
```

**Step 2: Create MFA Policy (report-only first)**

```powershell
$policy = @{
    DisplayName = "Require MFA All Users"
    State = "enabledForReportingButNotEnforced"
    Conditions = @{ Users = @{ IncludeUsers = @("All") } }
    GrantControls = @{ Operator = "OR"; BuiltInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

**Validation:** After 48 h, review Sign-in logs in Entra ID; confirm expected users would be challenged, then change `State` to `"enabled"`.

**Step 3: Review Secure Score**

```powershell
# Retrieve current Secure Score and top improvement actions
Get-MgSecuritySecureScore -Top 1 | Select-Object CurrentScore, MaxScore, ActiveUserCount
Get-MgSecuritySecureScoreControlProfile | Sort-Object -Property ActionType |
    Select-Object Title, ImplementationStatus, MaxScore | Format-Table -AutoSize
```

---
### Workflow 3: User Offboarding
|
||||
|
||||
**Step 1: Generate Offboarding Script**
|
||||
|
||||
```bash
|
||||
python scripts/user_management.py --action offboard --user departing.user@company.com
|
||||
```
|
||||
|
||||
**Step 2: Execute Script with -WhatIf**
|
||||
**Step 1: Block Sign-in and Revoke Sessions**
|
||||
|
||||
```powershell
|
||||
.\offboard_user.ps1 -WhatIf
|
||||
$upn = "departing.user@company.com"
|
||||
$user = Get-MgUser -Filter "userPrincipalName eq '$upn'"
|
||||
|
||||
# Block sign-in immediately
|
||||
Update-MgUser -UserId $user.Id -AccountEnabled:$false
|
||||
|
||||
# Revoke all active tokens
|
||||
Invoke-MgInvalidateAllUserRefreshToken -UserId $user.Id
|
||||
Write-Host "Sign-in blocked and sessions revoked for $upn"
|
||||
```
|
||||
|
||||
**Step 3: Execute for Real**
|
||||
**Step 2: Preview with -WhatIf (license removal)**
|
||||
|
||||
```powershell
|
||||
.\offboard_user.ps1 -Confirm:$false
|
||||
# Identify assigned licenses
$licenses = (Get-MgUserLicenseDetail -UserId $user.Id).SkuId

# Dry-run: print what would be removed
$licenses | ForEach-Object { Write-Host "[WhatIf] Would remove SKU: $_" }
```

**Step 3: Execute Offboarding**

```powershell
# Remove licenses
Set-MgUserLicense -UserId $user.Id -AddLicenses @() -RemoveLicenses $licenses

# Convert mailbox to shared (requires ExchangeOnlineManagement module)
Set-Mailbox -Identity $upn -Type Shared

# Remove from all groups; the empty catch skips non-group directory objects (e.g. roles)
Get-MgUserMemberOf -UserId $user.Id | ForEach-Object {
    try { Remove-MgGroupMemberByRef -GroupId $_.Id -DirectoryObjectId $user.Id } catch {}
}
Write-Host "Offboarding complete for $upn"
```

**Validation:** Confirm in the M365 admin portal that the account shows "Blocked," has no active licenses, and the mailbox type is "Shared."
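
The same checks can be scripted. A minimal sketch over a hypothetical, simplified account record; the real Graph user resource and mailbox objects have different shapes:

```python
# Hypothetical, simplified account record; the real Graph user resource differs.
def offboarding_gaps(account):
    """Return the offboarding checks that have NOT been completed."""
    gaps = []
    if account.get("accountEnabled", True):
        gaps.append("sign-in still enabled")
    if account.get("licenses"):
        gaps.append("licenses still assigned")
    if account.get("mailboxType") != "Shared":
        gaps.append("mailbox not converted to shared")
    return gaps

record = {"accountEnabled": False, "licenses": [], "mailboxType": "Shared"}
print(offboarding_gaps(record))  # []: fully offboarded
```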
---

## Best Practices

@@ -221,47 +204,42 @@ python scripts/user_management.py --action offboard --user departing.user@compan

1. Enable MFA before adding users
2. Configure named locations for Conditional Access
3. Use separate admin accounts with PIM
4. Verify custom domains (and DNS propagation) before bulk user creation
5. Apply Microsoft Secure Score recommendations

### Security Operations

1. Start Conditional Access policies in report-only mode
2. Review Sign-in logs for 48 h before enforcing a new policy
3. Never hardcode credentials in scripts — use Azure Key Vault or `Get-Credential`
4. Enable unified audit logging for all operations
5. Conduct quarterly security reviews and Secure Score check-ins

### PowerShell Automation

1. Prefer Microsoft Graph (`Microsoft.Graph` module) over legacy MSOnline
2. Include `try/catch` blocks for error handling
3. Implement `Write-Host`/`Write-Warning` logging for audit trails
4. Use `-WhatIf` or dry-run output before bulk destructive operations
5. Test in a non-production tenant first
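
Rule 4's dry-run pattern, sketched in Python (the repository's helper scripts follow the same idea); the function name and the elided Graph call are hypothetical:

```python
def remove_licenses(user, skus, dry_run=True):
    """Plan license removal; only report intent unless dry_run is False."""
    actions = [f"remove {sku} from {user}" for sku in skus]
    for action in actions:
        print(("[WhatIf] " if dry_run else "") + action)
        # when dry_run is False, the real removal call would go here
    return actions

planned = remove_licenses("departing.user@company.com", ["ENTERPRISEPACK"])
# prints: [WhatIf] remove ENTERPRISEPACK from departing.user@company.com
```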
---

## Reference Guides

### When to Use Each Reference

**references/powershell-templates.md**

- Ready-to-use script templates
- Conditional Access policy examples
- Bulk user provisioning scripts
- Security audit scripts

**references/security-policies.md**

- Conditional Access configuration
- MFA enforcement strategies
- DLP and retention policies
- Security baseline settings

**references/troubleshooting.md**

- Common error resolutions
- PowerShell module issues
- Permission troubleshooting

@@ -289,7 +267,7 @@ Install-Module MicrosoftTeams -Scope CurrentUser

### Required Permissions

- **Global Administrator** — Full tenant setup
- **User Administrator** — User management
- **Security Administrator** — Security policies
- **Exchange Administrator** — Mailbox management

@@ -1,6 +1,6 @@
---
name: "playwright-pro"
description: "Production-grade Playwright testing toolkit. Use when the user mentions Playwright tests, end-to-end testing, browser automation, fixing flaky tests, test migration, CI/CD testing, or test suites. Generate tests, fix flaky failures, migrate from Cypress/Selenium, sync with TestRail, run on BrowserStack. 55 templates, 3 agents, smart reporting."
---

# Playwright Pro

@@ -23,6 +23,45 @@ When installed as a Claude Code plugin, these are available as `/pw:` commands:
| `/pw:browserstack` | Run on BrowserStack, pull cross-browser reports |
| `/pw:report` | Generate test report in your preferred format |

## Quick Start Workflow

The recommended sequence for most projects:

```
1. /pw:init → scaffolds config, CI pipeline, and a first smoke test
2. /pw:generate → generates tests from your spec or URL
3. /pw:review → validates quality and flags anti-patterns ← always run after generate
4. /pw:fix <test> → diagnoses and repairs any failing/flaky tests ← run when CI turns red
```

**Validation checkpoints:**
- After `/pw:generate` — always run `/pw:review` before committing; it catches locator anti-patterns and missing assertions automatically.
- After `/pw:fix` — re-run the full suite locally (`npx playwright test`) to confirm the fix doesn't introduce regressions.
- After `/pw:migrate` — run `/pw:coverage` to confirm parity with the old suite before decommissioning Cypress/Selenium tests.

### Example: Generate → Review → Fix

```bash
# 1. Generate tests from a user story
/pw:generate "As a user I can log in with email and password"

# Generated: tests/auth/login.spec.ts
# → Playwright Pro creates the file using the auth template.

# 2. Review the generated tests
/pw:review tests/auth/login.spec.ts

# → Flags: one test used page.locator('input[type=password]') — suggests getByLabel('Password')
# → Fix applied automatically.

# 3. Run locally to confirm
npx playwright test tests/auth/login.spec.ts --headed

# 4. If a test is flaky in CI, diagnose it
/pw:fix tests/auth/login.spec.ts
# → Identifies missing web-first assertion; replaces waitForTimeout(2000) with expect(locator).toBeVisible()
```

## Golden Rules

1. `getByRole()` over CSS/XPath — resilient to markup changes

@@ -1,26 +1,12 @@
---
name: "senior-backend"
description: Designs and implements backend systems including REST APIs, microservices, database architectures, authentication flows, and security hardening. Use when the user asks to "design REST APIs", "optimize database queries", "implement authentication", "build microservices", "review backend code", "set up GraphQL", "handle database migrations", or "load test APIs". Covers Node.js/Express/Fastify development, PostgreSQL optimization, API security, and backend architecture patterns.
---

# Senior Backend Engineer

Backend development patterns, API design, database optimization, and security practices.

---

## Quick Start

@@ -51,17 +37,7 @@ Generates API route handlers, middleware, and OpenAPI specifications from schema

```bash
# Generate Express routes from OpenAPI spec
python scripts/api_scaffolder.py openapi.yaml --framework express --output src/routes/

# Output: Generated 12 route handlers, validation middleware, and TypeScript types

# Generate from database schema
python scripts/api_scaffolder.py --from-db postgres://localhost/mydb --output src/routes/
```

@@ -88,32 +64,12 @@ Analyzes database schemas, detects changes, and generates migration files with r
```bash
# Analyze current schema and suggest optimizations
python scripts/database_migration_tool.py --connection postgres://localhost/mydb --analyze

# Output: Missing indexes, N+1 query risks, and suggested migration files

# Generate migration from schema diff
python scripts/database_migration_tool.py --connection postgres://localhost/mydb \
  --compare schema/v2.sql --output migrations/

# Dry-run a migration
python scripts/database_migration_tool.py --connection postgres://localhost/mydb \
  --migrate migrations/20240115_add_user_indexes.sql --dry-run
```

@@ -132,32 +88,7 @@ Performs HTTP load testing with configurable concurrency, measuring latency perc
```bash
# Basic load test
python scripts/api_load_tester.py https://api.example.com/users --concurrency 50 --duration 30

# Output: Throughput (req/sec), latency percentiles (P50/P95/P99), error counts, and scaling recommendations

# Test with custom headers and body
python scripts/api_load_tester.py https://api.example.com/orders \
```

@@ -1,6 +1,6 @@
---
name: "senior-ml-engineer"
description: ML engineering skill for productionizing models, building MLOps pipelines, and integrating LLMs. Covers model deployment, feature stores, drift monitoring, RAG systems, and cost optimization. Use when the user asks about deploying ML models to production, setting up MLOps infrastructure (MLflow, Kubeflow, Kubernetes, Docker), monitoring model performance or drift, building RAG pipelines, or integrating LLM APIs with retry logic and cost controls. Focused on production and operational concerns rather than model research or initial training.
triggers:
- MLOps pipeline
- model deployment

@@ -1,26 +1,12 @@
---
name: "senior-qa"
description: Generates unit tests, integration tests, and E2E tests for React/Next.js applications. Scans components to create Jest + React Testing Library test stubs, analyzes Istanbul/LCOV coverage reports to surface gaps, scaffolds Playwright test files from Next.js routes, mocks API calls with MSW, creates test fixtures, and configures test runners. Use when the user asks to "generate tests", "write unit tests", "analyze test coverage", "scaffold E2E tests", "set up Playwright", "configure Jest", "implement testing patterns", or "improve test quality".
---

# Senior QA Engineer

Test automation, coverage analysis, and quality assurance patterns for React and Next.js applications.

---

## Quick Start

@@ -52,18 +38,6 @@ Scans React/TypeScript components and generates Jest + React Testing Library tes
# Basic usage - scan components and generate tests
python scripts/test_suite_generator.py src/components/ --output __tests__/

# Include accessibility tests
python scripts/test_suite_generator.py src/ --output __tests__/ --include-a11y

@@ -91,29 +65,6 @@ Parses Jest/Istanbul coverage reports and identifies gaps, uncovered branches, a
# Analyze coverage report
python scripts/coverage_analyzer.py coverage/coverage-final.json

# Enforce threshold (exit 1 if below)
python scripts/coverage_analyzer.py coverage/ --threshold 80 --strict

@@ -135,21 +86,6 @@ Scans Next.js pages/app directory and generates Playwright test files with commo
# Scaffold E2E tests for Next.js App Router
python scripts/e2e_test_scaffolder.py src/app/ --output e2e/

# Include Page Object Model classes
python scripts/e2e_test_scaffolder.py src/app/ --output e2e/ --include-pom

@@ -1,6 +1,6 @@
---
name: "senior-secops"
description: Senior SecOps engineer skill for application security, vulnerability management, compliance verification, and secure development practices. Runs SAST/DAST scans, generates CVE remediation plans, checks dependency vulnerabilities, creates security policies, enforces secure coding patterns, and automates compliance checks against SOC2, PCI-DSS, HIPAA, and GDPR. Use when conducting a security review or audit, responding to a CVE or security incident, hardening infrastructure, implementing authentication or secrets management, running penetration test prep, checking OWASP Top 10 exposure, or enforcing security controls in CI/CD pipelines.
---

# Senior SecOps Engineer

@@ -11,7 +11,6 @@ Complete toolkit for Security Operations including vulnerability management, com

## Table of Contents

- [Core Capabilities](#core-capabilities)
- [Workflows](#workflows)
- [Tool Reference](#tool-reference)
@@ -21,27 +20,6 @@ Complete toolkit for Security Operations including vulnerability management, com

---

## Core Capabilities

### 1. Security Scanner

@@ -129,14 +107,23 @@ Complete security assessment of a codebase.
```bash
# Step 1: Scan for code vulnerabilities
python scripts/security_scanner.py . --severity medium
# STOP if exit code 2 — resolve critical findings before continuing
```

```bash
# Step 2: Check dependency vulnerabilities
python scripts/vulnerability_assessor.py . --severity high
# STOP if exit code 2 — patch critical CVEs before continuing
```

```bash
# Step 3: Verify compliance controls
python scripts/compliance_checker.py . --framework all
# STOP if exit code 2 — address critical gaps before proceeding
```

```bash
# Step 4: Generate combined reports
python scripts/security_scanner.py . --json --output security.json
python scripts/vulnerability_assessor.py . --json --output vulns.json
python scripts/compliance_checker.py . --json --output compliance.json
```
@@ -175,6 +162,8 @@ jobs:
        run: python scripts/compliance_checker.py . --framework soc2
```

Each step fails the pipeline on its respective exit code — no deployment proceeds past a critical finding.
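
The gating logic each step relies on can be sketched as a small decision function over the documented exit codes:

```python
def gate(step, exit_code):
    """Map a scanner's documented exit code to a pipeline decision."""
    if exit_code == 2:
        return f"{step}: BLOCK deployment (critical findings)"
    if exit_code == 1:
        return f"{step}: WARN (high-severity findings, review required)"
    return f"{step}: proceed"

print(gate("security_scanner", 0))    # security_scanner: proceed
print(gate("compliance_checker", 2))  # compliance_checker: BLOCK deployment (critical findings)
```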
### Workflow 3: CVE Triage

Respond to a new CVE affecting your application.

@@ -184,6 +173,7 @@ Respond to a new CVE affecting your application.
- Identify affected systems using vulnerability_assessor.py
- Check if CVE is being actively exploited
- Determine CVSS environmental score for your context
- STOP if CVSS 9.0+ on internet-facing system — escalate immediately

2. PRIORITIZE
- Critical (CVSS 9.0+, internet-facing): 24 hours
@@ -193,7 +183,8 @@ Respond to a new CVE affecting your application.

3. REMEDIATE
- Update affected dependency to fixed version
- Run security_scanner.py to verify fix (must return exit code 0)
- STOP if scanner still flags the CVE — do not deploy
- Test for regressions
- Deploy with enhanced monitoring

@@ -223,7 +214,7 @@ PHASE 2: CONTAIN (15-60 min)
PHASE 3: ERADICATE (1-4 hours)
- Root cause identified
- Malware/backdoors removed
- Vulnerabilities patched (run security_scanner.py; must return exit code 0)
- Systems hardened

PHASE 4: RECOVER (4-24 hours)
@@ -254,10 +245,7 @@ PHASE 5: POST-INCIDENT (24-72 hours)
| `--json` | Output results as JSON |
| `--output, -o` | Write results to file |

**Exit Codes:** `0` = no critical/high findings · `1` = high severity findings · `2` = critical severity findings

### vulnerability_assessor.py

@@ -269,10 +257,7 @@ PHASE 5: POST-INCIDENT (24-72 hours)
| `--json` | Output results as JSON |
| `--output, -o` | Write results to file |

**Exit Codes:** `0` = no critical/high vulnerabilities · `1` = high severity vulnerabilities · `2` = critical severity vulnerabilities

### compliance_checker.py

@@ -284,29 +269,13 @@ PHASE 5: POST-INCIDENT (24-72 hours)
| `--json` | Output results as JSON |
| `--output, -o` | Write results to file |

**Exit Codes:** `0` = compliant (90%+ score) · `1` = non-compliant (50-69% score) · `2` = critical gaps (<50% score)
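
The banding can be sketched as follows. Note the listed bands leave 70-89% unspecified; the sketch conservatively treats that range as non-compliant (exit 1), which is an assumption, not documented behavior:

```python
def compliance_exit_code(score_pct):
    """Band a compliance score into the checker's documented exit codes.

    0 = compliant (90%+), 2 = critical gaps (<50%), 1 = non-compliant
    (50-69% documented; 70-89% assumed here, as the table omits it).
    """
    if score_pct >= 90:
        return 0
    if score_pct < 50:
        return 2
    return 1

print(compliance_exit_code(93))  # 0
print(compliance_exit_code(45))  # 2
```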
---

## Security Standards

### OWASP Top 10 Prevention

See `references/security_standards.md` for OWASP Top 10 full guidance, secure coding standards, authentication requirements, and API security controls.

### Secure Coding Checklist

@@ -346,47 +315,28 @@ PHASE 5: POST-INCIDENT (24-72 hours)

## Compliance Frameworks

See `references/compliance_requirements.md` for full control mappings. Run `compliance_checker.py` to verify the controls below:

### SOC 2 Type II
- **CC6** Logical Access: authentication, authorization, MFA
- **CC7** System Operations: monitoring, logging, incident response
- **CC8** Change Management: CI/CD, code review, deployment controls

### PCI-DSS v4.0
- **Req 3/4**: Encryption at rest and in transit (TLS 1.2+)
- **Req 6**: Secure development (input validation, secure coding)
- **Req 8**: Strong authentication (MFA, password policy)
- **Req 10/11**: Audit logging, SAST/DAST/penetration testing

### HIPAA Security Rule
- Unique user IDs and audit trails for PHI access (164.312(a)(1), 164.312(b))
- MFA for person/entity authentication (164.312(d))
- Transmission encryption via TLS (164.312(e)(1))

### GDPR
- **Art 25/32**: Privacy by design, encryption, pseudonymization
- **Art 33**: Breach notification within 72 hours
- **Art 17/20**: Right to erasure and data portability

---

@@ -469,37 +419,4 @@ app.use((req, res, next) => {
|----------|-------------|
| `references/security_standards.md` | OWASP Top 10, secure coding, authentication, API security |
| `references/vulnerability_management_guide.md` | CVE triage, CVSS scoring, remediation workflows |
| `references/compliance_requirements.md` | SOC 2, PCI-DSS, HIPAA, GDPR full control mappings |

@@ -1,6 +1,6 @@
---
name: "senior-security"
description: Security engineering toolkit for threat modeling, vulnerability analysis, secure architecture, and penetration testing. Includes STRIDE analysis, OWASP guidance, cryptography patterns, and security scanning tools. Use when the user asks about security reviews, threat analysis, vulnerability assessments, secure coding practices, security audits, attack surface analysis, CVE remediation, or security best practices.
triggers:
- security architecture
- threat modeling

@@ -49,13 +49,7 @@ Identify and analyze security threats using STRIDE methodology.
- Processes (application components)
- Data stores (databases, caches)
- Data flows (APIs, network connections)
3. Apply STRIDE to each DFD element (see [STRIDE per Element Matrix](#stride-per-element-matrix) below)
4. Score risks using DREAD:
- Damage potential (1-10)
- Reproducibility (1-10)
@@ -69,14 +63,14 @@ Identify and analyze security threats using STRIDE methodology.

### STRIDE Threat Categories

| Category | Description | Security Property | Mitigation Focus |
|----------|-------------|-------------------|------------------|
| Spoofing | Impersonating users or systems | Authentication | MFA, certificates, strong auth |
| Tampering | Modifying data or code | Integrity | Signing, checksums, validation |
| Repudiation | Denying actions | Non-repudiation | Audit logs, digital signatures |
| Information Disclosure | Exposing data | Confidentiality | Encryption, access controls |
| Denial of Service | Disrupting availability | Availability | Rate limiting, redundancy |
| Elevation of Privilege | Gaining unauthorized access | Authorization | RBAC, least privilege |
| Category | Security Property | Mitigation Focus |
|----------|-------------------|------------------|
| Spoofing | Authentication | MFA, certificates, strong auth |
| Tampering | Integrity | Signing, checksums, validation |
| Repudiation | Non-repudiation | Audit logs, digital signatures |
| Information Disclosure | Confidentiality | Encryption, access controls |
| Denial of Service | Availability | Rate limiting, redundancy |
| Elevation of Privilege | Authorization | RBAC, least privilege |

### STRIDE per Element Matrix

@@ -195,24 +189,11 @@ Identify and remediate security vulnerabilities in applications.
7. Verify fixes and document
8. **Validation:** Scope defined; automated and manual testing complete; findings classified; remediation tracked

### OWASP Top 10 Mapping

| Rank | Vulnerability | Testing Approach |
|------|---------------|------------------|
| A01 | Broken Access Control | Manual IDOR testing, authorization checks |
| A02 | Cryptographic Failures | Algorithm review, key management audit |
| A03 | Injection | SAST + manual payload testing |
| A04 | Insecure Design | Threat modeling, architecture review |
| A05 | Security Misconfiguration | Configuration audit, CIS benchmarks |
| A06 | Vulnerable Components | Dependency scanning, CVE monitoring |
| A07 | Authentication Failures | Password policy, session management review |
| A08 | Software/Data Integrity | CI/CD security, code signing verification |
| A09 | Logging Failures | Log review, SIEM configuration check |
| A10 | SSRF | Manual URL manipulation testing |
For OWASP Top 10 vulnerability descriptions and testing guidance, refer to [owasp.org/Top10](https://owasp.org/Top10).

### Vulnerability Severity Matrix

| Impact / Exploitability | Easy | Moderate | Difficult |
| Impact \ Exploitability | Easy | Moderate | Difficult |
|-------------------------|------|----------|-----------|
| Critical | Critical | Critical | High |
| High | Critical | High | Medium |
@@ -280,6 +261,55 @@ Review code for security vulnerabilities before deployment.
| MD5/SHA1 for passwords | Weak hashing | Use Argon2id or bcrypt |
| Math.random for tokens | Predictable values | Use crypto.getRandomValues |

### Inline Code Examples

**SQL Injection — insecure vs. secure (Python):**

```python
# ❌ Insecure: string formatting allows SQL injection
query = f"SELECT * FROM users WHERE username = '{username}'"
cursor.execute(query)

# ✅ Secure: parameterized query — user input never interpreted as SQL
query = "SELECT * FROM users WHERE username = %s"
cursor.execute(query, (username,))
```

**Password Hashing with Argon2id (Python):**

```python
from argon2 import PasswordHasher

ph = PasswordHasher()  # uses secure defaults (time_cost, memory_cost)

# On registration
hashed = ph.hash(plain_password)

# On login — raises argon2.exceptions.VerifyMismatchError on failure
ph.verify(hashed, plain_password)
```

**Secret Scanning — core pattern matching (Python):**

```python
import re, pathlib

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "generic_secret": re.compile(r'(?i)(password|secret|api_key)\s*=\s*["\']?\S{8,}'),
}

def scan_file(path: pathlib.Path) -> list[dict]:
    findings = []
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append({"file": str(path), "line": lineno, "type": name})
    return findings
```

---

## Incident Response Workflow
@@ -317,12 +347,12 @@ Respond to and contain security incidents.

### Incident Severity Levels

| Level | Description | Response Time | Escalation |
|-------|-------------|---------------|------------|
| P1 - Critical | Active breach, data exfiltration | Immediate | CISO, Legal, Executive |
| P2 - High | Confirmed compromise, contained | 1 hour | Security Lead, IT Director |
| P3 - Medium | Potential compromise, under investigation | 4 hours | Security Team |
| P4 - Low | Suspicious activity, low impact | 24 hours | On-call engineer |
| Level | Response Time | Escalation |
|-------|---------------|------------|
| P1 - Critical (active breach/exfiltration) | Immediate | CISO, Legal, Executive |
| P2 - High (confirmed, contained) | 1 hour | Security Lead, IT Director |
| P3 - Medium (potential, under investigation) | 4 hours | Security Team |
| P4 - Low (suspicious, low impact) | 24 hours | On-call engineer |

### Incident Response Checklist

@@ -370,24 +400,12 @@ See: [references/cryptography-implementation.md](references/cryptography-impleme

### Scripts

| Script | Purpose | Usage |
|--------|---------|-------|
| [threat_modeler.py](scripts/threat_modeler.py) | STRIDE threat analysis with risk scoring | `python threat_modeler.py --component "Authentication"` |
| [secret_scanner.py](scripts/secret_scanner.py) | Detect hardcoded secrets and credentials | `python secret_scanner.py /path/to/project` |
| Script | Purpose |
|--------|---------|
| [threat_modeler.py](scripts/threat_modeler.py) | STRIDE threat analysis with DREAD risk scoring; JSON and text output; interactive guided mode |
| [secret_scanner.py](scripts/secret_scanner.py) | Detect hardcoded secrets and credentials across 20+ patterns; CI/CD integration ready |

**Threat Modeler Features:**
- STRIDE analysis for any system component
- DREAD risk scoring
- Mitigation recommendations
- JSON and text output formats
- Interactive mode for guided analysis

**Secret Scanner Features:**
- Detects AWS, GCP, Azure credentials
- Finds API keys and tokens (GitHub, Slack, Stripe)
- Identifies private keys and passwords
- Supports 20+ secret patterns
- CI/CD integration ready
For usage, see the inline code examples in [Secure Code Review Workflow](#inline-code-examples) and the script source files directly.

### References

@@ -401,17 +419,6 @@ See: [references/cryptography-implementation.md](references/cryptography-impleme

## Security Standards Reference

### Compliance Frameworks

| Framework | Focus | Applicable To |
|-----------|-------|---------------|
| OWASP ASVS | Application security | Web applications |
| CIS Benchmarks | System hardening | Servers, containers, cloud |
| NIST CSF | Risk management | Enterprise security programs |
| PCI-DSS | Payment card data | Payment processing |
| HIPAA | Healthcare data | Healthcare applications |
| SOC 2 | Service organization controls | SaaS providers |

### Security Headers Checklist

| Header | Recommended Value |
@@ -423,6 +430,8 @@ See: [references/cryptography-implementation.md](references/cryptography-impleme
| Referrer-Policy | strict-origin-when-cross-origin |
| Permissions-Policy | geolocation=(), microphone=(), camera=() |

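The checklist values can be enforced and audited programmatically. A minimal, framework-agnostic sketch — the helper names are illustrative (not part of this skill's scripts), and values for rows not shown in the table follow common OWASP Secure Headers guidance:

```python
# Recommended response headers (subset; extend the dict with your full policy).
RECOMMENDED_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Permissions-Policy": "geolocation=(), microphone=(), camera=()",
}

def apply_security_headers(headers: dict) -> dict:
    """Return a copy of `headers` with any missing recommended header added."""
    merged = dict(RECOMMENDED_HEADERS)
    merged.update(headers)  # existing values win over the defaults
    return merged

def audit_headers(headers: dict) -> list[str]:
    """List recommended headers absent from a response (quick audit helper)."""
    return [name for name in RECOMMENDED_HEADERS if name not in headers]
```

In practice you would hook `apply_security_headers` into your framework's response middleware (for example a Flask `after_request` handler or an Express middleware).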
For compliance framework requirements (OWASP ASVS, CIS Benchmarks, NIST CSF, PCI-DSS, HIPAA, SOC 2), refer to the respective official documentation.

---

## Related Skills

@@ -1,13 +1,13 @@
---
name: "financial-analyst"
description: Performs financial ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction for strategic decision-making
description: Performs financial ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction for strategic decision-making. Use when analyzing financial statements, building valuation models, assessing budget variances, or constructing financial projections and forecasts. Also applicable when users mention financial modeling, cash flow analysis, company valuation, financial projections, or spreadsheet analysis.
---

# Financial Analyst Skill

## Overview

Production-ready financial analysis toolkit providing ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction. Designed for financial analysts with 3-6 years experience performing financial modeling, forecasting & budgeting, management reporting, business performance analysis, and investment analysis.
Production-ready financial analysis toolkit providing ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction. Designed for financial modeling, forecasting & budgeting, management reporting, business performance analysis, and investment analysis.

## 5-Phase Workflow

@@ -19,8 +19,9 @@ Production-ready financial analysis toolkit providing ratio analysis, DCF valuat

### Phase 2: Data Analysis & Modeling
- Collect and validate financial data (income statement, balance sheet, cash flow)
- **Validate input data completeness** before running ratio calculations (check for missing fields, nulls, or implausible values)
- Calculate financial ratios across 5 categories (profitability, liquidity, leverage, efficiency, valuation)
- Build DCF models with WACC and terminal value calculations
- Build DCF models with WACC and terminal value calculations; **cross-check DCF outputs against sanity bounds** (e.g., implied multiples vs. comparables)
- Construct budget variance analyses with favorable/unfavorable classification
- Develop driver-based forecasts with scenario modeling

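The DCF step above reduces to a few lines of arithmetic. A minimal sketch discounting explicit-period free cash flows plus a Gordon-growth terminal value — the function name is illustrative and the bundled scripts may structure this differently:

```python
def dcf_value(free_cash_flows: list[float], wacc: float, terminal_growth: float) -> float:
    """Present value of explicit-period FCFs plus a Gordon-growth terminal value."""
    if terminal_growth >= wacc:
        raise ValueError("terminal growth must be below WACC")
    # Discount each explicit-period cash flow back to today
    pv_fcf = sum(cf / (1 + wacc) ** t for t, cf in enumerate(free_cash_flows, start=1))
    # Terminal value at the end of the explicit period, then discounted
    terminal = free_cash_flows[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(free_cash_flows)
    return pv_fcf + pv_terminal
```

With hypothetical inputs of five years of $100 FCF, a 10% WACC, and 2% terminal growth, roughly two-thirds of the value sits in the terminal value — a useful sanity-bound check in itself.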
@@ -118,6 +119,7 @@ python scripts/forecast_builder.py forecast_data.json --scenarios base,bull,bear
| `references/financial-ratios-guide.md` | Ratio formulas, interpretation, industry benchmarks |
| `references/valuation-methodology.md` | DCF methodology, WACC, terminal value, comps |
| `references/forecasting-best-practices.md` | Driver-based forecasting, rolling forecasts, accuracy |
| `references/industry-adaptations.md` | Sector-specific metrics and considerations (SaaS, Retail, Manufacturing, Financial Services, Healthcare) |

## Templates

@@ -127,38 +129,6 @@ python scripts/forecast_builder.py forecast_data.json --scenarios base,bull,bear
| `assets/dcf_analysis_template.md` | DCF valuation analysis template |
| `assets/forecast_report_template.md` | Revenue forecast report template |

## Industry Adaptations

### SaaS
- Key metrics: MRR, ARR, CAC, LTV, Churn Rate, Net Revenue Retention
- Revenue recognition: subscription-based, deferred revenue tracking
- Unit economics: CAC payback period, LTV/CAC ratio
- Cohort analysis for retention and expansion revenue

### Retail
- Key metrics: Same-store sales, Revenue per square foot, Inventory turnover
- Seasonal adjustment factors in forecasting
- Gross margin analysis by product category
- Working capital cycle optimization

### Manufacturing
- Key metrics: Gross margin by product line, Capacity utilization, COGS breakdown
- Bill of materials cost analysis
- Absorption vs variable costing impact
- Capital expenditure planning and ROI

### Financial Services
- Key metrics: Net Interest Margin, Efficiency Ratio, ROA, Tier 1 Capital
- Regulatory capital requirements
- Credit loss provisioning and reserves
- Fee income analysis and diversification

### Healthcare
- Key metrics: Revenue per patient, Payer mix, Days in A/R, Operating margin
- Reimbursement rate analysis by payer
- Case mix index impact on revenue
- Compliance cost allocation

## Key Metrics & Targets

| Metric | Target |

finance/financial-analyst/references/industry-adaptations.md
Normal file
@@ -0,0 +1,103 @@
# Industry Adaptations

Sector-specific metrics, benchmarks, and considerations for financial analysis.

## SaaS / Software

**Key Metrics:**
- ARR / MRR growth rate
- Net Revenue Retention (NRR) — target >110%
- CAC Payback Period — target <18 months
- Rule of 40 (growth rate + profit margin ≥ 40%)
- LTV:CAC ratio — target >3:1
- Gross margin — target >70%

**Valuation Multiples:**
- Revenue multiple: 5-15x ARR (growth-adjusted)
- High-growth (>50%): 15-25x ARR
- Moderate growth (20-50%): 8-15x ARR
- Low growth (<20%): 3-8x ARR

**Considerations:**
- Deferred revenue recognition (ASC 606)
- Stock-based compensation impact on margins
- Cohort analysis critical for retention metrics

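The SaaS screens above come down to simple arithmetic. A quick sanity-check sketch with hypothetical figures (function names are illustrative):

```python
def passes_rule_of_40(growth_pct: float, profit_margin_pct: float) -> bool:
    """Rule of 40: revenue growth % plus profit margin % should reach 40."""
    return growth_pct + profit_margin_pct >= 40

def ltv_cac(ltv: float, cac: float) -> float:
    """LTV:CAC ratio — the guide above targets >3:1."""
    return ltv / cac
```

A company growing 55% at a -10% margin still clears the Rule of 40; a 10%-growth, 20%-margin company does not.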
## Retail / E-Commerce

**Key Metrics:**
- Same-store sales growth (SSS)
- Gross margin by category
- Inventory turnover — target varies by segment (grocery: 14-20x, fashion: 4-6x)
- Revenue per square foot (physical)
- Customer acquisition cost vs. AOV
- Return rate impact on unit economics

**Valuation Multiples:**
- EV/EBITDA: 8-15x (premium brands higher)
- P/E: 15-25x

**Considerations:**
- Seasonal revenue concentration (Q4 holiday)
- Working capital intensity (inventory cycles)
- Omnichannel attribution complexity

## Manufacturing

**Key Metrics:**
- Gross margin by product line
- Capacity utilization rate — target >80%
- Days Inventory Outstanding (DIO)
- Warranty reserve as % of revenue
- Capex as % of revenue (maintenance vs. growth)
- Order backlog / book-to-bill ratio

**Valuation Multiples:**
- EV/EBITDA: 6-12x
- P/E: 12-20x

**Considerations:**
- Raw material cost volatility
- Currency exposure in supply chain
- Depreciation schedules (straight-line vs. accelerated)
- Regulatory compliance costs (environmental, safety)

## Financial Services

**Key Metrics:**
- Net Interest Margin (NIM)
- Return on Equity (ROE) — target >12%
- Cost-to-Income Ratio — target <60%
- Non-Performing Loan (NPL) ratio
- Tier 1 Capital Ratio — regulatory minimum varies
- Assets Under Management (AUM) growth

**Valuation Multiples:**
- Price-to-Book (P/B): 1.0-2.5x
- P/E: 10-18x

**Considerations:**
- Regulatory capital requirements (Basel III/IV)
- Interest rate sensitivity analysis
- Credit risk provisioning (CECL / IFRS 9)
- Mark-to-market vs. held-to-maturity accounting

## Healthcare

**Key Metrics:**
- Revenue per patient / per bed
- Payor mix (Medicare/Medicaid vs. commercial)
- EBITDAR margin (rent-adjusted for facilities)
- Clinical trial pipeline value (biotech/pharma)
- Patent cliff exposure
- R&D as % of revenue — benchmark 15-25% (pharma)

**Valuation Multiples:**
- EV/EBITDA: 10-18x (medtech), 12-20x (pharma)
- EV/Revenue: 3-8x (services), 5-15x (devices)

**Considerations:**
- Reimbursement rate changes (regulatory risk)
- FDA approval timelines and probability-weighted pipeline
- 340B pricing program impact
- Medical device regulation (MDR, QSR compliance)
@@ -1,6 +1,6 @@
---
name: "campaign-analytics"
description: Analyzes campaign performance with multi-touch attribution, funnel conversion, and ROI calculation for marketing optimization
description: Analyzes campaign performance with multi-touch attribution, funnel conversion analysis, and ROI calculation for marketing optimization. Use when analyzing marketing campaigns, ad performance, attribution models, conversion rates, or calculating marketing ROI, ROAS, CPA, and campaign metrics across channels.
license: MIT
metadata:
  version: 1.0.0
@@ -18,30 +18,6 @@ Production-grade campaign performance analysis with multi-touch attribution mode

---

## Table of Contents

- [Capabilities](#capabilities)
- [Input Requirements](#input-requirements)
- [Output Formats](#output-formats)
- [How to Use](#how-to-use)
- [Scripts](#scripts)
- [Reference Guides](#reference-guides)
- [Best Practices](#best-practices)
- [Limitations](#limitations)

---

## Capabilities

- **Multi-Touch Attribution**: Five attribution models (first-touch, last-touch, linear, time-decay, position-based) with configurable parameters
- **Funnel Conversion Analysis**: Stage-by-stage conversion rates, drop-off identification, bottleneck detection, and segment comparison
- **Campaign ROI Calculation**: ROI, ROAS, CPA, CPL, CAC metrics with industry benchmarking and underperformance flagging
- **A/B Test Support**: Templates for structured A/B test documentation and analysis
- **Channel Comparison**: Cross-channel performance comparison with normalized metrics
- **Executive Reporting**: Ready-to-use templates for campaign performance reports

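The time-decay model listed above is easy to illustrate: each touchpoint is weighted by exponential decay toward the conversion, controlled by a half-life. A sketch of the core math — parameter names here are illustrative, and the shipped script's options may differ:

```python
def time_decay_credit(days_before_conversion: list[float], half_life: float = 7.0) -> list[float]:
    """Normalized credit per touchpoint: a touch `half_life` days before
    conversion earns half the weight of a touch at conversion time."""
    raw = [0.5 ** (days / half_life) for days in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]
```

For a journey touched 14, 7, and 0 days before converting, the most recent touch receives the largest share of credit, which is why half-life should be matched to your sales cycle length.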
---

## Input Requirements

All scripts accept a JSON file as positional input argument. See `assets/sample_campaign_data.json` for complete examples.
@@ -95,6 +71,16 @@ All scripts accept a JSON file as positional input argument. See `assets/sample_
}
```

### Input Validation

Before running scripts, verify your JSON is valid and matches the expected schema. Common errors:

- **Missing required keys** (e.g., `journeys`, `funnel.stages`, `campaigns`) → script exits with a descriptive `KeyError`
- **Mismatched array lengths** in funnel data (`stages` and `counts` must be the same length) → raises `ValueError`
- **Non-numeric monetary values** in ROI data → raises `TypeError`

Use `python -m json.tool your_file.json` to validate JSON syntax before passing it to any script.

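These checks can also run as a pre-flight step before invoking any script. A sketch mirroring the errors listed above — key names follow the sample schema, but the scripts' internal validation may differ:

```python
import json

def validate_funnel(data: dict) -> dict:
    """Raise the errors described above on malformed funnel data."""
    funnel = data["funnel"]  # KeyError if the section is missing
    stages, counts = funnel["stages"], funnel["counts"]
    if len(stages) != len(counts):
        raise ValueError("funnel.stages and funnel.counts must be the same length")
    if not all(isinstance(c, (int, float)) and not isinstance(c, bool) for c in counts):
        raise TypeError("funnel.counts must be numeric")
    return data

def preflight(path: str) -> dict:
    """Load a JSON file (invalid syntax fails here) and validate its schema."""
    with open(path) as fh:
        return validate_funnel(json.load(fh))
```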
---

## Output Formats
@@ -106,6 +92,25 @@ All scripts support two output formats via the `--format` flag:

---

## Typical Analysis Workflow

For a complete campaign review, run the three scripts in sequence:

```bash
# Step 1 — Attribution: understand which channels drive conversions
python scripts/attribution_analyzer.py campaign_data.json --model time-decay

# Step 2 — Funnel: identify where prospects drop off on the path to conversion
python scripts/funnel_analyzer.py funnel_data.json

# Step 3 — ROI: calculate profitability and benchmark against industry standards
python scripts/campaign_roi_calculator.py campaign_data.json
```

Use attribution results to identify top-performing channels, then focus funnel analysis on those channels' segments, and finally validate ROI metrics to prioritize budget reallocation.

---

## How to Use

### Attribution Analysis
@@ -196,10 +201,10 @@ Calculates comprehensive ROI metrics with industry benchmarking:

## Best Practices

1. **Use multiple attribution models** -- No single model tells the full story. Compare at least 3 models to triangulate channel value.
1. **Use multiple attribution models** -- Compare at least 3 models to triangulate channel value; no single model tells the full story.
2. **Set appropriate lookback windows** -- Match your time-decay half-life to your average sales cycle length.
3. **Segment your funnels** -- Always compare segments (channel, cohort, geography) to identify what drives best performance.
4. **Benchmark against your own history first** -- Industry benchmarks provide context, but your own historical data is the most relevant comparison.
3. **Segment your funnels** -- Compare segments (channel, cohort, geography) to identify performance drivers.
4. **Benchmark against your own history first** -- Industry benchmarks provide context, but historical data is the most relevant comparison.
5. **Run ROI analysis at regular intervals** -- Weekly for active campaigns, monthly for strategic review.
6. **Include all costs** -- Factor in creative, tooling, and labor costs alongside media spend for accurate ROI.
7. **Document A/B tests rigorously** -- Use the provided template to ensure statistical validity and clear decision criteria.
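Practice 6 (fully-loaded costs) changes the numbers materially. A sketch of the core formulas with hypothetical figures — ROI computed on total cost, ROAS on media spend only; the bundled calculator may report additional metrics:

```python
def campaign_metrics(revenue: float, media_spend: float,
                     other_costs: float, conversions: int) -> dict:
    """ROI on fully-loaded cost (media + creative/tooling/labor),
    ROAS on media spend only — the two are often conflated."""
    total_cost = media_spend + other_costs
    return {
        "roi": (revenue - total_cost) / total_cost,  # profit per $ of total cost
        "roas": revenue / media_spend,               # revenue per $ of media
        "cpa": total_cost / conversions,             # fully-loaded cost per conversion
    }
```

On $50K revenue from $10K media plus $5K other costs, ROAS is a flattering 5.0x while fully-loaded ROI is 2.3x, which is exactly the gap practice 6 warns about.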
@@ -208,34 +213,12 @@ Calculates comprehensive ROI metrics with industry benchmarking:

## Limitations

- **No statistical significance testing** -- A/B test analysis requires external tools for p-value calculations. Scripts provide descriptive metrics only.
- **Standard library only** -- No advanced statistical or data processing libraries. Suitable for most campaign sizes but not optimized for datasets exceeding 100K journeys.
- **Offline analysis** -- Scripts analyze static JSON snapshots. No real-time data connections or API integrations.
- **Single-currency** -- All monetary values assumed to be in the same currency. No currency conversion support.
- **Simplified time-decay** -- Uses exponential decay based on configurable half-life. Does not account for weekday/weekend or seasonal patterns.
- **No cross-device tracking** -- Attribution operates on provided journey data as-is. Cross-device identity resolution must be handled upstream.

## Proactive Triggers

- **Attribution model not set** → Last-click attribution misses 60%+ of the journey. Use multi-touch.
- **No baseline metrics documented** → Can't measure improvement without baselines.
- **Data discrepancy between tools** → GA4 and ad platform numbers rarely match. Document the gap.
- **Vanity metrics dominating reports** → Pageviews don't matter. Focus on conversion metrics.

## Output Artifacts

| When you ask for... | You get... |
|---------------------|------------|
| "Campaign report" | Cross-channel performance report with attribution analysis |
| "Channel comparison" | Channel-by-channel ROI with budget reallocation recommendations |
| "What's working?" | Top 5 performers + bottom 5 drains with specific actions |

## Communication

All output passes quality verification:
- Self-verify: source attribution, assumption audit, confidence scoring
- Output format: Bottom Line → What (with confidence) → Why → How to Act
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
- **No statistical significance testing** -- Scripts provide descriptive metrics only; p-value calculations require external tools.
- **Standard library only** -- No advanced statistical libraries. Suitable for most campaign sizes but not optimized for datasets exceeding 100K journeys.
- **Offline analysis** -- Scripts analyze static JSON snapshots; no real-time data connections or API integrations.
- **Single-currency** -- All monetary values assumed to be in the same currency; no currency conversion support.
- **Simplified time-decay** -- Exponential decay based on configurable half-life; does not account for weekday/weekend or seasonal patterns.
- **No cross-device tracking** -- Attribution operates on provided journey data as-is; cross-device identity resolution must be handled upstream.

## Related Skills

@@ -1,6 +1,6 @@
---
name: "marketing-demand-acquisition"
description: Multi-channel demand generation, paid media optimization, SEO strategy, and partnership programs for Series A+ startups
description: Creates demand generation campaigns, optimizes paid ad spend across LinkedIn, Google, and Meta, develops SEO strategies, and structures partnership programs for Series A+ startups scaling internationally. Use when planning marketing strategy, growth marketing, advertising campaigns, PPC optimization, lead generation, pipeline generation, or startup marketing budgets. Covers multi-channel acquisition (Google Ads, LinkedIn Ads, Meta Ads), CAC analysis, MQL/SQL workflows, attribution modeling, technical SEO, and co-marketing partnerships for hybrid PLG/Sales-Led motions in EU/US/Canada markets.
triggers:
  - demand gen
  - demand generation
@@ -31,7 +31,6 @@ Acquisition playbook for Series A+ startups scaling internationally (EU/US/Canad

## Table of Contents

- [Role Coverage](#role-coverage)
- [Core KPIs](#core-kpis)
- [Demand Generation Framework](#demand-generation-framework)
- [Paid Media Channels](#paid-media-channels)
@@ -43,17 +42,6 @@ Acquisition playbook for Series A+ startups scaling internationally (EU/US/Canad

---

## Role Coverage

| Role | Focus Areas |
|------|-------------|
| Demand Generation Manager | Multi-channel campaigns, pipeline generation |
| Paid Media Marketer | Paid search/social/display optimization |
| SEO Manager | Organic acquisition, technical SEO |
| Partnerships Manager | Co-marketing, channel partnerships |

---

## Core KPIs

**Demand Gen:** MQL/SQL volume, cost per opportunity, marketing-sourced pipeline $, MQL→SQL rate
@@ -316,21 +304,6 @@ Required:
- **CAC exceeding LTV** → Demand gen is unprofitable. Optimize or cut channels.
- **No nurture for non-ready leads** → 80% of leads aren't ready to buy. Nurture converts them later.

## Output Artifacts

| When you ask for... | You get... |
|---------------------|------------|
| "Demand gen plan" | Multi-channel acquisition strategy with budget allocation |
| "Pipeline analysis" | Funnel conversion rates with bottleneck identification |
| "Channel strategy" | Channel selection matrix based on audience and budget |

## Communication

All output passes quality verification:
- Self-verify: source attribution, assumption audit, confidence scoring
- Output format: Bottom Line → What (with confidence) → Why → How to Act
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.

## Related Skills

- **paid-ads**: For executing paid acquisition campaigns.

@@ -1,6 +1,6 @@
---
name: "prompt-engineer-toolkit"
description: "When the user wants to improve prompts for AI-assisted marketing, build prompt templates, or optimize AI content workflows. Also use when the user mentions 'prompt engineering,' 'improve my prompts,' 'AI writing quality,' 'prompt templates,' or 'AI content workflow.'"
description: "Analyzes and rewrites prompts for better AI output, creates reusable prompt templates for marketing use cases (ad copy, email campaigns, social media), and structures end-to-end AI content workflows. Use when the user wants to improve prompts for AI-assisted marketing, build prompt templates, or optimize AI content workflows. Also use when the user mentions 'prompt engineering,' 'improve my prompts,' 'AI writing quality,' 'prompt templates,' or 'AI content workflow.'"
license: MIT
metadata:
  version: 1.0.0
@@ -11,13 +11,9 @@ metadata:

# Prompt Engineer Toolkit

**Tier:** POWERFUL
**Category:** Marketing Skill / AI Operations
**Domain:** Prompt Engineering, LLM Optimization, AI Workflows

## Overview

Use this skill to move prompts from ad-hoc drafts to production assets with repeatable testing, versioning, and regression safety. It emphasizes measurable quality over intuition.
Use this skill to move prompts from ad-hoc drafts to production assets with repeatable testing, versioning, and regression safety. It emphasizes measurable quality over intuition. Apply it when launching a new LLM feature that needs reliable outputs, when prompt quality degrades after model or instruction changes, when multiple team members edit prompts and need history/diffs, when you need evidence-based prompt choice for production rollout, or when you want consistent prompt governance across environments.

## Core Capabilities

@@ -28,14 +24,6 @@ Use this skill to move prompts from ad-hoc drafts to production assets with repe
- Reusable prompt templates and selection guidance
- Regression-friendly workflows for model/prompt updates

## When to Use

- You are launching a new LLM feature and need reliable outputs
- Prompt quality degrades after model or instruction changes
- Multiple team members edit prompts and need history/diffs
- You need evidence-based prompt choice for production rollout
- You want consistent prompt governance across environments

## Key Workflows

### 1. Run Prompt A/B Test
@@ -97,22 +85,24 @@ python3 scripts/prompt_versioner.py changelog --name support_classifier
|
||||
- Manages prompt history (`add`, `list`, `diff`, `changelog`)
|
||||
- Stores metadata and content snapshots locally
## Pitfalls, Best Practices & Review Checklist

**Avoid these mistakes:**

1. Picking prompts from single-case outputs — use a realistic, edge-case-rich test suite.
2. Changing prompt and model simultaneously — always isolate variables.
3. Missing `must_not_contain` (forbidden-content) checks in evaluation criteria.
4. Editing prompts without version metadata, author, or change rationale.
5. Skipping semantic diffs before deploying a new prompt version.
6. Optimizing one benchmark while harming edge cases — track the full suite.
7. Model swap without rerunning the baseline A/B suite.
**Before promoting any prompt, confirm:**

- [ ] Task intent is explicit and unambiguous.
- [ ] Output schema/format is explicit.
- [ ] Safety and exclusion constraints are explicit.
- [ ] No contradictory instructions.
- [ ] No unnecessary verbosity tokens.
- [ ] A/B score improves and violation count stays at zero.
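Some checklist items can be linted mechanically before human review. A minimal sketch, in which the token lists are assumptions to be tuned to your own style guide:

```python
# Mechanical checks for part of the checklist (the token list below is an
# assumption for illustration, not part of the skill).
VERBOSITY_TOKENS = ["really", "very", "please note that"]

def lint_prompt(prompt: str) -> list[str]:
    """Flag checklist items that can be detected mechanically."""
    findings = []
    low = prompt.lower()
    if "output" not in low and "format" not in low:
        findings.append("no explicit output format")
    for tok in VERBOSITY_TOKENS:
        if tok in low:
            findings.append(f"verbosity token: {tok!r}")
    return findings

clean = lint_prompt("Classify the ticket. Output JSON with key 'label'.")
noisy = lint_prompt("Please note that you should really classify this.")
```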
## References

@@ -146,47 +136,3 @@ This enables deterministic grading across prompt variants.

3. Run A/B suite against same cases.
4. Promote only if winner improves average and keeps violation count at zero.
5. Track post-release feedback and feed new failure cases back into test suite.
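The promotion rule in step 4 is simple enough to encode directly (names here are illustrative):

```python
# Promotion gate: promote only if the average score improves and the
# violation count stays at zero (illustrative names).
def should_promote(baseline: dict, candidate: dict) -> bool:
    return (candidate["avg_score"] > baseline["avg_score"]
            and candidate["violations"] == 0)

ok = should_promote({"avg_score": 0.78, "violations": 0},
                    {"avg_score": 0.84, "violations": 0})
blocked = should_promote({"avg_score": 0.78, "violations": 0},
                         {"avg_score": 0.84, "violations": 1})
```

Wiring this gate into CI makes the regression suite the single source of truth for rollouts.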
## Common Operational Risks

- Evaluating with too few test cases (false confidence)
- Missing audit trail for prompt edits in multi-author teams

## Proactive Triggers
- **AI output sounds generic** → Prompts lack brand voice context. Include voice guidelines.
- **Inconsistent output quality** → Prompts too vague. Add specific examples and constraints.
- **No quality checks on AI content** → AI output needs human review. Never publish without editing.
- **Same prompt style for all tasks** → Different tasks need different prompt structures.
## Output Artifacts

| When you ask for... | You get... |
|---------------------|------------|
| "Improve my prompts" | Prompt audit with specific rewrites for better output |
| "Prompt templates" | Task-specific prompt templates for marketing use cases |
| "AI content workflow" | End-to-end AI-assisted content production workflow |
## Communication

All output passes quality verification:
- Self-verify: source attribution, assumption audit, confidence scoring
- Output format: Bottom Line → What (with confidence) → Why → How to Act
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Related Skills

- **content-production**: For the full content pipeline. Prompt engineering supports AI-assisted writing.
- **ad-creative**: For generating ad variations using prompt techniques.
- **content-humanizer**: For refining AI-generated output to sound natural.
- **marketing-context**: Provides brand context that improves prompt outputs.
@@ -1,6 +1,6 @@
---
name: "product-strategist"
description: Strategic product leadership toolkit for Head of Product covering OKR cascade generation, quarterly planning, competitive landscape analysis, product vision documents, and team scaling proposals. Use when creating quarterly OKR documents, defining product goals or KPIs, building product roadmaps, running competitive analysis, drafting team structure or hiring plans, aligning product strategy across engineering and design, or generating cascaded goal hierarchies from company to team level.
---

# Product Strategist
@@ -9,23 +9,19 @@ Strategic toolkit for Head of Product to drive vision, alignment, and organizati

---

## Core Capabilities

| Capability | Description | Tool |
|------------|-------------|------|
| **OKR Cascade** | Generate aligned OKRs from company to team level | `okr_cascade_generator.py` |
| **Alignment Scoring** | Measure vertical and horizontal alignment | Built into generator |
| **Strategy Templates** | 5 pre-built strategy types | Growth, Retention, Revenue, Innovation, Operational |
| **Team Configuration** | Customize for your org structure | `--teams` flag |
---

## Quick Start

### Generate OKRs for Your Team

```bash
# Growth strategy with default teams
python scripts/okr_cascade_generator.py growth
```

@@ -42,25 +38,10 @@ python scripts/okr_cascade_generator.py growth --json > okrs.json
---

## Workflow: Quarterly Strategic Planning
### Step 1: Define Strategic Focus

Choose the primary strategy type based on company priorities:

| Strategy | When to Use |
|----------|-------------|
| **Growth** | Scaling user base, market expansion |
@@ -69,55 +50,42 @@ Choose the primary strategy type based on company priorities:
| **Innovation** | Market differentiation, new capabilities |
| **Operational** | Improving efficiency, scaling operations |

See `references/strategy_types.md` for detailed guidance.
### Step 2: Gather Input Metrics

Collect current state metrics to inform OKR targets:

```json
{
  "current": 100000,    // Current MAU
  "target": 150000,     // Target MAU
  "current_nps": 40,    // Current NPS
  "target_nps": 60      // Target NPS
}
```
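These metrics drive the cascaded targets shown in the output examples later in this section. A sketch of the arithmetic, assuming (as the sample output suggests) that the product target is the company target scaled by the contribution percentage and split evenly across teams:

```python
# Cascade arithmetic sketch. The even split across teams is an assumption
# inferred from the sample output (45,000 product target, 11,250 per team
# for four teams at 30% contribution); the real generator may weight teams.
def cascade_targets(company_target: float, contribution: float, teams: list[str]) -> dict:
    product_target = company_target * contribution
    per_team = product_target / len(teams)
    return {"product": product_target, "teams": {t: per_team for t in teams}}

targets = cascade_targets(150_000, 0.3, ["Growth", "Platform", "Mobile", "Data"])
```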
### Step 3: Configure Teams & Run Generator

```bash
# Default teams
python scripts/okr_cascade_generator.py growth

# Custom org structure with contribution percentage
python scripts/okr_cascade_generator.py growth \
  --teams "Core,Platform,Mobile,AI" \
  --contribution 0.3
```
### Step 4: Review Alignment Scores

Check the alignment scores in the output:
| Score | Target | Action if Below |
|-------|--------|-----------------|
| Vertical Alignment | >90% | Ensure all objectives link to parent |
| Horizontal Alignment | >75% | Check for team coordination gaps |
| Coverage | >80% | Validate all company OKRs are addressed |
| Balance | >80% | Redistribute if one team is overloaded |
| **Overall** | **>80%** | <60% needs restructuring |
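A minimal sketch of aggregating the component scores and flagging the ones below target. Equal weighting is an assumption for illustration; the generator evidently weights components differently, since it reports 94.0% overall for these inputs while a plain mean gives 93.125:

```python
# Equal-weight aggregation sketch (assumption; the real generator's
# weighting is not documented here).
def overall_alignment(scores: dict[str, float]) -> float:
    return sum(scores.values()) / len(scores)

scores = {"vertical": 100.0, "horizontal": 75.0, "coverage": 100.0, "balance": 97.5}
flagged = [name for name, value in scores.items() if value < 80.0]
```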
### Step 5: Refine, Validate, and Export

Before finalizing:

@@ -127,12 +95,8 @@ Before finalizing:

- [ ] Ensure no conflicting objectives across teams
- [ ] Set up tracking cadence (bi-weekly check-ins)

```bash
# Export JSON for tools like Lattice, Ally, Workboard
python scripts/okr_cascade_generator.py growth --json > q1_okrs.json
```

@@ -140,20 +104,13 @@ python scripts/okr_cascade_generator.py growth --json > q1_okrs.json
## OKR Cascade Generator

Automatically cascades company OKRs down to product and team levels with alignment tracking.

### Usage

```bash
python scripts/okr_cascade_generator.py [strategy] [options]
```

**Strategies:** `growth` | `retention` | `revenue` | `innovation` | `operational`
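The CLI surface documented in this section can be mirrored with `argparse`. This is a sketch inferred from the documented options, not the actual `okr_cascade_generator.py` implementation:

```python
# CLI sketch inferred from the documented options (illustrative only).
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="okr_cascade_generator.py")
    parser.add_argument("strategy",
                        choices=["growth", "retention", "revenue",
                                 "innovation", "operational"])
    parser.add_argument("--teams", "-t", default="Growth,Platform,Mobile,Data",
                        help="Comma-separated team names")
    parser.add_argument("--contribution", "-c", type=float, default=0.3)
    parser.add_argument("--json", "-j", action="store_true")
    parser.add_argument("--metrics", "-m", default=None,
                        help="Metrics as JSON string")
    return parser

args = build_parser().parse_args(["growth", "-t", "Core,Platform", "-c", "0.4", "--json"])
```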
### Configuration Options

@@ -164,163 +121,67 @@ python scripts/okr_cascade_generator.py [strategy] [options]

| `--json`, `-j` | Output as JSON instead of dashboard | False |
| `--metrics`, `-m` | Metrics as JSON string | Sample metrics |

### Output Examples

#### Dashboard Output (`growth` strategy)

**Command:**
```bash
python scripts/okr_cascade_generator.py growth
```

**Output:**
```
============================================================
OKR CASCADE DASHBOARD
Quarter: Q1 2025 | Strategy: GROWTH
Teams: Growth, Platform, Mobile, Data | Product Contribution: 30%
============================================================

🏢 COMPANY OKRS

📌 CO-1: Accelerate user acquisition and market expansion
   └─ CO-1-KR1: Increase MAU from 100,000 to 150,000
   └─ CO-1-KR2: Achieve 50% MoM growth rate
   └─ CO-1-KR3: Expand to 3 new markets

🚀 PRODUCT OKRS

📌 PO-1: Build viral product features and market expansion
   ↳ Supports: CO-1
   └─ PO-1-KR1: Increase product MAU to 45,000
   └─ PO-1-KR2: Achieve 45% feature adoption rate

👥 TEAM OKRS

Growth Team:
📌 GRO-1: Build viral product features through acquisition and activation
   └─ GRO-1-KR1: Increase product MAU to 11,250
   └─ GRO-1-KR2: Achieve 11.25% feature adoption rate

🎯 ALIGNMENT SCORES
----------------------------------------
✓ Vertical Alignment: 100.0%
! Horizontal Alignment: 75.0%
✓ Coverage: 100.0% | ✓ Balance: 97.5% | ✓ Overall: 94.0%

✅ Overall alignment is GOOD (≥80%)
```
#### JSON Output (`retention --json`, truncated)

```json
{
  "quarter": "Q1 2025",
  "strategy": "retention",
  "company": {
    "level": "Company",
    "objectives": [
      {
        "id": "CO-1",
        "title": "Create lasting customer value and loyalty",
        "owner": "CEO",
        "key_results": [
          { "id": "CO-1-KR1", "title": "Improve retention from 70% to 85%", "current": 70, "target": 85 }
        ]
      }
    ]
  },
  "product": { "contribution": 0.3, "objectives": ["..."] },
  "teams": ["..."],
  "alignment_scores": {
    "vertical_alignment": 100.0, "horizontal_alignment": 75.0,
    "coverage": 100.0, "balance": 97.5, "overall": 94.0
  }
}
```
@@ -343,17 +204,15 @@ See `references/examples/sample_growth_okrs.json` for a complete example.

### OKR Cascade

- Limit to 3-5 objectives per level, each with 3-5 key results
- Key results must be measurable with current and target values
- Validate parent-child relationships before finalizing

### Alignment Scoring

- Target >80% overall alignment; investigate any score below 60%
- Balance scores ensure no team is overloaded
- Horizontal alignment prevents conflicting goals across teams

### Team Configuration

@@ -361,16 +220,3 @@ See `references/examples/sample_growth_okrs.json` for a complete example.

- Adjust contribution percentages based on team size
- Platform/Infrastructure teams often support all objectives
- Specialized teams (ML, Data) may only support relevant objectives

---

## Quick Reference

```bash
# Common commands
python scripts/okr_cascade_generator.py growth              # Default growth
python scripts/okr_cascade_generator.py retention           # Retention focus
python scripts/okr_cascade_generator.py revenue -c 0.4      # 40% contribution
python scripts/okr_cascade_generator.py growth --json       # JSON export
python scripts/okr_cascade_generator.py growth -t "A,B,C"   # Custom teams
```
@@ -1,37 +1,48 @@
---
name: "confluence-expert"
description: Atlassian Confluence expert for creating and managing spaces, knowledge bases, and documentation. Configures space permissions and hierarchies, creates page templates with macros, sets up documentation taxonomies, designs page layouts, and manages content governance. Use when users need to build or restructure a Confluence space, design page hierarchies with permission structures, author or standardise documentation templates, embed Jira reports in pages, run knowledge base audits, or establish documentation standards and collaborative workflows.
---

# Atlassian Confluence Expert

Master-level expertise in Confluence space management, documentation architecture, content creation, macros, templates, and collaborative knowledge management.

## Atlassian MCP Integration
**Primary Tool**: Confluence MCP Server

**Key Operations**:

```
// Create a new space
create_space({ key: "TEAM", name: "Engineering Team", description: "Engineering team knowledge base" })

// Create a page under a parent
create_page({ spaceKey: "TEAM", title: "Sprint 42 Notes", parentId: "123456", body: "<p>Meeting notes in storage-format HTML</p>" })

// Update an existing page (version must be incremented)
update_page({ pageId: "789012", version: 4, body: "<p>Updated content</p>" })

// Delete a page
delete_page({ pageId: "789012" })

// Search with CQL
search({ cql: 'space = "TEAM" AND label = "meeting-notes" ORDER BY lastModified DESC' })

// Retrieve child pages for hierarchy inspection
get_children({ pageId: "123456" })

// Apply a label to a page
add_label({ pageId: "789012", label: "archived" })
```
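The "version must be incremented" rule is easy to get wrong. A sketch of a pure helper that builds the next update payload from the current page record; the field names follow Confluence's storage/version content model but are illustrative here:

```python
# Build an update payload with the version number incremented
# (field names illustrative, modeled on Confluence's content structure).
def build_update_payload(current: dict, new_body: str) -> dict:
    return {
        "id": current["id"],
        "type": "page",
        "title": current["title"],
        "version": {"number": current["version"]["number"] + 1},
        "body": {"storage": {"value": new_body, "representation": "storage"}},
    }

page = {"id": "789012", "title": "Sprint 42 Notes", "version": {"number": 4}}
payload = build_update_payload(page, "<p>Updated content</p>")
```

Fetching the page immediately before updating keeps the version number current and avoids conflict errors when multiple authors edit the same page.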
**Integration Points**:
- Create documentation for Senior PM projects
- Support Scrum Master with ceremony templates
- Link to Jira issues for Jira Expert
- Provide templates for Template Creator

> **See also**: `MACROS.md` for macro syntax reference, `TEMPLATES.md` for full template library, `PERMISSIONS.md` for permission scheme details.

## Workflows
@@ -44,7 +55,8 @@ Master-level expertise in Confluence space management, documentation architectur

- Admin privileges
5. Create initial page tree structure
6. Add space shortcuts for navigation
7. **Verify**: Navigate to the space URL and confirm the homepage loads; check that a non-admin test user sees the correct permission level
8. **HANDOFF TO**: Teams for content population

### Page Architecture
**Best Practices**:

@@ -79,7 +91,8 @@ Space Home

4. Format with appropriate macros
5. Save as template
6. Share with space or make global
7. **Verify**: Create a test page from the template and confirm all placeholders render correctly before sharing with the team
8. **USE**: References for advanced template patterns
### Documentation Strategy
1. **Assess** current documentation state

@@ -108,6 +121,8 @@ Space Home

## Essential Macros

> Full macro reference with all parameters: see `MACROS.md`.

### Content Macros
**Info, Note, Warning, Tip**:
```
```

@@ -212,135 +227,18 @@ const example = "code here";

## Templates Library
> Full template library with complete markup: see `TEMPLATES.md`. Key templates summarised below.

| Template | Purpose | Key Sections |
|----------|---------|--------------|
| **Meeting Notes** | Sprint/team meetings | Agenda, Discussion, Decisions, Action Items (tasks macro) |
| **Project Overview** | Project kickoff & status | Quick Facts panel, Objectives, Stakeholders table, Milestones (Jira macro), Risks |
| **Decision Log** | Architectural/strategic decisions | Context, Options Considered, Decision, Consequences, Next Steps |
| **Sprint Retrospective** | Agile ceremony docs | What Went Well (info), What Didn't (warning), Action Items (tasks), Metrics |
## Space Permissions

> Full permission scheme details: see `PERMISSIONS.md`.

### Permission Schemes
**Public Space**:

@@ -441,13 +339,6 @@ Expected outcomes and impacts
## Best Practices

**Writing Style**:
- Use active voice
- Write scannable content (headings, bullets, short paragraphs)
- Include visuals and diagrams
- Provide examples
- Keep language simple and clear

**Organization**:
- Consistent naming conventions
- Meaningful labels

@@ -477,22 +368,3 @@ Expected outcomes and impacts

- Duplicate content
- Broken links
- Empty spaces
@@ -7,31 +7,21 @@ description: Atlassian Jira expert for creating and managing projects, planning,

Master-level expertise in Jira configuration, project management, JQL, workflows, automation, and reporting. Handles all technical and operational aspects of Jira.

## Quick Start — Most Common Operations

**Create a project**:
```
mcp jira create_project --name "My Project" --key "MYPROJ" --type scrum --lead "user@example.com"
```
**Run a JQL query**:
```
mcp jira search_issues --jql "project = MYPROJ AND status != Done AND dueDate < now()" --maxResults 50
```

For full command reference, see [Atlassian MCP Integration](#atlassian-mcp-integration). For JQL functions, see [JQL Functions Reference](#jql-functions-reference). For report templates, see [Reporting Templates](#reporting-templates).

---
## Workflows

@@ -53,9 +43,9 @@ Master-level expertise in Jira configuration, project management, JQL, workflows

2. Define transitions and conditions
3. Add validators, post-functions, and conditions
4. Configure workflow scheme
5. **Validate**: Deploy to a test project first; verify all transitions, conditions, and post-functions behave as expected before associating with production projects
6. Associate workflow with project
7. Test workflow with sample issues
### JQL Query Building
**Basic Structure**: `field operator value`

@@ -125,7 +115,6 @@ assignee in (user1, user2) AND sprint in openSprints()

- Post comment
4. Test automation with sample data
5. Enable and monitor
## Advanced Features

@@ -135,12 +124,7 @@ assignee in (user1, user2) AND sprint in openSprints()

- Capture process-specific information
- Enable advanced reporting

**Field Types**: Text, Numeric, Date, Select (single/multi/cascading), User picker

**Configuration**:
1. Create custom field
|
||||
@@ -179,7 +163,7 @@ assignee in (user1, user2) AND sprint in openSprints()
|
||||
1. Use JQL to find target issues
|
||||
2. Select bulk change operation
|
||||
3. Choose fields to update
|
||||
4. Preview changes
|
||||
4. **Validate**: Preview all changes before executing; confirm the JQL filter matches only the intended issues, since bulk edits are difficult to reverse
|
||||
5. Execute and confirm
|
||||
6. Monitor background task
|
||||
|
||||
@@ -187,50 +171,30 @@ assignee in (user1, user2) AND sprint in openSprints()
|
||||
- Move multiple issues through workflow
|
||||
- Useful for sprint cleanup
|
||||
- Requires appropriate permissions
|
||||
- **Validate**: Run the JQL filter and review results in small batches before applying at scale
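The small-batch review above can be staged in code before any bulk transition is applied. An illustrative Python sketch; the batching helper is hypothetical:

```python
def batches(issue_keys, size=50):
    """Yield fixed-size batches of issue keys for staged bulk operations."""
    for i in range(0, len(issue_keys), size):
        yield issue_keys[i:i + size]

keys = [f"PROJ-{n}" for n in range(1, 121)]
for batch in batches(keys):
    # Review (and only then transition) each batch before moving to the next.
    print(len(batch))  # → 50, 50, 20
```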
|
||||
|
||||
## JQL Functions Reference
|
||||
|
||||
**Date Functions**:
|
||||
- `startOfDay()`, `endOfDay()`
|
||||
- `startOfWeek()`, `endOfWeek()`
|
||||
- `startOfMonth()`, `endOfMonth()`
|
||||
- `startOfYear()`, `endOfYear()`
|
||||
> **Tip**: Save frequently used queries as named filters instead of re-running complex JQL ad hoc. See [Best Practices](#best-practices) for performance guidance.
|
||||
|
||||
**Sprint Functions**:
|
||||
- `openSprints()`
|
||||
- `closedSprints()`
|
||||
- `futureSprints()`
|
||||
**Date**: `startOfDay()`, `endOfDay()`, `startOfWeek()`, `endOfWeek()`, `startOfMonth()`, `endOfMonth()`, `startOfYear()`, `endOfYear()`
|
||||
|
||||
**User Functions**:
|
||||
- `currentUser()`
|
||||
- `membersOf("group")`
|
||||
**Sprint**: `openSprints()`, `closedSprints()`, `futureSprints()`
|
||||
|
||||
**Advanced Functions**:
|
||||
- `issueHistory()`
|
||||
- `linkedIssues()`
|
||||
- `issuesWithFixVersions()`
|
||||
**User**: `currentUser()`, `membersOf("group")`
|
||||
|
||||
**Advanced**: `issueHistory()`, `linkedIssues()`, `issuesWithFixVersions()`
|
||||
|
||||
## Reporting Templates
|
||||
|
||||
**Sprint Report**:
|
||||
```jql
|
||||
project = PROJ AND sprint = 23
|
||||
```
|
||||
> **Tip**: These JQL snippets can be saved as shared filters or wired directly into Dashboard gadgets (see [Dashboard Creation](#dashboard-creation)).
|
||||
|
||||
**Team Velocity**:
|
||||
```jql
|
||||
assignee in (team) AND sprint in closedSprints() AND resolution = Done
|
||||
```
|
||||
|
||||
**Bug Trend**:
|
||||
```jql
|
||||
type = Bug AND created >= -30d
|
||||
```
|
||||
|
||||
**Blocker Analysis**:
|
||||
```jql
|
||||
priority = Blocker AND status != Done
|
||||
```
|
||||
| Report | JQL |
|
||||
|---|---|
|
||||
| Sprint Report | `project = PROJ AND sprint = 23` |
|
||||
| Team Velocity | `assignee in (team) AND sprint in closedSprints() AND resolution = Done` |
|
||||
| Bug Trend | `type = Bug AND created >= -30d` |
|
||||
| Blocker Analysis | `priority = Blocker AND status != Done` |
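The report queries above can also live in code as a named-filter registry, so scripts and saved filters stay in sync. A minimal sketch; the registry shape is an assumption, and the JQL strings are copied from the table:

```python
REPORT_FILTERS = {
    "Sprint Report": "project = PROJ AND sprint = 23",
    "Team Velocity": "assignee in (team) AND sprint in closedSprints() AND resolution = Done",
    "Bug Trend": "type = Bug AND created >= -30d",
    "Blocker Analysis": "priority = Blocker AND status != Done",
}

def jql_for(report):
    """Look up the saved JQL for a named report."""
    return REPORT_FILTERS[report]

print(jql_for("Bug Trend"))  # → type = Bug AND created >= -30d
```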
|
||||
|
||||
## Decision Framework
|
||||
|
||||
@@ -282,35 +246,52 @@ priority = Blocker AND status != Done
|
||||
## Best Practices
|
||||
|
||||
**Data Quality**:
|
||||
- Enforce required fields
|
||||
- Use field validation
|
||||
- Regular cleanup of stale issues
|
||||
- Consistent naming conventions
|
||||
- Enforce required fields with field validation rules
|
||||
- Use consistent issue key naming conventions per project type
|
||||
- Schedule regular cleanup of stale/orphaned issues
|
||||
|
||||
**Performance**:
|
||||
- Optimize JQL queries
|
||||
- Limit dashboard gadgets
|
||||
- Use saved filters
|
||||
- Archive old projects
|
||||
- Avoid leading wildcards in `~` text searches (e.g. `summary ~ "*error"`); they are expensive on large text fields
|
||||
- Use saved filters instead of re-running complex JQL ad hoc
|
||||
- Limit dashboard gadgets to reduce page load time
|
||||
- Archive completed projects rather than deleting to preserve history
|
||||
|
||||
**Governance**:
|
||||
- Document workflow rationale
|
||||
- Version control for schemes
|
||||
- Change management for major updates
|
||||
- Regular permission audits
|
||||
- Document rationale for custom workflow states and transitions
|
||||
- Version-control permission/workflow schemes before making changes
|
||||
- Require change management review for org-wide scheme updates
|
||||
- Run permission audits after user role changes
|
||||
|
||||
## Atlassian MCP Integration
|
||||
|
||||
**Primary Tool**: Jira MCP Server
|
||||
|
||||
**Key Operations**:
|
||||
- Create and configure projects
|
||||
- Execute JQL queries for data extraction
|
||||
- Update issue fields and statuses
|
||||
- Create and manage sprints
|
||||
- Generate reports and dashboards
|
||||
- Configure workflows and automation
|
||||
- Manage boards and filters
|
||||
**Key Operations with Example Commands**:
|
||||
|
||||
Create a project:
|
||||
```
|
||||
mcp jira create_project --name "My Project" --key "MYPROJ" --type scrum --lead "user@example.com"
|
||||
```
|
||||
|
||||
Execute a JQL query:
|
||||
```
|
||||
mcp jira search_issues --jql "project = MYPROJ AND status != Done AND dueDate < now()" --maxResults 50
|
||||
```
|
||||
|
||||
Update an issue field:
|
||||
```
|
||||
mcp jira update_issue --issue "MYPROJ-42" --field "status" --value "In Progress"
|
||||
```
|
||||
|
||||
Create a sprint:
|
||||
```
|
||||
mcp jira create_sprint --board 10 --name "Sprint 5" --startDate "2024-06-01" --endDate "2024-06-14"
|
||||
```
|
||||
|
||||
Create a board filter:
|
||||
```
|
||||
mcp jira create_filter --name "Open Blockers" --jql "priority = Blocker AND status != Done" --shareWith "project-team"
|
||||
```
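When the MCP server is unavailable, the same searches can go straight to Jira's REST API; `/rest/api/2/search` is the standard Jira Cloud issue-search endpoint, and the URL builder below is an illustrative sketch:

```python
from urllib.parse import urlencode

def search_url(base_url, jql, max_results=50):
    """Build a Jira REST API v2 issue-search URL for a JQL query."""
    params = urlencode({"jql": jql, "maxResults": max_results})
    return f"{base_url}/rest/api/2/search?{params}"

# The JQL is percent-encoded, so it can be passed to any HTTP client as-is.
print(search_url("https://example.atlassian.net", "priority = Blocker AND status != Done"))
```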
|
||||
|
||||
**Integration Points**:
|
||||
- Pull metrics for Senior PM reporting
|
||||
|
||||
@@ -1,319 +0,0 @@
|
||||
# Project Management Team Skills Suite
|
||||
## World-Class Atlassian Expert Skills Collection
|
||||
|
||||
This suite contains **6 specialized, world-class expert skills** for your Project Management team stack. Each skill is a dedicated expert with deep domain knowledge, clear handoff protocols, and full integration with the Atlassian MCP Server.
|
||||
|
||||
---
|
||||
|
||||
## 📦 Included Skills
|
||||
|
||||
### 1. **Senior Project Management** (`senior-pm.zip`)
|
||||
**Role**: Strategic PM for Software, SaaS, and Digital Applications
|
||||
|
||||
**Core Capabilities**:
|
||||
- Portfolio management and strategic planning
|
||||
- Stakeholder alignment and executive reporting
|
||||
- Risk management and budget oversight
|
||||
- Cross-functional team leadership
|
||||
- Roadmap development
|
||||
|
||||
**When to Use**:
|
||||
- Strategic project planning
|
||||
- Portfolio-level decisions
|
||||
- Executive reporting
|
||||
- Risk management
|
||||
- Multi-project coordination
|
||||
|
||||
**Integrates With**: Scrum Master, Jira Expert, Confluence Expert
|
||||
|
||||
---
|
||||
|
||||
### 2. **Scrum Master** (`scrum-master.zip`)
|
||||
**Role**: Agile Facilitator for Software Development Teams
|
||||
|
||||
**Core Capabilities**:
|
||||
- Sprint planning and execution
|
||||
- Daily standups and retrospectives
|
||||
- Backlog refinement
|
||||
- Velocity tracking
|
||||
- Impediment removal
|
||||
- Team coaching on agile practices
|
||||
|
||||
**When to Use**:
|
||||
- Sprint ceremony facilitation
|
||||
- Agile coaching
|
||||
- Team performance tracking
|
||||
- Blocker resolution
|
||||
- Sprint reporting
|
||||
|
||||
**Integrates With**: Senior PM, Jira Expert, Confluence Expert
|
||||
|
||||
---
|
||||
|
||||
### 3. **Atlassian Jira Expert** (`jira-expert.zip`)
|
||||
**Role**: Jira Configuration, JQL, and Technical Operations Master
|
||||
|
||||
**Core Capabilities**:
|
||||
- Advanced JQL query writing
|
||||
- Project and workflow configuration
|
||||
- Custom fields and automation
|
||||
- Dashboards and reporting
|
||||
- Integration setup
|
||||
- Performance optimization
|
||||
|
||||
**When to Use**:
|
||||
- Jira project setup
|
||||
- Complex JQL queries
|
||||
- Workflow design
|
||||
- Dashboard creation
|
||||
- Automation rules
|
||||
- Technical Jira operations
|
||||
|
||||
**Integrates With**: All roles (provides Jira infrastructure)
|
||||
|
||||
---
|
||||
|
||||
### 4. **Atlassian Confluence Expert** (`confluence-expert.zip`)
|
||||
**Role**: Knowledge Management and Documentation Architecture Master
|
||||
|
||||
**Core Capabilities**:
|
||||
- Space architecture and organization
|
||||
- Page templates and macros
|
||||
- Documentation strategy
|
||||
- Content governance
|
||||
- Collaboration workflows
|
||||
- Integration with Jira
|
||||
|
||||
**When to Use**:
|
||||
- Documentation space setup
|
||||
- Template creation
|
||||
- Knowledge base architecture
|
||||
- Content organization
|
||||
- Macro implementation
|
||||
- Documentation governance
|
||||
|
||||
**Integrates With**: All roles (provides documentation infrastructure)
|
||||
|
||||
---
|
||||
|
||||
### 5. **Atlassian Administrator** (`atlassian-admin.zip`)
|
||||
**Role**: System Administrator for Atlassian Suite
|
||||
|
||||
**Core Capabilities**:
|
||||
- User provisioning and access management
|
||||
- Global configuration and governance
|
||||
- Security and compliance
|
||||
- SSO and integrations
|
||||
- Performance optimization
|
||||
- Disaster recovery
|
||||
|
||||
**When to Use**:
|
||||
- User management
|
||||
- Org-wide configuration
|
||||
- Security policies
|
||||
- System optimization
|
||||
- Compliance requirements
|
||||
- Integration deployment
|
||||
|
||||
**Integrates With**: All roles (provides system administration)
|
||||
|
||||
---
|
||||
|
||||
### 6. **Atlassian Template Creator** (`atlassian-templates.zip`)
|
||||
**Role**: Template and Files Creation/Modification Expert
|
||||
|
||||
**Core Capabilities**:
|
||||
- Confluence page template design
|
||||
- Jira issue template creation
|
||||
- Blueprint development
|
||||
- Standardized content structures
|
||||
- Template governance
|
||||
- Automation integration
|
||||
|
||||
**When to Use**:
|
||||
- Creating new templates
|
||||
- Modifying existing templates
|
||||
- Building blueprints
|
||||
- Standardizing content
|
||||
- Template deployment
|
||||
- Template maintenance
|
||||
|
||||
**Integrates With**: All roles (provides standardized templates)
|
||||
|
||||
---
|
||||
|
||||
## 🔄 Handoff & Communication Matrix
|
||||
|
||||
### Information Flow
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Senior PM │ ← Strategic oversight
|
||||
└────────┬────────┘
|
||||
│
|
||||
┌────┴─────┬──────────┬─────────────┐
|
||||
▼ ▼ ▼ ▼
|
||||
┌──────┐ ┌─────────┐ ┌──────────┐ ┌──────┐
|
||||
│Scrum │ │ Jira │ │Confluence│ │Admin │
|
||||
│Master│ │ Expert │ │ Expert │ │ │
|
||||
└──┬───┘ └────┬────┘ └─────┬────┘ └───┬──┘
|
||||
│ │ │ │
|
||||
│ └──────┬───────┴───────────┘
|
||||
│ ▼
|
||||
│ ┌─────────────────┐
|
||||
└────────▶│Template Creator │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### Handoff Protocols
|
||||
|
||||
**Senior PM → Scrum Master**:
|
||||
- Project scope and objectives
|
||||
- Initial backlog priorities
|
||||
- Team composition
|
||||
- Sprint cadence
|
||||
|
||||
**Senior PM → Jira Expert**:
|
||||
- Project structure requirements
|
||||
- Reporting needs
|
||||
- Integration requirements
|
||||
|
||||
**Scrum Master → Jira Expert**:
|
||||
- Sprint board configuration
|
||||
- Workflow optimization
|
||||
- Backlog filtering
|
||||
|
||||
**All Roles → Atlassian Admin**:
|
||||
- User access requests
|
||||
- Permission changes
|
||||
- App installations
|
||||
- System support
|
||||
|
||||
**All Roles → Template Creator**:
|
||||
- Template requirements
|
||||
- Standardization needs
|
||||
- Content structure requests
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start Guide
|
||||
|
||||
### Installation
|
||||
1. Upload each `.zip` file to Claude via the Skills interface
|
||||
2. Enable the skills you need
|
||||
3. Skills are immediately available
|
||||
|
||||
### Usage Pattern
|
||||
|
||||
**For Project Initiation**:
|
||||
1. Start with **Senior PM** for strategic planning
|
||||
2. Hand off to **Scrum Master** for sprint setup
|
||||
3. Use **Jira Expert** for project configuration
|
||||
4. Use **Confluence Expert** for documentation
|
||||
5. Use **Template Creator** for standardized content
|
||||
|
||||
**For Ongoing Operations**:
|
||||
- **Scrum Master**: Daily sprint management
|
||||
- **Jira Expert**: Technical queries and configuration
|
||||
- **Confluence Expert**: Documentation needs
|
||||
- **Senior PM**: Portfolio reviews and stakeholder reports
|
||||
|
||||
**For System Management**:
|
||||
- **Atlassian Admin**: User/permission management
|
||||
- **Template Creator**: Template updates and new templates
|
||||
|
||||
---
|
||||
|
||||
## ✨ Key Features
|
||||
|
||||
### ✅ World-Class Expertise
|
||||
Each skill contains deep, specialized knowledge in its domain
|
||||
|
||||
### ✅ Clear Handoffs
|
||||
Explicit protocols for collaboration between skills
|
||||
|
||||
### ✅ No Fluff
|
||||
Direct, actionable guidance without unnecessary verbosity
|
||||
|
||||
### ✅ MCP Integration
|
||||
All skills leverage Atlassian MCP Server for operations
|
||||
|
||||
### ✅ Current Best Practices
|
||||
Based on latest Atlassian features and industry standards
|
||||
|
||||
### ✅ Modular Design
|
||||
Use only the skills you need, when you need them
|
||||
|
||||
---
|
||||
|
||||
## 📊 Skill Complexity Matrix
|
||||
|
||||
| Skill | Complexity | Primary Users | Usage Frequency |
|
||||
|-------|-----------|---------------|-----------------|
|
||||
| Senior PM | Strategic | Leadership | Weekly |
|
||||
| Scrum Master | Operational | Teams | Daily |
|
||||
| Jira Expert | Technical | All | As needed |
|
||||
| Confluence Expert | Technical | All | As needed |
|
||||
| Atlassian Admin | System | Admins | As needed |
|
||||
| Template Creator | Creative | All | Monthly |
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Recommended Skill Combinations
|
||||
|
||||
**For Agile Teams**:
|
||||
- Scrum Master + Jira Expert + Confluence Expert
|
||||
|
||||
**For Project Management**:
|
||||
- Senior PM + Jira Expert + Confluence Expert
|
||||
|
||||
**For System Administration**:
|
||||
- Atlassian Admin + Jira Expert + Confluence Expert
|
||||
|
||||
**For Complete Stack**:
|
||||
- All 6 skills for comprehensive coverage
|
||||
|
||||
---
|
||||
|
||||
## 📝 Notes
|
||||
|
||||
- Each skill is **standalone** and can be used independently
|
||||
- Skills are designed to **communicate clearly** with explicit handoff protocols
|
||||
- All skills **respect the current context** and use updated information
|
||||
- Skills use **Atlassian MCP Server** for all operations
|
||||
- No duplication of responsibilities between skills
|
||||
|
||||
---
|
||||
|
||||
## 🔧 Maintenance
|
||||
|
||||
**Regular Reviews**:
|
||||
- Quarterly skill content reviews
|
||||
- Update for new Atlassian features
|
||||
- Incorporate user feedback
|
||||
- Optimize based on usage patterns
|
||||
|
||||
**Version Control**:
|
||||
- Each skill is versioned independently
|
||||
- Track updates in skill documentation
|
||||
- Maintain backward compatibility
|
||||
|
||||
---
|
||||
|
||||
## 📞 Support
|
||||
|
||||
Each skill includes:
|
||||
- Detailed workflows
|
||||
- Decision frameworks
|
||||
- Best practices
|
||||
- Examples and templates
|
||||
- Handoff protocols
|
||||
|
||||
For questions or improvements, engage with the appropriate skill directly in Claude.
|
||||
|
||||
---
|
||||
|
||||
**Created**: October 2025
|
||||
**Skills Version**: 1.0
|
||||
**Compatible With**: Atlassian Cloud, Atlassian MCP Server
|
||||
|
||||
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: "quality-manager-qms-iso13485"
|
||||
description: ISO 13485 Quality Management System implementation and maintenance for medical device organizations. Provides QMS design, documentation control, internal auditing, CAPA management, and certification support.
|
||||
description: ISO 13485 Quality Management System implementation and maintenance for medical device organizations. Provides QMS design, documentation control, internal auditing, CAPA management, and certification support. Use when working with medical device quality systems, preparing for ISO 13485 audits, managing regulatory compliance documentation, setting up corrective actions, or building audit preparation programs. Useful for quality management, audit preparation, regulatory compliance, medical device documentation, and corrective action workflows.
|
||||
triggers:
|
||||
- ISO 13485
|
||||
- QMS implementation
|
||||
@@ -52,45 +52,20 @@ Implement ISO 13485:2016 compliant quality management system from gap analysis t
|
||||
- QMS scope with justified exclusions
|
||||
- Process interactions
|
||||
- Procedure references
|
||||
6. Create required documented procedures:
|
||||
- Document control (4.2.3)
|
||||
- Record control (4.2.4)
|
||||
- Internal audit (8.2.4)
|
||||
- Nonconforming product (8.3)
|
||||
- Corrective action (8.5.2)
|
||||
- Preventive action (8.5.3)
|
||||
6. Create required documented procedures; see [Mandatory Documented Procedures](#quick-reference-mandatory-documented-procedures) for the full list
|
||||
7. Deploy processes with training
|
||||
8. **Validation:** Gap analysis complete; Quality Manual approved; all required procedures documented and trained
|
||||
|
||||
### Gap Analysis Matrix
|
||||
|
||||
| Clause | Requirement | Current State | Gap | Priority | Action |
|
||||
|--------|-------------|---------------|-----|----------|--------|
|
||||
| 4.2.2 | Quality Manual | Not documented | Major | High | Create QM |
|
||||
| 4.2.3 | Document control | Informal | Moderate | High | Formalize SOP |
|
||||
| 5.6 | Management review | Ad hoc | Major | High | Establish schedule |
|
||||
| 7.3 | Design control | Partial | Moderate | Medium | Complete procedures |
|
||||
| 8.2.4 | Internal audit | None | Major | High | Create program |
|
||||
> Use the Gap Analysis Matrix template in [qms-process-templates.md](references/qms-process-templates.md) to document clause-by-clause current state, gaps, priority, and actions.
|
||||
|
||||
### QMS Structure
|
||||
|
||||
| Level | Document Type | Purpose | Example |
|
||||
|-------|---------------|---------|---------|
|
||||
| 1 | Quality Manual | QMS overview, policy | QM-001 |
|
||||
| 2 | Procedures | How processes work | SOP-02-001 |
|
||||
| 3 | Work Instructions | Task-level detail | WI-06-012 |
|
||||
| 4 | Records | Evidence of conformity | Training records |
|
||||
|
||||
### Required Procedure List
|
||||
|
||||
| Clause | Procedure | Minimum Content |
|
||||
|--------|-----------|-----------------|
|
||||
| 4.2.3 | Document Control | Approval, review, distribution, obsolete control |
|
||||
| 4.2.4 | Record Control | Identification, storage, retention, disposal |
|
||||
| 8.2.4 | Internal Audit | Program, auditor qualification, reporting |
|
||||
| 8.3 | Nonconforming Product | Identification, segregation, disposition |
|
||||
| 8.5.2 | Corrective Action | Investigation, root cause, effectiveness |
|
||||
| 8.5.3 | Preventive Action | Risk identification, implementation, verification |
|
||||
| Level | Document Type | Example |
|
||||
|-------|---------------|---------|
|
||||
| 1 | Quality Manual | QM-001 |
|
||||
| 2 | Procedures | SOP-02-001 |
|
||||
| 3 | Work Instructions | WI-06-012 |
|
||||
| 4 | Records | Training records |
|
||||
|
||||
---
|
||||
|
||||
@@ -174,6 +149,8 @@ Plan and execute internal audits per ISO 13485 Clause 8.2.4.
|
||||
7. Track completion and reschedule as needed
|
||||
8. **Validation:** All processes covered; auditors qualified and independent; schedule approved
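The annual coverage the steps above call for can be drafted mechanically by spreading audits across quarters. A round-robin sketch, purely illustrative and not an ISO 13485 requirement; adjust assignments for risk and auditor availability:

```python
def draft_audit_schedule(processes, quarters=4):
    """Assign each process an audit quarter, round-robin, for an annual program."""
    return {proc: f"Q{i % quarters + 1}" for i, proc in enumerate(processes)}

plan = draft_audit_schedule(["Document Control", "Management Review",
                             "Design Control", "Production", "CAPA"])
print(plan["Document Control"])  # → Q1
```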
|
||||
|
||||
> Use the Audit Program Template in [qms-process-templates.md](references/qms-process-templates.md) to schedule audits by clause and quarter across processes such as Document Control (4.2.3/4.2.4), Management Review (5.6), Design Control (7.3), Production (7.5), and CAPA (8.5.2/8.5.3).
|
||||
|
||||
### Workflow: Individual Audit Execution
|
||||
|
||||
1. Prepare audit plan with scope, criteria, and schedule
|
||||
@@ -194,16 +171,6 @@ Plan and execute internal audits per ISO 13485 Clause 8.2.4.
|
||||
9. Issue audit report within 5 business days
|
||||
10. **Validation:** All checklist items addressed; findings supported by evidence; report distributed
|
||||
|
||||
### Audit Program Template
|
||||
|
||||
| Audit # | Process | Clauses | Q1 | Q2 | Q3 | Q4 | Auditor |
|
||||
|---------|---------|---------|----|----|----|----|---------|
|
||||
| IA-001 | Document Control | 4.2.3, 4.2.4 | X | | | | [Name] |
|
||||
| IA-002 | Management Review | 5.6 | | X | | | [Name] |
|
||||
| IA-003 | Design Control | 7.3 | | X | | | [Name] |
|
||||
| IA-004 | Production | 7.5 | | | X | | [Name] |
|
||||
| IA-005 | CAPA | 8.5.2, 8.5.3 | | | | X | [Name] |
|
||||
|
||||
### Auditor Qualification Requirements
|
||||
|
||||
| Criterion | Requirement |
|
||||
@@ -239,15 +206,9 @@ Validate special processes per ISO 13485 Clause 7.5.6.
|
||||
- Equipment and materials
|
||||
- Acceptance criteria
|
||||
- Statistical approach
|
||||
4. Execute Installation Qualification (IQ):
|
||||
- Verify equipment installed correctly
|
||||
- Document equipment specifications
|
||||
5. Execute Operational Qualification (OQ):
|
||||
- Test parameter ranges
|
||||
- Verify process control
|
||||
6. Execute Performance Qualification (PQ):
|
||||
- Run production conditions
|
||||
- Verify output meets requirements
|
||||
4. Execute IQ: verify equipment installed correctly and document specifications
|
||||
5. Execute OQ: test parameter ranges and verify process control
|
||||
6. Execute PQ: run production conditions and verify output meets requirements
|
||||
7. Write validation report with conclusions
|
||||
8. **Validation:** IQ/OQ/PQ complete; acceptance criteria met; validation report approved
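The gate in the validation step above can be expressed as a simple check: the report is only approvable once all three qualification stages have passed. Illustrative only; the stage names follow the IQ/OQ/PQ workflow:

```python
def validation_complete(stages):
    """True only when IQ, OQ, and PQ have all passed."""
    return all(stages.get(stage) is True for stage in ("IQ", "OQ", "PQ"))

print(validation_complete({"IQ": True, "OQ": True, "PQ": False}))  # → False
```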
|
||||
|
||||
@@ -344,26 +305,7 @@ Evaluate and approve suppliers per ISO 13485 Clause 7.4.
|
||||
|
||||
## QMS Process Reference
|
||||
|
||||
### ISO 13485 Clause Structure
|
||||
|
||||
| Clause | Title | Key Requirements |
|
||||
|--------|-------|-----------------|
|
||||
| 4.1 | General Requirements | Process identification, interaction, outsourcing |
|
||||
| 4.2 | Documentation | Quality Manual, procedures, records |
|
||||
| 5.1-5.5 | Management Responsibility | Commitment, policy, objectives, organization |
|
||||
| 5.6 | Management Review | Inputs, outputs, records |
|
||||
| 6.1-6.4 | Resource Management | Personnel, infrastructure, environment |
|
||||
| 7.1 | Product Realization Planning | Quality plan, risk management |
|
||||
| 7.2 | Customer Requirements | Determination, review, communication |
|
||||
| 7.3 | Design and Development | Planning, inputs, outputs, review, V&V, transfer, changes |
|
||||
| 7.4 | Purchasing | Supplier control, purchasing info, verification |
|
||||
| 7.5 | Production | Control, cleanliness, validation, identification, traceability |
|
||||
| 7.6 | Monitoring Equipment | Calibration, control |
|
||||
| 8.1 | Measurement Planning | Monitoring and analysis planning |
|
||||
| 8.2 | Monitoring | Feedback, complaints, reporting, audits, process, product |
|
||||
| 8.3 | Nonconforming Product | Control, disposition |
|
||||
| 8.4 | Data Analysis | Trend analysis |
|
||||
| 8.5 | Improvement | CAPA |
|
||||
For detailed requirements and audit questions for each ISO 13485:2016 clause, see [iso13485-clause-requirements.md](references/iso13485-clause-requirements.md).
|
||||
|
||||
### Management Review Required Inputs (Clause 5.6.2)
|
||||
|
||||
@@ -467,7 +409,7 @@ Nonconforming Product Identified
|
||||
| Document | Content |
|
||||
|----------|---------|
|
||||
| [iso13485-clause-requirements.md](references/iso13485-clause-requirements.md) | Detailed requirements for each ISO 13485:2016 clause with audit questions |
|
||||
| [qms-process-templates.md](references/qms-process-templates.md) | Ready-to-use templates for document control, audit, CAPA, supplier, training |
|
||||
| [qms-process-templates.md](references/qms-process-templates.md) | Ready-to-use templates for gap analysis, audit program, document control, CAPA, supplier, training |
|
||||
|
||||
### Quick Reference: Mandatory Documented Procedures
|
||||
|
||||
|
||||