fix: optimize 14 low-scoring skills via Tessl review (#290)
Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286.
---
name: "sales-engineer"
description: Analyzes RFP/RFI responses for coverage gaps, builds competitive feature comparison matrices, and plans proof-of-concept (POC) engagements for pre-sales engineering. Use when responding to RFPs, bids, or proposal requests; comparing product features against competitors; planning or scoring a customer POC or sales demo; preparing a technical proposal; or performing win/loss competitor analysis. Handles tasks described as 'RFP response', 'bid response', 'proposal response', 'competitor comparison', 'feature matrix', 'POC planning', 'sales demo prep', or 'pre-sales engineering'.
---

# Sales Engineer Skill

A production-ready skill package for pre-sales engineering that bridges technical expertise and sales execution. Provides automated analysis for RFP/RFI responses, competitive positioning, and proof-of-concept planning.

## Overview

**Role:** Sales Engineer / Solutions Architect
**Domain:** Pre-Sales Engineering, Solution Design, Technical Demos, Proof of Concepts
**Business Type:** SaaS / Pre-Sales Engineering

### What This Skill Does

- **RFP/RFI Response Analysis** - Score requirement coverage, identify gaps, generate bid/no-bid recommendations
- **Competitive Technical Positioning** - Build feature comparison matrices, identify differentiators and vulnerabilities
- **POC Planning** - Generate timelines, resource plans, success criteria, and evaluation scorecards
- **Demo Preparation** - Structure demo scripts with talking points and objection handling
- **Technical Proposal Creation** - Framework for solution architecture and implementation planning
- **Win/Loss Analysis** - Data-driven competitive assessment for deal strategy

### Key Metrics

| Metric | Description | Target |
|--------|-------------|--------|
| Win Rate | Deals won / total opportunities | >30% |
| Sales Cycle Length | Average days from discovery to close | <90 days |
| POC Conversion Rate | POCs resulting in closed deals | >60% |
| Customer Engagement Score | Stakeholder participation in evaluation | >75% |
| RFP Coverage Score | Requirements fully addressed | >80% |

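To make the targets concrete, the funnel metrics in the table reduce to simple ratios. A minimal sketch, not part of the skill package; the deal fields (`won`, `ran_poc`, `cycle_days`) are hypothetical names chosen for illustration:

```python
def pipeline_metrics(deals):
    """Compute the key funnel metrics from a list of deal dicts.

    Each deal is assumed to carry hypothetical fields: 'won' (bool),
    'ran_poc' (bool), and 'cycle_days' (int).
    """
    total = len(deals)
    won = [d for d in deals if d["won"]]
    pocs = [d for d in deals if d["ran_poc"]]
    return {
        # Deals won / total opportunities (target >30%)
        "win_rate": len(won) / total if total else 0.0,
        # Average days from discovery to close (target <90)
        "avg_cycle_days": sum(d["cycle_days"] for d in deals) / total if total else 0.0,
        # POCs resulting in closed deals (target >60%)
        "poc_conversion": sum(1 for d in pocs if d["won"]) / len(pocs) if pocs else 0.0,
    }
```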
## 5-Phase Workflow

### Phase 1: Discovery & Research

**Objective:** Understand customer requirements, technical environment, and business drivers.

**Checklist:**
- [ ] Conduct technical discovery calls with stakeholders
- [ ] Map customer's current architecture and pain points
- [ ] Identify integration requirements and constraints
- [ ] Document security and compliance requirements
- [ ] Assess competitive landscape for this opportunity

**Tools:** Run `rfp_response_analyzer.py` to score initial requirement alignment.

```bash
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json > phase1_rfp_results.json
```

**Output:** Technical discovery document, requirement map, initial coverage assessment.

**Validation checkpoint:** Coverage score must be >50% and must-have gaps ≤3 before proceeding to Phase 2. Check with:

```bash
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json | python -c "import sys,json; r=json.load(sys.stdin); print('PROCEED' if r['coverage_score']>50 and r['must_have_gaps']<=3 else 'REVIEW')"
```

---

### Phase 2: Solution Design

**Objective:** Design a solution architecture that addresses customer requirements.

**Checklist:**
- [ ] Map product capabilities to customer requirements
- [ ] Design integration architecture
- [ ] Identify customization needs and development effort
- [ ] Build competitive differentiation strategy
- [ ] Create solution architecture diagrams

**Tools:** Run `competitive_matrix_builder.py` using Phase 1 data to identify differentiators and vulnerabilities.

```bash
python scripts/competitive_matrix_builder.py competitive_data.json --format json > phase2_competitive.json

python -c "import json; d=json.load(open('phase2_competitive.json')); print('Differentiators:', d['differentiators']); print('Vulnerabilities:', d['vulnerabilities'])"
```

**Output:** Solution architecture, competitive positioning, technical differentiation strategy.

**Validation checkpoint:** Confirm at least one strong differentiator exists per customer priority before proceeding to Phase 3. If no differentiators are found, escalate to the Product Team (see Integration Points).

---

### Phase 3: Demo Preparation & Delivery

**Objective:** Deliver compelling technical demonstrations tailored to stakeholder priorities.

**Checklist:**
- [ ] Build demo environment matching customer's use case
- [ ] Create demo script with talking points per stakeholder role
- [ ] Prepare objection handling responses
- [ ] Rehearse failure scenarios and recovery paths
- [ ] Collect feedback and adjust approach

**Templates:** Use `assets/demo_script_template.md` for structured demo preparation.

**Output:** Customized demo, stakeholder-specific talking points, feedback capture.

**Validation checkpoint:** The demo script must cover every must-have requirement flagged in `phase1_rfp_results.json` before delivery. Cross-reference with:

```bash
python -c "import json; rfp=json.load(open('phase1_rfp_results.json')); [print('UNCOVERED:', r) for r in rfp['must_have_requirements'] if r['coverage']=='Gap']"
```

---

### Phase 4: POC & Evaluation

**Objective:** Execute a structured proof-of-concept that validates the solution.

**Checklist:**
- [ ] Define POC scope, success criteria, and timeline
- [ ] Allocate resources and set up environment
- [ ] Execute phased testing (core, advanced, edge cases)
- [ ] Track progress against success criteria
- [ ] Generate evaluation scorecard

**Tools:** Run `poc_planner.py` to generate the complete POC plan.

```bash
python scripts/poc_planner.py poc_data.json --format json > phase4_poc_plan.json

python -c "import json; p=json.load(open('phase4_poc_plan.json')); print('Go/No-Go:', p['recommendation'])"
```

**Templates:** Use `assets/poc_scorecard_template.md` for evaluation tracking.

**Output:** POC plan, evaluation scorecard, go/no-go recommendation.

**Validation checkpoint:** POC conversion requires a scorecard score >60% across all evaluation dimensions (functionality, performance, integration, usability, support). If the score is <60%, document the gaps and loop back to Phase 2 for solution redesign.
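The scorecard gate in this checkpoint can be sketched as a small helper. This is a hypothetical illustration, not one of the bundled scripts; the dimension names follow the checkpoint text:

```python
POC_DIMENSIONS = ["functionality", "performance", "integration", "usability", "support"]


def poc_gate(scorecard, threshold=0.60):
    """Return ('GO', []) only if every evaluation dimension clears the threshold.

    scorecard: dict mapping dimension name -> score in [0, 1].
    Dimensions scoring at or below the threshold are returned as gaps.
    """
    failing = [d for d in POC_DIMENSIONS if scorecard.get(d, 0.0) <= threshold]
    return ("GO", []) if not failing else ("NO-GO", failing)
```

Any `NO-GO` result lists the failing dimensions, which map directly onto the "document the gaps and loop back to Phase 2" instruction.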

---

### Phase 5: Proposal & Closing

**Objective:** Deliver a technical proposal that supports the commercial close.

**Checklist:**
- [ ] Compile POC results and success metrics
- [ ] Create technical proposal with implementation plan
- [ ] Address outstanding objections with evidence
- [ ] Support pricing and packaging discussions
- [ ] Conduct win/loss analysis post-decision

**Templates:** Use `assets/technical_proposal_template.md` for the proposal document.

**Output:** Technical proposal, implementation timeline, risk mitigation plan.

---

## Python Automation Tools

### 1. RFP Response Analyzer

**Script:** `scripts/rfp_response_analyzer.py`

**Purpose:** Parse RFP/RFI requirements, score coverage, identify gaps, and generate bid/no-bid recommendations.

**Coverage Categories:** Full (100%), Partial (50%), Planned (25%), Gap (0%).

**Priority Weighting:** Must-Have 3×, Should-Have 2×, Nice-to-Have 1×.

**Bid/No-Bid Logic:**
- **Bid:** Coverage >70% AND must-have gaps ≤3
- **Conditional Bid:** Coverage 50–70% OR must-have gaps 2–3
- **No-Bid:** Coverage <50% OR must-have gaps >3
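The weighting and decision rules above can be sketched as a small helper. This is a hypothetical illustration, not the bundled analyzer, and it resolves one ambiguity in the rules (a >70% score with 2–3 must-have gaps is treated here as a Conditional Bid); the field names are assumptions:

```python
WEIGHTS = {"must-have": 3, "should-have": 2, "nice-to-have": 1}
COVERAGE = {"Full": 1.0, "Partial": 0.5, "Planned": 0.25, "Gap": 0.0}


def bid_decision(requirements):
    """Score weighted coverage and apply the bid/no-bid rules.

    requirements: list of dicts with 'priority' and 'coverage' keys
    (hypothetical schema; the real one is in assets/sample_rfp_data.json).
    """
    total = sum(WEIGHTS[r["priority"]] for r in requirements)
    earned = sum(WEIGHTS[r["priority"]] * COVERAGE[r["coverage"]] for r in requirements)
    score = 100 * earned / total if total else 0.0
    gaps = sum(1 for r in requirements
               if r["priority"] == "must-have" and r["coverage"] == "Gap")
    if score < 50 or gaps > 3:
        return "No-Bid", score
    if score > 70 and gaps <= 1:
        return "Bid", score
    return "Conditional Bid", score
```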

**Usage:**

```bash
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json # human-readable
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json # JSON output
python scripts/rfp_response_analyzer.py --help
```

**Input Format:** See `assets/sample_rfp_data.json` for the complete schema.

---

### 2. Competitive Matrix Builder

**Script:** `scripts/competitive_matrix_builder.py`

**Purpose:** Generate feature comparison matrices, calculate competitive scores, and identify differentiators and vulnerabilities.

**Feature Scoring:** Full (3), Partial (2), Limited (1), None (0).
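On this scoring scale, differentiators and vulnerabilities fall out of a simple per-feature comparison. A minimal sketch, assuming a hypothetical matrix shape (the bundled script's actual schema lives in `competitive_data.json`):

```python
FEATURE_SCORES = {"Full": 3, "Partial": 2, "Limited": 1, "None": 0}


def compare_features(matrix, us="our_product"):
    """Split a feature matrix into differentiators and vulnerabilities.

    matrix: dict mapping feature name -> {product name -> score label}.
    A feature where we outscore every competitor is a differentiator;
    one where any competitor outscores us is a vulnerability.
    """
    differentiators, vulnerabilities = [], []
    for feature, scores in matrix.items():
        ours = FEATURE_SCORES[scores[us]]
        best_rival = max(FEATURE_SCORES[v] for k, v in scores.items() if k != us)
        if ours > best_rival:
            differentiators.append(feature)
        elif ours < best_rival:
            vulnerabilities.append(feature)
    return {"differentiators": differentiators, "vulnerabilities": vulnerabilities}
```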

**Usage:**

```bash
python scripts/competitive_matrix_builder.py competitive_data.json # human-readable
python scripts/competitive_matrix_builder.py competitive_data.json --format json # JSON output
```

**Output Includes:** Feature comparison matrix, weighted competitive scores, differentiators, vulnerabilities, and win themes.

---

### 3. POC Planner

**Script:** `scripts/poc_planner.py`

**Purpose:** Generate structured POC plans with timeline, resource allocation, success criteria, and evaluation scorecards.

**Default Phase Breakdown:**
- **Week 1:** Setup — environment provisioning, data migration, configuration
- **Weeks 2–3:** Core Testing — primary use cases, integration testing
- **Week 4:** Advanced Testing — edge cases, performance, security
- **Week 5:** Evaluation — scorecard completion, stakeholder review, go/no-go

**Usage:**

```bash
python scripts/poc_planner.py poc_data.json # human-readable
python scripts/poc_planner.py poc_data.json --format json # JSON output
```

**Output Includes:** Phased POC plan, resource allocation, success criteria, evaluation scorecard, risk register, and go/no-go recommendation framework.

---

## Reference Knowledge Bases

| File | Description |
|------|-------------|
| `assets/sample_rfp_data.json` | Sample RFP data for testing the analyzer |
| `assets/expected_output.json` | Expected output from `rfp_response_analyzer.py` |

## Communication Style

- **Technical yet accessible** - Translate complex concepts for business stakeholders
- **Confident and consultative** - Position as trusted advisor, not vendor
- **Evidence-based** - Back every claim with data, demos, or case studies
- **Stakeholder-aware** - Tailor depth and focus to audience (CTO vs. end user vs. procurement)

## Integration Points

- **Marketing Skills** - Leverage competitive intelligence and messaging frameworks from `../../marketing-skill/`

---
name: "c-level-advisor"
description: "Provides strategic business advice by channelling the perspectives of 10 executive roles — CEO, CTO, COO, CPO, CMO, CFO, CRO, CISO, CHRO, and Executive Mentor — across decisions, trade-offs, and org challenges. Runs multi-role board meetings, routes questions to the right executive voice, and delivers structured recommendations (Bottom Line → What → Why → How to Act → Your Decision). Use when a founder or executive needs business strategy advice, leadership perspective, executive decision support, board-level input, fundraising guidance, product-market fit review, hiring or culture frameworks, risk assessment, or competitive analysis."
license: MIT
metadata:
  version: 2.0.0
---

A complete virtual board of directors for founders and executives.

```
1. Run /cs:setup → creates company-context.md (all agents read this)
   ✓ Verify company-context.md was created and contains your company name,
     stage, and core metrics before proceeding.
2. Ask any strategic question → Chief of Staff routes to the right role
3. For big decisions → /cs:board triggers a multi-role board meeting
   ✓ Confirm at least 3 roles have weighed in before accepting a conclusion.
```

### Commands

#### `/cs:setup` — Onboarding Questionnaire

Walks through the following prompts and writes `company-context.md` to the project root. Run once per company, or again when context changes significantly.

```
Q1. What is your company name and one-line description?
Q2. What stage are you at? (Idea / Pre-seed / Seed / Series A / Series B+)
Q3. What is your current ARR (or MRR) and runway in months?
Q4. What is your team size and structure?
Q5. What industry and customer segment do you serve?
Q6. What are your top 3 priorities for the next 90 days?
Q7. What is your biggest current risk or blocker?
```

After collecting answers, the agent writes structured output:

```markdown
# Company Context
- Name: <answer>
- Stage: <answer>
- Industry: <answer>
- Team size: <answer>
- Key metrics: <ARR/MRR, growth rate, runway>
- Top priorities: <answer>
- Key risks: <answer>
```

#### `/cs:board` — Full Board Meeting

Convenes all relevant executive roles in three phases:

```
Phase 1 — Framing: Chief of Staff states the decision and success criteria.
Phase 2 — Isolation: Each role produces independent analysis (no cross-talk).
Phase 3 — Debate: Roles surface conflicts, stress-test assumptions, align on
          a recommendation. Dissenting views are preserved in the log.
```

Use for high-stakes or cross-functional decisions. Confirm at least 3 roles have weighed in before accepting a conclusion.

### Chief of Staff Routing Matrix

When a question arrives without a role prefix, the Chief of Staff maps it to the appropriate executive using these primary signals:

| Topic Signal | Primary Role | Supporting Roles |
|---|---|---|
| Fundraising, valuation, burn | CFO | CEO, CRO |
| Architecture, build vs. buy, tech debt | CTO | CPO, CISO |
| Hiring, culture, performance | CHRO | CEO, Executive Mentor |
| GTM, demand gen, positioning | CMO | CRO, CPO |
| Revenue, pipeline, sales motion | CRO | CMO, CFO |
| Security, compliance, risk | CISO | CTO, CFO |
| Product roadmap, prioritisation | CPO | CTO, CMO |
| Ops, process, scaling | COO | CFO, CHRO |
| Vision, strategy, investor relations | CEO | Executive Mentor |
| Career, founder psychology, leadership | Executive Mentor | CEO, CHRO |
| Multi-domain / unclear | Chief of Staff convenes board | All relevant roles |
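The routing matrix behaves like a keyword lookup with a fall-through to the full board. A toy sketch, purely illustrative: the keyword sets are assumptions, and in the skill itself the Chief of Staff agent performs this routing, not code:

```python
# Illustrative subset of the routing matrix; signals are hypothetical keywords.
ROUTES = [
    ({"fundraising", "valuation", "burn"}, "CFO"),
    ({"architecture", "tech debt", "build vs. buy"}, "CTO"),
    ({"hiring", "culture", "performance"}, "CHRO"),
    ({"security", "compliance", "risk"}, "CISO"),
]


def route(question):
    """Return the single matching primary role, or convene the board.

    Multi-domain questions (more than one signal match) and unclear
    questions (no match) both fall through to the Chief of Staff.
    """
    q = question.lower()
    hits = [role for signals, role in ROUTES if any(s in q for s in signals)]
    if len(hits) == 1:
        return hits[0]
    return "Chief of Staff convenes board"
```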

### Invoking a Specific Role Directly

To bypass Chief of Staff routing and address one executive directly, prefix your question with the role name:

```
CFO: What is our optimal burn rate heading into a Series A?
CTO: Should we rebuild our auth layer in-house or buy a solution?
CHRO: How do we design a performance review process for a 15-person team?
```

The Chief of Staff still logs the exchange; only routing is skipped.

### Example: Strategic Question

**Input:** "Should we raise a Series A now or extend runway and grow ARR first?"

**Output format:**
- **Bottom Line:** Extend runway 6 months; raise at $2M ARR for better terms.
- **What:** Current $800K ARR is below the threshold most Series A investors benchmark.
- **Why:** Raising now increases dilution risk; a 6-month extension is achievable at the current burn.
- **How to Act:** Cut 2 low-ROI channels, hit $2M ARR, then run a 6-week fundraise sprint.
- **Your Decision:** Proceed with extension / Raise now anyway (choose one).

### Example: company-context.md (after /cs:setup)

```markdown
# Company Context
- Name: Acme Inc.
- Stage: Seed ($800K ARR)
- Industry: B2B SaaS
- Team size: 12
- Key metrics: 15% MoM growth, 18-month runway
- Top priorities: Series A readiness, enterprise GTM
```

## What's Included

---
name: "senior-data-scientist"
description: World-class senior data scientist skill specialising in statistical modeling, experiment design, causal inference, and predictive analytics. Covers A/B testing (sample sizing, two-proportion z-tests, Bonferroni correction), difference-in-differences, feature engineering pipelines (Scikit-learn, XGBoost), cross-validated model evaluation (AUC-ROC, AUC-PR, SHAP), and MLflow experiment tracking — using Python (NumPy, Pandas, Scikit-learn), R, and SQL. Use when designing or analysing controlled experiments, building and evaluating classification or regression models, performing causal analysis on observational data, engineering features for structured tabular datasets, or translating statistical findings into data-driven business decisions.
---

# Senior Data Scientist

World-class senior data scientist skill for production-grade AI/ML/Data systems.

## Core Workflows

### 1. Design an A/B Test

```python
import numpy as np
from scipy import stats


def calculate_sample_size(baseline_rate, mde, alpha=0.05, power=0.8):
    """
    Calculate required sample size per variant.
    baseline_rate: current conversion rate (e.g. 0.10)
    mde: minimum detectable effect (relative, e.g. 0.05 = 5% lift)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)
    effect_size = abs(p2 - p1) / np.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    n = ((z_alpha + z_beta) / effect_size) ** 2
    return int(np.ceil(n))


def analyze_experiment(control, treatment, alpha=0.05):
    """
    Run a two-proportion z-test and return structured results.
    control/treatment: dicts with 'conversions' and 'visitors'.
    """
    p_c = control["conversions"] / control["visitors"]
    p_t = treatment["conversions"] / treatment["visitors"]
    pooled = (control["conversions"] + treatment["conversions"]) / (control["visitors"] + treatment["visitors"])
    se = np.sqrt(pooled * (1 - pooled) * (1 / control["visitors"] + 1 / treatment["visitors"]))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    ci_low = (p_t - p_c) - stats.norm.ppf(1 - alpha / 2) * se
    ci_high = (p_t - p_c) + stats.norm.ppf(1 - alpha / 2) * se
    return {
        "lift": (p_t - p_c) / p_c,
        "p_value": p_value,
        "significant": p_value < alpha,
        "ci_95": (ci_low, ci_high),
    }


# --- Experiment checklist ---
# 1. Define ONE primary metric and pre-register secondary metrics.
# 2. Calculate sample size BEFORE starting: calculate_sample_size(0.10, 0.05)
# 3. Randomise at the user (not session) level to avoid leakage.
# 4. Run for at least 1 full business cycle (typically 2 weeks).
# 5. Check for sample ratio mismatch: abs(n_control - n_treatment) / expected < 0.01
# 6. Analyze with analyze_experiment() and report lift + CI, not just p-value.
# 7. Apply Bonferroni correction if testing multiple metrics: alpha / n_metrics
```

### 2. Build a Feature Engineering Pipeline

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler


def build_feature_pipeline(numeric_cols, categorical_cols, date_cols=None):
    """
    Returns a fit-ready ColumnTransformer for structured tabular data.
    """
    numeric_pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ])
    categorical_pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore", sparse_output=False)),
    ])
    transformers = [
        ("num", numeric_pipeline, numeric_cols),
        ("cat", categorical_pipeline, categorical_cols),
    ]
    return ColumnTransformer(transformers, remainder="drop")


def add_time_features(df, date_col):
    """Extract cyclical and weekend features from a datetime column."""
    df = df.copy()
    df[date_col] = pd.to_datetime(df[date_col])
    df["dow_sin"] = np.sin(2 * np.pi * df[date_col].dt.dayofweek / 7)
    df["dow_cos"] = np.cos(2 * np.pi * df[date_col].dt.dayofweek / 7)
    df["month_sin"] = np.sin(2 * np.pi * df[date_col].dt.month / 12)
    df["month_cos"] = np.cos(2 * np.pi * df[date_col].dt.month / 12)
    df["is_weekend"] = (df[date_col].dt.dayofweek >= 5).astype(int)
    return df


# --- Feature engineering checklist ---
# 1. Never fit transformers on the full dataset — fit on train, transform test.
# 2. Log-transform right-skewed numeric features before scaling.
# 3. For high-cardinality categoricals (>50 levels), use target encoding or embeddings.
# 4. Generate lag/rolling features BEFORE the train/test split to avoid leakage.
# 5. Document each feature's business meaning alongside its code.
```

### 3. Train, Evaluate, and Select a Prediction Model

```python
import mlflow
import xgboost as xgb
from sklearn.metrics import average_precision_score, make_scorer, roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_validate

SCORERS = {
    "roc_auc": make_scorer(roc_auc_score, needs_proba=True),
    "avg_prec": make_scorer(average_precision_score, needs_proba=True),
}


def evaluate_model(model, X, y, cv=5):
    """
    Cross-validate and return mean ± std for each scorer.
    Use StratifiedKFold for classification to preserve class balance.
    """
    cv_results = cross_validate(
        model, X, y,
        cv=StratifiedKFold(n_splits=cv, shuffle=True, random_state=42),
        scoring=SCORERS,
        return_train_score=True,
    )
    summary = {}
    for metric in SCORERS:
        test_scores = cv_results[f"test_{metric}"]
        summary[metric] = {"mean": test_scores.mean(), "std": test_scores.std()}
        # Flag overfitting: large gap between train and test score
        train_mean = cv_results[f"train_{metric}"].mean()
        summary[metric]["overfit_gap"] = train_mean - test_scores.mean()
    return summary


def train_and_log(model, X_train, y_train, X_test, y_test, run_name):
    """Train the model and log all artefacts to MLflow."""
    with mlflow.start_run(run_name=run_name):
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_test)[:, 1]
        metrics = {
            "roc_auc": roc_auc_score(y_test, proba),
            "avg_prec": average_precision_score(y_test, proba),
        }
        mlflow.log_params(model.get_params())
        mlflow.log_metrics(metrics)
        mlflow.sklearn.log_model(model, "model")
    return metrics


# --- Model evaluation checklist ---
# 1. Always report AUC-PR alongside AUC-ROC for imbalanced datasets.
# 2. Check overfit_gap > 0.05 as a warning sign of overfitting.
# 3. Calibrate probabilities (Platt scaling / isotonic) before production use.
# 4. Compute SHAP values to validate feature importance makes business sense.
# 5. Run a baseline (e.g. DummyClassifier) and verify the model beats it.
# 6. Log every run to MLflow — never rely on notebook output for comparison.
```

### 4. Causal Inference: Difference-in-Differences

```python
import statsmodels.formula.api as smf


def diff_in_diff(df, outcome, treatment_col, post_col, controls=None):
    """
    Estimate the ATT via an OLS DiD regression with optional covariates.
    df must have: outcome, treatment_col (0/1), post_col (0/1).
    Returns the interaction coefficient (treatment × post) and its p-value.
    """
    covariates = " + ".join(controls) if controls else ""
    formula = (
        f"{outcome} ~ {treatment_col} * {post_col}"
        + (f" + {covariates}" if covariates else "")
    )
    result = smf.ols(formula, data=df).fit(cov_type="HC3")
    interaction = f"{treatment_col}:{post_col}"
    return {
        "att": result.params[interaction],
        "p_value": result.pvalues[interaction],
        "ci_95": result.conf_int().loc[interaction].tolist(),
        "summary": result.summary(),
    }


# --- Causal inference checklist ---
# 1. Validate parallel trends in the pre-period before trusting DiD estimates.
# 2. Use HC3 robust standard errors to handle heteroskedasticity.
# 3. For panel data, cluster SEs at the unit level:
#    fit(cov_type="cluster", cov_kwds={"groups": df["unit_id"]})
# 4. Consider propensity score matching if groups differ at baseline.
# 5. Report the ATT with its confidence interval, not just statistical significance.
```
|
||||
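The regression `diff_in_diff` fits is the standard two-way interaction model; with treatment indicator $D_i$ and period indicator $\mathrm{Post}_t$:

```latex
Y_{it} = \beta_0 + \beta_1 D_i + \beta_2\,\mathrm{Post}_t
       + \beta_3\,(D_i \times \mathrm{Post}_t) + X_{it}\gamma + \varepsilon_{it}
```

The ATT the function returns is $\hat{\beta}_3$, the coefficient on the `treatment:post` interaction; $\gamma$ covers the optional `controls`.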
## Reference Documentation

### 1. Statistical Methods Advanced

Comprehensive guide available in `references/statistical_methods_advanced.md` covering:

- Advanced patterns and best practices
- Production implementation strategies
- Performance optimization techniques
- Scalability considerations
- Security and compliance
- Real-world case studies

### 2. Experiment Design Frameworks

Complete workflow documentation in `references/experiment_design_frameworks.md` including:

- Step-by-step processes
- Architecture design patterns
- Tool integration guides
- Performance tuning strategies
- Troubleshooting procedures

### 3. Feature Engineering Patterns

Technical reference guide in `references/feature_engineering_patterns.md` with:

- System design principles
- Implementation examples
- Configuration best practices
- Deployment strategies
- Monitoring and observability

## Production Patterns

### Pattern 1: Scalable Data Processing

Enterprise-scale data processing with distributed computing:

- Horizontal scaling architecture
- Fault-tolerant design
- Real-time and batch processing
- Data quality validation
- Performance monitoring

### Pattern 2: ML Model Deployment

Production ML system with high availability:

- Model serving with low latency
- A/B testing infrastructure
- Feature store integration
- Model monitoring and drift detection
- Automated retraining pipelines

### Pattern 3: Real-Time Inference

High-throughput inference system:

- Batching and caching strategies
- Load balancing
- Auto-scaling
- Latency optimization
- Cost optimization

## Best Practices

### Development

- Test-driven development
- Code reviews and pair programming
- Documentation as code
- Version control everything
- Continuous integration

### Production

- Monitor everything critical
- Automate deployments
- Feature flags for releases
- Canary deployments
- Comprehensive logging

### Team Leadership

- Mentor junior engineers
- Drive technical decisions
- Establish coding standards
- Foster learning culture
- Cross-functional collaboration

## Performance Targets

**Latency:**
- P50: < 50ms
- P95: < 100ms
- P99: < 200ms

**Throughput:**
- Requests/second: > 1000
- Concurrent users: > 10,000

**Availability:**
- Uptime: 99.9%
- Error rate: < 0.1%
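When verifying the latency targets above against raw request timings, a nearest-rank percentile is enough (standard library only; the sample timings are hypothetical):

```python
import math


def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    xs = sorted(samples)
    k = math.ceil(p / 100 * len(xs)) - 1  # nearest-rank index
    return xs[max(k, 0)]


latencies_ms = [12, 18, 20, 21, 25, 33, 41, 48, 95, 180]  # hypothetical timings
print(percentile(latencies_ms, 50), percentile(latencies_ms, 95))  # → 25 180
```

Nearest-rank always returns an observed sample, which keeps P99 honest on small windows instead of interpolating a latency that never occurred.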

## Security & Compliance

- Authentication & authorization
- Data encryption (at rest & in transit)
- PII handling and anonymization
- GDPR/CCPA compliance
- Regular security audits
- Vulnerability management

## Common Commands

```bash
# Testing & linting
python -m pytest tests/ -v --cov=src/
python -m black src/ && python -m pylint src/

# Training & evaluation
python scripts/train.py --config prod.yaml
python scripts/evaluate.py --model best.pth

@@ -179,48 +217,7 @@ docker build -t service:v1 .
kubectl apply -f k8s/
helm upgrade service ./charts/

# Monitoring & health
kubectl logs -f deployment/service
python scripts/health_check.py
```

## Resources

- Advanced Patterns: `references/statistical_methods_advanced.md`
- Implementation Guide: `references/experiment_design_frameworks.md`
- Technical Reference: `references/feature_engineering_patterns.md`
- Automation Scripts: `scripts/` directory

## Senior-Level Responsibilities

As a world-class senior professional:

1. **Technical Leadership**
   - Drive architectural decisions
   - Mentor team members
   - Establish best practices
   - Ensure code quality

2. **Strategic Thinking**
   - Align with business goals
   - Evaluate trade-offs
   - Plan for scale
   - Manage technical debt

3. **Collaboration**
   - Work across teams
   - Communicate effectively
   - Build consensus
   - Share knowledge

4. **Innovation**
   - Stay current with research
   - Experiment with new approaches
   - Contribute to community
   - Drive continuous improvement

5. **Production Excellence**
   - Ensure high availability
   - Monitor proactively
   - Optimize performance
   - Respond to incidents

@@ -14,196 +14,262 @@ Complete toolkit for senior devops with modern tools and best practices.

This skill provides three core capabilities through automated scripts:

```bash
# Script 1: Pipeline Generator — scaffolds CI/CD pipelines for GitHub Actions or CircleCI
python scripts/pipeline_generator.py ./app --platform=github --stages=build,test,deploy

# Script 2: Terraform Scaffolder — generates and validates IaC modules for AWS/GCP/Azure
python scripts/terraform_scaffolder.py ./infra --provider=aws --module=ecs-service --verbose

# Script 3: Deployment Manager — orchestrates container deployments with rollback support
python scripts/deployment_manager.py deploy --env=production --image=app:1.2.3 --strategy=blue-green
```

## Core Capabilities

### 1. Pipeline Generator

Scaffolds CI/CD pipeline configurations for GitHub Actions or CircleCI, with stages for build, test, security scan, and deploy.

**Example — GitHub Actions workflow:**

```yaml
# .github/workflows/ci.yml
name: CI/CD Pipeline
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v4

  build-docker:
    needs: build-and-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          push: ${{ github.ref == 'refs/heads/main' }}
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}

  deploy:
    needs: build-docker
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to ECS
        run: |
          aws ecs update-service \
            --cluster production \
            --service app-service \
            --force-new-deployment
```

**Usage:**
```bash
python scripts/pipeline_generator.py <project-path> --platform=github|circleci --stages=build,test,deploy
```

### 2. Terraform Scaffolder

Generates, validates, and plans Terraform modules. Enforces consistent module structure and runs `terraform validate` + `terraform plan` before any apply.

**Example — AWS ECS service module:**

```hcl
# modules/ecs-service/main.tf
resource "aws_ecs_task_definition" "app" {
  family                   = var.service_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.cpu
  memory                   = var.memory

  container_definitions = jsonencode([{
    name      = var.service_name
    image     = var.container_image
    essential = true
    portMappings = [{
      containerPort = var.container_port
      protocol      = "tcp"
    }]
    environment = [for k, v in var.env_vars : { name = k, value = v }]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        awslogs-group         = "/ecs/${var.service_name}"
        awslogs-region        = var.aws_region
        awslogs-stream-prefix = "ecs"
      }
    }
  }])
}

resource "aws_ecs_service" "app" {
  name            = var.service_name
  cluster         = var.cluster_id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = var.desired_count
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.private_subnet_ids
    security_groups  = [aws_security_group.app.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = var.service_name
    container_port   = var.container_port
  }
}
```

**Usage:**
```bash
python scripts/terraform_scaffolder.py <target-path> --provider=aws|gcp|azure --module=ecs-service|gke-deployment|aks-service [--verbose]
```

### 3. Deployment Manager

Orchestrates deployments with blue/green or rolling strategies, health-check gates, and automatic rollback on failure.

**Example — Kubernetes blue/green deployment (blue-slot manifest):**

```yaml
# k8s/deployment-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
  labels:
    app: myapp
    slot: blue  # slot label distinguishes blue from green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      slot: blue
  template:
    metadata:
      labels:
        app: myapp
        slot: blue
    spec:
      containers:
        - name: app
          image: ghcr.io/org/app:1.2.3
          readinessProbe:  # gate: pod must pass before traffic switches
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```

**Usage:**
```bash
python scripts/deployment_manager.py deploy \
  --env=staging|production \
  --image=app:1.2.3 \
  --strategy=blue-green|rolling \
  --health-check-url=https://app.example.com/healthz

python scripts/deployment_manager.py rollback --env=production --to-version=1.2.2
python scripts/deployment_manager.py --analyze --env=production  # audit current state
```
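The health-check gate behind `--health-check-url` can be sketched as a poll-with-retries loop (standard library only; the HTTP probe is passed in as a callable, so the gate logic shown here is independent of the actual transport the real script uses):

```python
import time


def wait_until_healthy(check, retries=5, delay=2.0):
    """Poll a zero-arg health check until it returns True.

    Returns True on the first passing check, False once retries are
    exhausted — at which point the caller triggers rollback."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False


# Example: a probe that starts passing on the third poll
polls = []
def probe():
    polls.append(1)
    return len(polls) >= 3

print(wait_until_healthy(probe, retries=5, delay=0))  # → True
```

Keeping the probe injectable also makes the gate trivially unit-testable, which matters more than it looks for code that only runs during incidents.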

## Resources

- Pattern Reference: `references/cicd_pipeline_guide.md` — detailed CI/CD patterns, best practices, anti-patterns
- Workflow Guide: `references/infrastructure_as_code.md` — IaC step-by-step processes, optimization, troubleshooting
- Technical Guide: `references/deployment_strategies.md` — deployment strategy configs, security considerations, scalability
- Tool Scripts: `scripts/` directory

## Tech Stack

**Languages:** TypeScript, JavaScript, Python, Go, Swift, Kotlin
**Frontend:** React, Next.js, React Native, Flutter
**Backend:** Node.js, Express, GraphQL, REST APIs
**Database:** PostgreSQL, Prisma, NeonDB, Supabase
**DevOps:** Docker, Kubernetes, Terraform, GitHub Actions, CircleCI
**Cloud:** AWS, GCP, Azure

## Development Workflow

### 1. Infrastructure Changes (Terraform)

```bash
# Scaffold or update module
python scripts/terraform_scaffolder.py ./infra --provider=aws --module=ecs-service --verbose

# Validate and plan — review diff before applying
terraform -chdir=infra init
terraform -chdir=infra validate
terraform -chdir=infra plan -out=tfplan

# Apply only after plan review
terraform -chdir=infra apply tfplan

# Verify resources are healthy
aws ecs describe-services --cluster production --services app-service \
  --query 'services[0].{Status:status,Running:runningCount,Desired:desiredCount}'
```

### 2. Application Deployment

```bash
# Generate or update pipeline config
python scripts/pipeline_generator.py . --platform=github --stages=build,test,security,deploy

# Build and tag image
docker build -t ghcr.io/org/app:$(git rev-parse --short HEAD) .
docker push ghcr.io/org/app:$(git rev-parse --short HEAD)

# Deploy with health-check gate
python scripts/deployment_manager.py deploy \
  --env=production \
  --image=app:$(git rev-parse --short HEAD) \
  --strategy=blue-green \
  --health-check-url=https://app.example.com/healthz

# Verify pods are running
kubectl get pods -n production -l app=myapp
kubectl rollout status deployment/app-blue -n production

# Switch traffic after verification
kubectl patch service app-svc -n production \
  -p '{"spec":{"selector":{"slot":"blue"}}}'
```
## Best Practices Summary

### Code Quality
- Follow established patterns
- Write comprehensive tests
- Document decisions
- Review regularly

### Performance
- Measure before optimizing
- Use appropriate caching
- Optimize critical paths
- Monitor in production

### Security
- Validate all inputs
- Use parameterized queries
- Implement proper authentication
- Keep dependencies updated

### Maintainability
- Write clear code
- Use consistent naming
- Add helpful comments
- Keep it simple

### 3. Rollback Procedure

```bash
# Immediate rollback via deployment manager
python scripts/deployment_manager.py rollback --env=production --to-version=1.2.2

# Or via kubectl
kubectl rollout undo deployment/app -n production
kubectl rollout status deployment/app -n production

# Verify rollback succeeded
kubectl get pods -n production -l app=myapp
curl -sf https://app.example.com/healthz || echo "ROLLBACK FAILED — escalate"
```

## Troubleshooting

### Common Issues

Check the comprehensive troubleshooting section in `references/deployment_strategies.md`.

### Getting Help

- Review reference documentation
- Check script output messages
- Consult tech stack documentation
- Review error logs

## Resources

- Pattern Reference: `references/cicd_pipeline_guide.md`
- Workflow Guide: `references/infrastructure_as_code.md`
- Technical Guide: `references/deployment_strategies.md`
- Tool Scripts: `scripts/` directory

@@ -1,6 +1,6 @@
---
name: "tdd-guide"
description: "Test-driven development skill for writing unit tests, generating test fixtures and mocks, analyzing coverage gaps, and guiding red-green-refactor workflows across Jest, Pytest, JUnit, Vitest, and Mocha. Use when the user asks to write tests, improve test coverage, practice TDD, generate mocks or stubs, or mentions testing frameworks like Jest, pytest, or JUnit. Handles test generation from source code, coverage report parsing (LCOV/JSON/XML), quality scoring, and framework conversion for TypeScript, JavaScript, Python, and Java projects."
triggers:
  - generate tests
  - analyze coverage
@@ -16,27 +16,6 @@ triggers:

Test-driven development skill for generating tests, analyzing coverage, and guiding red-green-refactor workflows across Jest, Pytest, JUnit, and Vitest.

---

## Capabilities

| Capability | Description |
|------------|-------------|
| Test Generation | Convert requirements or code into test cases with proper structure |
| Coverage Analysis | Parse LCOV/JSON/XML reports, identify gaps, prioritize fixes |
| TDD Workflow | Guide red-green-refactor cycles with validation |
| Framework Adapters | Generate tests for Jest, Pytest, JUnit, Vitest, Mocha |
| Quality Scoring | Assess test isolation, assertions, naming, detect test smells |
| Fixture Generation | Create realistic test data, mocks, and factories |

---

## Workflows

@@ -68,18 +47,85 @@ Test-driven development skill for generating tests, analyzing coverage, and guid

---

## Examples

### Test Generation — Input → Output (Pytest)

**Input source function (`math_utils.py`):**
```python
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
```

**Command:**
```bash
python scripts/test_generator.py --input math_utils.py --framework pytest
```

**Generated test output (`test_math_utils.py`):**
```python
import pytest
from math_utils import divide


class TestDivide:
    def test_divide_positive_numbers(self):
        assert divide(10, 2) == 5.0

    def test_divide_negative_numerator(self):
        assert divide(-10, 2) == -5.0

    def test_divide_float_result(self):
        assert divide(1, 3) == pytest.approx(0.333, abs=1e-3)

    def test_divide_by_zero_raises_value_error(self):
        with pytest.raises(ValueError, match="Cannot divide by zero"):
            divide(10, 0)

    def test_divide_zero_numerator(self):
        assert divide(0, 5) == 0.0
```
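For the Fixture Generation capability, the shape of output `fixture_generator.py` aims for can be sketched as a plain factory function (the field names here are hypothetical; the real script's output format may differ):

```python
import itertools

_ids = itertools.count(1)


def make_user(**overrides):
    """Test-data factory: sensible defaults, a unique id per call,
    and keyword overrides for the one field under test."""
    user = {
        "id": next(_ids),
        "name": "Test User",
        "email": "user@example.com",
        "active": True,
    }
    user.update(overrides)
    return user


alice = make_user(name="Alice")
print(alice["name"], alice["active"])  # → Alice True
```

Overriding only the field a test cares about keeps each test's intent visible, which is one of the quality signals the scoring capability looks for.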

---

### Coverage Analysis — Sample P0/P1/P2 Output

**Command:**
```bash
python scripts/coverage_analyzer.py --report lcov.info --threshold 80
```

**Sample output:**
```
Coverage Report — Overall: 63% (threshold: 80%)

P0 — Critical gaps (uncovered error paths):
  auth/login.py:42-58        handle_expired_token()      0% covered
  payments/process.py:91-110 handle_payment_failure()    0% covered

P1 — High-value gaps (core logic branches):
  users/service.py:77        update_profile() — else branch     0% covered
  orders/cart.py:134         apply_discount() — zero-qty guard  0% covered

P2 — Low-risk gaps (utility / helper functions):
  utils/formatting.py:12     format_currency()           0% covered

Recommended: Generate tests for P0 items first to reach 80% threshold.
```
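The overall percentage in a report like this comes from LCOV's `LF` (lines found) and `LH` (lines hit) records; a minimal reader is a few lines (the real `coverage_analyzer.py` is assumed to do considerably more, e.g. per-file and branch data):

```python
def lcov_line_rate(report_text: str) -> float:
    """Sum LH/LF records across all SF (source file) sections of an LCOV report."""
    hit = found = 0
    for line in report_text.splitlines():
        if line.startswith("LH:"):
            hit += int(line[3:])
        elif line.startswith("LF:"):
            found += int(line[3:])
    return hit / found if found else 0.0


sample = (
    "SF:auth/login.py\nLF:20\nLH:8\nend_of_record\n"
    "SF:utils/fmt.py\nLF:10\nLH:7\nend_of_record"
)
print(f"{lcov_line_rate(sample):.0%}")  # → 50%
```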

---

## Key Tools

| Tool | Purpose | Usage |
|------|---------|-------|
| `test_generator.py` | Generate test cases from code/requirements | `python scripts/test_generator.py --input source.py --framework pytest` |
| `coverage_analyzer.py` | Parse and analyze coverage reports | `python scripts/coverage_analyzer.py --report lcov.info --threshold 80` |
| `tdd_workflow.py` | Guide red-green-refactor cycles | `python scripts/tdd_workflow.py --phase red --test test_auth.py` |
| `framework_adapter.py` | Convert tests between frameworks | `python scripts/framework_adapter.py --from jest --to pytest` |
| `fixture_generator.py` | Generate test data and mocks | `python scripts/fixture_generator.py --entity User --count 5` |
| `metrics_calculator.py` | Calculate test quality metrics | `python scripts/metrics_calculator.py --tests tests/` |
| `format_detector.py` | Detect language and framework | `python scripts/format_detector.py --file source.ts` |
| `output_formatter.py` | Format output for CLI/desktop/CI | `python scripts/output_formatter.py --format markdown` |

---

@@ -1,6 +1,6 @@
---
name: "app-store-optimization"
description: App Store Optimization (ASO) toolkit for researching keywords, analyzing competitor rankings, generating metadata suggestions, and improving app visibility on Apple App Store and Google Play Store. Use when the user asks about ASO, app store rankings, app metadata, app titles and descriptions, app store listings, app visibility, or mobile app marketing on iOS or Android. Supports keyword research and scoring, competitor keyword analysis, metadata optimization, A/B test planning, launch checklists, and tracking ranking changes.
triggers:
  - ASO
  - app store optimization
@@ -18,20 +18,6 @@ triggers:

# App Store Optimization (ASO)

ASO tools for researching keywords, optimizing metadata, analyzing competitors, and improving app store visibility on Apple App Store and Google Play Store.

---

## Keyword Research Workflow
@@ -75,13 +61,13 @@ Discover and evaluate keywords that drive app store visibility.

### Keyword Placement Priority

| Location | Search Weight |
|----------|---------------|
| App Title | Highest |
| Subtitle (iOS) | High |
| Keyword Field (iOS) | High |
| Short Description (Android) | High |
| Full Description | Medium |

See: [references/keyword-research-guide.md](references/keyword-research-guide.md)
@@ -454,35 +440,18 @@ Trusted by 500,000+ professionals.

---

## Platform Notes

| Platform / Constraint | Behavior / Impact |
|-----------------------|-------------------|
| iOS keyword changes | Require app submission |
| iOS promotional text | Editable without an app update |
| Android metadata changes | Index in 1-2 hours |
| Android keyword field | None — use description instead |
| Keyword volume data | Estimates only; no official source |
| Competitor data | Public listings only |

**When not to use this skill:** web apps (use web SEO), enterprise/internal apps, TestFlight-only betas, or paid advertising strategy.

---

@@ -1,6 +1,6 @@
---
name: "content-creator"
description: "Deprecated redirect skill that routes legacy 'content creator' requests to the correct specialist. Use when a user invokes 'content creator', asks to write a blog post, article, guide, or brand voice analysis (routes to content-production), or asks to plan content, build a topic cluster, or create a content calendar (routes to content-strategy). Does not handle requests directly — identifies user intent and redirects to content-production for writing/SEO/brand-voice tasks or content-strategy for planning tasks."
license: MIT
metadata:
  version: 2.0.0

@@ -1,6 +1,6 @@
---
name: "marketing-strategy-pmm"
description: Product marketing skill for positioning, GTM strategy, competitive intelligence, and product launches. Use when the user asks about product positioning, go-to-market planning, competitive analysis, target audience definition, ICP definition, market research, launch plans, or sales enablement. Covers April Dunford positioning, ICP definition, competitive battlecards, launch playbooks, and international market entry. Produces deliverables including positioning statements, battlecard documents, launch plans, and go-to-market strategies.
triggers:
  - product marketing
  - PMM
@@ -58,20 +58,11 @@ Define ideal customer profile for targeting:

### Buyer Personas

| Persona | Title | Goals | Messaging |
|---------|-------|-------|-----------|
| Economic Buyer | VP, Director, Head of [Department] | ROI, team productivity, cost reduction | Business outcomes, ROI, case studies |
| Technical Buyer | Engineer, Architect, Tech Lead | Technical fit, easy integration | Architecture, security, documentation |
| User/Champion | Manager, Team Lead, Power User | Makes job easier, quick wins | UX, ease of use, time savings |

### ICP Validation Checklist
|
||||
|
||||
|
||||
@@ -1,87 +1,58 @@
---
name: "atlassian-admin"
description: Atlassian Administrator for managing and organizing Atlassian products, users, customization of the Atlassian suite, permissions, security, integrations, system configuration, and all administrative features. Use for user provisioning, global settings, security policies, system optimization, and org-wide Atlassian governance.
description: Atlassian Administrator for managing and organizing Atlassian products (Jira, Confluence, Bitbucket, Trello), users, permissions, security, integrations, system configuration, and org-wide governance. Use when asked to add users to Jira, change Confluence permissions, configure access control, update admin settings, manage Atlassian groups, set up SSO, install marketplace apps, review security policies, or handle any org-wide Atlassian administration task.
---
# Atlassian Administrator Expert

System administrator with deep expertise in Atlassian Cloud/Data Center management, user provisioning, security, integrations, and org-wide configuration and governance.

## Core Competencies

**User & Access Management**
- Provision and deprovision users across Atlassian products
- Manage groups and group memberships
- Configure SSO/SAML authentication
- Implement role-based access control (RBAC)
- Audit user access and permissions

**Product Administration**
- Configure Jira global settings and schemes
- Manage Confluence global templates and blueprints
- Optimize system performance and indexing
- Monitor system health and usage
- Plan and execute upgrades

**Security & Compliance**
- Implement security policies and standards
- Configure IP allowlisting and 2FA
- Manage API tokens and webhooks
- Conduct security audits
- Ensure compliance with data regulations (GDPR, SOC 2)

**Integration & Automation**
- Configure org-wide integrations (Slack, GitHub, etc.)
- Manage marketplace apps and licenses
- Set up enterprise automation
- Configure webhooks and API access
- Implement SSO with identity providers

## Workflows

### User Provisioning
1. Receive request for new user access
2. Verify user identity and role
3. Create user account in organization
4. Add to appropriate groups (Jira users, Confluence users, etc.)
5. Assign product access (Jira, Confluence)
6. Configure default permissions
7. Send welcome email with onboarding info
8. **NOTIFY**: Relevant team leads of new member
1. Create user account: `admin.atlassian.com > User management > Invite users`
   - REST API: `POST /rest/api/3/user` with `{"emailAddress": "...", "displayName": "...", "products": [...]}`
2. Add to appropriate groups: `admin.atlassian.com > User management > Groups > [group] > Add members`
3. Assign product access (Jira, Confluence) via `admin.atlassian.com > Products > [product] > Access`
4. Configure default permissions per group scheme
5. Send welcome email with onboarding info
6. **NOTIFY**: Relevant team leads of new member
7. **VERIFY**: Confirm user appears active at `admin.atlassian.com/o/{orgId}/users` and can log in
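As a hedged sketch of the REST call in step 1 (basic auth with an email/API-token pair is assumed; the site URL, credentials, and product key are placeholders), the request can be composed without sending it:

```python
import base64
import json
import urllib.request

def build_invite_request(site: str, email: str, display_name: str,
                         admin_email: str, api_token: str) -> urllib.request.Request:
    """Compose (but do not send) the POST /rest/api/3/user call from step 1."""
    payload = {
        "emailAddress": email,
        "displayName": display_name,
        "products": ["jira-software"],  # assumed product key for illustration
    }
    token = base64.b64encode(f"{admin_email}:{api_token}".encode()).decode()
    return urllib.request.Request(
        f"{site}/rest/api/3/user",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_invite_request("https://example.atlassian.net",
                           "new.hire@example.com", "New Hire",
                           "admin@example.com", "api-token")
# urllib.request.urlopen(req)  # uncomment to actually send the invite
```

Building the request separately from sending it keeps the call inspectable, which is useful when scripting bulk invites.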

### User Deprovisioning
1. Receive offboarding request
2. **CRITICAL**: Audit user's owned content and tickets
3. Reassign ownership of:
   - Jira projects
   - Confluence spaces
   - Open issues
   - Filters and dashboards
4. Remove from all groups
5. Revoke product access
6. Deactivate or delete account (per policy)
1. **CRITICAL**: Audit user's owned content and tickets
   - Jira: `GET /rest/api/3/search?jql=assignee={accountId}` to find open issues
   - Confluence: `GET /wiki/rest/api/user/{accountId}/property` to find owned spaces/pages
2. Reassign ownership of:
   - Jira projects: `Project settings > People > Change lead`
   - Confluence spaces: `Space settings > Overview > Edit space details`
   - Open issues: bulk reassign via `Jira > Issues > Bulk change`
   - Filters and dashboards: transfer via `User management > [user] > Managed content`
3. Remove from all groups: `admin.atlassian.com > User management > [user] > Groups`
4. Revoke product access
5. Deactivate account: `admin.atlassian.com > User management > [user] > Deactivate`
   - REST API (note: this permanently deletes the account rather than deactivating it): `DELETE /rest/api/3/user?accountId={accountId}`
6. **VERIFY**: Confirm `GET /rest/api/3/user?accountId={accountId}` returns `"active": false`
7. Document deprovisioning in audit log
8. **USE**: Jira Expert to reassign issues
8. **USE**: Jira Expert to reassign any remaining issues
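The audit query from step 1 and the verification from step 6 can be sketched as small helpers (the unresolved-issues filter in the JQL is an illustrative assumption beyond the bare `assignee={accountId}` above):

```python
from urllib.parse import urlencode

def open_issues_jql_url(site: str, account_id: str) -> str:
    """Step 1 audit: search URL for issues still assigned to the departing user."""
    query = urlencode({"jql": f"assignee = {account_id} AND resolution = Unresolved"})
    return f"{site}/rest/api/3/search?{query}"

def is_deactivated(user_json: dict) -> bool:
    """Step 6 verification: the GET /rest/api/3/user response should report
    the account as inactive."""
    return user_json.get("active") is False

url = open_issues_jql_url("https://example.atlassian.net",
                          "5b10ac8d82e05b22cc7d4ef5")
```

Running the audit before deactivation matters because issues assigned to a deleted account can no longer be bulk-reassigned by assignee.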

### Group Management
1. Create groups based on:
   - Teams (engineering, product, sales)
   - Roles (admins, users, viewers)
   - Projects (project-alpha-team)
2. Define group purpose and membership criteria
1. Create groups: `admin.atlassian.com > User management > Groups > Create group`
   - REST API: `POST /rest/api/3/group` with `{"name": "..."}`
   - Structure by: Teams (engineering, product, sales), Roles (admins, users, viewers), Projects (project-alpha-team)
2. Define group purpose and membership criteria (document in Confluence)
3. Assign default permissions per group
4. Add users to appropriate groups
5. Regular review and cleanup (quarterly)
6. **USE**: Confluence Expert to document group structure
5. **VERIFY**: Confirm group members via `GET /rest/api/3/group/member?groupName={name}`
6. Regular review and cleanup (quarterly)
7. **USE**: Confluence Expert to document group structure
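A minimal sketch of the group-creation payload (step 1) and the member-list verification URL (step 5), using only the endpoints named above:

```python
from urllib.parse import urlencode

def group_create_payload(name: str) -> dict:
    """Request body for POST /rest/api/3/group (step 1)."""
    return {"name": name}

def group_members_url(site: str, group_name: str) -> str:
    """Step 5 verification: URL listing members of the new group."""
    return f"{site}/rest/api/3/group/member?{urlencode({'groupName': group_name})}"

url = group_members_url("https://example.atlassian.net", "project-alpha-team")
```

Using `urlencode` keeps group names with spaces or special characters valid in the query string.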

### Permission Scheme Design
**Jira Permission Schemes**:
**Jira Permission Schemes** (`Jira Settings > Issues > Permission Schemes`):
- **Public Project**: All users can view, members can edit
- **Team Project**: Team members full access, stakeholders view
- **Restricted Project**: Named individuals only
- **Admin Project**: Admins only

**Confluence Permission Schemes**:
**Confluence Permission Schemes** (`Confluence Admin > Space permissions`):
- **Public Space**: All users view, space members edit
- **Team Space**: Team-specific access
- **Personal Space**: Individual user only
@@ -95,304 +66,153 @@ System administrator with deep expertise in Atlassian Cloud/Data Center manageme

### SSO Configuration
1. Choose identity provider (Okta, Azure AD, Google)
2. Configure SAML settings in Atlassian
3. Test SSO with admin account
2. Configure SAML settings: `admin.atlassian.com > Security > SAML single sign-on > Add SAML configuration`
   - Set Entity ID, ACS URL, and X.509 certificate from IdP
3. Test SSO with admin account (keep password login active during test)
4. Test with regular user account
5. Enable SSO for organization
6. Enforce SSO (disable password login)
7. Configure SCIM for auto-provisioning (optional)
8. Monitor SSO logs for failures
6. Enforce SSO: `admin.atlassian.com > Security > Authentication policies > Enforce SSO`
7. Configure SCIM for auto-provisioning: `admin.atlassian.com > User provisioning > [IdP] > Enable SCIM`
8. **VERIFY**: Confirm SSO flow succeeds and audit logs show `saml.login.success` events
9. Monitor SSO logs: `admin.atlassian.com > Security > Audit log > filter: SSO`

### Marketplace App Management
1. Evaluate app need and security
2. Review vendor security documentation
1. Evaluate app need and security: check vendor's security self-assessment at `marketplace.atlassian.com`
2. Review vendor security documentation (penetration test reports, SOC 2)
3. Test app in sandbox environment
4. Purchase or request trial
5. Install app on production
6. Configure app settings
4. Purchase or request trial: `admin.atlassian.com > Billing > Manage subscriptions`
5. Install app: `admin.atlassian.com > Products > [product] > Apps > Find new apps`
6. Configure app settings per vendor documentation
7. Train users on app usage
8. Monitor app performance and usage
9. Review app annually for continued need
8. **VERIFY**: Confirm app appears in `GET /rest/plugins/1.0/` and health check passes
9. Monitor app performance and usage; review annually for continued need

### System Performance Optimization
**Jira Optimization**:
- Archive old projects and issues
- Reindex when performance degrades
- Optimize JQL queries
- Clean up unused workflows and schemes
- Monitor queue and thread counts
**Jira** (`Jira Settings > System`):
- Archive old projects: `Project settings > Archive project`
- Reindex: `Jira Settings > System > Indexing > Full re-index`
- Clean up unused workflows and schemes: `Jira Settings > Issues > Workflows`
- Monitor queue/thread counts: `Jira Settings > System > System info`

**Confluence Optimization**:
- Archive inactive spaces
- Remove orphaned pages
- Compress attachments
- Monitor index and cache
- Clean up unused macros and apps
**Confluence** (`Confluence Admin > Configuration`):
- Archive inactive spaces: `Space tools > Overview > Archive space`
- Remove orphaned pages: `Confluence Admin > Orphaned pages`
- Monitor index and cache: `Confluence Admin > Cache management`

**Monitoring**:
- Daily health checks
**Monitoring Cadence**:
- Daily health checks: `admin.atlassian.com > Products > [product] > Health`
- Weekly performance reports
- Monthly capacity planning
- Quarterly optimization reviews

### Integration Setup
**Common Integrations**:
- **Slack**: Notifications for Jira and Confluence
- **GitHub/Bitbucket**: Link commits to issues
- **Microsoft Teams**: Collaboration and notifications
- **Zoom**: Meeting links in issues and pages
- **Salesforce**: Customer issue tracking
- **Slack**: `Jira Settings > Apps > Slack integration` — notifications for Jira and Confluence
- **GitHub/Bitbucket**: `Jira Settings > Apps > DVCS accounts` — link commits to issues
- **Microsoft Teams**: `admin.atlassian.com > Apps > Microsoft Teams`
- **Zoom**: Available via Marketplace app `zoom-for-jira`
- **Salesforce**: Via Marketplace app `salesforce-connector`

**Configuration Steps**:
1. Review integration requirements
2. Configure OAuth or API authentication
1. Review integration requirements and OAuth scopes needed
2. Configure OAuth or API authentication (store tokens in a secure vault, not plain text)
3. Map fields and data flows
4. Test integration thoroughly
5. Document configuration
4. Test integration thoroughly with sample data
5. Document configuration in Confluence runbook
6. Train users on integration features
7. Monitor integration health
7. **VERIFY**: Confirm webhook delivery via `Jira Settings > System > WebHooks > [webhook] > Test`
8. Monitor integration health via app-specific dashboards

## Global Configuration

### Jira Global Settings
**Issue Types**:
- Create and manage org-wide issue types
- Define issue type schemes
- Standardize across projects
### Jira Global Settings (`Jira Settings > Issues`)
**Issue Types**: Create and manage org-wide issue types; define issue type schemes; standardize across projects
**Workflows**: Create global workflow templates via `Workflows > Add workflow`; manage workflow schemes
**Custom Fields**: Create org-wide custom fields at `Custom fields > Add custom field`; manage field configurations and context
**Notification Schemes**: Configure default notification rules; create custom notification schemes; manage email templates

**Workflows**:
- Create global workflow templates
- Define standard workflows (simple, complex)
- Manage workflow schemes
### Confluence Global Settings (`Confluence Admin`)
**Blueprints & Templates**: Create org-wide templates at `Configuration > Global Templates and Blueprints`; manage blueprint availability
**Themes & Appearance**: Configure org branding at `Configuration > Themes`; customize logos and colors
**Macros**: Enable/disable macros at `Configuration > Macro usage`; configure macro permissions

**Custom Fields**:
- Create org-wide custom fields
- Manage field configurations
- Control field context

**Notification Schemes**:
- Configure default notification rules
- Create custom notification schemes
- Manage email templates

### Confluence Global Settings
**Blueprints & Templates**:
- Create org-wide templates
- Manage blueprint availability
- Standardize content structure

**Themes & Appearance**:
- Configure org branding
- Manage global themes
- Customize logos and colors

**Macros**:
- Enable/disable macros
- Configure macro defaults
- Manage macro permissions

### Security Settings
### Security Settings (`admin.atlassian.com > Security`)
**Authentication**:
- Password policies (length, complexity, expiry)
- Session timeout settings
- Failed login lockout
- API token management
- Password policies: `Security > Authentication policies > Edit`
- Session timeout: `Security > Session duration`
- API token management: `Security > API token controls`

**Data Residency**:
- Configure data location (US, EU, APAC)
- Ensure compliance with regulations
- Document data residency for audits
**Data Residency**: Configure data location at `admin.atlassian.com > Data residency > Pin products`

**Encryption**:
- Enable encryption at rest
- Configure encryption in transit
- Manage encryption keys

**Audit Logs**:
- Enable comprehensive audit logging
- Review logs regularly for anomalies
- Export logs for compliance
- Retain logs per policy (7 years for compliance)
**Audit Logs**: `admin.atlassian.com > Security > Audit log`
- Enable comprehensive logging; export via `GET /admin/v1/orgs/{orgId}/audit-log`
- Retain per retention policy (e.g., 7 years where an audit framework such as SOC 2 requires it)
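The export endpoint above can be wrapped in a small URL builder for scripted, paginated pulls. This is a sketch under stated assumptions: the `api.atlassian.com` base URL and the `from`/`to`/`cursor` query parameters are illustrative guesses, not confirmed by this doc.

```python
from urllib.parse import urlencode

# Assumed base URL for the org-level admin API; the from/to/cursor query
# parameters are illustrative assumptions, not confirmed parameter names.
ADMIN_API = "https://api.atlassian.com"

def audit_log_export_url(org_id: str, from_ms: int, to_ms: int,
                         cursor: str = "") -> str:
    """Build the export URL for the audit-log endpoint referenced above."""
    params = {"from": from_ms, "to": to_ms}
    if cursor:
        params["cursor"] = cursor
    return f"{ADMIN_API}/admin/v1/orgs/{org_id}/audit-log?{urlencode(params)}"

url = audit_log_export_url("my-org-id", 1700000000000, 1702600000000)
```

Passing the cursor from each response back into the next request is the usual pattern for walking a long retention window page by page.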

## Governance & Policies

### Access Governance
**User Access Review**:
- Quarterly review of all user access
- Verify user roles and permissions
- Remove inactive users
- Update group memberships

**Admin Access Control**:
- Limit org admins to 2-3 individuals
- Use project/space admins for delegation
- Audit admin actions monthly
- Require MFA for all admins
- Quarterly review of all user access: `admin.atlassian.com > User management > Export users`
- Verify user roles and permissions; remove inactive users
- Limit org admins to 2–3 individuals; audit admin actions monthly
- Require MFA for all admins: `Security > Authentication policies > Require 2FA`
### Naming Conventions
**Jira**:
- Project keys: 3-4 letters, uppercase (PROJ, WEB)
- Issue types: Title case, descriptive
- Custom fields: Prefix with type (CF: Story Points)

**Confluence**:
- Spaces: Team/Project prefix (TEAM: Engineering)
- Pages: Descriptive, consistent format
- Labels: Lowercase, hyphen-separated
**Jira**: Project keys 3–4 uppercase letters (PROJ, WEB); issue types Title Case; custom fields prefixed (CF: Story Points)
**Confluence**: Spaces use Team/Project prefix (TEAM: Engineering); pages descriptive and consistent; labels lowercase, hyphen-separated

### Change Management
**Major Changes**:
- Announce 2 weeks in advance
- Test in sandbox
- Create rollback plan
- Execute during off-peak
- Post-implementation review

**Minor Changes**:
- Announce 48 hours in advance
- Document in change log
- Monitor for issues
**Major Changes**: Announce 2 weeks in advance; test in sandbox; create rollback plan; execute during off-peak; post-implementation review
**Minor Changes**: Announce 48 hours in advance; document in change log; monitor for issues
## Disaster Recovery

### Backup Strategy
**Jira**:
- Daily automated backups
- Weekly manual verification
- 30-day retention
- Offsite storage
**Jira & Confluence**: Daily automated backups; weekly manual verification; 30-day retention; offsite storage
- Trigger manual backup: `Jira Settings > System > Backup system` / `Confluence Admin > Backup and Restore`

**Confluence**:
- Daily automated backups
- Weekly export validation
- 30-day retention
- Offsite storage

**Recovery Testing**:
- Quarterly recovery drills
- Document recovery procedures
- Measure recovery time objectives (RTO)
- Measure recovery point objectives (RPO)
**Recovery Testing**: Quarterly recovery drills; document procedures; measure RTO and RPO
### Incident Response
**Severity Levels**:
- **P1 (Critical)**: System down, respond in 15 min
- **P2 (High)**: Major feature broken, respond in 1 hour
- **P3 (Medium)**: Minor issue, respond in 4 hours
- **P4 (Low)**: Enhancement, respond in 24 hours
- **P1 (Critical)**: System down — respond in 15 min
- **P2 (High)**: Major feature broken — respond in 1 hour
- **P3 (Medium)**: Minor issue — respond in 4 hours
- **P4 (Low)**: Enhancement — respond in 24 hours

**Response Steps**:
1. Acknowledge incident
1. Acknowledge and log incident
2. Assess impact and severity
3. Communicate status to stakeholders
4. Investigate root cause
4. Investigate root cause (check `admin.atlassian.com > Products > [product] > Health` and the Atlassian Status Page)
5. Implement fix
6. Verify resolution
6. **VERIFY**: Confirm resolution via affected user test and health check
7. Post-mortem and lessons learned
## Metrics & Reporting

### System Health Metrics
- Active users (daily, weekly, monthly)
- Storage utilization
- API rate limits
- Integration health
- App performance
- Response times
**System Health**: Active users (daily/weekly/monthly), storage utilization, API rate limits, integration health, response times
- Export via: `GET /admin/v1/orgs/{orgId}/users` for user counts; product-specific analytics dashboards

### Usage Analytics
- Most active projects/spaces
- Content creation trends
- User engagement
- Search patterns
- Popular pages/issues
**Usage Analytics**: Most active projects/spaces, content creation trends, user engagement, search patterns
**Compliance Metrics**: User access review completion, security audit findings, failed login attempts, API token usage

### Compliance Metrics
- User access review completion
- Security audit findings
- Failed login attempts
- API token usage
- Data residency compliance
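Turning the user export mentioned above into an active-user count is a one-liner worth pinning down; this sketch assumes an `account_status` field in each exported record, which is a placeholder name for illustration:

```python
def count_active_users(users: list) -> int:
    """Tally active accounts from a paged GET /admin/v1/orgs/{orgId}/users
    response; the `account_status` field name is an assumption for illustration."""
    return sum(1 for u in users if u.get("account_status") == "active")

sample = [
    {"email": "a@example.com", "account_status": "active"},
    {"email": "b@example.com", "account_status": "inactive"},
    {"email": "c@example.com", "account_status": "active"},
]
```

Counting from the raw export rather than the dashboard makes the figure reproducible for quarterly access reviews.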
## Decision Framework & Handoff Protocols

## Decision Framework
**Escalate to Atlassian Support**: System outage, performance degradation org-wide, data loss/corruption, license/billing issues, complex migrations

**When to Escalate to Atlassian Support**:
- System outage or critical bug
- Performance degradation across org
- Data loss or corruption
- License or billing issues
- Complex migration needs

**When to Delegate to Product Experts**:
**Delegate to Product Experts**:
- Jira Expert: Project-specific configuration
- Confluence Expert: Space-specific settings
- Scrum Master: Team workflow needs
- Senior PM: Strategic planning input

**When to Involve Security Team**:
- Security incidents or breaches
- Unusual access patterns
- Compliance audit preparation
- New integration security review
**Involve Security Team**: Security incidents, unusual access patterns, compliance audit preparation, new integration security review
## Handoff Protocols

**TO Jira Expert**:
- New global workflows available
- Custom field created
- Permission scheme deployed
- Automation capabilities enabled

**TO Confluence Expert**:
- New global template available
- Space permission scheme updated
- Blueprint configured
- Macro enabled/disabled

**TO Senior PM**:
- Usage analytics for portfolio
- Capacity planning insights
- Cost optimization opportunities
- Security compliance status

**TO Scrum Master**:
- Team access provisioned
- Board configuration options
- Automation rules available
- Integration enabled

**FROM All Roles**:
- User access requests
- Permission change requests
- App installation requests
- Configuration support needs
- Incident reports

## Best Practices

**User Management**:
- Automate provisioning with SCIM
- Use groups for scalability
- Regular access reviews
- Document user lifecycle

**Security**:
- Enforce MFA for all users
- Regular security audits
- Least privilege principle
- Monitor anomalous behavior

**Performance**:
- Proactive monitoring
- Regular cleanup
- Optimize before issues occur
- Capacity planning

**Documentation**:
- Document all configurations
- Maintain runbooks
- Update after changes
- Make searchable in Confluence
**TO Jira Expert**: New global workflows, custom fields, permission schemes, or automation capabilities available
**TO Confluence Expert**: New global templates, space permission schemes, blueprints, or macros configured
**TO Senior PM**: Usage analytics, capacity planning insights, cost optimization, security compliance status
**TO Scrum Master**: Team access provisioned, board configuration options, automation rules, integrations enabled
**FROM All Roles**: User access requests, permission changes, app installation requests, configuration support, incident reports

## Atlassian MCP Integration
@@ -1,37 +1,13 @@
---
name: "atlassian-templates"
description: Atlassian Template and Files Creator/Modifier expert for creating, modifying, and managing Jira and Confluence templates, blueprints, custom layouts, reusable components, and standardized content structures. Use for building org-wide templates, custom blueprints, page layouts, and automated content generation.
description: Atlassian Template and Files Creator/Modifier expert for creating, modifying, and managing Jira and Confluence templates, blueprints, custom layouts, reusable components, and standardized content structures. Use when building org-wide templates, custom blueprints, page layouts, and automated content generation.
---
# Atlassian Template & Files Creator Expert

Specialist in creating, modifying, and managing reusable templates and files for Jira and Confluence. Ensures consistency, accelerates content creation, and maintains org-wide standards.

## Core Competencies

**Template Design**
- Create Confluence page templates with dynamic content
- Design Jira issue templates and descriptions
- Build blueprints for complex content structures
- Implement template versioning and updates

**Content Standardization**
- Establish org-wide content standards
- Create reusable components and macros
- Design template libraries
- Maintain template documentation

**Automation**
- Build templates with dynamic fields and automation
- Create templates that integrate with Jira
- Design self-updating content structures
- Implement template-based workflows

**Template Governance**
- Manage template lifecycle
- Version control for templates
- Deprecate outdated templates
- Track template usage and adoption
---

## Workflows
@@ -40,22 +16,23 @@ Specialist in creating, modifying, and managing reusable templates and files for
2. **Analyze**: Review existing content patterns
3. **Design**: Create template structure and placeholders
4. **Implement**: Build template with macros and formatting
5. **Test**: Validate with sample data
5. **Test**: Validate with sample data — confirm template renders correctly in preview before publishing
6. **Document**: Create usage instructions
7. **Publish**: Deploy to appropriate space/project
8. **Train**: Educate users on template usage
9. **Monitor**: Track adoption and gather feedback
10. **Iterate**: Refine based on usage
7. **Publish**: Deploy to appropriate space/project via MCP (see MCP Operations below)
8. **Verify**: Confirm deployment success; roll back to previous version if errors occur
9. **Train**: Educate users on template usage
10. **Monitor**: Track adoption and gather feedback
11. **Iterate**: Refine based on usage
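If the publish step is scripted rather than done through the UI, the request body can be sketched like this. The endpoint (`POST /wiki/rest/api/template`) and field names are assumptions based on the Confluence Cloud REST API, not something this skill doc specifies:

```python
import json

def template_payload(name: str, space_key: str, storage_body: str) -> str:
    """JSON body for publishing a Confluence content template.
    Endpoint and field names are assumed, not confirmed by this doc."""
    return json.dumps({
        "name": name,
        "templateType": "page",
        "space": {"key": space_key},
        "body": {"storage": {"value": storage_body,
                             "representation": "storage"}},
    })

payload = template_payload("Meeting Notes", "TEAM", "<p>Agenda</p>")
```

Note that the API expects the page body in storage format (XHTML), so wiki-markup drafts need converting before publishing this way.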
### Template Modification Process
1. **Assess**: Review change request and impact
2. **Version**: Create new version, keep old available
3. **Modify**: Update template structure/content
4. **Test**: Validate changes don't break existing usage
4. **Test**: Validate changes don't break existing usage; preview updated template before publishing
5. **Migrate**: Provide migration path for existing content
6. **Communicate**: Announce changes to users
7. **Support**: Assist users with migration
8. **Archive**: Deprecate old version after transition
8. **Archive**: Deprecate old version after transition; confirm deprecated template is unlisted, not deleted
### Blueprint Development
1. Define blueprint scope and purpose
@@ -63,689 +40,213 @@ Specialist in creating, modifying, and managing reusable templates and files for
3. Create page templates for each section
4. Configure page creation rules
5. Add dynamic content (Jira queries, user data)
6. Test blueprint creation flow
7. **HANDOFF TO**: Atlassian Admin for global deployment
6. Test blueprint creation flow end-to-end with a sample space
7. Verify all macro references resolve correctly before deployment
8. **HANDOFF TO**: Atlassian Admin for global deployment
---

## Confluence Templates Library

### 1. Meeting Notes Template
```markdown
See **TEMPLATES.md** for full reference tables and copy-paste-ready template structures. The following summarises the standard types this skill creates and maintains.

### Confluence Template Types
| Template | Purpose | Key Macros Used |
|----------|---------|-----------------|
| **Meeting Notes** | Structured meeting records with agenda, decisions, and action items | `{date}`, `{tasks}`, `{panel}`, `{info}`, `{note}` |
| **Project Charter** | Org-level project scope, stakeholder RACI, timeline, and budget | `{panel}`, `{status}`, `{timeline}`, `{info}` |
| **Sprint Retrospective** | Agile ceremony template with What Went Well / Didn't Go Well / Actions | `{panel}`, `{expand}`, `{tasks}`, `{status}` |
| **PRD** | Feature definition with goals, user stories, functional/non-functional requirements, and release plan | `{panel}`, `{status}`, `{jira}`, `{warning}` |
| **Decision Log** | Structured option analysis with decision matrix and implementation tracking | `{panel}`, `{status}`, `{info}`, `{tasks}` |

**Standard Sections** included across all Confluence templates:
- Header panel with metadata (owner, date, status)
- Clearly labelled content sections with inline placeholder instructions
- Action items block using `{tasks}` macro
- Related links and references
### Complete Example: Meeting Notes Template

The following is a copy-paste-ready Meeting Notes template in Confluence wiki markup:

```
{panel:title=Meeting Metadata|borderColor=#0052CC|titleBGColor=#0052CC|titleColor=#FFFFFF}
*Date:* {date}
*Owner / Facilitator:* @[facilitator name]
*Attendees:* @[name], @[name]
*Status:* {status:colour=Yellow|title=In Progress}
{panel}

h2. Agenda
# [Agenda item 1]
# [Agenda item 2]
# [Agenda item 3]

h2. Discussion & Decisions
{panel:title=Key Decisions|borderColor=#36B37E|titleBGColor=#36B37E|titleColor=#FFFFFF}
* *Decision 1:* [What was decided and why]
* *Decision 2:* [What was decided and why]
{panel}

{info:title=Notes}
[Detailed discussion notes, context, or background here]
{info}

h2. Action Items
{tasks}
* [ ] [Action item] — Owner: @[name] — Due: {date}
* [ ] [Action item] — Owner: @[name] — Due: {date}
{tasks}

h2. Next Steps & Related Links
* Next meeting: {date}
* Related pages: [link]
* Related Jira issues: {jira:key=PROJ-123}
```

> Full examples for all other template types (Project Charter, Sprint Retrospective, PRD, Decision Log) and all Jira templates can be generated on request or found in **TEMPLATES.md**.

### 1. Meeting Notes Template

```markdown
---
**Meeting Title**: [Meeting Name]
**Date**: {date:format=dd MMM yyyy}
**Time**: [Time]
**Attendees**: @user1, @user2, @user3
**Facilitator**: @facilitator
**Note Taker**: @notetaker
---

{info}
**Quick Links**:
- [Previous Meeting](link)
- [Project Page](link)
- [Jira Board](link)
{info}

## Agenda
1. [Topic 1] - [Duration] - [Owner]
2. [Topic 2] - [Duration] - [Owner]
3. [Topic 3] - [Duration] - [Owner]

## Discussion & Notes

### [Topic 1]
**Presenter**: @owner
**Discussion**:
- Key point 1
- Key point 2

**Decisions Made**:
{panel:title=Decision|borderColor=#00875a}
[Decision description]
**Decided by**: @decisionmaker
{panel}

### [Topic 2]
[Continue pattern]

## Action Items
{tasks}
- [ ] [Action item 1] - @owner - Due: [Date]
- [ ] [Action item 2] - @owner - Due: [Date]
- [ ] [Action item 3] - @owner - Due: [Date]
{tasks}

## Parking Lot
{note}
Topics to discuss in future meetings:
- [Deferred topic 1]
- [Deferred topic 2]
{note}

## Next Meeting
**Date**: [Next meeting date]
**Focus**: [Next meeting focus areas]
```

### 2. Project Charter Template

```markdown
{panel:title=Project Overview|borderColor=#0052cc}
**Project Name**: [Project Name]
**Project Code**: [PROJ]
**Status**: {status:colour=Blue|title=Planning}
**Owner**: @projectowner
**Sponsor**: @sponsor
**Start Date**: [DD/MM/YYYY]
**Target End Date**: [DD/MM/YYYY]
{panel}

## Executive Summary
[2-3 paragraphs summarizing the project purpose, scope, and expected outcomes]

## Business Case

### Problem Statement
[Describe the problem or opportunity]

### Objectives
1. [SMART Objective 1]
2. [SMART Objective 2]
3. [SMART Objective 3]

### Success Criteria
{info}
**Definition of Success**:
- [Measurable outcome 1]
- [Measurable outcome 2]
- [Measurable outcome 3]
{info}

## Scope

### In Scope
- [Deliverable 1]
- [Deliverable 2]
- [Deliverable 3]

### Out of Scope
- [Explicitly excluded item 1]
- [Explicitly excluded item 2]

## Stakeholders

| Name | Role | Responsibility | Influence |
|------|------|----------------|-----------|
| @user1 | Sponsor | Funding & approval | High |
| @user2 | PM | Day-to-day management | High |
| @user3 | Tech Lead | Technical direction | Medium |

**RACI Matrix**: [Link to detailed RACI]

## Timeline & Milestones

{timeline}
| Phase | Start | End | Deliverables |
|-------|-------|-----|--------------|
| Discovery | DD/MM | DD/MM | Requirements doc |
| Design | DD/MM | DD/MM | Design specs |
| Development | DD/MM | DD/MM | MVP |
| Testing | DD/MM | DD/MM | Test report |
| Launch | DD/MM | DD/MM | Production release |
{timeline}

## Budget
**Total Budget**: $XXX,XXX

| Category | Estimated Cost | Notes |
|----------|----------------|-------|
| Personnel | $XX,XXX | FTE allocation |
| Software/Tools | $XX,XXX | Licenses & subscriptions |
| External Services | $XX,XXX | Contractors, vendors |
| Contingency (10%) | $X,XXX | Risk buffer |

## Risks & Assumptions

### Top Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|-----------|
| [Risk 1] | Medium | High | [Mitigation strategy] |
| [Risk 2] | Low | High | [Mitigation strategy] |

### Assumptions
- [Critical assumption 1]
- [Critical assumption 2]

## Resources & Links
- [Jira Project](#)
- [Confluence Space](#)
- [Design Files](#)
- [Technical Docs](#)
```

### 3. Sprint Retrospective Template

```markdown
{panel:title=Sprint Retrospective|borderColor=#00875a}
**Sprint**: Sprint [Number]
**Sprint Dates**: [Start Date] - [End Date]
**Team**: [Team Name]
**Facilitator**: @facilitator
**Date**: {date}
{panel}

## Sprint Overview
**Sprint Goal**: [Sprint goal]
**Completed Stories**: [X/Y]
**Velocity**: [XX points]
**Carry Over**: [X stories]

## Ceremony: What Went Well? 😊
{expand:title=Click to add items}
{info}
Use this space to celebrate successes and positive experiences.
{info}

- [Positive item 1]
- [Positive item 2]
- [Positive item 3]
{expand}

## Ceremony: What Didn't Go Well? 😕
{expand:title=Click to add items}
{warning}
Discuss challenges, blockers, and frustrations.
{warning}

- [Challenge 1]
- [Challenge 2]
- [Challenge 3]
{expand}

## Ceremony: Ideas & Actions 💡
{expand:title=Click to add items}
{note}
Brainstorm improvements and define actionable next steps.
{note}

| Idea | Votes | Action Owner | Target Sprint |
|------|-------|--------------|---------------|
| [Improvement idea 1] | ⭐⭐⭐ | @owner | Sprint X+1 |
| [Improvement idea 2] | ⭐⭐ | @owner | Sprint X+2 |
{expand}

## Action Items (Committed)
{tasks}
- [ ] [Action 1 - High Priority] - @owner - Due: [Date]
- [ ] [Action 2 - Medium Priority] - @owner - Due: [Date]
- [ ] [Action 3 - Low Priority] - @owner - Due: [Date]
{tasks}

## Action Items from Previous Retro - Status Check
{panel:title=Previous Action Items}
| Action | Owner | Status | Notes |
|--------|-------|--------|-------|
| [Previous action 1] | @owner | {status:colour=Green|title=Done} | [Completion notes] |
| [Previous action 2] | @owner | {status:colour=Yellow|title=In Progress} | [Progress notes] |
{panel}

## Team Mood
{info}
Use emojis or a numeric scale (1-10) to capture team sentiment.
{info}

**Overall Sprint Mood**: [😊 😐 😕]
**Team Energy**: [X/10]

## Next Retro
**Date**: [Next retro date]
**Focus**: [Special focus if any]
```

### 4. Product Requirements Document (PRD) Template

```markdown
{panel:title=PRD Overview|borderColor=#0052cc}
**Feature Name**: [Feature Name]
**PRD ID**: PRD-XXX
**Author**: @author
**Status**: {status:colour=Blue|title=Draft}
**Last Updated**: {date}
**Epic Link**: {jira:Epic Key}
{panel}

## Problem Statement
[Describe the user problem or business need. Answer: What problem are we solving and for whom?]

## Goals & Success Metrics

### Goals
1. [Primary goal]
2. [Secondary goal]

### Success Metrics
| Metric | Target | Measurement |
|--------|--------|-------------|
| [Metric 1] | [Target value] | [How to measure] |
| [Metric 2] | [Target value] | [How to measure] |

## User Stories & Use Cases

### Primary User Story
**As a** [user type]
**I want** [capability]
**So that** [benefit]

**Acceptance Criteria**:
- [ ] [Criterion 1]
- [ ] [Criterion 2]
- [ ] [Criterion 3]

### Use Cases
1. **Use Case 1**: [Scenario name]
   - **Actor**: [User role]
   - **Preconditions**: [What must be true]
   - **Flow**: [Step-by-step]
   - **Postconditions**: [End state]

## Requirements

### Functional Requirements
| ID | Requirement | Priority | Notes |
|----|-------------|----------|-------|
| FR-1 | [Requirement description] | Must Have | |
| FR-2 | [Requirement description] | Should Have | |
| FR-3 | [Requirement description] | Nice to Have | |

### Non-Functional Requirements
| ID | Requirement | Target | Notes |
|----|-------------|--------|-------|
| NFR-1 | Performance | <2s load time | |
| NFR-2 | Scalability | 10K concurrent users | |
| NFR-3 | Availability | 99.9% uptime | |

## Design & User Experience

### User Flow
[Insert diagram or link to design files]

### Wireframes/Mockups
[Embed images or link to Figma]

### UI Specifications
- [Key UI element 1]
- [Key UI element 2]

## Technical Considerations

### Architecture
[High-level architecture overview or diagram]

### Dependencies
- [System dependency 1]
- [Service dependency 2]
- [Third-party integration]

### Technical Constraints
- [Constraint 1]
- [Constraint 2]

## Release Plan

### Phases
| Phase | Features | Target Date | Status |
|-------|----------|-------------|--------|
| MVP (v1.0) | [Core features] | [Date] | {status:colour=Blue|title=Planned} |
| v1.1 | [Additional features] | [Date] | {status:colour=Gray|title=Future} |

### Rollout Strategy
[Describe rollout approach: beta, phased, full launch]

## Open Questions
{warning}
- [ ] [Question 1 requiring resolution]
- [ ] [Question 2 requiring resolution]
{warning}

## Appendix
- [Related Documents](#)
- [Research & Data](#)
- [Competitive Analysis](#)
```

### 5. Decision Log Template

```markdown
{panel:title=Decision Record|borderColor=#ff5630}
**Decision ID**: [PROJ]-DEC-[XXX]
**Date**: {date}
**Status**: {status:colour=Green|title=Approved}
**Decision Maker**: @decisionmaker
**Stakeholders**: @stakeholder1, @stakeholder2
{panel}

## Context & Background
[Provide background on what led to this decision. Include relevant history, constraints, and why a decision is needed now.]

## Problem Statement
[Clearly articulate the problem or question that requires a decision]

## Options Considered

### Option 1: [Option Name]
**Description**: [Detailed description]

**Pros**:
- [Advantage 1]
- [Advantage 2]

**Cons**:
- [Disadvantage 1]
- [Disadvantage 2]

**Cost/Effort**: [Estimate]

### Option 2: [Option Name]
**Description**: [Detailed description]

**Pros**:
- [Advantage 1]
- [Advantage 2]

**Cons**:
- [Disadvantage 1]
- [Disadvantage 2]

**Cost/Effort**: [Estimate]

### Option 3: [Option Name]
[Continue pattern]

## Decision Matrix
| Criteria | Weight | Option 1 | Option 2 | Option 3 |
|----------|--------|----------|----------|----------|
| Cost | 30% | 7/10 | 5/10 | 8/10 |
| Time to Implement | 25% | 6/10 | 9/10 | 5/10 |
| Scalability | 25% | 8/10 | 6/10 | 9/10 |
| Risk | 20% | 7/10 | 8/10 | 5/10 |
| **Total Score** | | **X.X** | **Y.Y** | **Z.Z** |

## Decision
{info}
**Chosen Option**: [Option X]

**Rationale**: [Explain why this option was selected. Reference the decision matrix and key factors.]
{info}

## Consequences & Trade-offs
**Positive Consequences**:
- [Expected benefit 1]
- [Expected benefit 2]

**Negative Consequences/Trade-offs**:
- [Known limitation 1]
- [Known limitation 2]

**Mitigation Plans**:
- [How to address limitation 1]

## Implementation Plan
{tasks}
- [ ] [Implementation step 1] - @owner - [Date]
- [ ] [Implementation step 2] - @owner - [Date]
- [ ] [Implementation step 3] - @owner - [Date]
{tasks}

## Success Criteria
[How will we know if this decision was the right one?]
- [Metric/outcome 1]
- [Metric/outcome 2]

## Review Date
**Scheduled Review**: [Date to revisit this decision]

## Related Decisions
- [Link to related decision 1]
- [Link to related decision 2]

## References
- [Supporting document 1]
- [Research/data source]
```
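Filling in the **Total Score** row of a decision matrix like the one above is mechanical: multiply each criterion score by its weight and sum. A minimal sketch in Python — the `weighted_total` helper and the hard-coded scores are illustrative, not part of the template:

```python
# Illustrative weighted-scoring helper for a decision matrix.
# Weights must sum to 1.0; scores are on a 0-10 scale (as in the matrix above).
weights = {"cost": 0.30, "time": 0.25, "scalability": 0.25, "risk": 0.20}

options = {
    "Option 1": {"cost": 7, "time": 6, "scalability": 8, "risk": 7},
    "Option 2": {"cost": 5, "time": 9, "scalability": 6, "risk": 8},
    "Option 3": {"cost": 8, "time": 5, "scalability": 9, "risk": 5},
}

def weighted_total(scores):
    """Sum of score * weight across all criteria, rounded for the table."""
    return round(sum(scores[c] * w for c, w in weights.items()), 2)

totals = {name: weighted_total(s) for name, s in options.items()}
best = max(totals, key=totals.get)  # highest weighted total wins
```

With the example scores from the matrix, Option 1 comes out on top (7.0 vs 6.85 and 6.9) — which is why the Rationale section should reference the matrix explicitly.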
## Jira Templates Library

### Jira Template Types

| Template | Purpose | Key Sections |
|----------|---------|--------------|
| **User Story** | Feature requests in As a / I want / So that format | Acceptance Criteria (Given/When/Then), Design links, Technical Notes, Definition of Done |
| **Bug Report** | Defect capture with reproduction steps | Environment, Steps to Reproduce, Expected vs Actual Behavior, Severity, Workaround |
| **Epic** | High-level initiative scope | Vision, Goals, Success Metrics, Story Breakdown, Dependencies, Timeline |

**Standard Sections** included across all Jira templates:

- Clear summary line
- Acceptance or success criteria as checkboxes
- Related issues and dependencies block
- Definition of Done (for stories)

### 1. User Story Template

```
**As a** [type of user]
**I want** [capability or goal]
**So that** [benefit or value]

## Acceptance Criteria
- [ ] Given [context], when [action], then [outcome]
- [ ] Given [context], when [action], then [outcome]
- [ ] Given [context], when [action], then [outcome]

## Design
[Link to design files, wireframes, or mockups]

## Technical Notes
[Any technical considerations, dependencies, or constraints]

## Definition of Done
- [ ] Code reviewed and approved
- [ ] Unit tests written and passing
- [ ] Integration tests passing
- [ ] Documentation updated
- [ ] Deployed to staging
- [ ] QA approved
- [ ] Deployed to production

## Related Stories
[Links to related issues, epics, or dependencies]
```

### 2. Bug Report Template

```
## Summary
[Brief, clear summary of the bug]

## Environment
- **Browser/Device**: [e.g., Chrome 118, iOS 17, Android 13]
- **OS**: [e.g., Windows 11, macOS 14]
- **App Version**: [e.g., v2.3.1]
- **User Type**: [e.g., Admin, End User]

## Steps to Reproduce
1. [First step]
2. [Second step]
3. [Third step]
4. [Observe issue]

## Expected Behavior
[What should happen]

## Actual Behavior
[What actually happens]

## Screenshots/Videos
[Attach or link to visual evidence]

## Impact
- **Severity**: [Critical / High / Medium / Low]
- **Affected Users**: [Percentage or user count]
- **Workaround**: [If available]

## Additional Context
[Any other relevant information, logs, error messages]

## Related Issues
[Links to similar bugs or related features]
```

### 3. Epic Template

```
## Vision
[High-level description of what this epic aims to achieve and why it matters]

## Goals
1. [Primary goal]
2. [Secondary goal]
3. [Tertiary goal]

## Success Metrics
| Metric | Target | How to Measure |
|--------|--------|----------------|
| [Metric 1] | [Value] | [Method] |
| [Metric 2] | [Value] | [Method] |

## User Stories (Breakdown)
- [ ] [User story 1] - [PROJ-XXX]
- [ ] [User story 2] - [PROJ-XXX]
- [ ] [User story 3] - [PROJ-XXX]

## Dependencies
- [Dependency 1]
- [Dependency 2]

## Timeline
**Target Start**: [Date]
**Target Completion**: [Date]

## Risks
- [Risk 1]
- [Risk 2]

## Related Epics
[Links to related epics]

## Resources
- [PRD Link]
- [Design Files]
- [Technical Specs]
```

## Best Practices

### Template Design Principles
**Clarity**: Use clear section headers and instructions
**Consistency**: Maintain visual and structural consistency
**Completeness**: Include all necessary sections
**Flexibility**: Allow customization where appropriate
**Guidance**: Provide inline instructions and examples

### Macro Usage Guidelines
**Dynamic Content**: Use macros for auto-updating content (dates, user mentions, Jira queries)
**Visual Hierarchy**: Use `{panel}`, `{info}`, and `{note}` to create visual distinction
**Interactivity**: Use `{expand}` for collapsible sections in long templates
**Integration**: Embed Jira charts and tables via `{jira}` macro for live data

### Template Maintenance
**Version Control**: Track template versions and changes
**Deprecation**: Clearly mark outdated templates
**Documentation**: Maintain usage guides for each template
**Feedback Loop**: Regularly gather user feedback and iterate

---

## Handoff Protocols

**FROM Senior PM**:
- Template requirements for projects
- Reporting template needs
- Executive summary formats
- Portfolio tracking templates

**TO Senior PM**:
- Completed templates ready for use
- Template usage analytics
- Suggestions for new templates
- Template optimization opportunities

**FROM Scrum Master**:
- Sprint ceremony template needs
- Team-specific template requests
- Retrospective format preferences
- Sprint planning layouts

**TO Scrum Master**:
- Sprint-ready templates
- Team documentation templates
- Agile ceremony structures
- Velocity tracking templates

**FROM Jira Expert**:
- Issue template requirements
- Custom field display needs
- Workflow-specific templates
- Reporting template requests

**TO Jira Expert**:
- Issue description templates
- Field configuration templates
- Workflow documentation
- JQL query templates

**FROM Confluence Expert**:
- Space-specific template needs
- Global template requests
- Blueprint requirements
- Macro-based templates

**TO Confluence Expert**:
- Configured page templates
- Blueprint structures
- Template deployment plans
- Usage guidelines

**FROM Atlassian Admin**:
- Org-wide template standards
- Global template deployment
- Template governance requirements
- Compliance templates

**TO Atlassian Admin**:
- Global templates for approval
- Template usage reports
- Template compliance status
- Recommendations for standards

---

## Atlassian MCP Integration

**Primary Tools**: Confluence MCP, Jira MCP

### Template Operations via MCP

All MCP calls below use the exact parameter names expected by the Atlassian MCP server. Replace angle-bracket placeholders with real values before executing.

**Create a Confluence page template:**

```json
{
  "tool": "confluence_create_page",
  "parameters": {
    "space_key": "PROJ",
    "title": "Template: Meeting Notes",
    "body": "<storage-format template content>",
    "labels": ["template", "meeting-notes"],
    "parent_id": "<optional parent page id>"
  }
}
```

**Update an existing template:**

```json
{
  "tool": "confluence_update_page",
  "parameters": {
    "page_id": "<existing page id>",
    "version": "<current_version + 1>",
    "title": "Template: Meeting Notes",
    "body": "<updated storage-format content>",
    "version_comment": "v2 — added status macro to header"
  }
}
```

**Create a Jira issue description template (via field configuration):**

```json
{
  "tool": "jira_update_field_configuration",
  "parameters": {
    "project_key": "PROJ",
    "field_id": "description",
    "default_value": "<template markdown or Atlassian Document Format JSON>"
  }
}
```

**Deploy template to multiple spaces (batch):**

```json
// Repeat for each target space key
{
  "tool": "confluence_create_page",
  "parameters": {
    "space_key": "<SPACE_KEY>",
    "title": "Template: Meeting Notes",
    "body": "<storage-format template content>",
    "labels": ["template"]
  }
}
// After each create, verify:
{
  "tool": "confluence_get_page",
  "parameters": {
    "space_key": "<SPACE_KEY>",
    "title": "Template: Meeting Notes"
  }
}
// Assert response status == 200 and page body is non-empty before proceeding to the next space
```

**Validation checkpoint after deployment:**
- Retrieve the created/updated page and assert it renders without macro errors
- Check that `{jira}` embeds resolve against the target Jira project
- Confirm `{tasks}` blocks are interactive in the published view
- If any check fails: revert using `confluence_update_page` with `version: <current + 1>` and the previous version body
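The batch deploy-and-verify flow described above can be sketched as a loop. Here `mcp_call` is a hypothetical stand-in for whatever function your MCP client exposes for tool invocation — it is not a real API, and the response shape (`status`, `body` keys) is an assumption for illustration:

```python
# Sketch of the deploy-and-verify batch flow. `mcp_call` is a hypothetical
# stand-in for an MCP client's tool-invocation function.
TEMPLATE_TITLE = "Template: Meeting Notes"

def deploy_template(mcp_call, space_keys, body):
    """Create the template page in each space, verifying before moving on."""
    deployed = []
    for space_key in space_keys:
        mcp_call("confluence_create_page", {
            "space_key": space_key,
            "title": TEMPLATE_TITLE,
            "body": body,
            "labels": ["template"],
        })
        # Verify: re-fetch the page before proceeding to the next space.
        page = mcp_call("confluence_get_page", {
            "space_key": space_key,
            "title": TEMPLATE_TITLE,
        })
        if page.get("status") != 200 or not page.get("body"):
            raise RuntimeError(f"Deployment to {space_key} failed verification")
        deployed.append(space_key)
    return deployed

# Dry run with a stub client, to show the call sequence without a live server:
def stub_mcp_call(tool, parameters):
    return {"status": 200, "body": "<p>template</p>"}

deployed = deploy_template(stub_mcp_call, ["ENG", "MKT"], "<p>template body</p>")
```

Failing fast per space (rather than creating everything and verifying at the end) keeps a partial rollout easy to diagnose and revert.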
## Best Practices & Governance

### Template Governance

**Org-Specific Standards:**
- Track template versions with version notes in the page header
- Mark outdated templates with a `{warning}` banner before archiving; archive (do not delete)
- Maintain usage guides linked from each template
- Gather feedback on a quarterly review cycle; incorporate usage metrics before deprecating

**Quality Gates (apply before every deployment):**
- Example content provided for each section
- Tested with sample data in preview
- Version comment added to the change log
- Feedback mechanism in place (comments enabled or linked survey)

**Governance Process**:
1. Request and justification
2. Design and review
3. Testing with pilot users
4. Documentation
5. Approval
6. Deployment (via MCP or manual)
7. Training
8. Monitoring

**Review Cycle**:
- Templates reviewed quarterly
- Usage metrics analyzed
- Feedback incorporated
- Updates deployed
- Deprecated templates archived

**Quality Standards**:
- All templates documented
- Clear usage instructions
- Example content provided
- Tested before deployment
- Version controlled
- Feedback mechanism in place

## Handoff Protocols

See **HANDOFFS.md** for the full handoff matrix. Summary:

| Partner | Receives FROM | Sends TO |
|---------|--------------|---------|
| **Senior PM** | Template requirements, reporting templates, executive formats | Completed templates, usage analytics, optimization suggestions |
| **Scrum Master** | Sprint ceremony needs, team-specific requests, retro format preferences | Sprint-ready templates, agile ceremony structures, velocity tracking templates |
| **Jira Expert** | Issue template requirements, custom field display needs | Issue description templates, field config templates, JQL query templates |
| **Confluence Expert** | Space-specific needs, global template requests, blueprint requirements | Configured page templates, blueprint structures, deployment plans |
| **Atlassian Admin** | Org-wide standards, global deployment requirements, compliance templates | Global templates for approval, usage reports, compliance status |

## Atlassian MCP Integration

**Primary Tools**: Jira MCP, Confluence MCP

**Template Operations**:
- Create page templates in Confluence
- Deploy issue description templates in Jira
- Build automated template deployment scripts
- Track template usage via analytics
- Update templates programmatically
- Version control template content

**Integration Points**:
- Support all roles with standardized templates
- Enable Confluence Expert with deployable templates
- Provide Jira Expert with issue templates
- Supply Senior PM with reporting templates
- Give Scrum Master sprint ceremony templates

@@ -1,6 +1,6 @@
---
name: "scrum-master"
description: Advanced Scrum Master with data-driven team health analysis, velocity forecasting, retrospective insights, and team development expertise. Features comprehensive sprint health scoring, Monte Carlo forecasting, and psychological safety frameworks for high-performing agile teams.
description: "Advanced Scrum Master skill for data-driven agile team analysis and coaching. Use when the user asks about sprint planning, velocity tracking, retrospectives, standup facilitation, backlog grooming, story points, burndown charts, blocker resolution, or agile team health. Runs Python scripts to analyse sprint JSON exports from Jira or similar tools: velocity_analyzer.py for Monte Carlo sprint forecasting, sprint_health_scorer.py for multi-dimension health scoring, and retrospective_analyzer.py for action-item and theme tracking. Produces confidence-interval forecasts, health grade reports, and improvement-velocity trends for high-performing Scrum teams."
license: MIT
metadata:
  version: 2.0.0
@@ -14,55 +14,81 @@ metadata:

# Scrum Master Expert

Advanced agile practitioner specializing in data-driven team development, psychological safety facilitation, and high-performance sprint execution. Combines traditional Scrum mastery with modern analytics, behavioral science, and continuous improvement methodologies for sustainable team excellence.
Data-driven Scrum Master skill combining sprint analytics, probabilistic forecasting, and team development coaching. The unique value is in the three Python analysis scripts and their workflows — refer to `references/` and `assets/` for deeper framework detail.

---

## Table of Contents

- [Capabilities](#capabilities)
- [Analysis Tools & Usage](#analysis-tools--usage)
- [Input Requirements](#input-requirements)
- [Analysis Tools](#analysis-tools)
- [Methodology](#methodology)
- [Templates & Assets](#templates--assets)
- [Reference Frameworks](#reference-frameworks)
- [Implementation Workflows](#implementation-workflows)
- [Assessment & Measurement](#assessment--measurement)
- [Best Practices](#best-practices)
- [Advanced Techniques](#advanced-techniques)
- [Limitations & Considerations](#limitations--considerations)
- [Sprint Execution Workflows](#sprint-execution-workflows)
- [Team Development Workflow](#team-development-workflow)
- [Key Metrics & Targets](#key-metrics--targets)
- [Limitations](#limitations)

---

## Capabilities

### Data-Driven Sprint Analytics
- **Velocity Analysis**: Multi-dimensional velocity tracking with trend detection, anomaly identification, and Monte Carlo forecasting using `velocity_analyzer.py`
- **Sprint Health Scoring**: Comprehensive health assessment across 6 dimensions (commitment reliability, scope stability, blocker resolution, ceremony engagement, story completion, velocity predictability) via `sprint_health_scorer.py`
- **Retrospective Intelligence**: Pattern recognition in team feedback, action item completion tracking, and improvement trend analysis through `retrospective_analyzer.py`

### Team Development & Psychology
- **Psychological Safety Facilitation**: Research-based approach to creating safe-to-fail environments using Google's Project Aristotle findings
- **Team Maturity Assessment**: Tuckman's model applied to Scrum teams with stage-specific coaching interventions
- **Conflict Resolution**: Structured approaches for productive disagreement and healthy team dynamics
- **Performance Coaching**: Individual and team coaching using behavioral science and adult learning principles

### Advanced Forecasting & Planning
- **Monte Carlo Simulation**: Probabilistic sprint and release forecasting with confidence intervals
- **Capacity Planning**: Statistical modeling of team capacity with seasonal adjustments and dependency analysis
- **Risk Assessment**: Early warning systems for team performance degradation and intervention recommendations

### Process Excellence
- **Ceremony Optimization**: Data-driven improvement of sprint ceremonies for maximum value and engagement
- **Continuous Improvement Systems**: Automated tracking of retrospective action items and improvement velocity
- **Stakeholder Communication**: Executive-ready reports with actionable insights and trend analysis

## Analysis Tools & Usage

### 1. Velocity Analyzer (`scripts/velocity_analyzer.py`)

Runs rolling averages, linear-regression trend detection, and Monte Carlo simulation over sprint history.

```bash
# Text report
python velocity_analyzer.py sprint_data.json --format text

# JSON output for downstream processing
python velocity_analyzer.py sprint_data.json --format json > analysis.json
```

**Outputs**: velocity trend (improving/stable/declining), coefficient of variation, 6-sprint Monte Carlo forecast at 50 / 70 / 85 / 95% confidence intervals, anomaly flags with root-cause suggestions.

**Validation**: If fewer than 3 sprints are present in the input, stop and prompt the user: *"Velocity analysis needs at least 3 sprints. Please provide additional sprint data."* 6+ sprints are recommended for statistically significant Monte Carlo results.
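The Monte Carlo technique behind those confidence intervals can be illustrated with a minimal resampling sketch. This is not the actual `velocity_analyzer.py` implementation — just the general approach: resample past velocities to build a distribution of multi-sprint totals, then read "at least N points with X% confidence" off the percentiles. The `history` values are made up:

```python
import random

# Minimal Monte Carlo sprint forecast (illustrative, not the script itself).
def monte_carlo_forecast(velocities, sprints=6, trials=10_000, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducible output
    # Total points delivered over `sprints` sprints, one sum per trial.
    outcomes = sorted(
        sum(rng.choice(velocities) for _ in range(sprints))
        for _ in range(trials)
    )
    # "With X% confidence the team delivers at least this many points":
    # the (100 - X)th percentile of the simulated totals.
    def at_confidence(pct):
        return outcomes[int(len(outcomes) * (1 - pct / 100))]
    return {p: at_confidence(p) for p in (50, 70, 85, 95)}

history = [21, 25, 19, 24, 22, 27]  # made-up sprint velocities
forecast = monte_carlo_forecast(history)
```

Note the ordering property: the 95% figure is always the most conservative (lowest), which is what makes it suitable for external commitments.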
---

### 2. Sprint Health Scorer (`scripts/sprint_health_scorer.py`)

Scores team health across 6 weighted dimensions, producing an overall 0–100 grade.

| Dimension | Weight | Target |
|---|---|---|
| Commitment Reliability | 25% | >85% sprint goals met |
| Scope Stability | 20% | <15% mid-sprint changes |
| Blocker Resolution | 15% | <3 days average |
| Ceremony Engagement | 15% | >90% participation |
| Story Completion Distribution | 15% | High ratio of fully done stories |
| Velocity Predictability | 10% | CV <20% |

```bash
python sprint_health_scorer.py sprint_data.json --format text
```

**Outputs**: overall health score + grade, per-dimension scores with recommendations, sprint-over-sprint trend, intervention priority matrix.

**Validation**: Requires 2+ sprints with ceremony and story-completion data. If data is missing, report which dimensions cannot be scored and ask the user to supply the gaps.
|
||||
|
||||
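The weighted roll-up behind the overall grade can be illustrated like this (dimension names and weights come from the table above; the helper name and 0-100 input scores are assumptions for illustration):

```python
WEIGHTS = {
    "commitment_reliability": 0.25,
    "scope_stability": 0.20,
    "blocker_resolution": 0.15,
    "ceremony_engagement": 0.15,
    "story_completion": 0.15,
    "velocity_predictability": 0.10,
}

def overall_health(dimension_scores):
    """Weighted average of 0-100 dimension scores, yielding an overall 0-100 score."""
    return sum(WEIGHTS[name] * score for name, score in dimension_scores.items())

scores = {
    "commitment_reliability": 90, "scope_stability": 80, "blocker_resolution": 70,
    "ceremony_engagement": 95, "story_completion": 60, "velocity_predictability": 75,
}
print(round(overall_health(scores), 1))
```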
---

### 3. Retrospective Analyzer (`scripts/retrospective_analyzer.py`)

Tracks action-item completion, recurring themes, sentiment trends, and team maturity progression.

```bash
python retrospective_analyzer.py sprint_data.json --format text
```

**Outputs**: action-item completion rate by priority/owner, recurring-theme persistence scores, team maturity level (forming/storming/norming/performing), improvement-velocity trend.

**Validation**: Requires 3+ retrospectives with action-item tracking. With fewer, note the limitation and offer partial theme analysis only.
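Completion-rate-by-owner-or-priority analytics of the kind listed above reduce to a simple group-by; a sketch (the record shape is an assumption, not the analyzer's actual schema):

```python
from collections import defaultdict

def completion_rates(action_items, key="owner"):
    """Fraction of completed action items, grouped by owner or priority."""
    done = defaultdict(int)
    total = defaultdict(int)
    for item in action_items:
        total[item[key]] += 1
        done[item[key]] += item["completed"]  # bool counts as 0 or 1
    return {k: done[k] / total[k] for k in total}

items = [
    {"owner": "ana", "priority": "high", "completed": True},
    {"owner": "ana", "priority": "low", "completed": False},
    {"owner": "raj", "priority": "high", "completed": True},
]
print(completion_rates(items))                   # grouped by owner
print(completion_rates(items, key="priority"))   # grouped by priority
```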
---

## Input Requirements

### Sprint Data Structure

All scripts accept JSON following the schema in `assets/sample_sprint_data.json`:

```json
{
  ...
  "sprints": [
    {
      "sprint_number": "number",
      "planned_points": "number",
      "completed_points": "number",
      "stories": [...],
      "blockers": [...],
      ...
    }
  ]
}
```

### Minimum Data Requirements

- **Velocity Analysis**: 3+ sprints (6+ recommended for statistical significance)
- **Health Scoring**: 2+ sprints with ceremony and story completion data
- **Retrospective Analysis**: 3+ retrospectives with action item tracking
- **Team Development Assessment**: 4+ weeks of observation data

Jira and similar tools can export sprint data; map exported fields to this schema before running the scripts. See `assets/sample_sprint_data.json` for a complete 6-sprint example and `assets/expected_output.json` for corresponding expected results (velocity avg 20.2 pts, CV 12.7%, health score 78.3/100, action-item completion 46.7%).
---

## Analysis Tools

### Velocity Analyzer (`scripts/velocity_analyzer.py`)

Comprehensive velocity analysis with statistical modeling and forecasting.

**Features**:
- Rolling averages (3-, 5-, and 8-sprint windows)
- Trend detection using linear regression
- Volatility assessment (coefficient of variation)
- Anomaly detection (outliers beyond 2σ)
- Monte Carlo forecasting with confidence intervals

**Usage**:
```bash
python velocity_analyzer.py sprint_data.json --format text
python velocity_analyzer.py sprint_data.json --format json > analysis.json
```

**Outputs**:
- Velocity trends (improving/stable/declining)
- Predictability metrics (CV, volatility classification)
- 6-sprint forecast with 50%, 70%, 85%, 95% confidence intervals
- Anomaly identification with root-cause suggestions

### Sprint Health Scorer (`scripts/sprint_health_scorer.py`)

Multi-dimensional team health assessment with actionable recommendations.

**Scoring Dimensions** (weighted):
1. **Commitment Reliability** (25%): Sprint goal achievement consistency
2. **Scope Stability** (20%): Mid-sprint scope change frequency
3. **Blocker Resolution** (15%): Average time to resolve impediments
4. **Ceremony Engagement** (15%): Participation and effectiveness metrics
5. **Story Completion Distribution** (15%): Ratio of completed vs. partial stories
6. **Velocity Predictability** (10%): Delivery consistency measurement

**Usage**:
```bash
python sprint_health_scorer.py sprint_data.json --format text
```

**Outputs**:
- Overall health score (0-100) with grade classification
- Individual dimension scores with improvement recommendations
- Trend analysis across sprints
- Intervention priority matrix

### Retrospective Analyzer (`scripts/retrospective_analyzer.py`)

Advanced retrospective data analysis for continuous improvement insights.

**Analysis Components**:
- **Action Item Tracking**: Completion rates by priority and owner
- **Theme Identification**: Recurring patterns in team feedback
- **Sentiment Analysis**: Positive/negative trend tracking
- **Improvement Velocity**: Rate of team development and problem resolution
- **Team Maturity Scoring**: Development stage assessment

**Usage**:
```bash
python retrospective_analyzer.py sprint_data.json --format text
```

**Outputs**:
- Action item completion analytics with bottleneck identification
- Recurring theme analysis with persistence scoring
- Team maturity level assessment (forming/storming/norming/performing)
- Improvement velocity trends and recommendations

## Sprint Execution Workflows

### Sprint Planning

1. Run velocity analysis: `python velocity_analyzer.py sprint_data.json --format text`
2. Use the 70% confidence interval as the recommended commitment ceiling for the sprint backlog.
3. Review the health scorer's Commitment Reliability and Scope Stability scores to calibrate negotiation with the Product Owner.
4. If Monte Carlo output shows high volatility (CV >20%), surface this to stakeholders with range estimates rather than single-point forecasts.
5. Document capacity assumptions (leave, dependencies) for retrospective comparison.

### Daily Standup

1. Track participation and help-seeking patterns — feed ceremony data into `sprint_health_scorer.py` at sprint end.
2. Log each blocker with the date opened; resolution time feeds the Blocker Resolution dimension.
3. If a blocker is unresolved after 2 days, escalate proactively and note it in the sprint data.

### Sprint Review

1. Present velocity trend and health score alongside the demo to give stakeholders delivery context.
2. Capture scope-change requests raised during review; record them as scope-change events in sprint data for the next scoring cycle.

### Sprint Retrospective

Facilitate from the health scorer and retrospective analyzer output; the step-by-step facilitation sequence appears under Team Development Intervention below.
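The rolling-average, coefficient-of-variation, and 2σ-outlier features listed above reduce to a few lines of standard statistics; a sketch, not the script itself:

```python
import statistics

def rolling_average(velocities, window):
    """Trailing moving average over the given sprint window."""
    return [
        sum(velocities[i - window:i]) / window
        for i in range(window, len(velocities) + 1)
    ]

def coefficient_of_variation(velocities):
    """Sample standard deviation relative to the mean (lower = more predictable)."""
    return statistics.stdev(velocities) / statistics.mean(velocities)

def anomalies(velocities):
    """Indices of sprints whose velocity is more than 2 standard deviations from the mean."""
    mu, sigma = statistics.mean(velocities), statistics.stdev(velocities)
    return [i for i, v in enumerate(velocities) if abs(v - mu) > 2 * sigma]

history = [19, 22, 18, 21, 20, 23]
print(rolling_average(history, 3))
print(round(coefficient_of_variation(history), 3))
print(anomalies(history))
```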
---

## Methodology

### Data-Driven Scrum Mastery

Traditional Scrum practices enhanced with quantitative analysis and behavioral science:

#### 1. Measurement-First Approach
- Establish baseline metrics before implementing changes
- Use statistical significance testing for process improvements
- Track leading indicators (engagement, psychological safety) alongside lagging indicators (velocity)
- Apply continuous feedback loops for rapid iteration

#### 2. Psychological Safety Foundation
Based on Amy Edmondson's research and Google's Project Aristotle findings:
- **Assessment**: Regular psychological safety surveys and behavioral observation
- **Intervention**: Structured vulnerability modeling and safe-to-fail experiments
- **Measurement**: Track speaking-up frequency, mistake discussion openness, help-seeking behavior

#### 3. Team Development Lifecycle
Tuckman's model applied to Scrum teams with stage-specific facilitation:
- **Forming**: Structure provision, process education, relationship building
- **Storming**: Conflict facilitation, psychological safety maintenance, process flexibility
- **Norming**: Autonomy building, process ownership transfer, external relationship development
- **Performing**: Challenge introduction, innovation support, organizational impact facilitation

#### 4. Continuous Improvement Science
Evidence-based approach to retrospective outcomes:
- Action item completion rate optimization
- Root cause analysis using statistical methods
- Improvement experiment design and measurement
- Knowledge retention and pattern recognition
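"Statistical significance testing for process improvements" can be as simple as a permutation test on before/after sprint velocities; a sketch under that assumption (this helper is not part of the shipped scripts):

```python
import random
import statistics

def permutation_p_value(before, after, trials=10_000, seed=0):
    """Two-sided permutation test: could the mean-velocity change be chance?"""
    rng = random.Random(seed)
    observed = abs(statistics.mean(after) - statistics.mean(before))
    pooled = list(before) + list(after)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        a, b = pooled[:len(before)], pooled[len(before):]
        if abs(statistics.mean(b) - statistics.mean(a)) >= observed:
            hits += 1
    return hits / trials

p = permutation_p_value([18, 19, 20, 18], [22, 23, 24, 22])
print(p)  # a small p-value suggests the improvement is unlikely to be noise
```

A permutation test avoids normality assumptions, which matters with the small sprint counts typical here.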
---
## Templates & Assets

### Sprint Reporting (`assets/sprint_report_template.md`)
Production-ready sprint report template including:
- Executive summary with health grade and key metrics
- Delivery performance dashboard (commitment ratio, velocity trends)
- Process health indicators (scope change, blocker resolution)
- Quality metrics (DoD adherence, technical debt)
- Risk assessment and stakeholder communication

### Team Health Assessment (`assets/team_health_check_template.md`)
Spotify Squad Health Check model adaptation featuring:
- 9-dimension health assessment (delivering value, learning, fun, codebase health, mission clarity, suitable process, support, speed, pawns vs. players)
- Psychological safety evaluation framework
- Team maturity level assessment
- Action item prioritization matrix

### Sample Data (`assets/sample_sprint_data.json`)
Comprehensive 6-sprint dataset demonstrating:
- Multi-story sprint structure with realistic complexity
- Blocker tracking and resolution patterns
- Ceremony engagement metrics
- Retrospective data with action item follow-through
- Team capacity variations and external dependencies

### Expected Outputs (`assets/expected_output.json`)
Standardized analysis results showing:
- Velocity analysis with a 20.2-point average and low volatility (CV: 12.7%)
- Sprint health score of 78.3/100 with dimension breakdowns
- Retrospective insights showing a 46.7% action item completion rate
- Team maturity assessment at the "performing" level

---
## Reference Frameworks

### Velocity Forecasting Guide (`references/velocity-forecasting-guide.md`)
Comprehensive guide to probabilistic estimation including:
- Monte Carlo simulation implementation details
- Confidence interval calculation methods
- Trend adjustment techniques for improving/declining teams
- Stakeholder communication strategies for uncertainty
- Advanced techniques: seasonality adjustment, capacity modeling, multi-team dependencies

### Team Dynamics Framework (`references/team-dynamics-framework.md`)
Research-based team development approach covering:
- Tuckman's stages applied to Scrum teams with specific behavioral indicators
- Psychological safety assessment and building techniques
- Conflict resolution strategies for productive disagreement
- Stage-specific facilitation approaches and intervention strategies
- Measurement tools for team development tracking

---
## Implementation Workflows

### Sprint Execution Cycle

#### Sprint Planning (Data-Informed)

1. **Pre-Planning Analysis**:
   - Run velocity analysis to determine a sustainable commitment level
   - Review sprint health scores from previous sprints
   - Analyze retrospective action items for capacity impact

2. **Capacity Determination**:
   - Apply Monte Carlo forecasting for realistic point estimation
   - Factor in team member availability and external dependencies
   - Use historical commitment reliability data for scope negotiation

3. **Goal Setting & Commitment**:
   - Align sprint goals with team maturity level and capability trends
   - Ensure psychological safety in commitment discussions
   - Document assumptions and dependencies for retrospective analysis

#### Daily Standups (Team Development Focus)

1. **Structured Format** with a team development overlay:
   - Progress updates with impediment surfacing
   - Help requests and collaboration opportunities
   - Team dynamic observation and psychological safety assessment

2. **Data Collection**:
   - Track participation patterns and engagement levels
   - Note conflict emergence and resolution attempts
   - Monitor help-seeking behavior and vulnerability expression

3. **Real-Time Coaching**:
   - Model psychological safety through Scrum Master vulnerability
   - Facilitate productive conflict when disagreements arise
   - Encourage cross-functional collaboration and knowledge sharing

#### Sprint Review (Stakeholder Alignment)

1. **Demonstration with Context**:
   - Present completed work with velocity and health context
   - Share team development progress and capability growth
   - Discuss impediments and organizational support needs

2. **Feedback Integration**:
   - Capture stakeholder input for retrospective analysis
   - Assess scope-change impacts on team health
   - Plan adaptations based on team maturity and capacity

#### Sprint Retrospective (Intelligence-Driven)

1. **Data-Informed Facilitation**:
   - Present sprint health scores and trends as the starting point
   - Use retrospective analyzer insights to guide discussion focus
   - Surface patterns from historical retrospective themes

2. **Action Item Optimization**:
   - Limit action items based on the team's completion-rate history
   - Assign owners and deadlines based on previous success patterns
   - Design experiments with measurable success criteria

3. **Continuous Improvement**:
   - Track action item completion for the next retrospective
   - Measure team maturity progression using behavioral indicators
   - Adjust the facilitation approach based on team development stage
### Team Development Intervention

#### Assessment Phase

1. **Multi-Dimensional Data Collection**: run the health scorer and retrospective analyzer before the session:

   ```bash
   python sprint_health_scorer.py sprint_data.json --format text > health.txt
   python retrospective_analyzer.py sprint_data.json --format text > retro.txt
   ```

2. **Psychological Safety Evaluation**:
   - Conduct an anonymous team survey using Edmondson's 7-point scale
   - Observe team interactions during ceremonies for safety indicators
   - Interview team members individually for deeper insights

3. **Team Maturity Assessment**:
   - Map behaviors against Tuckman's model stages
   - Assess autonomy level and self-organization capability
   - Evaluate conflict handling and collaboration patterns

#### Intervention Design

1. **Stage-Appropriate Coaching**:
   - **Forming**: Structure provision, process education, trust building
   - **Storming**: Conflict facilitation, safety maintenance, process flexibility
   - **Norming**: Autonomy building, ownership transfer, skill development
   - **Performing**: Challenge provision, innovation support, organizational impact

2. **Psychological Safety Building**:
   - Model vulnerability and mistake admission
   - Reward help-seeking and question-asking behavior
   - Create safe-to-fail experiments and learning opportunities
   - Facilitate difficult conversations with protective boundaries

#### Progress Measurement

1. **Quantitative Tracking**:
   - Weekly ceremony engagement scores
   - Monthly psychological safety pulse surveys
   - Sprint-level team health score progression
   - Quarterly team maturity assessment

2. **Qualitative Indicators**:
   - Behavioral observation during ceremonies
   - Individual 1:1 conversation insights
   - Stakeholder feedback on team collaboration
   - External team perception and reputation

#### Retrospective Facilitation

1. Run the health scorer and retrospective analyzer before the session (commands above).
2. Open with the health score and top-flagged dimensions to focus discussion.
3. Use the retrospective analyzer's action-item completion rate to determine how many new action items the team can realistically absorb (target: ≤3 if the completion rate is <60%).
4. Assign each action item an owner and a measurable success criterion before closing the session.
5. Record new action items in `sprint_data.json` for tracking in the next cycle.
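The absorption rule in step 3 above is mechanical enough to write down; the ≤3 / <60% thresholds come from the text, while the default cap of 5 is a hypothetical value for illustration:

```python
def action_item_cap(completion_rate, default_cap=5):
    """How many new retrospective action items the team can realistically absorb."""
    return 3 if completion_rate < 0.60 else default_cap

print(action_item_cap(0.467))  # sample dataset's 46.7% completion -> capped at 3
```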
---

## Assessment & Measurement

### Key Performance Indicators

#### Team Health Metrics
- **Overall Health Score**: Composite score across 6 dimensions (target: >80)
- **Psychological Safety Index**: Team safety assessment (target: >4.0/5.0)
- **Team Maturity Level**: Development stage classification with progression tracking
- **Improvement Velocity**: Rate of retrospective action item completion (target: >70%)

```bash
python sprint_health_scorer.py team_data.json > health_assessment.txt
python retrospective_analyzer.py team_data.json > retro_insights.txt
```

#### Sprint Performance Metrics
- **Velocity Predictability**: Coefficient of variation in sprint delivery (target: <20%)
- **Commitment Reliability**: Percentage of sprint goals achieved (target: >85%)
- **Scope Stability**: Mid-sprint change frequency (target: <15%)
- **Blocker Resolution Time**: Average days to resolve impediments (target: <3 days)

#### Engagement Metrics
- **Ceremony Participation**: Attendance and engagement quality (target: >90%)
- **Knowledge Sharing**: Cross-training and collaboration frequency
- **Innovation Frequency**: New ideas generated and implemented per sprint
- **Stakeholder Satisfaction**: External perception of team performance

### Assessment Schedule
- **Daily**: Ceremony observation and team dynamic monitoring
- **Weekly**: Sprint progress and impediment tracking
- **Sprint**: Comprehensive health scoring and velocity analysis
- **Monthly**: Psychological safety assessment and team maturity evaluation
- **Quarterly**: Deep retrospective analysis and intervention strategy review

### Calibration & Validation
- Compare analytical insights with team self-assessment
- Validate predictions against actual sprint outcomes
- Cross-reference quantitative metrics with qualitative observations
- Adjust models based on long-term team development patterns

## Team Development Workflow

### Assessment

- Map the retrospective analyzer's maturity output to the appropriate development stage.
- Supplement with an anonymous psychological safety pulse survey (Edmondson 7-point scale) and individual 1:1 observations.
- If maturity output is `forming` or `storming`, prioritize safety and conflict-facilitation interventions before process optimization.

### Intervention

Apply stage-specific facilitation (details in `references/team-dynamics-framework.md`):

| Stage | Focus |
|---|---|
| Forming | Structure, process education, trust building |
| Storming | Conflict facilitation, psychological safety maintenance |
| Norming | Autonomy building, process ownership transfer |
| Performing | Challenge introduction, innovation support |

### Progress Measurement

- **Sprint cadence**: Re-run the health scorer; target an overall score improvement of ≥5 points per quarter.
- **Monthly**: Psychological safety pulse survey; target >4.0/5.0.
- **Quarterly**: Full maturity re-assessment via the retrospective analyzer.
- If scores plateau or regress for 2 consecutive sprints, escalate the intervention strategy (see `references/team-dynamics-framework.md`).
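The plateau-or-regress escalation trigger can be checked mechanically against the health-score history; a sketch (the list-of-scores input shape is an assumption):

```python
def should_escalate(health_scores, lookback=2):
    """True when scores have failed to improve for `lookback` consecutive sprints."""
    if len(health_scores) < lookback + 1:
        return False
    recent = health_scores[-(lookback + 1):]
    # Every sprint-over-sprint step in the window is flat or declining.
    return all(later <= earlier for earlier, later in zip(recent, recent[1:]))

print(should_escalate([72, 75, 74, 74]))  # declined then flat -> True
print(should_escalate([72, 74, 76, 79]))  # improving -> False
```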
---
## Best Practices

### Data Collection Excellence
1. **Consistency**: Maintain regular data collection rhythms without overwhelming the team
2. **Transparency**: Share analytical insights openly to build trust and understanding
3. **Actionability**: Focus on metrics that directly inform coaching decisions
4. **Privacy**: Respect individual confidentiality while enabling team-level insights

### Facilitation Mastery
1. **Adaptive Leadership**: Match facilitation style to team development stage
2. **Psychological Safety First**: Prioritize safety over process adherence when conflicts arise
3. **Systems Thinking**: Address root causes rather than symptoms in team performance issues
4. **Evidence-Based Coaching**: Use data to support coaching conversations and intervention decisions

### Stakeholder Communication
1. **Range Estimates**: Communicate uncertainty through confidence intervals rather than single points
2. **Context Provision**: Explain team development stage and capability constraints
3. **Trend Focus**: Emphasize improvement trajectories over absolute performance levels
4. **Risk Transparency**: Surface impediments and dependencies proactively

### Continuous Improvement
1. **Experiment Design**: Structure process improvements as testable hypotheses
2. **Measurement Planning**: Define success criteria before implementing changes
3. **Feedback Loops**: Establish regular review cycles for intervention effectiveness
4. **Learning Culture**: Model curiosity and mistake tolerance to encourage team experimentation

## Key Metrics & Targets

| Metric | Target |
|---|---|
| Overall Health Score | >80/100 |
| Psychological Safety Index | >4.0/5.0 |
| Velocity CV (predictability) | <20% |
| Commitment Reliability | >85% |
| Scope Stability | <15% mid-sprint changes |
| Blocker Resolution Time | <3 days |
| Ceremony Engagement | >90% |
| Retrospective Action Completion | >70% |
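The table above can double as an executable checklist; a sketch with each target encoded as a predicate (the metric keys are hypothetical names, not fields emitted by the scripts):

```python
TARGETS = {
    "health_score": lambda v: v > 80,
    "psych_safety": lambda v: v > 4.0,
    "velocity_cv": lambda v: v < 0.20,
    "commitment_reliability": lambda v: v > 0.85,
    "scope_change_rate": lambda v: v < 0.15,
    "blocker_resolution_days": lambda v: v < 3,
    "ceremony_engagement": lambda v: v > 0.90,
    "action_item_completion": lambda v: v > 0.70,
}

def misses(metrics):
    """Names of metrics that fail their target."""
    return [name for name, value in metrics.items() if not TARGETS[name](value)]

# Sample dataset values from this document: health 78.3, CV 12.7%, completion 46.7%
print(misses({"health_score": 78.3, "velocity_cv": 0.127, "action_item_completion": 0.467}))
```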
---
## Advanced Techniques

### Predictive Analytics
- **Early Warning Systems**: Identify teams at risk of performance degradation
- **Intervention Timing**: Optimize coaching interventions based on team development patterns
- **Capacity Forecasting**: Predict team capability changes based on historical patterns
- **Dependency Modeling**: Assess cross-team collaboration impacts on performance

### Behavioral Science Applications
- **Cognitive Bias Recognition**: Help teams recognize and mitigate planning fallacy and confirmation bias
- **Motivation Optimization**: Apply self-determination theory to enhance team autonomy and mastery
- **Social Learning**: Leverage peer modeling and collective efficacy for skill development
- **Change Management**: Use behavioral economics principles for sustainable process adoption

### Advanced Facilitation
- **Liberating Structures**: Apply structured facilitation methods for enhanced participation
- **Appreciative Inquiry**: Focus team conversations on strengths and possibilities
- **Systems Constellation**: Visualize team dynamics and organizational relationships
- **Conflict Mediation**: Professional-level conflict resolution for complex team issues

## Limitations

- **Sample size**: Fewer than 6 sprints reduces Monte Carlo confidence; always state confidence intervals, not point estimates.
- **Data completeness**: Missing ceremony or story-completion fields suppress the affected scoring dimensions; report gaps explicitly.
- **Context sensitivity**: Script recommendations must be interpreted alongside organizational and team context not captured in the JSON data.
- **Quantitative bias**: Metrics do not replace qualitative observation; combine scores with direct team interaction.
- **Team size**: Techniques are optimized for 5-9 member teams; larger groups may require adaptation.
- **External factors**: Cross-team dependencies and organizational constraints are not fully modeled by single-team metrics.
---
## Limitations & Considerations

### Data Quality Dependencies
- **Minimum Sample Size**: Statistical significance requires 6+ sprints for meaningful analysis
- **Data Completeness**: Missing ceremony data or retrospective information limits insight accuracy
- **Context Sensitivity**: Algorithm recommendations must be interpreted within organizational and team context
- **External Factors**: Analysis cannot account for all external influences on team performance

### Psychological Safety Requirements
- **Trust Building Time**: Authentic psychological safety development requires sustained effort over months
- **Individual Differences**: Team members have varying comfort levels with vulnerability and feedback
- **Cultural Considerations**: Organizational and national culture significantly impact safety-building approaches
- **Leadership Modeling**: The Scrum Master's demonstration of psychological safety is a prerequisite for team development

### Scaling Challenges
- **Team Size Limits**: Techniques optimized for 5-9 member teams may require adaptation for larger groups
- **Multi-Team Coordination**: Dependencies across teams introduce complexity not fully captured by single-team metrics
- **Organizational Alignment**: Team-level improvements may be constrained by broader organizational impediments
- **Stakeholder Education**: External stakeholders require education on probabilistic planning and team development concepts

### Measurement Limitations
- **Quantitative Bias**: Over-reliance on metrics may overlook important qualitative team dynamics
- **Gaming Potential**: Teams may optimize for measured metrics rather than underlying performance
- **Lag Indicators**: Many important outcomes (psychological safety, team cohesion) lag behind the interventions that produce them
- **Individual Privacy**: Balance team-level insights with individual confidentiality and psychological safety
---
## Success Metrics & Outcomes

Teams using this advanced Scrum Master approach typically achieve:

- **40-60% improvement** in velocity predictability (reduced coefficient of variation)
- **25-40% increase** in retrospective action item completion rates
- **30-50% reduction** in average blocker resolution time
- **80%+ of teams** reaching the "performing" stage within 6-9 months
- **4.0+ psychological safety scores** sustained across team tenure
- **90%+ ceremony engagement** with high-quality participation

The methodology transforms traditional Scrum mastery through data-driven insights, behavioral science application, and systematic team development practices, resulting in sustainable high-performance teams with strong psychological safety and continuous improvement capabilities.

---

*This skill combines traditional Scrum expertise with modern analytics and behavioral science. Success requires commitment to data collection, psychological safety building, and evidence-based coaching approaches. Adapt techniques based on your specific team and organizational context.*

*For deep framework references see `references/velocity-forecasting-guide.md` and `references/team-dynamics-framework.md`. For template assets see `assets/sprint_report_template.md` and `assets/team_health_check_template.md`.*
@@ -1,13 +1,13 @@

---
name: "senior-pm"
description: Senior Project Manager for enterprise software, SaaS, and digital transformation projects. Specializes in portfolio management, quantitative risk analysis, resource optimization, stakeholder alignment, and executive reporting. Uses advanced methodologies including EMV analysis, Monte Carlo simulation, WSJF prioritization, and multi-dimensional health scoring. Use when a user needs help with project plans, project status reports, risk assessments, resource allocation, project roadmaps, milestone tracking, team capacity planning, portfolio health reviews, program management, or executive-level project reporting — especially for enterprise-scale initiatives with multiple workstreams, complex dependencies, or multi-million dollar budgets.
---

# Senior Project Management Expert

## Overview

Strategic project management for enterprise software, SaaS, and digital transformation initiatives. Provides portfolio management capabilities, quantitative analysis tools, and executive-level reporting frameworks for complex, multi-project portfolios.

### Core Expertise Areas
@@ -63,13 +63,30 @@ python3 scripts/risk_matrix_analyzer.py assets/sample_project_data.json
|
||||
1. **Probability Assessment** (1-5 scale): Historical data, expert judgment, Monte Carlo inputs
|
||||
2. **Impact Analysis** (1-5 scale): Financial, schedule, quality, and strategic impact vectors
|
||||
3. **Category Weighting**: Technical (1.2x), Resource (1.1x), Financial (1.4x), Schedule (1.0x)
|
||||
4. **EMV Calculation**: Risk Score = (Probability × Impact × Category Weight)
|
||||
4. **EMV Calculation**:
|
||||
|
||||
**Risk Response Strategies:**
|
||||
- **Avoid** (>18 score): Eliminate through scope/approach changes
|
||||
- **Mitigate** (12-18 score): Reduce probability or impact through active intervention
|
||||
- **Transfer** (8-12 score): Insurance, contracts, partnerships
|
||||
- **Accept** (<8 score): Monitor with contingency planning
|
||||
```python
# EMV and risk-adjusted budget calculation
def calculate_emv(risks):
    """Score each risk (probability x impact x category weight) and return
    the portfolio's total expected monetary value (EMV).
    Note: the EMV term treats probability as a likelihood; if you use the
    1-5 scale above, normalize it (e.g., probability / 5) first."""
    category_weights = {"Technical": 1.2, "Resource": 1.1, "Financial": 1.4, "Schedule": 1.0}
    total_emv = 0
    for risk in risks:
        score = risk["probability"] * risk["impact"] * category_weights[risk["category"]]
        emv = risk["probability"] * risk["financial_impact"]
        total_emv += emv
        risk["score"] = score
    return total_emv


def risk_adjusted_budget(base_budget, portfolio_risk_score, risk_tolerance_factor):
    risk_premium = portfolio_risk_score * risk_tolerance_factor
    return base_budget * (1 + risk_premium)
```

**Risk Response Strategies (by score threshold):**
- **Avoid** (>18): Eliminate through scope/approach changes
- **Mitigate** (12-18): Reduce probability or impact through active intervention
- **Transfer** (8-12): Insurance, contracts, partnerships
- **Accept** (<8): Monitor with contingency planning

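The four thresholds above can be sketched as a simple selector (a minimal illustration; boundary ties resolve toward the more active response):

```python
def response_strategy(risk_score):
    """Map a risk score (probability x impact x category weight) to a response."""
    if risk_score > 18:
        return "Avoid"
    elif risk_score >= 12:
        return "Mitigate"
    elif risk_score >= 8:
        return "Transfer"
    else:
        return "Accept"
```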
**Tier 3: Resource Capacity Optimization**

Employs `resource_capacity_planner.py` for portfolio resource analysis (run `python3 scripts/resource_capacity_planner.py assets/sample_project_data.json`).

### Advanced Prioritization Models

Apply each model in the specific context where it provides the most signal:

**Weighted Shortest Job First (WSJF)** — Resource-constrained agile portfolios with quantifiable cost-of-delay

```python
def wsjf(user_value, time_criticality, risk_reduction, job_size):
    return (user_value + time_criticality + risk_reduction) / job_size
```

**RICE** — Customer-facing initiatives where reach metrics are quantifiable

```python
def rice(reach, impact, confidence_pct, effort_person_months):
    return (reach * impact * (confidence_pct / 100)) / effort_person_months
```

**ICE** — Rapid prioritization during brainstorming or when analysis time is limited

```python
def ice(impact, confidence, ease):
    return (impact + confidence + ease) / 3
```

**Model Selection — Use this decision logic:**

```
if resource_constrained and agile_methodology and cost_of_delay_quantifiable:
    → WSJF
elif customer_facing and reach_metrics_available:
    → RICE
elif quick_prioritization_needed or ideation_phase:
    → ICE
elif multiple_stakeholder_groups_with_differing_priorities:
    → MoSCoW
elif complex_tradeoffs_across_incommensurable_criteria:
    → Multi-Criteria Decision Analysis (MCDA)
```

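One way to make that logic executable (a sketch; the boolean flags are hypothetical inputs you would derive from your own portfolio context):

```python
def select_model(resource_constrained=False, agile=False, cost_of_delay_known=False,
                 customer_facing=False, reach_metrics=False,
                 quick_needed=False, ideation=False,
                 many_stakeholder_groups=False):
    """Return the prioritization model suggested by the decision logic above."""
    if resource_constrained and agile and cost_of_delay_known:
        return "WSJF"
    if customer_facing and reach_metrics:
        return "RICE"
    if quick_needed or ideation:
        return "ICE"
    if many_stakeholder_groups:
        return "MoSCoW"
    return "MCDA"
```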
Reference: `references/portfolio-prioritization-models.md`

### Risk Management Framework

**Quantitative Risk Analysis Process:**
Reference: `references/risk-management-framework.md`

**Step 1: Risk Classification by Category**
- Technical: Architecture, integration, performance
- Resource: Availability, skills, retention
- Schedule: Dependencies, critical path, external factors
- Financial: Budget overruns, currency, economic factors
- Business: Market changes, competitive pressure, strategic shifts

**Step 2: Three-Point Estimation for Monte Carlo Inputs**

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT estimate: mean and standard deviation from a three-point range."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev
```

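Those two moments feed directly into a Monte Carlo schedule simulation. A minimal sketch (the normal-distribution assumption and trial count are illustrative choices, not prescribed by the framework; the estimator is repeated here so the block runs standalone):

```python
import random

def three_point_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

def simulate_duration(tasks, trials=10_000):
    """tasks: list of (optimistic, most_likely, pessimistic) duration tuples.
    Returns the simulated P50 and P80 total duration."""
    totals = []
    for _ in range(trials):
        total = 0.0
        for o, m, p in tasks:
            mu, sigma = three_point_estimate(o, m, p)
            total += max(0.0, random.gauss(mu, sigma))  # durations can't go negative
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2], totals[int(len(totals) * 0.8)]
```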
**Step 3: Portfolio Risk Correlation**

```python
import math

def portfolio_risk(individual_risks, correlations):
    """Portfolio Risk = sqrt(sum of squared risks + 2 * correlation cross-terms)."""
    # individual_risks: list of risk EMV values
    # correlations: list of (i, j, corr_coefficient) tuples
    sum_sq = sum(r**2 for r in individual_risks)
    sum_corr = sum(2 * c * individual_risks[i] * individual_risks[j]
                   for i, j, c in correlations)
    return math.sqrt(sum_sq + sum_corr)
```

**Risk Appetite Framework:**
- **Conservative**: Risk scores 0-8, 25-30% contingency reserves
- **Moderate**: Risk scores 8-15, 15-20% contingency reserves
- **Aggressive**: Risk scores 15+, 10-15% contingency reserves

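A sketch of turning the appetite bands above into a contingency-reserve percentage (using the midpoint of each range is an illustrative choice, not a rule from the framework):

```python
def contingency_reserve_pct(portfolio_risk_score):
    """Suggest a contingency reserve from the risk appetite bands above."""
    if portfolio_risk_score < 8:       # Conservative band (scores 0-8)
        return 0.275                   # midpoint of 25-30%
    elif portfolio_risk_score < 15:    # Moderate band (scores 8-15)
        return 0.175                   # midpoint of 15-20%
    else:                              # Aggressive band (scores 15+)
        return 0.125                   # midpoint of 10-15%
```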
## Assets & Templates

### Project Charter Template
Reference: `assets/project_charter_template.md`

- Budget breakdown with contingency analysis
- Timeline with critical path dependencies

**Key Features:**
- Production-ready for board presentation
- Integrated stakeholder management framework
- Risk-adjusted financial projections
- Change control and governance processes

### Executive Report Template
Reference: `assets/executive_report_template.md`

**Board-level portfolio reporting with:**
- Resource utilization and capacity analysis
- Forward-looking recommendations with ROI projections

**Executive Decision Support:**
- Critical issues requiring immediate action
- Investment recommendations with business cases
- Portfolio optimization opportunities
- Market/competitive intelligence integration

### RACI Matrix Template
Reference: `assets/raci_matrix_template.md`

- Communication protocols and meeting frameworks
- Conflict resolution processes with governance integration

**Advanced Features:**
- Decision-making RACI for strategic vs. operational choices
- Risk and issue management responsibility assignment
- Performance metrics for RACI effectiveness
- Template validation checklist and maintenance procedures

### Sample Portfolio Data
Reference: `assets/sample_project_data.json`

- Quality metrics and stakeholder satisfaction data
- Dependencies and milestone tracking

**Data Completeness:**
- Works with all three analysis scripts
- Demonstrates portfolio balance across strategic priorities
- Includes both successful and at-risk project examples
- Provides historical trend data for analysis

### Expected Output Examples
Reference: `assets/expected_output.json`

1. **Data Collection & Validation**
   ```bash
   # Update project data from JIRA, financial systems, team surveys
   python3 scripts/project_health_dashboard.py current_portfolio.json
   ```
   ⚠️ If any project composite score is <60 or a critical data field is missing, STOP and resolve data integrity issues before proceeding.

2. **Risk Assessment Update**
   ```bash
   # Refresh risk probabilities and impact assessments
   python3 scripts/risk_matrix_analyzer.py current_portfolio.json
   ```
   ⚠️ If any risk score is >18 (Avoid threshold), STOP and escalate to the project sponsor before proceeding.

3. **Capacity Analysis**
   ```bash
   # Review resource utilization and bottlenecks
   python3 scripts/resource_capacity_planner.py current_portfolio.json
   ```
   ⚠️ If any team utilization is >90% or <60%, flag it for immediate reallocation discussion before step 4.

4. **Executive Summary Generation**
   - Synthesize outputs into executive report format

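The STOP/flag gates in steps 1 through 3 can be expressed as one pre-flight check. A sketch only; the field names (`composite_score`, `score`, `utilization`, `name`) are assumptions about the portfolio JSON schema, not documented keys:

```python
def preflight_gates(projects, risks, teams):
    """Return the blocking issues found before executive report generation."""
    issues = []
    for p in projects:
        if p["composite_score"] < 60:          # step 1 gate
            issues.append(f"STOP: {p['name']} composite score below 60")
    for r in risks:
        if r["score"] > 18:                    # step 2 gate (Avoid threshold)
            issues.append(f"STOP: risk '{r['name']}' exceeds Avoid threshold")
    for t in teams:
        if not 60 <= t["utilization"] <= 90:   # step 3 gate
            issues.append(f"FLAG: {t['name']} utilization {t['utilization']}% out of range")
    return issues
```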
## Success Metrics & KPIs

Reference: `references/portfolio-kpis.md` for full definitions and measurement guidance.

### Portfolio Performance
- On-time Delivery Rate: >80% within 10% of planned timeline
- Budget Variance: <5% average across portfolio
- Quality Score: >85 composite rating
- Risk Mitigation Coverage: >90% risks with active plans
- Resource Utilization: 75-85% average

### Strategic Value
- ROI Achievement: >90% projects meeting projections within 12 months
- Strategic Alignment: >95% investment aligned with business priorities
- Innovation Balance: 70% operational / 20% growth / 10% transformational
- Stakeholder Satisfaction: >8.5/10 executive average
- Time-to-Value: <6 months average post-completion

### Risk Management
- Risk Exposure: Maintain within approved appetite ranges
- Resolution Time: <30 days (medium), <7 days (high)
- Mitigation Cost Efficiency: <20% of total portfolio risk EMV
- Risk Prediction Accuracy: >70% probability assessment accuracy

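For automated health checks, the Portfolio Performance targets can be encoded as predicates (a sketch; the metric keys are hypothetical names for your reporting pipeline):

```python
KPI_TARGETS = {
    "on_time_rate": lambda v: v > 0.80,
    "budget_variance": lambda v: v < 0.05,
    "quality_score": lambda v: v > 85,
    "risk_mitigation_coverage": lambda v: v > 0.90,
    "resource_utilization": lambda v: 0.75 <= v <= 0.85,
}

def kpi_misses(metrics):
    """Return the KPIs present in `metrics` that miss their targets."""
    return [name for name, ok in KPI_TARGETS.items()
            if name in metrics and not ok(metrics[name])]
```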
## Continuous Improvement Framework

- Executive interview feedback on decision support quality
- Team feedback on process efficiency and effectiveness
- Customer impact assessment of portfolio decisions

This skill combines quantitative analysis, structured prioritization, and executive-level communication, giving senior project managers both strategic oversight and tactical execution support for complex digital transformation initiatives.

---
name: "isms-audit-expert"
description: Information Security Management System (ISMS) audit expert for ISO 27001 compliance verification, security control assessment, and certification support. Use when the user mentions ISO 27001, ISMS audit, Annex A controls, Statement of Applicability (SOA), gap analysis, nonconformity management, internal audit, surveillance audit, or security certification preparation. Helps review control implementation evidence, document audit findings, classify nonconformities, generate risk-based audit plans, map controls to Annex A requirements, prepare Stage 1 and Stage 2 audit documentation, and support corrective action workflows.
triggers:
- ISMS audit
- ISO 27001 audit

5. **Validation:** All controls in scope assessed with documented evidence

### Evidence Collection Methods

| Method | Use Case | Example |
|--------|----------|---------|
| Inquiry | Process understanding | Interview Security Manager about incident response |
| Observation | Operational verification | Watch visitor sign-in process |
| Inspection | Documentation review | Check access approval records |
| Re-performance | Control testing | Attempt login with weak password |

---

## Control Assessment

### ISO 27002 Control Categories

**Organizational Controls (A.5):**
- Information security policies
- Roles and responsibilities
- Segregation of duties
- Contact with authorities
- Threat intelligence
- Information security in projects

**People Controls (A.6):**
- Screening and background checks
- Employment terms and conditions
- Security awareness and training
- Disciplinary process
- Remote working security

**Physical Controls (A.7):**
- Physical security perimeters
- Physical entry controls
- Securing offices and facilities
- Physical security monitoring
- Equipment protection

**Technological Controls (A.8):**
- User endpoint devices
- Privileged access rights
- Access restriction
- Secure authentication
- Malware protection
- Vulnerability management
- Backup and recovery
- Logging and monitoring
- Network security
- Cryptography

### Control Testing Approach

1. Identify control objective from ISO 27002
5. Evaluate control effectiveness
6. **Validation:** Evidence supports conclusion about control status

For detailed technical verification procedures by Annex A control, see [security-control-testing.md](references/security-control-testing.md).

---

## Finding Management

| KPI | Target | Measurement |
|-----|--------|-------------|
| Finding closure rate | >90% within SLA | Closed on time vs. total |
| Major nonconformities | 0 at certification | Count per certification cycle |
| Audit effectiveness | Incidents prevented | Security improvements implemented |

---

## Compliance Framework Integration

| Framework | ISMS Audit Relevance |
|-----------|---------------------|
| GDPR | A.5.34 Privacy, A.8.10 Information deletion |
| HIPAA | Access controls, audit logging, encryption |
| PCI DSS | Network security, access control, monitoring |
| SOC 2 | Trust Services Criteria mapped to ISO 27002 |

---
name: "regulatory-affairs-head"
description: Senior Regulatory Affairs Manager for HealthTech and MedTech companies. Prepares FDA 510(k), De Novo, and PMA submission packages; analyzes regulatory pathways for new medical devices; drafts responses to FDA deficiency letters and Notified Body queries; develops CE marking technical documentation under EU MDR 2017/745; coordinates multi-market approval strategies across FDA, EU, Health Canada, PMDA, and NMPA; and maintains regulatory intelligence on evolving standards. Use when users need to plan or execute FDA submissions, navigate 510(k) or PMA approval processes, achieve CE marking, prepare pre-submission meeting materials, write regulatory strategy documents, respond to agency queries, or manage compliance documentation for medical device market access.
triggers:
- regulatory strategy
- FDA submission
Develop regulatory strategy aligned with business objectives and product characteristics.

```
REGULATORY STRATEGY

Product: [Name] Version: [X.X] Date: [Date]

1. PRODUCT OVERVIEW
Intended use: [One-sentence statement of intended patient population, body site, and clinical purpose]
Device classification: [Class I / II / III]
Technology: [Brief description, e.g., "AI-powered wound-imaging software, SaMD"]

2. TARGET MARKETS & TIMELINE
| Market | Pathway        | Priority | Target Date |
|--------|----------------|----------|-------------|
| USA    | 510(k) / PMA   | 1        | Q1 20XX     |
| EU     | Class [X] MDR  | 2        | Q2 20XX     |

3. REGULATORY PATHWAY RATIONALE
FDA: [510(k) / De Novo / PMA] — Predicate: [K-number or "none"]
EU: Class [X] via [Annex IX / X / XI] — NB: [Name or TBD]
Rationale: [2–3 sentences on key factors driving pathway choice]

4. CLINICAL EVIDENCE STRATEGY
Requirements: [Summarize what each market needs, e.g., "510(k): bench + usability; EU Class IIb: PMCF study"]
Approach: [Literature review / Prospective study / Combination]

5. RISKS AND MITIGATION
| Risk                         | Prob | Impact | Mitigation                        |
|------------------------------|------|--------|-----------------------------------|
| Predicate delisted by FDA    | Low  | High   | Identify secondary predicate now  |
| NB audit backlog             | Med  | Med    | Engage NB 6 months before target  |

6. RESOURCE REQUIREMENTS
Budget: $[Amount] Personnel: [FTEs] External: [Consultants / CRO]
```

---

Prepare and submit FDA regulatory applications.

### Workflow: 510(k) Submission

1. Confirm 510(k) pathway suitability:
   - Predicate device identified (note K-number, e.g., K213456)
   - Substantial equivalence (SE) argument supportable on intended use and technological characteristics
   - No new intended use or technology concerns triggering De Novo
2. Schedule and conduct Pre-Submission (Q-Sub) meeting if needed (see [Pre-Sub Decision](#pre-submission-meeting-decision))
3. Compile submission package checklist:
   - [ ] Cover letter with device name, product code, and predicate K-number
   - [ ] Section 1: Administrative information (applicant, contact, 510(k) type)
   - [ ] Section 2: Device description — include photos, dimensions, materials list
   - [ ] Section 3: Intended use and indications for use
   - [ ] Section 4: Substantial equivalence comparison table (see example below)
   - [ ] Section 5: Performance testing — protocols, standards cited, pass/fail results
   - [ ] Section 6: Biocompatibility summary (ISO 10993-1 risk assessment, if patient contact)
   - [ ] Section 7: Software documentation (IEC 62304 level, cybersecurity per FDA guidance, if applicable)
   - [ ] Section 8: Labeling — final draft IFU, device label
   - [ ] Section 9: Summary and conclusion
4. Conduct internal review and quality check against FDA RTA checklist
5. Prepare eCopy per FDA format requirements (PDF bookmarked, eCopy cover page)
6. Submit via FDA ESG portal with user fee payment
7. Monitor MDUFA clock and respond to AI/RTA requests within deadlines
8. **Validation:** Submission accepted; MDUFA date received; tracking system updated

#### Substantial Equivalence Comparison Example

| Characteristic | Predicate (K213456) | Subject Device | Same? | Notes |
|----------------|---------------------|----------------|-------|-------|
| Intended use | Wound measurement | Wound measurement | ✓ | Identical |
| Technology | 2D camera | 2D + AI analysis | ✗ | New TC; address below |
| Energy type | Non-energized | Non-energized | ✓ | |
| Patient contact | No | No | ✓ | |

SE conclusion: New TC does not raise new safety/effectiveness questions; bench data demonstrates equivalent accuracy (±2mm vs ±3mm predicate).

### Workflow: PMA Submission

1. Confirm PMA pathway:
   - Class III device or no suitable predicate
   - Clinical data strategy defined
2. Complete IDE clinical study if required:
   - IDE approval
   - Clinical protocol execution
   - Study report completion
3. Conduct Pre-Submission meeting
4. Compile PMA submission checklist:
   - [ ] Volume I: Administrative, device description, manufacturing
   - [ ] Volume II: Nonclinical studies (bench, animal, biocompatibility)
   - [ ] Volume III: Clinical studies (IDE protocol, data, statistical analysis)
   - [ ] Volume IV: Labeling
   - [ ] Volume V: Manufacturing information, sterilization
5. Submit original PMA application
6. Address FDA questions and deficiencies
7. Prepare for FDA facility inspection

| Milestone | 510(k) | De Novo | PMA |
|-----------|--------|---------|-----|
| Pre-Sub Meeting | Day -90 | Day -90 | Day -120 |
| Submission | Day 0 | Day 0 | Day 0 |
| RTA Review | Day 15 | Day 15 | Day 45 |
| Substantive Review | Days 15–90 | Days 15–150 | Days 45–180 |
| Decision | Day 90 | Day 150 | Day 180 |

### Common FDA Deficiencies and Prevention

| Category | Common Issues | Prevention |
|----------|---------------|------------|
| Substantial Equivalence | Weak predicate comparison; no performance data | Build SE table with data column; cite recognized standards |
| Performance Testing | Incomplete protocols; missing worst-case rationale | Follow FDA-recognized standards; document worst-case justification |
| Biocompatibility | Missing endpoints; no ISO 10993-1 risk assessment | Complete ISO 10993-1 matrix before testing |
| Software | Inadequate hazard analysis; no cybersecurity bill of materials | IEC 62304 compliance + FDA cybersecurity guidance checklist |
| Labeling | Inconsistent claims vs. IFU; missing symbols standard | Cross-check label against IFU; cite ISO 15223-1 for symbols |

See: [references/fda-submission-guide.md](references/fda-submission-guide.md)

Achieve CE marking under EU MDR 2017/745.

   - Class I: Self-declaration
   - Class IIa/IIb: Notified Body involvement
   - Class III: Full NB assessment
3. Select and engage Notified Body (for Class IIa+) — see selection criteria below
4. Compile Technical Documentation per Annex II checklist:
   - [ ] Annex II §1: Device description, intended purpose, UDI
   - [ ] Annex II §2: Design and manufacturing information (drawings, BoM, process flows)
   - [ ] Annex II §3: GSPR checklist — each requirement mapped to evidence (standard, test report, or justification)
   - [ ] Annex II §4: Benefit-risk analysis and risk management file (ISO 14971)
   - [ ] Annex II §5: Product verification and validation (test reports)
   - [ ] Annex II §6: Post-market surveillance plan
   - [ ] Annex XIV: Clinical evaluation report (CER) — literature, clinical data, equivalence justification
5. Establish and document QMS per ISO 13485
6. Submit application to Notified Body
7. Address NB questions and coordinate audit
8. **Validation:** CE certificate issued; Declaration of Conformity signed; EUDAMED registration complete

#### GSPR Checklist Row Example

| GSPR Ref | Requirement | Standard / Guidance | Evidence Document | Status |
|----------|-------------|---------------------|-------------------|--------|
| Annex I §1 | Safe design and manufacture | ISO 14971:2019 | Risk Management File v2.1 | Complete |
| Annex I §11.1 | Devices with measuring function ±accuracy | EN ISO 15223-1 | Performance Test Report PT-003 | Complete |
| Annex I §17 | Cybersecurity | MDCG 2019-16 | Cybersecurity Assessment CS-001 | In progress |

### Clinical Evidence Requirements by Class

### Notified Body Selection Criteria

- **Scope:** Designated for your specific device category
- **Capacity:** Confirmed availability within target timeline
- **Experience:** Track record with your technology type
- **Geography:** Proximity for on-site audits
- **Cost:** Fee structure transparency
- **Communication:** Responsiveness and query turnaround

See: [references/eu-mdr-submission-guide.md](references/eu-mdr-submission-guide.md)

Coordinate regulatory approvals across international markets.

| Market | Size | Complexity | Recognition | Priority |
|--------|------|------------|-------------|----------|
| USA | Large | High | N/A | 1 |
| EU | Large | High | N/A | 1–2 |
| Canada | Medium | Medium | MDSAP | 2 |
| Australia | Medium | Low | EU accepted | 2 |
| Japan | Large | High | Local clinical | 3 |
| China | Large | Very High | Local testing | 3 |
| Brazil | Medium | High | GMP inspection | 3–4 |

### Documentation Efficiency Strategy

Monitor and respond to regulatory changes affecting product portfolio.

```
REGULATORY CHANGE IMPACT ASSESSMENT

Change: [Description] Source: [Regulation/Guidance]
Effective Date: [Date] Assessment Date: [Date] Assessed By: [Name]

AFFECTED PRODUCTS
| Product | Impact (H/M/L) | Action Required   | Due Date |
|---------|----------------|-------------------|----------|
| [Name]  | [H/M/L]        | [Specific action] | [Date]   |

COMPLIANCE ACTIONS
1. [Action] — Owner: [Name] — Due: [Date]
2. [Action] — Owner: [Name] — Due: [Date]

RESOURCE REQUIREMENTS: Budget $[X] | Personnel [X] hrs

APPROVAL: Regulatory _____________ Date _______ / Management _____________ Date _______
```

|
||||
|
||||
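The template maps naturally onto a small record type that a tracking script could fill and render. A minimal sketch, assuming nothing about the actual tracker implementation (the class and field names below are illustrative only):

```python
from dataclasses import dataclass, field


@dataclass
class ImpactAssessment:
    # Mirrors the template fields above; names are illustrative.
    change: str
    source: str
    effective_date: str
    actions: list = field(default_factory=list)  # (action, owner, due) tuples

    def summary(self) -> str:
        # Render the record in the compact template format shown above.
        lines = [
            "REGULATORY CHANGE IMPACT ASSESSMENT",
            f"Change: {self.change} Source: {self.source}",
            f"Effective Date: {self.effective_date}",
            "COMPLIANCE ACTIONS",
        ]
        for i, (action, owner, due) in enumerate(self.actions, 1):
            lines.append(f"{i}. {action} — Owner: {owner} — Due: {due}")
        return "\n".join(lines)


record = ImpactAssessment(
    change="Updated cybersecurity guidance",
    source="FDA guidance",
    effective_date="2025-06-01",
    actions=[("Update risk file", "RA Lead", "2025-04-01")],
)
print(record.summary())
```

Keeping assessments as structured records rather than free text makes the "identify overdue submissions" style of reporting straightforward later.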
---
## Decision Frameworks

### Pathway Selection and Classification Reference

**FDA Pathway Selection**
```
Is predicate device available?
@@ -388,6 +377,27 @@ Is predicate device available?
or PMA
```
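The branches elided from the diff above follow the standard FDA logic: a predicate device supports a 510(k); a novel low-to-moderate-risk device without a predicate may fit De Novo; higher-risk devices go to PMA. A minimal sketch (the function and parameter names are illustrative, not part of any script in this skill):

```python
def fda_pathway(has_predicate: bool, low_to_moderate_risk: bool) -> str:
    # Simplified: real pathway selection also weighs the strength of the
    # substantial-equivalence argument and the device classification.
    if has_predicate:
        return "510(k)"
    if low_to_moderate_risk:
        return "De Novo"
    return "PMA"


print(fda_pathway(has_predicate=True, low_to_moderate_risk=False))   # 510(k)
print(fda_pathway(has_predicate=False, low_to_moderate_risk=True))   # De Novo
print(fda_pathway(has_predicate=False, low_to_moderate_risk=False))  # PMA
```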
**EU MDR Classification**
```
Is the device active?
            │
       Yes─┴─No
        │       │
        ▼       ▼
   Is it an   Does it contact
   implant?   the body?
        │          │
   Yes─┴─No   Yes─┴─No
    │     │    │     │
    ▼     ▼    ▼     ▼
   III   IIb  Check  Class I
              contact (measuring/
              type    sterile if
              and     applicable)
              duration
```

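The classification tree above renders directly as a function. A simplified sketch (real Annex VIII classification also weighs contact type and duration, as the tree itself notes; names are illustrative):

```python
def eu_mdr_class(active: bool, implant: bool = False,
                 body_contact: bool = False) -> str:
    # Follows the decision tree above, branch for branch.
    if active:
        return "III" if implant else "IIb"
    if body_contact:
        # Non-active, body-contacting: class depends on contact
        # type and duration (per EU MDR Annex VIII rules).
        return "check contact type and duration"
    return "I"  # measuring/sterile qualifiers may still apply


print(eu_mdr_class(active=True, implant=True))   # III
print(eu_mdr_class(active=False))                # I
```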
### Pre-Submission Meeting Decision
| Factor | Schedule Pre-Sub | Skip Pre-Sub |
@@ -427,6 +437,20 @@ Is predicate device available?
- Identify overdue submissions
- Generate status reports
**Example usage:**
```bash
$ python regulatory_tracker.py --report status
Submission Status Report — 2024-11-01
┌──────────────────┬──────────┬────────────┬─────────────┬──────────┐
│ Product          │ Market   │ Type       │ Target Date │ Status   │
├──────────────────┼──────────┼────────────┼─────────────┼──────────┤
│ WoundScan Pro    │ USA      │ 510(k)     │ 2024-12-01  │ On Track │
│ WoundScan Pro    │ EU       │ MDR IIb    │ 2025-03-01  │ At Risk  │
│ CardioMonitor X1 │ Canada   │ Class II   │ 2025-01-15  │ On Track │
└──────────────────┴──────────┴────────────┴─────────────┴──────────┘
1 submission at risk: WoundScan Pro EU — NB engagement not confirmed.
```

### References
| Document | Content |