Merge pull request #201 from alirezarezvani/dev

Dev
Authored by Alireza Rezvani, 2026-02-16 14:57:59 +01:00; committed by GitHub.
56 changed files with 26920 additions and 651 deletions


@@ -51,7 +51,7 @@
"name": "incident-commander",
"source": "../../engineering-team/incident-commander",
"category": "engineering",
"description": "Production incident management with structured timeline analysis, severity classification (SEV1-4), automated postmortem generation, and SLA tracking. Features communication templates, escalation routing, 5-Whys root cause analysis, and MTTR/MTTD metrics for high-reliability engineering teams."
"description": "Skill from engineering-team"
},
{
"name": "ms365-tenant-manager",


@@ -0,0 +1,252 @@
# Incident Commander Skill
A comprehensive incident response framework providing structured tools for managing technology incidents from detection through resolution and post-incident review.
## Overview
This skill implements battle-tested practices from SRE and DevOps teams at scale, providing:
- **Automated Severity Classification** - Intelligent incident triage
- **Timeline Reconstruction** - Transform scattered events into coherent narratives
- **Post-Incident Review Generation** - Structured PIRs with RCA frameworks
- **Communication Templates** - Pre-built stakeholder communication
- **Comprehensive Documentation** - Reference guides for incident response
## Quick Start
### Classify an Incident
```bash
# From JSON file
python scripts/incident_classifier.py --input incident.json --format text
# From stdin text
echo "Database is down affecting all users" | python scripts/incident_classifier.py --format text
# Interactive mode
python scripts/incident_classifier.py --interactive
```
### Reconstruct Timeline
```bash
# Analyze event timeline
python scripts/timeline_reconstructor.py --input events.json --format text
# With gap analysis
python scripts/timeline_reconstructor.py --input events.json --gap-analysis --format markdown
```
### Generate PIR Document
```bash
# Basic PIR
python scripts/pir_generator.py --incident incident.json --format markdown
# Comprehensive PIR with timeline
python scripts/pir_generator.py --incident incident.json --timeline timeline.json --rca-method fishbone
```
## Scripts
### incident_classifier.py
**Purpose:** Analyzes incident descriptions and provides severity classification, team recommendations, and response templates.
**Input:** JSON object with incident details or plain text description
**Output:** JSON + human-readable classification report
**Example Input:**
```json
{
"description": "Database connection timeouts causing 500 errors",
"service": "payment-api",
"affected_users": "80%",
"business_impact": "high"
}
```
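The machine-readable form mirrors the text report. A sketch of a plausible shape — the keys here are inferred from the sample report in `expected_outputs/` and may differ from what the script actually emits:
```json
{
  "classification": {
    "severity": "SEV1",
    "confidence": 1.0,
    "reasoning": "Classified as SEV1 based on: keywords: timeout, 500 error; user impact: 80%"
  },
  "recommended_response": {
    "primary_team": "...",
    "supporting_teams": ["..."],
    "response_time": "5 minutes"
  },
  "communication": {
    "update_frequency": "Every 15 minutes"
  }
}
```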
**Key Features:**
- SEV1-4 severity classification
- Recommended response teams
- Initial action prioritization
- Communication templates
- Response timelines
### timeline_reconstructor.py
**Purpose:** Reconstructs incident timelines from timestamped events, identifies phases, and performs gap analysis.
**Input:** JSON array of timestamped events
**Output:** Formatted timeline with phase analysis and metrics
**Example Input:**
```json
[
{
"timestamp": "2024-01-01T12:00:00Z",
"source": "monitoring",
"message": "High error rate detected",
"severity": "critical",
"actor": "system"
}
]
```
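Any event source works as long as entries follow this schema. A minimal sketch for wrapping raw log lines in it (a hypothetical helper, standard library only — adjust the hard-coded `source`, `severity`, and timestamp handling to your data):
```python
import json
import sys
from datetime import datetime, timezone

def line_to_event(line, source="application_logs", actor="system"):
    """Wrap one raw log line in the event schema used by timeline_reconstructor.py."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # prefer the log's own timestamp when present
        "source": source,
        "message": line.strip(),
        "severity": "medium",  # see the sample events for the severity values in use
        "actor": actor,
    }

if __name__ == "__main__":
    # e.g. tail -n 50 app.log | python lines_to_events.py > events.json
    events = [line_to_event(line) for line in sys.stdin if line.strip()]
    json.dump(events, sys.stdout, indent=2)
```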
**Key Features:**
- Phase detection (detection → triage → mitigation → resolution)
- Duration analysis
- Gap identification
- Communication effectiveness analysis
- Response metrics
### pir_generator.py
**Purpose:** Generates comprehensive Post-Incident Review documents with multiple RCA frameworks.
**Input:** Incident data JSON, optional timeline data
**Output:** Structured PIR document with RCA analysis
**Key Features:**
- Multiple RCA methods (5 Whys, Fishbone, Timeline, Bow Tie)
- Automated action item generation
- Lessons learned categorization
- Follow-up planning
- Completeness assessment
## Sample Data
The `assets/` directory contains sample data files for testing:
- `sample_incident_classification.json` - Database connection pool exhaustion incident
- `sample_timeline_events.json` - Complete timeline with 21 events across phases
- `sample_incident_pir_data.json` - Comprehensive incident data for PIR generation
- `simple_incident.json` - Minimal incident for basic testing
- `simple_timeline_events.json` - Simple 4-event timeline
## Expected Outputs
The `expected_outputs/` directory contains reference outputs showing what each script produces:
- `incident_classification_text_output.txt` - Detailed classification report
- `timeline_reconstruction_text_output.txt` - Complete timeline analysis
- `pir_markdown_output.md` - Full PIR document
- `simple_incident_classification.txt` - Basic classification example
## Reference Documentation
### references/incident_severity_matrix.md
Complete severity classification system with:
- SEV1-4 definitions and criteria
- Response requirements and timelines
- Escalation paths
- Communication requirements
- Decision trees and examples
### references/rca_frameworks_guide.md
Detailed guide for root cause analysis:
- 5 Whys methodology
- Fishbone (Ishikawa) diagram analysis
- Timeline analysis techniques
- Bow Tie analysis for high-risk incidents
- Framework selection guidelines
### references/communication_templates.md
Standardized communication templates:
- Severity-specific notification templates
- Stakeholder-specific messaging
- Escalation communications
- Resolution notifications
- Customer communication guidelines
## Usage Patterns
### End-to-End Incident Workflow
1. **Initial Classification**
```bash
echo "Payment API returning 500 errors for 70% of requests" | \
python scripts/incident_classifier.py --format text
```
2. **Timeline Reconstruction** (after collecting events)
```bash
python scripts/timeline_reconstructor.py \
--input events.json \
--gap-analysis \
--format json \
--output timeline.json
```
(Use `--format markdown` when you want a human-readable report; the PIR generator in the next step consumes the JSON form.)
3. **PIR Generation** (after incident resolution)
```bash
python scripts/pir_generator.py \
--incident incident.json \
--timeline timeline.json \
--rca-method fishbone \
--output pir.md
```
### Integration Examples
**CI/CD Pipeline Integration:**
```bash
# Classify deployment issues
cat deployment_error.log | python scripts/incident_classifier.py --format json
```
**Monitoring Integration:**
```bash
# Process alert events
curl -s "monitoring-api/events" | python scripts/timeline_reconstructor.py --format text
```
**Runbook Generation:**
Use classification output to automatically select appropriate runbooks and escalation procedures.
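A minimal sketch of that pattern, assuming the classifier's JSON output exposes a `classification.severity` field (see the output sketch above) and that runbooks live in a severity-keyed directory — both assumptions, not part of the skill itself:
```python
import json
import subprocess
import sys

# Hypothetical severity-to-runbook mapping; point these at your own runbooks.
RUNBOOKS = {
    "SEV1": "runbooks/major-outage.md",
    "SEV2": "runbooks/degradation.md",
    "SEV3": "runbooks/minor-issue.md",
    "SEV4": "runbooks/backlog.md",
}

# Pipe an incident description through the classifier and pick a runbook.
result = subprocess.run(
    ["python", "scripts/incident_classifier.py", "--format", "json"],
    input=sys.stdin.read(), capture_output=True, text=True, check=True,
)
severity = json.loads(result.stdout)["classification"]["severity"]
print(RUNBOOKS.get(severity, "runbooks/triage.md"))
```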
## Quality Standards
- **Zero External Dependencies** - All scripts use only Python standard library
- **Dual Output Format** - Both JSON (machine-readable) and text (human-readable)
- **Robust Input Handling** - Graceful handling of missing or malformed data
- **Professional Defaults** - Opinionated, battle-tested configurations
- **Comprehensive Testing** - Sample data and expected outputs included
## Technical Requirements
- Python 3.6+
- No external dependencies required
- Works with standard Unix tools (pipes, redirection)
- Cross-platform compatible
## Severity Classification Reference
| Severity | Description | Response Time | Update Frequency |
|----------|-------------|---------------|------------------|
| **SEV1** | Complete outage | 5 minutes | Every 15 minutes |
| **SEV2** | Major degradation | 15 minutes | Every 30 minutes |
| **SEV3** | Minor impact | 2 hours | At milestones |
| **SEV4** | Low impact | 1-2 days | Weekly |
## Getting Help
Each script includes comprehensive help:
```bash
python scripts/incident_classifier.py --help
python scripts/timeline_reconstructor.py --help
python scripts/pir_generator.py --help
```
For methodology questions, refer to the reference documentation in the `references/` directory.
## Contributing
When adding new features:
1. Maintain zero external dependencies
2. Add comprehensive examples to `assets/`
3. Update expected outputs in `expected_outputs/`
4. Follow the established patterns for argument parsing and output formatting
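As a rough sketch of those argument-parsing and output conventions (inferred from the documented flags, not a verbatim excerpt of the existing scripts):
```python
import argparse
import json
import sys

def main():
    parser = argparse.ArgumentParser(description="New incident-commander script")
    parser.add_argument("--input", help="Path to input JSON (default: read stdin)")
    parser.add_argument("--format", choices=["json", "text"], default="json",
                        help="Output format")
    parser.add_argument("--output", help="Write to this file instead of stdout")
    args = parser.parse_args()

    if args.input:
        with open(args.input) as f:
            data = json.load(f)
    else:
        data = json.load(sys.stdin)

    result = {"echo": data}  # replace with the script's real analysis

    rendered = json.dumps(result, indent=2) if args.format == "json" else str(result)
    if args.output:
        with open(args.output, "w") as f:
            f.write(rendered)
    else:
        print(rendered)

if __name__ == "__main__":
    main()
```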
## License
This skill is part of the claude-skills repository. See the main repository LICENSE for details.

File diff suppressed because it is too large


@@ -0,0 +1,14 @@
{
"description": "Database connection timeouts causing 500 errors for payment processing API. Users unable to complete checkout. Error rate spiked from 0.1% to 45% starting at 14:30 UTC. Database monitoring shows connection pool exhaustion with 200/200 connections active.",
"service": "payment-api",
"affected_users": "80%",
"business_impact": "high",
"duration_minutes": 95,
"metadata": {
"error_rate": "45%",
"connection_pool_utilization": "100%",
"affected_regions": ["us-west", "us-east", "eu-west"],
"detection_method": "monitoring_alert",
"customer_escalations": 12
}
}


@@ -0,0 +1,74 @@
{
"incident_id": "INC-2024-0315-001",
"title": "Payment API Database Connection Pool Exhaustion",
"description": "Database connection pool exhaustion caused widespread 500 errors in payment processing API, preventing users from completing purchases. Root cause was an inefficient database query introduced in deployment v2.3.1.",
"severity": "sev2",
"start_time": "2024-03-15T14:30:00Z",
"end_time": "2024-03-15T15:35:00Z",
"duration": "1h 5m",
"affected_services": ["payment-api", "checkout-service", "subscription-billing"],
"customer_impact": "80% of users unable to complete payments or checkout. Approximately 2,400 failed payment attempts during the incident. Users experienced immediate 500 errors when attempting to pay.",
"business_impact": "Estimated revenue loss of $45,000 during outage period. No SLA breaches as resolution was within 2-hour window. 12 customer escalations through support channels.",
"incident_commander": "Mike Rodriguez",
"responders": [
"Sarah Chen - On-call Engineer, Primary Responder",
"Tom Wilson - Database Team Lead",
"Lisa Park - Database Engineer",
"Mike Rodriguez - Incident Commander",
"David Kumar - DevOps Engineer"
],
"status": "resolved",
"detection_details": {
"detection_method": "automated_monitoring",
"detection_time": "2024-03-15T14:30:00Z",
"alert_source": "Datadog error rate threshold",
"time_to_detection": "immediate"
},
"response_details": {
"time_to_response": "5 minutes",
"time_to_escalation": "10 minutes",
"time_to_resolution": "65 minutes",
"war_room_established": "2024-03-15T14:45:00Z",
"executives_notified": false,
"status_page_updated": true
},
"technical_details": {
"root_cause": "Inefficient database query introduced in deployment v2.3.1 caused each payment validation to take 15 seconds instead of normal 0.1 seconds, exhausting the 200-connection database pool",
"affected_regions": ["us-west", "us-east", "eu-west"],
"error_metrics": {
"peak_error_rate": "45%",
"normal_error_rate": "0.1%",
"connection_pool_max": 200,
"connections_exhausted_at": "100%"
},
"resolution_method": "rollback",
"rollback_target": "v2.2.9",
"rollback_duration": "7 minutes"
},
"communication_log": [
{
"timestamp": "2024-03-15T14:50:00Z",
"type": "status_page",
"message": "Investigating payment processing issues",
"audience": "customers"
},
{
"timestamp": "2024-03-15T15:35:00Z",
"type": "status_page",
"message": "Payment processing issues resolved",
"audience": "customers"
}
],
"lessons_learned_preview": [
"Deployment v2.3.1 code review missed performance implications of query change",
"Load testing didn't include realistic database query patterns",
"Connection pool monitoring could have provided earlier warning",
"Rollback procedure worked effectively - 7 minute rollback time"
],
"preliminary_action_items": [
"Fix inefficient query for v2.3.2 deployment",
"Add database query performance checks to CI pipeline",
"Improve load testing to include database performance scenarios",
"Add connection pool utilization alerts"
]
}


@@ -0,0 +1,263 @@
[
{
"timestamp": "2024-03-15T14:30:00Z",
"source": "datadog",
"type": "alert",
"message": "High error rate detected on payment-api: 45% error rate (threshold: 5%)",
"severity": "critical",
"actor": "monitoring-system",
"metadata": {
"alert_id": "ALT-001",
"metric_value": "45%",
"threshold": "5%"
}
},
{
"timestamp": "2024-03-15T14:32:00Z",
"source": "pagerduty",
"type": "escalation",
"message": "Paged on-call engineer Sarah Chen for payment-api alerts",
"severity": "high",
"actor": "pagerduty-system",
"metadata": {
"incident_id": "PD-12345",
"responder": "sarah.chen@company.com"
}
},
{
"timestamp": "2024-03-15T14:35:00Z",
"source": "slack",
"type": "communication",
"message": "Sarah Chen acknowledged the alert and is investigating payment-api issues",
"severity": "medium",
"actor": "sarah.chen",
"metadata": {
"channel": "#incidents",
"message_id": "1234567890.123456"
}
},
{
"timestamp": "2024-03-15T14:38:00Z",
"source": "application_logs",
"type": "log",
"message": "Database connection pool exhausted: 200/200 connections active, unable to acquire new connections",
"severity": "critical",
"actor": "payment-api",
"metadata": {
"log_level": "ERROR",
"component": "database_pool",
"connection_count": 200,
"max_connections": 200
}
},
{
"timestamp": "2024-03-15T14:40:00Z",
"source": "slack",
"type": "escalation",
"message": "Sarah Chen: Escalating to incident commander - database connection pool exhausted, need database team",
"severity": "high",
"actor": "sarah.chen",
"metadata": {
"channel": "#incidents",
"escalation_reason": "database_expertise_needed"
}
},
{
"timestamp": "2024-03-15T14:42:00Z",
"source": "pagerduty",
"type": "escalation",
"message": "Incident commander Mike Rodriguez assigned to incident PD-12345",
"severity": "high",
"actor": "pagerduty-system",
"metadata": {
"incident_commander": "mike.rodriguez@company.com",
"role": "incident_commander"
}
},
{
"timestamp": "2024-03-15T14:45:00Z",
"source": "slack",
"type": "communication",
"message": "Mike Rodriguez: War room established in #war-room-payment-api. Engaging database team.",
"severity": "high",
"actor": "mike.rodriguez",
"metadata": {
"channel": "#incidents",
"war_room": "#war-room-payment-api"
}
},
{
"timestamp": "2024-03-15T14:47:00Z",
"source": "pagerduty",
"type": "escalation",
"message": "Database team engineers paged: Tom Wilson, Lisa Park",
"severity": "medium",
"actor": "pagerduty-system",
"metadata": {
"team": "database-team",
"responders": ["tom.wilson@company.com", "lisa.park@company.com"]
}
},
{
"timestamp": "2024-03-15T14:50:00Z",
"source": "statuspage",
"type": "communication",
"message": "Status page updated: Investigating payment processing issues",
"severity": "medium",
"actor": "mike.rodriguez",
"metadata": {
"status": "investigating",
"affected_systems": ["payment-api"]
}
},
{
"timestamp": "2024-03-15T14:52:00Z",
"source": "slack",
"type": "communication",
"message": "Tom Wilson: Joining war room. Looking at database metrics now. Seeing unusual query patterns from recent deployment.",
"severity": "medium",
"actor": "tom.wilson",
"metadata": {
"channel": "#war-room-payment-api",
"investigation_focus": "database_metrics"
}
},
{
"timestamp": "2024-03-15T14:55:00Z",
"source": "database_monitoring",
"type": "log",
"message": "Identified slow query introduced in deployment v2.3.1: payment validation taking 15s per request",
"severity": "critical",
"actor": "database-monitor",
"metadata": {
"deployment_version": "v2.3.1",
"query_time": "15s",
"normal_query_time": "0.1s"
}
},
{
"timestamp": "2024-03-15T15:00:00Z",
"source": "slack",
"type": "communication",
"message": "Tom Wilson: Root cause identified - inefficient query in v2.3.1 deployment. Recommending immediate rollback.",
"severity": "high",
"actor": "tom.wilson",
"metadata": {
"channel": "#war-room-payment-api",
"root_cause": "inefficient_query",
"recommendation": "rollback"
}
},
{
"timestamp": "2024-03-15T15:02:00Z",
"source": "slack",
"type": "communication",
"message": "Mike Rodriguez: Approved rollback to v2.2.9. Sarah initiating rollback procedure.",
"severity": "high",
"actor": "mike.rodriguez",
"metadata": {
"channel": "#war-room-payment-api",
"decision": "rollback_approved",
"target_version": "v2.2.9"
}
},
{
"timestamp": "2024-03-15T15:05:00Z",
"source": "deployment_system",
"type": "action",
"message": "Rollback initiated: payment-api v2.3.1 → v2.2.9",
"severity": "medium",
"actor": "sarah.chen",
"metadata": {
"from_version": "v2.3.1",
"to_version": "v2.2.9",
"deployment_type": "rollback"
}
},
{
"timestamp": "2024-03-15T15:12:00Z",
"source": "deployment_system",
"type": "action",
"message": "Rollback completed successfully: payment-api now running v2.2.9 across all regions",
"severity": "medium",
"actor": "deployment-system",
"metadata": {
"deployment_status": "completed",
"regions": ["us-west", "us-east", "eu-west"]
}
},
{
"timestamp": "2024-03-15T15:15:00Z",
"source": "datadog",
"type": "log",
"message": "Error rate decreasing: payment-api error rate dropped to 8% and continuing to decline",
"severity": "medium",
"actor": "monitoring-system",
"metadata": {
"error_rate": "8%",
"trend": "decreasing"
}
},
{
"timestamp": "2024-03-15T15:18:00Z",
"source": "database_monitoring",
"type": "log",
"message": "Connection pool utilization normalizing: 45/200 connections active",
"severity": "low",
"actor": "database-monitor",
"metadata": {
"connection_count": 45,
"max_connections": 200,
"utilization": "22.5%"
}
},
{
"timestamp": "2024-03-15T15:25:00Z",
"source": "datadog",
"type": "log",
"message": "Error rate returned to normal: payment-api error rate now 0.2% (within normal range)",
"severity": "low",
"actor": "monitoring-system",
"metadata": {
"error_rate": "0.2%",
"status": "normal"
}
},
{
"timestamp": "2024-03-15T15:30:00Z",
"source": "slack",
"type": "communication",
"message": "Mike Rodriguez: All metrics returned to normal. Declaring incident resolved. Thanks to all responders.",
"severity": "low",
"actor": "mike.rodriguez",
"metadata": {
"channel": "#war-room-payment-api",
"status": "resolved"
}
},
{
"timestamp": "2024-03-15T15:35:00Z",
"source": "statuspage",
"type": "communication",
"message": "Status page updated: Payment processing issues resolved. All systems operational.",
"severity": "low",
"actor": "mike.rodriguez",
"metadata": {
"status": "resolved",
"duration": "65 minutes"
}
},
{
"timestamp": "2024-03-15T15:40:00Z",
"source": "slack",
"type": "communication",
"message": "Mike Rodriguez: PIR scheduled for tomorrow 10am. Action item: fix the inefficient query in v2.3.2",
"severity": "low",
"actor": "mike.rodriguez",
"metadata": {
"channel": "#incidents",
"pir_time": "2024-03-16T10:00:00Z",
"action_item": "fix_query_v2.3.2"
}
}
]


@@ -0,0 +1,6 @@
{
"description": "Users reporting slow page loads on the main website",
"service": "web-frontend",
"affected_users": "25%",
"business_impact": "medium"
}


@@ -0,0 +1,30 @@
[
{
"timestamp": "2024-03-10T09:00:00Z",
"source": "monitoring",
"message": "High CPU utilization detected on web servers",
"severity": "medium",
"actor": "system"
},
{
"timestamp": "2024-03-10T09:05:00Z",
"source": "slack",
"message": "Engineer investigating high CPU alerts",
"severity": "medium",
"actor": "john.doe"
},
{
"timestamp": "2024-03-10T09:15:00Z",
"source": "deployment",
"message": "Deployed hotfix to reduce CPU usage",
"severity": "low",
"actor": "john.doe"
},
{
"timestamp": "2024-03-10T09:25:00Z",
"source": "monitoring",
"message": "CPU utilization returned to normal levels",
"severity": "low",
"actor": "system"
}
]


@@ -0,0 +1,44 @@
============================================================
INCIDENT CLASSIFICATION REPORT
============================================================
CLASSIFICATION:
Severity: SEV1
Confidence: 100.0%
Reasoning: Classified as SEV1 based on: keywords: timeout, 500 error; user impact: 80%
Timestamp: 2026-02-16T12:41:46.644096+00:00
RECOMMENDED RESPONSE:
Primary Team: Analytics Team
Supporting Teams: SRE, API Team, Backend Engineering, Finance Engineering, Payments Team, DevOps, Compliance Team, Database Team, Platform Team, Data Engineering
Response Time: 5 minutes
INITIAL ACTIONS:
1. Establish incident command (Priority 1)
Timeout: 5 minutes
Page incident commander and establish war room
2. Create incident ticket (Priority 1)
Timeout: 2 minutes
Create tracking ticket with all known details
3. Update status page (Priority 2)
Timeout: 15 minutes
Post initial status page update acknowledging incident
4. Notify executives (Priority 2)
Timeout: 15 minutes
Alert executive team of customer-impacting outage
5. Engage subject matter experts (Priority 3)
Timeout: 10 minutes
Page relevant SMEs based on affected systems
COMMUNICATION:
Subject: 🚨 [SEV1] payment-api - Database connection timeouts causing 500 errors fo...
Urgency: SEV1
Recipients: on-call, engineering-leadership, executives, customer-success
Channels: pager, phone, slack, email, status-page
Update Frequency: Every 15 minutes
============================================================


@@ -0,0 +1,88 @@
# Post-Incident Review: Payment API Database Connection Pool Exhaustion
## Executive Summary
On March 15, 2024, we experienced a sev2 incident affecting ['payment-api', 'checkout-service', 'subscription-billing']. The incident lasted 1h 5m and had the following impact: 80% of users unable to complete payments or checkout. Approximately 2,400 failed payment attempts during the incident. Users experienced immediate 500 errors when attempting to pay. The incident has been resolved and we have identified specific actions to prevent recurrence.
## Incident Overview
- **Incident ID:** INC-2024-0315-001
- **Date & Time:** 2024-03-15 14:30:00 UTC
- **Duration:** 1h 5m
- **Severity:** SEV2
- **Status:** Resolved
- **Incident Commander:** Mike Rodriguez
- **Responders:** Sarah Chen - On-call Engineer, Primary Responder, Tom Wilson - Database Team Lead, Lisa Park - Database Engineer, Mike Rodriguez - Incident Commander, David Kumar - DevOps Engineer
### Customer Impact
80% of users unable to complete payments or checkout. Approximately 2,400 failed payment attempts during the incident. Users experienced immediate 500 errors when attempting to pay.
### Business Impact
Estimated revenue loss of $45,000 during outage period. No SLA breaches as resolution was within 2-hour window. 12 customer escalations through support channels.
## Timeline
No detailed timeline available.
## Root Cause Analysis
### Analysis Method: 5 Whys Analysis
#### Why Analysis
**Why 1:** Why did Database connection pool exhaustion caused widespread 500 errors in payment processing API, preventing users from completing purchases. Root cause was an inefficient database query introduced in deployment v2.3.1.?
**Answer:** New deployment introduced a regression
**Why 2:** Why wasn't this detected earlier?
**Answer:** Code review process missed the issue
**Why 3:** Why didn't existing safeguards prevent this?
**Answer:** Testing environment didn't match production
**Why 4:** Why wasn't there a backup mechanism?
**Answer:** Further investigation needed
**Why 5:** Why wasn't this scenario anticipated?
**Answer:** Further investigation needed
## What Went Well
- The incident was successfully resolved
- Incident command was established
- Multiple team members collaborated on resolution
## What Didn't Go Well
- Analysis in progress
## Lessons Learned
Lessons learned to be documented following detailed analysis.
## Action Items
Action items to be defined.
## Follow-up and Prevention
### Prevention Measures
Based on the root cause analysis, the following preventive measures have been identified:
- Implement comprehensive testing for similar scenarios
- Improve monitoring and alerting coverage
- Enhance error handling and resilience patterns
### Follow-up Schedule
- 1 week: Review action item progress
- 1 month: Evaluate effectiveness of implemented changes
- 3 months: Conduct follow-up assessment and update preventive measures
## Appendix
### Additional Information
- Incident ID: INC-2024-0315-001
- Severity Classification: sev2
- Affected Services: payment-api, checkout-service, subscription-billing
### References
- Incident tracking ticket: [Link TBD]
- Monitoring dashboards: [Link TBD]
- Communication thread: [Link TBD]
---
*Generated on 2026-02-16 by PIR Generator*


@@ -0,0 +1,44 @@
============================================================
INCIDENT CLASSIFICATION REPORT
============================================================
CLASSIFICATION:
Severity: SEV2
Confidence: 100.0%
Reasoning: Classified as SEV2 based on: keywords: slow; user impact: 25%
Timestamp: 2026-02-16T12:42:41.889774+00:00
RECOMMENDED RESPONSE:
Primary Team: UX Engineering
Supporting Teams: Product Engineering, Frontend Team
Response Time: 15 minutes
INITIAL ACTIONS:
1. Assign incident commander (Priority 1)
Timeout: 30 minutes
Assign IC and establish coordination channel
2. Create incident tracking (Priority 1)
Timeout: 5 minutes
Create incident ticket with details and timeline
3. Assess customer impact (Priority 2)
Timeout: 15 minutes
Determine scope and severity of user impact
4. Engage response team (Priority 2)
Timeout: 30 minutes
Page appropriate technical responders
5. Begin investigation (Priority 3)
Timeout: 15 minutes
Start technical analysis and debugging
COMMUNICATION:
Subject: ⚠️ [SEV2] web-frontend - Users reporting slow page loads on the main websit...
Urgency: SEV2
Recipients: on-call, engineering-leadership, product-team
Channels: pager, slack, email
Update Frequency: Every 30 minutes
============================================================


@@ -0,0 +1,110 @@
================================================================================
INCIDENT TIMELINE RECONSTRUCTION
================================================================================
OVERVIEW:
Time Range: 2024-03-15T14:30:00+00:00 to 2024-03-15T15:40:00+00:00
Total Duration: 70 minutes
Total Events: 21
Phases Detected: 12
PHASES:
DETECTION:
Start: 2024-03-15T14:30:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Initial detection of the incident through monitoring or observation
ESCALATION:
Start: 2024-03-15T14:32:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Escalation to additional resources or higher severity response
TRIAGE:
Start: 2024-03-15T14:35:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Assessment and initial investigation of the incident
ESCALATION:
Start: 2024-03-15T14:38:00+00:00
Duration: 9.0 minutes
Events: 5
Description: Escalation to additional resources or higher severity response
TRIAGE:
Start: 2024-03-15T14:50:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Assessment and initial investigation of the incident
ESCALATION:
Start: 2024-03-15T14:52:00+00:00
Duration: 10.0 minutes
Events: 4
Description: Escalation to additional resources or higher severity response
TRIAGE:
Start: 2024-03-15T15:05:00+00:00
Duration: 7.0 minutes
Events: 2
Description: Assessment and initial investigation of the incident
DETECTION:
Start: 2024-03-15T15:15:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Initial detection of the incident through monitoring or observation
RESOLUTION:
Start: 2024-03-15T15:18:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Confirmation that the incident has been resolved
DETECTION:
Start: 2024-03-15T15:25:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Initial detection of the incident through monitoring or observation
RESOLUTION:
Start: 2024-03-15T15:30:00+00:00
Duration: 5.0 minutes
Events: 2
Description: Confirmation that the incident has been resolved
TRIAGE:
Start: 2024-03-15T15:40:00+00:00
Duration: 0.0 minutes
Events: 1
Description: Assessment and initial investigation of the incident
KEY METRICS:
Time to Mitigation: 0 minutes
Time to Resolution: 48.0 minutes
Events per Hour: 18.0
Unique Sources: 7
INCIDENT NARRATIVE:
Incident Timeline Summary:
The incident began at 2024-03-15 14:30:00 UTC and concluded at 2024-03-15 15:40:00 UTC, lasting approximately 70 minutes.
The incident progressed through 12 distinct phases: detection, escalation, triage, escalation, triage, escalation, triage, detection, resolution, detection, resolution, triage.
Key milestones:
- Detection: 14:30 (0 min)
- Escalation: 14:32 (0 min)
- Triage: 14:35 (0 min)
- Escalation: 14:38 (9 min)
- Triage: 14:50 (0 min)
- Escalation: 14:52 (10 min)
- Triage: 15:05 (7 min)
- Detection: 15:15 (0 min)
- Resolution: 15:18 (0 min)
- Detection: 15:25 (0 min)
- Resolution: 15:30 (5 min)
- Triage: 15:40 (0 min)
================================================================================


@@ -0,0 +1,591 @@
# Incident Communication Templates
## Overview
This document provides standardized communication templates for incident response. These templates ensure consistent, clear communication across different severity levels and stakeholder groups.
## Template Usage Guidelines
### General Principles
1. **Be Clear and Concise** - Use simple language, avoid jargon
2. **Be Factual** - Only state what is known, avoid speculation
3. **Be Timely** - Send updates at committed intervals
4. **Be Actionable** - Include next steps and expected timelines
5. **Be Accountable** - Include contact information for follow-up
### Template Selection
- Choose templates based on incident severity and audience
- Customize templates with specific incident details
- Always include next update time and contact information
- Escalate template types as severity increases
---
## SEV1 Templates
### Initial Alert - Internal Teams
**Subject:** 🚨 [SEV1] CRITICAL: {Service} Complete Outage - Immediate Response Required
```
CRITICAL INCIDENT ALERT - IMMEDIATE ATTENTION REQUIRED
Incident Summary:
- Service: {Service Name}
- Status: Complete Outage
- Start Time: {Timestamp}
- Customer Impact: {Impact Description}
- Estimated Affected Users: {Number/Percentage}
Immediate Actions Needed:
✓ Incident Commander: {Name} - ASSIGNED
✓ War Room: {Bridge/Chat Link} - JOIN NOW
✓ On-Call Response: {Team} - PAGED
⏳ Executive Notification: In progress
⏳ Status Page Update: Within 15 minutes
Current Situation:
{Brief description of what we know}
What We're Doing:
{Immediate response actions being taken}
Next Update: {Timestamp - 15 minutes from now}
Incident Commander: {Name}
Contact: {Phone/Slack}
THIS IS A CUSTOMER-IMPACTING INCIDENT REQUIRING IMMEDIATE ATTENTION
```
### Executive Notification - SEV1
**Subject:** 🚨 URGENT: Customer-Impacting Outage - {Service}
```
EXECUTIVE ALERT: Critical customer-facing incident
Service: {Service Name}
Impact: {Customer impact description}
Duration: {Current duration} (started {start time})
Business Impact: {Revenue/SLA/compliance implications}
Customer Impact Summary:
- Affected Users: {Number/percentage}
- Revenue Impact: {$ amount if known}
- SLA Status: {Breach status}
- Customer Escalations: {Number if any}
Response Status:
- Incident Commander: {Name} ({contact})
- Response Team Size: {Number of engineers}
- Root Cause: {If known, otherwise "Under investigation"}
- ETA to Resolution: {If known, otherwise "Investigating"}
Executive Actions Required:
- [ ] Customer communication approval needed
- [ ] Legal/compliance notification: {If applicable}
- [ ] PR/Media response preparation: {If needed}
- [ ] Resource allocation decisions: {If escalation needed}
War Room: {Link}
Next Update: {15 minutes from now}
This incident meets SEV1 criteria and requires executive oversight.
{Incident Commander contact information}
```
### Customer Communication - SEV1
**Subject:** Service Disruption - Immediate Action Being Taken
```
We are currently experiencing a service disruption affecting {service description}.
What's Happening:
{Clear, customer-friendly description of the issue}
Impact:
{What customers are experiencing - be specific}
What We're Doing:
We detected this issue at {time} and immediately mobilized our engineering team. We are actively working to resolve this issue and will provide updates every 15 minutes.
Current Actions:
• {Action 1 - customer-friendly description}
• {Action 2 - customer-friendly description}
• {Action 3 - customer-friendly description}
Workaround:
{If available, provide clear steps}
{If not available: "We are working on alternative solutions and will share them as soon as available."}
Next Update: {Timestamp}
Status Page: {Link}
Support: {Contact information if different from usual}
We sincerely apologize for the inconvenience and are committed to resolving this as quickly as possible.
{Company Name} Team
```
### Status Page Update - SEV1
**Status:** Major Outage
```
{Timestamp} - Investigating
We are currently investigating reports of {service} being unavailable. Our team has been alerted and is actively investigating the cause.
Affected Services: {List of affected services}
Impact: {Customer-facing impact description}
We will provide an update within 15 minutes.
```
```
{Timestamp} - Identified
We have identified the cause of the {service} outage. Our engineering team is implementing a fix.
Root Cause: {Brief, customer-friendly explanation}
Expected Resolution: {Timeline if known}
Next update in 15 minutes.
```
```
{Timestamp} - Monitoring
The fix has been implemented and we are monitoring the service recovery.
Current Status: {Recovery progress}
Next Steps: {What we're monitoring}
We expect full service restoration within {timeframe}.
```
```
{Timestamp} - Resolved
{Service} is now fully operational. We have confirmed that all functionality is working as expected.
Total Duration: {Duration}
Root Cause: {Brief summary}
We apologize for the inconvenience. A full post-incident review will be conducted and shared within 24 hours.
```
---
## SEV2 Templates
### Team Notification - SEV2
**Subject:** ⚠️ [SEV2] {Service} Performance Issues - Response Team Mobilizing
```
SEV2 INCIDENT: Performance degradation requiring active response
Incident Details:
- Service: {Service Name}
- Issue: {Description of performance issue}
- Start Time: {Timestamp}
- Affected Users: {Percentage/description}
- Business Impact: {Impact on business operations}
Current Status:
{What we know about the issue}
Response Team:
- Incident Commander: {Name} ({contact})
- Primary Responder: {Name} ({team})
- Supporting Teams: {List of engaged teams}
Immediate Actions:
✓ {Action 1 - completed}
⏳ {Action 2 - in progress}
⏳ {Action 3 - next step}
Metrics:
- Error Rate: {Current vs normal}
- Response Time: {Current vs normal}
- Throughput: {Current vs normal}
Communication Plan:
- Internal Updates: Every 30 minutes
- Stakeholder Notification: {If needed}
- Status Page Update: {Planned/not needed}
Coordination Channel: {Slack channel}
Next Update: {30 minutes from now}
Incident Commander: {Name} | {Contact}
```
### Stakeholder Update - SEV2
**Subject:** [SEV2] Service Performance Update - {Service}
```
Service Performance Incident Update
Service: {Service Name}
Duration: {Current duration}
Impact: {Description of user impact}
Current Status:
{Brief status of the incident and response efforts}
What We Know:
• {Key finding 1}
• {Key finding 2}
• {Key finding 3}
What We're Doing:
• {Response action 1}
• {Response action 2}
• {Monitoring/verification steps}
Customer Impact:
{Realistic assessment of what users are experiencing}
Workaround:
{If available, provide steps}
Expected Resolution:
{Timeline if known, otherwise "Continuing investigation"}
Next Update: {30 minutes}
Contact: {Incident Commander information}
This incident is being actively managed and does not currently require escalation.
```
### Customer Communication - SEV2 (Optional)
**Subject:** Temporary Service Performance Issues
```
We are currently experiencing performance issues with {service name} that may affect your experience.
What You Might Notice:
{Specific symptoms users might experience}
What We're Doing:
Our team identified this issue at {time} and is actively working on a resolution. We expect to have this resolved within {timeframe}.
Workaround:
{If applicable, provide simple workaround steps}
We will update our status page at {link} with progress information.
Thank you for your patience as we work to resolve this issue quickly.
{Company Name} Support Team
```
---
## SEV3 Templates
### Team Assignment - SEV3
**Subject:** [SEV3] Issue Assignment - {Component} Issue
```
SEV3 Issue Assignment
Service/Component: {Affected component}
Issue: {Description}
Reported: {Timestamp}
Reporter: {Person/system that reported}
Issue Details:
{Detailed description of the problem}
Impact Assessment:
- Affected Users: {Scope}
- Business Impact: {Assessment}
- Urgency: {Business hours response appropriate}
Assignment:
- Primary: {Engineer name}
- Team: {Responsible team}
- Expected Response: {Within 2-4 hours}
Investigation Plan:
1. {Investigation step 1}
2. {Investigation step 2}
3. {Communication checkpoint}
Workaround:
{If known, otherwise "Investigating alternatives"}
This issue will be tracked in {ticket system} as {ticket number}.
Team Lead: {Name} | {Contact}
```
### Status Update - SEV3
**Subject:** [SEV3] Progress Update - {Component}
```
SEV3 Issue Progress Update
Issue: {Brief description}
Assigned to: {Engineer/Team}
Investigation Status: {Current progress}
Findings So Far:
{What has been discovered during investigation}
Next Steps:
{Planned actions and timeline}
Impact Update:
{Any changes to scope or urgency}
Expected Resolution:
{Timeline if known}
This issue continues to be tracked as SEV3 with no escalation required.
Contact: {Assigned engineer} | {Team lead}
```
---
## SEV4 Templates
### Issue Documentation - SEV4
**Subject:** [SEV4] Issue Documented - {Description}
```
SEV4 Issue Logged
Description: {Clear description of the issue}
Reporter: {Name/system}
Date: {Date reported}
Impact:
{Minimal impact description}
Priority Assessment:
This issue has been classified as SEV4 and will be addressed in the normal development cycle.
Assignment:
- Team: {Responsible team}
- Sprint: {Target sprint}
- Estimated Effort: {Story points/hours}
This issue is tracked as {ticket number} in {system}.
Product Owner: {Name}
```
---
## Escalation Templates
### Severity Escalation
**Subject:** ESCALATION: {Original Severity} → {New Severity} - {Service}
```
SEVERITY ESCALATION NOTIFICATION
Original Classification: {Original severity}
New Classification: {New severity}
Escalation Time: {Timestamp}
Escalated By: {Name and role}
Escalation Reasons:
• {Reason 1 - scope expansion/duration/impact}
• {Reason 2}
• {Reason 3}
Updated Impact:
{New assessment of customer/business impact}
Updated Response Requirements:
{New response team, communication frequency, etc.}
Previous Response Actions:
{Summary of actions taken under previous severity}
New Incident Commander: {If changed}
Updated Communication Plan: {New frequency/recipients}
All stakeholders should adjust response according to {new severity} protocols.
Incident Commander: {Name} | {Contact}
```
### Management Escalation
**Subject:** MANAGEMENT ESCALATION: Extended {Severity} Incident - {Service}
```
Management Escalation Required
Incident: {Service} {brief description}
Original Severity: {Severity}
Duration: {Current duration}
Escalation Trigger: {Duration threshold/scope change/customer escalation}
Current Status:
{Brief status of incident response}
Challenges Encountered:
• {Challenge 1}
• {Challenge 2}
• {Resource/expertise needs}
Business Impact:
{Updated assessment of business implications}
Management Decision Required:
• {Decision 1 - resource allocation/external expertise/communication}
• {Decision 2}
Recommended Actions:
{Incident Commander's recommendations}
This escalation follows standard procedures for {trigger type}.
Incident Commander: {Name}
Contact: {Phone/Slack}
War Room: {Link}
```
---
## Resolution Templates
### Resolution Confirmation - All Severities
**Subject:** RESOLVED: [{Severity}] {Service} Incident - {Brief Description}
```
INCIDENT RESOLVED
Service: {Service Name}
Issue: {Brief description}
Duration: {Total duration}
Resolution Time: {Timestamp}
Resolution Summary:
{Brief description of how the issue was resolved}
Root Cause:
{Brief explanation - detailed PIR to follow}
Impact Summary:
- Users Affected: {Final count/percentage}
- Business Impact: {Final assessment}
- Services Affected: {List}
Resolution Actions Taken:
• {Action 1}
• {Action 2}
• {Verification steps}
Monitoring:
We will continue monitoring {service} for {duration} to ensure stability.
Next Steps:
• Post-incident review scheduled for {date}
• Action items to be tracked in {system}
• Follow-up communication: {If needed}
Thank you to everyone who participated in the incident response.
Incident Commander: {Name}
```
### Customer Resolution Communication
**Subject:** Service Restored - Thank You for Your Patience
```
Service Update: Issue Resolved
We're pleased to report that the {service} issues have been fully resolved as of {timestamp}.
What Was Fixed:
{Customer-friendly explanation of the resolution}
Duration:
The issue lasted {duration} from {start time} to {end time}.
What We Learned:
{Brief, high-level takeaway}
Our Commitment:
We are conducting a thorough review of this incident and will implement improvements to prevent similar issues in the future. A summary of our findings and improvements will be shared {timeframe}.
We sincerely apologize for any inconvenience this may have caused and appreciate your patience while we worked to resolve the issue.
If you continue to experience any problems, please contact our support team at {contact information}.
Thank you,
{Company Name} Team
```
---
## Template Customization Guidelines
### Placeholders to Always Replace
- `{Service}` / `{Service Name}` - Specific service or component
- `{Timestamp}` - Specific date/time in consistent format
- `{Name}` / `{Contact}` - Actual names and contact information
- `{Duration}` - Actual time durations
- `{Link}` - Real URLs to war rooms, status pages, etc.
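A plain replacement loop is the simplest way to substitute these programmatically, and it sidesteps format-string parsing rules for placeholder names that contain spaces. A minimal sketch (hypothetical helper):
```python
def fill(template: str, values: dict) -> str:
    """Substitute {Placeholder} tokens using plain string replacement."""
    for key, val in values.items():
        template = template.replace("{" + key + "}", val)
    return template

print(fill(
    "Service: {Service Name}\nNext Update: {Timestamp}",
    {"Service Name": "payment-api", "Timestamp": "2024-03-15T15:00:00Z"},
))
```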
### Language Guidelines
- Use active voice ("We are investigating" not "The issue is being investigated")
- Be specific about timelines ("within 30 minutes" not "soon")
- Avoid technical jargon in customer communications
- Include empathy in customer-facing messages
- Use consistent terminology throughout incident lifecycle
### Timing Guidelines
| Severity | Initial Notification | Update Frequency | Resolution Notification |
|----------|---------------------|------------------|------------------------|
| SEV1 | Immediate (< 5 min) | Every 15 minutes | Immediate |
| SEV2 | Within 15 minutes | Every 30 minutes | Within 15 minutes |
| SEV3 | Within 2 hours | At milestones | Within 1 hour |
| SEV4 | Within 1 business day | Weekly | When resolved |
### Audience-Specific Considerations
#### Engineering Teams
- Include technical details
- Provide specific metrics and logs
- Include coordination channels
- List specific actions and owners
#### Executive/Business
- Focus on business impact
- Include customer and revenue implications
- Provide clear timeline and resource needs
- Highlight any external factors (PR, legal, compliance)
#### Customers
- Use plain language
- Focus on customer impact and workarounds
- Provide realistic timelines
- Include support contact information
- Show empathy and accountability
---
**Last Updated:** February 2026
**Next Review:** May 2026
**Owner:** Incident Management Team


@@ -0,0 +1,292 @@
# Incident Severity Classification Matrix
## Overview
This document defines the severity classification system used for incident response. The classification determines response requirements, escalation paths, and communication frequency.
## Severity Levels
### SEV1 - Critical Outage
**Definition:** Complete service failure affecting all users or critical business functions
#### Impact Criteria
- Customer-facing services completely unavailable
- Data loss or corruption affecting users
- Security breaches with customer data exposure
- Revenue-generating systems down
- SLA violations with financial penalties
- More than 75% of users affected
#### Response Requirements
| Metric | Requirement |
|--------|-------------|
| **Response Time** | Immediate (0-5 minutes) |
| **Incident Commander** | Assigned within 5 minutes |
| **War Room** | Established within 10 minutes |
| **Executive Notification** | Within 15 minutes |
| **Public Status Page** | Updated within 15 minutes |
| **Customer Communication** | Within 30 minutes |
#### Escalation Path
1. **Immediate**: On-call Engineer → Incident Commander
2. **15 minutes**: VP Engineering + Customer Success VP
3. **30 minutes**: CTO
4. **60 minutes**: CEO + Full Executive Team
#### Communication Requirements
- **Frequency**: Every 15 minutes until resolution
- **Channels**: PagerDuty, Phone, Slack, Email, Status Page
- **Recipients**: All engineering, executives, customer success
- **Template**: SEV1 Executive Alert Template
---
### SEV2 - Major Impact
**Definition:** Significant degradation affecting subset of users or non-critical functions
#### Impact Criteria
- Partial service degradation (25-75% of users affected)
- Performance issues causing user frustration
- Non-critical features unavailable
- Internal tools impacting productivity
- Data inconsistencies not affecting user experience
- API errors affecting integrations
#### Response Requirements
| Metric | Requirement |
|--------|-------------|
| **Response Time** | 15 minutes |
| **Incident Commander** | Assigned within 30 minutes |
| **Status Page Update** | Within 30 minutes |
| **Stakeholder Notification** | Within 1 hour |
| **Team Assembly** | Within 30 minutes |
#### Escalation Path
1. **Immediate**: On-call Engineer → Team Lead
2. **30 minutes**: Engineering Manager
3. **2 hours**: VP Engineering
4. **4 hours**: CTO (if unresolved)
#### Communication Requirements
- **Frequency**: Every 30 minutes during active response
- **Channels**: PagerDuty, Slack, Email
- **Recipients**: Engineering team, product team, relevant stakeholders
- **Template**: SEV2 Major Impact Template
---
### SEV3 - Minor Impact
**Definition:** Limited impact with workarounds available
#### Impact Criteria
- Single feature or component affected
- Fewer than 25% of users impacted
- Workarounds available
- Performance degradation not significantly impacting UX
- Non-urgent monitoring alerts
- Development/test environment issues
#### Response Requirements
| Metric | Requirement |
|--------|-------------|
| **Response Time** | 2 hours (business hours) |
| **After Hours Response** | Next business day |
| **Team Assignment** | Within 4 hours |
| **Status Page Update** | Optional |
| **Internal Notification** | Within 2 hours |
#### Escalation Path
1. **Immediate**: Assigned Engineer
2. **4 hours**: Team Lead
3. **1 business day**: Engineering Manager (if needed)
#### Communication Requirements
- **Frequency**: At key milestones only
- **Channels**: Slack, Email
- **Recipients**: Assigned team, team lead
- **Template**: SEV3 Minor Impact Template
---
### SEV4 - Low Impact
**Definition:** Minimal impact, cosmetic issues, or planned maintenance
#### Impact Criteria
- Cosmetic bugs
- Documentation issues
- Logging or monitoring gaps
- Performance issues with no user impact
- Development/test environment issues
- Feature requests or enhancements
#### Response Requirements
| Metric | Requirement |
|--------|-------------|
| **Response Time** | 1-2 business days |
| **Assignment** | Next sprint planning |
| **Tracking** | Standard ticket system |
| **Escalation** | None required |
#### Communication Requirements
- **Frequency**: Standard development cycle updates
- **Channels**: Ticket system
- **Recipients**: Product owner, assigned developer
- **Template**: Standard issue template
## Classification Guidelines
### User Impact Assessment
| Impact Scope | Description | Typical Severity |
|--------------|-------------|------------------|
| **All Users** | 100% of users affected | SEV1 |
| **Major Subset** | 50-75% of users affected | SEV1/SEV2 |
| **Significant Subset** | 25-50% of users affected | SEV2 |
| **Limited Users** | 5-25% of users affected | SEV2/SEV3 |
| **Few Users** | < 5% of users affected | SEV3/SEV4 |
| **No User Impact** | Internal only | SEV4 |
### Business Impact Assessment
| Business Impact | Description | Severity Boost |
|-----------------|-------------|----------------|
| **Revenue Loss** | Direct revenue impact | +1 severity level |
| **SLA Breach** | Contract violations | +1 severity level |
| **Regulatory** | Compliance implications | +1 severity level |
| **Brand Damage** | Public-facing issues | +1 severity level |
| **Security** | Data or system security | +2 severity levels |
### Duration Considerations
| Duration | Impact on Classification |
|----------|--------------------------|
| **< 15 minutes** | May reduce severity by 1 level |
| **15-60 minutes** | Standard classification |
| **1-4 hours** | May increase severity by 1 level |
| **> 4 hours** | Significant severity increase |
## Decision Tree
```
1. Is this a security incident with data exposure?
→ YES: SEV1 (regardless of user count)
→ NO: Continue to step 2
2. Are revenue-generating services completely down?
→ YES: SEV1
→ NO: Continue to step 3
3. What percentage of users are affected?
→ > 75%: SEV1
→ 25-75%: SEV2
→ 5-25%: SEV3
→ < 5%: SEV4
4. Apply business impact modifiers
5. Consider duration factors
6. When in doubt, err on higher severity
```
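A sketch of this tree as code — illustrative only; the thresholds come from the tree and tables above, and clamping at SEV1 is an assumption:
```python
def classify(users_affected_pct, security_breach=False, revenue_services_down=False,
             business_modifiers=0):
    """Walk the decision tree. business_modifiers counts +1 boosts
    (revenue loss, SLA breach, regulatory exposure, brand damage)."""
    if security_breach:
        return "SEV1"  # step 1: data exposure is SEV1 regardless of user count
    if revenue_services_down:
        return "SEV1"  # step 2: revenue-generating services completely down
    # Step 3: base severity from the percentage of users affected.
    if users_affected_pct > 75:
        sev = 1
    elif users_affected_pct >= 25:
        sev = 2
    elif users_affected_pct >= 5:
        sev = 3
    else:
        sev = 4
    # Step 4: apply business impact modifiers, never going below SEV1
    # (step 5, duration factors, omitted for brevity).
    sev = max(1, sev - business_modifiers)
    return f"SEV{sev}"

# Example: 15% of users affected (SEV3 by user impact) plus an SLA breach (+1) -> SEV2.
print(classify(15, business_modifiers=1))
```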
## Examples
### SEV1 Examples
- Payment processing system completely down
- All user authentication failing
- Database corruption causing data loss
- Security breach with customer data exposed
- Website returning 500 errors for all users
### SEV2 Examples
- Payment processing slow (30-second delays)
- Search functionality returning incomplete results
- API rate limits causing partner integration issues
- Dashboard displaying stale data (> 1 hour old)
- Mobile app crashing for 40% of users
### SEV3 Examples
- Single feature in admin panel not working
- Email notifications delayed by 1 hour
- Non-critical API endpoint returning errors
- Cosmetic UI bug in settings page
- Development environment deployment failing
### SEV4 Examples
- Typo in help documentation
- Log format change needed for analysis
- Non-critical performance optimization
- Internal tool enhancement request
- Test data cleanup needed
## Escalation Triggers
### Automatic Escalation
- SEV1 incidents automatically escalate every 30 minutes if unresolved
- SEV2 incidents escalate after 2 hours without significant progress
- Any incident with expanding scope increases severity
- Customer escalation to support triggers severity review
### Manual Escalation
- Incident Commander can escalate at any time
- Technical leads can request escalation
- Business stakeholders can request severity review
- External factors (media attention, regulatory) trigger escalation
## Communication Templates
### SEV1 Executive Alert
```
Subject: 🚨 CRITICAL INCIDENT - [Service] Complete Outage
URGENT: Customer-facing service outage requiring immediate attention
Service: [Service Name]
Start Time: [Timestamp]
Impact: [Description of customer impact]
Estimated Affected Users: [Number/Percentage]
Business Impact: [Revenue/SLA/Brand implications]
Incident Commander: [Name] ([Contact])
Response Team: [Team members engaged]
Current Status: [Brief status update]
Next Update: [Timestamp - 15 minutes from now]
War Room: [Bridge/Chat link]
This is a customer-impacting incident requiring executive awareness.
```
### SEV2 Major Impact
```
Subject: ⚠️ [SEV2] [Service] - Major Performance Impact
Major service degradation affecting user experience
Service: [Service Name]
Start Time: [Timestamp]
Impact: [Description of user impact]
Scope: [Affected functionality/users]
Response Team: [Team Lead] + [Team members]
Status: [Current mitigation efforts]
Workaround: [If available]
Next Update: 30 minutes
Status Page: [Link if updated]
```
## Review and Updates
This severity matrix should be reviewed quarterly and updated based on:
- Incident response learnings
- Business priority changes
- Service architecture evolution
- Regulatory requirement changes
- Customer feedback and SLA updates
**Last Updated:** February 2026
**Next Review:** May 2026
**Owner:** Engineering Leadership


@@ -0,0 +1,562 @@
# Root Cause Analysis (RCA) Frameworks Guide
## Overview
This guide provides detailed instructions for applying various Root Cause Analysis frameworks during Post-Incident Reviews. Each framework offers a different perspective and approach to identifying underlying causes of incidents.
## Framework Selection Guidelines
| Incident Type | Recommended Framework | Why |
|---------------|----------------------|-----|
| **Process Failure** | 5 Whys | Simple, direct cause-effect chain |
| **Complex System Failure** | Fishbone + Timeline | Multiple contributing factors |
| **Human Error** | Fishbone | Systematic analysis of contributing factors |
| **Extended Incidents** | Timeline Analysis | Understanding decision points |
| **High-Risk Incidents** | Bow Tie | Comprehensive barrier analysis |
| **Recurring Issues** | 5 Whys + Fishbone | Deep dive into systemic issues |
---
## 5 Whys Analysis Framework
### Purpose
Iteratively drill down through cause-effect relationships to identify root causes.
### When to Use
- Simple, linear cause-effect chains
- Time-pressured analysis
- Process-related failures
- Individual component failures
### Process Steps
#### Step 1: Problem Statement
Write a clear, specific problem statement.
**Good Example:**
> "The payment API returned 500 errors for 2 hours on March 15, affecting 80% of checkout attempts."
**Poor Example:**
> "The system was broken."
#### Step 2: First Why
Ask why the problem occurred. Focus on immediate, observable causes.
**Example:**
- **Why 1:** Why did the payment API return 500 errors?
- **Answer:** The database connection pool was exhausted.
#### Step 3: Subsequent Whys
For each answer, ask "why" again. Continue until you reach a root cause.
**Example Chain:**
- **Why 2:** Why was the database connection pool exhausted?
- **Answer:** The application was creating more connections than usual.
- **Why 3:** Why was the application creating more connections?
- **Answer:** A new feature wasn't properly closing connections.
- **Why 4:** Why wasn't the feature properly closing connections?
- **Answer:** Code review missed the connection leak pattern.
- **Why 5:** Why did code review miss this pattern?
- **Answer:** We don't have automated checks for connection pooling best practices.
#### Step 4: Validation
Verify that addressing the root cause would prevent the original problem.
### Best Practices
1. **Ask at least 3 "whys"** - Surface causes are rarely root causes
2. **Focus on process failures, not people** - Avoid blame, focus on system improvements
3. **Use evidence** - Support each answer with data or observations
4. **Consider multiple paths** - Some problems have multiple root causes
5. **Test the logic** - Work backwards from root cause to problem
### Common Pitfalls
- **Stopping too early** - First few whys often reveal symptoms, not causes
- **Single-cause assumption** - Complex systems often have multiple contributing factors
- **Blame focus** - Focusing on individual mistakes rather than system failures
- **Vague answers** - Use specific, actionable answers
### 5 Whys Template
```markdown
## 5 Whys Analysis
**Problem Statement:** [Clear description of the incident]
**Why 1:** [First why question]
**Answer:** [Specific, evidence-based answer]
**Evidence:** [Supporting data, logs, observations]
**Why 2:** [Second why question]
**Answer:** [Specific answer based on Why 1]
**Evidence:** [Supporting evidence]
[Continue for 3-7 iterations]
**Root Cause(s) Identified:**
1. [Primary root cause]
2. [Secondary root cause if applicable]
**Validation:** [Confirm that addressing root causes would prevent recurrence]
```
---
## Fishbone (Ishikawa) Diagram Framework
### Purpose
Systematically analyze potential causes across multiple categories to identify contributing factors.
### When to Use
- Complex incidents with multiple potential causes
- When human factors are suspected
- Systemic or organizational issues
- When 5 Whys doesn't reveal clear root causes
### Categories
#### People (Human Factors)
- **Training and Skills**
- Insufficient training on new systems
- Lack of domain expertise
- Skill gaps in team
- Knowledge not shared across team
- **Communication**
- Poor communication between teams
- Unclear responsibilities
- Information not reaching right people
- Language/cultural barriers
- **Decision Making**
- Decisions made under pressure
- Insufficient information for decisions
- Risk assessment inadequate
- Approval processes bypassed
#### Process (Procedures and Workflows)
- **Documentation**
- Outdated procedures
- Missing runbooks
- Unclear instructions
- Process not documented
- **Change Management**
- Inadequate change review
- Rushed deployments
- Insufficient testing
- Rollback procedures unclear
- **Review and Approval**
- Code review gaps
- Architecture review skipped
- Security review insufficient
- Performance review missing
#### Technology (Systems and Tools)
- **Architecture**
- Single points of failure
- Insufficient redundancy
- Scalability limitations
- Tight coupling between systems
- **Monitoring and Alerting**
- Missing monitoring
- Alert fatigue
- Inadequate thresholds
- Poor alert routing
- **Tools and Automation**
- Manual processes prone to error
- Tool limitations
- Automation gaps
- Integration issues
#### Environment (External Factors)
- **Infrastructure**
- Hardware failures
- Network issues
- Capacity limitations
- Geographic dependencies
- **Dependencies**
- Third-party service failures
- External API changes
- Vendor issues
- Supply chain problems
- **External Pressure**
- Time pressure from business
- Resource constraints
- Regulatory changes
- Market conditions
### Process Steps
#### Step 1: Define the Problem
Place the incident at the "head" of the fishbone diagram.
#### Step 2: Brainstorm Causes
For each category, brainstorm potential contributing factors.
#### Step 3: Drill Down
For each factor, ask what caused that factor (sub-causes).
#### Step 4: Identify Primary Causes
Mark the most likely contributing factors based on evidence.
#### Step 5: Validate
Gather evidence to support or refute each suspected cause.
### Fishbone Template
```markdown
## Fishbone Analysis
**Problem:** [Incident description]
### People
**Training/Skills:**
- [Factor 1]: [Evidence/likelihood]
- [Factor 2]: [Evidence/likelihood]
**Communication:**
- [Factor 1]: [Evidence/likelihood]
**Decision Making:**
- [Factor 1]: [Evidence/likelihood]
### Process
**Documentation:**
- [Factor 1]: [Evidence/likelihood]
**Change Management:**
- [Factor 1]: [Evidence/likelihood]
**Review/Approval:**
- [Factor 1]: [Evidence/likelihood]
### Technology
**Architecture:**
- [Factor 1]: [Evidence/likelihood]
**Monitoring:**
- [Factor 1]: [Evidence/likelihood]
**Tools:**
- [Factor 1]: [Evidence/likelihood]
### Environment
**Infrastructure:**
- [Factor 1]: [Evidence/likelihood]
**Dependencies:**
- [Factor 1]: [Evidence/likelihood]
**External Factors:**
- [Factor 1]: [Evidence/likelihood]
### Primary Contributing Factors
1. [Factor with highest evidence/impact]
2. [Second most significant factor]
3. [Third most significant factor]
### Root Cause Hypothesis
[Synthesized explanation of how factors combined to cause incident]
```
---
## Timeline Analysis Framework
### Purpose
Analyze the chronological sequence of events to identify decision points, missed opportunities, and process gaps.
### When to Use
- Extended incidents (> 1 hour)
- Complex multi-phase incidents
- When response effectiveness is questioned
- Communication or coordination failures
### Analysis Dimensions
#### Detection Analysis
- **Time to Detection:** How long from onset to first alert?
- **Detection Method:** How was the incident first identified?
- **Alert Effectiveness:** Were the right people notified quickly?
- **False Negatives:** What signals were missed?
#### Response Analysis
- **Time to Response:** How long from detection to first response action?
- **Escalation Timing:** Were escalations timely and appropriate?
- **Resource Mobilization:** How quickly were the right people engaged?
- **Decision Points:** What key decisions were made and when?
#### Communication Analysis
- **Internal Communication:** How effective was team coordination?
- **External Communication:** Were stakeholders informed appropriately?
- **Communication Gaps:** Where did information flow break down?
- **Update Frequency:** Were updates provided at appropriate intervals?
#### Resolution Analysis
- **Mitigation Strategy:** Was the chosen approach optimal?
- **Alternative Paths:** What other options were considered?
- **Resource Allocation:** Were resources used effectively?
- **Verification:** How was resolution confirmed?
### Process Steps
#### Step 1: Event Reconstruction
Create comprehensive timeline with all available events.
#### Step 2: Phase Identification
Identify distinct phases (detection, triage, escalation, mitigation, resolution).
#### Step 3: Gap Analysis
Identify time gaps and analyze their causes.
#### Step 4: Decision Point Analysis
Examine key decision points and alternative paths.
#### Step 5: Effectiveness Assessment
Evaluate the overall effectiveness of the response.
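Step 3 lends itself to automation: given timestamped events, quiet periods longer than a threshold are candidate gaps. A minimal sketch, assuming events are dicts with ISO-8601 `timestamp` and `message` fields (the same shape as this skill's example events) and a hypothetical 15-minute threshold:

```python
from datetime import datetime, timedelta
from typing import Dict, List

def _ts(event: Dict) -> datetime:
    # fromisoformat() does not accept a trailing "Z" before Python 3.11
    return datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))

def find_gaps(events: List[Dict], threshold_minutes: int = 15) -> List[Dict]:
    """Return quiet periods between consecutive events longer than the threshold."""
    ordered = sorted(events, key=_ts)
    gaps = []
    for prev, curr in zip(ordered, ordered[1:]):
        delta = _ts(curr) - _ts(prev)
        if delta > timedelta(minutes=threshold_minutes):
            gaps.append({
                "after_event": prev.get("message", ""),
                "before_event": curr.get("message", ""),
                "duration_minutes": round(delta.total_seconds() / 60, 1),
            })
    return gaps
```

Each reported gap is a prompt for the analysis, not a conclusion: ask what was happening (or not happening) during that window.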
### Timeline Template
```markdown
## Timeline Analysis
### Incident Phases
1. **Detection** ([start] - [end], [duration])
2. **Triage** ([start] - [end], [duration])
3. **Escalation** ([start] - [end], [duration])
4. **Mitigation** ([start] - [end], [duration])
5. **Resolution** ([start] - [end], [duration])
### Key Decision Points
**[Timestamp]:** [Decision made]
- **Context:** [Situation at time of decision]
- **Alternatives:** [Other options considered]
- **Outcome:** [Result of decision]
- **Assessment:** [Was this optimal?]
### Communication Timeline
**[Timestamp]:** [Communication event]
- **Channel:** [Slack/Email/Phone/etc.]
- **Audience:** [Who was informed]
- **Content:** [What was communicated]
- **Effectiveness:** [Assessment]
### Gaps and Delays
**[Time Period]:** [Description of gap]
- **Duration:** [Length of gap]
- **Cause:** [Why did gap occur]
- **Impact:** [Effect on incident response]
### Response Effectiveness
**Strengths:**
- [What went well]
- [Effective decisions/actions]
**Weaknesses:**
- [What could be improved]
- [Missed opportunities]
### Root Causes from Timeline
1. [Process-based root cause]
2. [Communication-based root cause]
3. [Decision-making root cause]
```
---
## Bow Tie Analysis Framework
### Purpose
Analyze both preventive measures (left side) and protective measures (right side) around an incident.
### When to Use
- High-severity incidents (SEV1)
- Security incidents
- Safety-critical systems
- When comprehensive barrier analysis is needed
### Components
#### Hazards
What conditions create the potential for incidents?
**Examples:**
- High traffic loads
- Software deployments
- Human interactions with critical systems
- Third-party dependencies
#### Top Event
What actually went wrong? This is the center of the bow tie.
**Examples:**
- "Database became unresponsive"
- "Payment processing failed"
- "User authentication service crashed"
#### Threats (Left Side)
What specific causes could lead to the top event?
**Examples:**
- Code defects in new deployment
- Database connection pool exhaustion
- Network connectivity issues
- DDoS attack
#### Consequences (Right Side)
What are the potential impacts of the top event?
**Examples:**
- Revenue loss
- Customer churn
- Regulatory violations
- Brand damage
- Data loss
#### Barriers
What controls exist (or could exist) to prevent threats or mitigate consequences?
**Preventive Barriers (Left Side):**
- Code reviews
- Automated testing
- Load testing
- Input validation
- Rate limiting
**Protective Barriers (Right Side):**
- Circuit breakers
- Failover systems
- Backup procedures
- Customer communication
- Rollback capabilities
### Process Steps
#### Step 1: Define the Top Event
Clearly state what went wrong.
#### Step 2: Identify Threats
Brainstorm all possible causes that could lead to the top event.
#### Step 3: Identify Consequences
List all potential impacts of the top event.
#### Step 4: Map Existing Barriers
Identify current controls for each threat and consequence.
#### Step 5: Assess Barrier Effectiveness
Evaluate how well each barrier worked (or failed).
#### Step 6: Recommend Additional Barriers
Identify new controls needed to prevent recurrence.
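Keeping the barrier inventory as structured data makes Steps 5 and 6 mechanical: failed barriers fall straight out of a filter. A brief, hypothetical sketch:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Barrier:
    name: str
    kind: str        # "preventive" (left side) or "protective" (right side)
    effective: bool  # did it hold during this incident?

def failed_barriers(barriers: List[Barrier]) -> List[str]:
    """List barriers that failed, as input to the recommendations section."""
    return [f"{b.kind}: {b.name}" for b in barriers if not b.effective]
```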
### Bow Tie Template
```markdown
## Bow Tie Analysis
**Top Event:** [What went wrong]
### Threats (Potential Causes)
1. **[Threat 1]**
- Likelihood: [High/Medium/Low]
- Current Barriers: [Preventive controls]
- Barrier Effectiveness: [Assessment]
2. **[Threat 2]**
- Likelihood: [High/Medium/Low]
- Current Barriers: [Preventive controls]
- Barrier Effectiveness: [Assessment]
### Consequences (Potential Impacts)
1. **[Consequence 1]**
- Severity: [High/Medium/Low]
- Current Barriers: [Protective controls]
- Barrier Effectiveness: [Assessment]
2. **[Consequence 2]**
- Severity: [High/Medium/Low]
- Current Barriers: [Protective controls]
- Barrier Effectiveness: [Assessment]
### Barrier Analysis
**Effective Barriers:**
- [Barrier that worked well]
- [Why it was effective]
**Failed Barriers:**
- [Barrier that failed]
- [Why it failed]
- [How to improve]
**Missing Barriers:**
- [Needed preventive control]
- [Needed protective control]
### Recommendations
**Preventive Measures:**
1. [New barrier to prevent threat]
2. [Improvement to existing barrier]
**Protective Measures:**
1. [New barrier to mitigate consequence]
2. [Improvement to existing barrier]
```
---
## Framework Comparison
| Framework | Time Required | Complexity | Best For | Output |
|-----------|---------------|------------|----------|---------|
| **5 Whys** | 30-60 minutes | Low | Simple, linear causes | Clear cause chain |
| **Fishbone** | 1-2 hours | Medium | Complex, multi-factor | Comprehensive factor map |
| **Timeline** | 2-3 hours | Medium | Extended incidents | Process improvements |
| **Bow Tie** | 2-4 hours | High | High-risk incidents | Barrier strategy |
## Combining Frameworks
### 5 Whys + Fishbone
Use 5 Whys for initial analysis, then Fishbone to explore contributing factors.
### Timeline + 5 Whys
Use Timeline to identify key decision points, then 5 Whys on critical failures.
### Fishbone + Bow Tie
Use Fishbone to identify causes, then Bow Tie to develop comprehensive prevention strategy.
## Quality Checklist
- [ ] Root causes address systemic issues, not symptoms
- [ ] Analysis is backed by evidence, not assumptions
- [ ] Multiple perspectives considered (technical, process, human)
- [ ] Recommendations are specific and actionable
- [ ] Analysis focuses on prevention, not blame
- [ ] Findings are validated against incident timeline
- [ ] Contributing factors are prioritized by impact
- [ ] Root causes link clearly to preventive actions
## Common Anti-Patterns
- **Human Error as Root Cause** - Dig deeper into why human error occurred
- **Single Root Cause** - Complex systems usually have multiple contributing factors
- **Technology-Only Focus** - Consider process and organizational factors
- **Blame Assignment** - Focus on system improvements, not individual fault
- **Generic Recommendations** - Provide specific, measurable actions
- **Surface-Level Analysis** - Ensure you've reached true root causes
---
**Last Updated:** February 2026
**Next Review:** August 2026
**Owner:** SRE Team + Engineering Leadership


@@ -0,0 +1,914 @@
#!/usr/bin/env python3
"""
Incident Classifier
Analyzes incident descriptions and outputs severity levels, recommended response teams,
initial actions, and communication templates.
This tool uses pattern matching and keyword analysis to classify incidents according to
SEV1-4 criteria and provide structured response guidance.
Usage:
python incident_classifier.py --input incident.json
echo "Database is down" | python incident_classifier.py --format text
python incident_classifier.py --interactive
"""
import argparse
import json
import sys
import re
from datetime import datetime, timezone
from typing import Any, Dict, List
class IncidentClassifier:
"""
Classifies incidents based on description, impact metrics, and business context.
Provides severity assessment, team recommendations, and response templates.
"""
def __init__(self):
"""Initialize the classifier with rules and templates."""
self.severity_rules = self._load_severity_rules()
self.team_mappings = self._load_team_mappings()
self.communication_templates = self._load_communication_templates()
self.action_templates = self._load_action_templates()
def _load_severity_rules(self) -> Dict[str, Dict]:
"""Load severity classification rules and keywords."""
return {
"sev1": {
"keywords": [
"down", "outage", "offline", "unavailable", "crashed", "failed",
"critical", "emergency", "dead", "broken", "timeout", "500 error",
"data loss", "corrupted", "breach", "security incident",
"revenue impact", "customer facing", "all users", "complete failure"
],
"impact_indicators": [
"100%", "all users", "entire service", "complete",
"revenue loss", "sla violation", "customer churn",
"security breach", "data corruption", "regulatory"
],
"duration_threshold": 0, # Immediate classification
"response_time": 300, # 5 minutes
"description": "Complete service failure affecting all users or critical business functions"
},
"sev2": {
"keywords": [
"degraded", "slow", "performance", "errors", "partial",
"intermittent", "high latency", "timeouts", "some users",
"feature broken", "api errors", "database slow"
],
"impact_indicators": [
"50%", "25-75%", "many users", "significant",
"performance degradation", "feature unavailable",
"support tickets", "user complaints"
],
"duration_threshold": 300, # 5 minutes
"response_time": 900, # 15 minutes
"description": "Significant degradation affecting subset of users or non-critical functions"
},
"sev3": {
"keywords": [
"minor", "cosmetic", "single feature", "workaround available",
"edge case", "rare issue", "non-critical", "internal tool",
"logging issue", "monitoring gap"
],
"impact_indicators": [
"<25%", "few users", "limited impact",
"workaround exists", "internal only",
"development environment"
],
"duration_threshold": 3600, # 1 hour
"response_time": 7200, # 2 hours
"description": "Limited impact with workarounds available"
},
"sev4": {
"keywords": [
"cosmetic", "documentation", "typo", "minor bug",
"enhancement", "nice to have", "low priority",
"test environment", "dev tools"
],
"impact_indicators": [
"no impact", "cosmetic only", "documentation",
"development", "testing", "non-production"
],
"duration_threshold": 86400, # 24 hours
"response_time": 172800, # 2 days
"description": "Minimal impact, cosmetic issues, or planned maintenance"
}
}
def _load_team_mappings(self) -> Dict[str, List[str]]:
"""Load team assignment rules based on service/component keywords."""
return {
"database": ["Database Team", "SRE", "Backend Engineering"],
"frontend": ["Frontend Team", "UX Engineering", "Product Engineering"],
"api": ["API Team", "Backend Engineering", "Platform Team"],
"infrastructure": ["SRE", "DevOps", "Platform Team"],
"security": ["Security Team", "SRE", "Compliance Team"],
"network": ["Network Engineering", "SRE", "Infrastructure Team"],
"authentication": ["Identity Team", "Security Team", "Backend Engineering"],
"payment": ["Payments Team", "Finance Engineering", "Compliance Team"],
"mobile": ["Mobile Team", "API Team", "QA Engineering"],
"monitoring": ["SRE", "Platform Team", "DevOps"],
"deployment": ["DevOps", "Release Engineering", "SRE"],
"data": ["Data Engineering", "Analytics Team", "Backend Engineering"]
}
def _load_communication_templates(self) -> Dict[str, Dict]:
"""Load communication templates for each severity level."""
return {
"sev1": {
"subject": "🚨 [SEV1] {service} - {brief_description}",
"body": """CRITICAL INCIDENT ALERT
Incident Details:
- Start Time: {timestamp}
- Severity: SEV1 - Critical Outage
- Service: {service}
- Impact: {impact_description}
- Current Status: Investigating
Customer Impact:
{customer_impact}
Response Team:
- Incident Commander: TBD (assigning now)
- Primary Responder: {primary_responder}
- SMEs Required: {subject_matter_experts}
Immediate Actions Taken:
{initial_actions}
War Room: {war_room_link}
Status Page: Will be updated within 15 minutes
Next Update: {next_update_time}
This is a customer-impacting incident requiring immediate attention.
{incident_commander_contact}"""
},
"sev2": {
"subject": "⚠️ [SEV2] {service} - {brief_description}",
"body": """MAJOR INCIDENT NOTIFICATION
Incident Details:
- Start Time: {timestamp}
- Severity: SEV2 - Major Impact
- Service: {service}
- Impact: {impact_description}
- Current Status: Investigating
User Impact:
{customer_impact}
Response Team:
- Primary Responder: {primary_responder}
- Supporting Team: {supporting_teams}
- Incident Commander: {incident_commander}
Initial Assessment:
{initial_assessment}
Next Steps:
{next_steps}
Updates will be provided every 30 minutes.
Status page: {status_page_link}
{contact_information}"""
},
"sev3": {
"subject": " [SEV3] {service} - {brief_description}",
"body": """MINOR INCIDENT NOTIFICATION
Incident Details:
- Start Time: {timestamp}
- Severity: SEV3 - Minor Impact
- Service: {service}
- Impact: {impact_description}
- Status: {current_status}
Details:
{incident_details}
Assigned Team: {assigned_team}
Estimated Resolution: {eta}
Workaround: {workaround}
This incident has limited customer impact and is being addressed during normal business hours.
{team_contact}"""
},
"sev4": {
"subject": "[SEV4] {service} - {brief_description}",
"body": """LOW PRIORITY ISSUE
Issue Details:
- Reported: {timestamp}
- Severity: SEV4 - Low Impact
- Component: {service}
- Description: {description}
This issue will be addressed in the normal development cycle.
Assigned to: {assigned_team}
Target Resolution: {target_date}
{standard_contact}"""
}
}
def _load_action_templates(self) -> Dict[str, List[Dict]]:
"""Load initial action templates for each severity level."""
return {
"sev1": [
{
"action": "Establish incident command",
"priority": 1,
"timeout_minutes": 5,
"description": "Page incident commander and establish war room"
},
{
"action": "Create incident ticket",
"priority": 1,
"timeout_minutes": 2,
"description": "Create tracking ticket with all known details"
},
{
"action": "Update status page",
"priority": 2,
"timeout_minutes": 15,
"description": "Post initial status page update acknowledging incident"
},
{
"action": "Notify executives",
"priority": 2,
"timeout_minutes": 15,
"description": "Alert executive team of customer-impacting outage"
},
{
"action": "Engage subject matter experts",
"priority": 3,
"timeout_minutes": 10,
"description": "Page relevant SMEs based on affected systems"
},
{
"action": "Begin technical investigation",
"priority": 3,
"timeout_minutes": 5,
"description": "Start technical diagnosis and mitigation efforts"
}
],
"sev2": [
{
"action": "Assign incident commander",
"priority": 1,
"timeout_minutes": 30,
"description": "Assign IC and establish coordination channel"
},
{
"action": "Create incident tracking",
"priority": 1,
"timeout_minutes": 5,
"description": "Create incident ticket with details and timeline"
},
{
"action": "Assess customer impact",
"priority": 2,
"timeout_minutes": 15,
"description": "Determine scope and severity of user impact"
},
{
"action": "Engage response team",
"priority": 2,
"timeout_minutes": 30,
"description": "Page appropriate technical responders"
},
{
"action": "Begin investigation",
"priority": 3,
"timeout_minutes": 15,
"description": "Start technical analysis and debugging"
},
{
"action": "Plan status communication",
"priority": 3,
"timeout_minutes": 30,
"description": "Determine if status page update is needed"
}
],
"sev3": [
{
"action": "Assign to appropriate team",
"priority": 1,
"timeout_minutes": 120,
"description": "Route to team with relevant expertise"
},
{
"action": "Create tracking ticket",
"priority": 1,
"timeout_minutes": 30,
"description": "Document issue in standard ticketing system"
},
{
"action": "Assess scope and impact",
"priority": 2,
"timeout_minutes": 60,
"description": "Understand full scope of the issue"
},
{
"action": "Identify workarounds",
"priority": 2,
"timeout_minutes": 60,
"description": "Find temporary solutions if possible"
},
{
"action": "Plan resolution approach",
"priority": 3,
"timeout_minutes": 120,
"description": "Develop plan for permanent fix"
}
],
"sev4": [
{
"action": "Create backlog item",
"priority": 1,
"timeout_minutes": 1440, # 24 hours
"description": "Add to team backlog for future sprint planning"
},
{
"action": "Triage and prioritize",
"priority": 2,
"timeout_minutes": 2880, # 2 days
"description": "Review and prioritize against other work"
},
{
"action": "Assign owner",
"priority": 3,
"timeout_minutes": 4320, # 3 days
"description": "Assign to appropriate developer when capacity allows"
}
]
}
def classify_incident(self, incident_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Main classification method that analyzes incident data and returns
comprehensive response recommendations.
Args:
incident_data: Dictionary containing incident information
Returns:
Dictionary with classification results and recommendations
"""
# Extract key information from incident data
description = incident_data.get('description', '').lower()
affected_users = incident_data.get('affected_users', '0%')
business_impact = incident_data.get('business_impact', 'unknown')
service = incident_data.get('service', 'unknown service')
duration = incident_data.get('duration_minutes', 0)
# Classify severity
severity = self._classify_severity(description, affected_users, business_impact, duration)
# Determine response teams
response_teams = self._determine_teams(description, service)
# Generate initial actions
initial_actions = self._generate_initial_actions(severity, incident_data)
# Create communication template
communication = self._generate_communication(severity, incident_data)
# Calculate response timeline
timeline = self._generate_timeline(severity)
# Determine escalation path
escalation = self._determine_escalation(severity, business_impact)
return {
"classification": {
"severity": severity.upper(),
"confidence": self._calculate_confidence(description, affected_users, business_impact),
"reasoning": self._explain_classification(severity, description, affected_users),
"timestamp": datetime.now(timezone.utc).isoformat()
},
"response": {
"primary_team": response_teams[0] if response_teams else "General Engineering",
"supporting_teams": response_teams[1:] if len(response_teams) > 1 else [],
"all_teams": response_teams,
"response_time_minutes": self.severity_rules[severity]["response_time"] // 60
},
"initial_actions": initial_actions,
"communication": communication,
"timeline": timeline,
"escalation": escalation,
"incident_data": {
"service": service,
"description": incident_data.get('description', ''),
"affected_users": affected_users,
"business_impact": business_impact,
"duration_minutes": duration
}
}
def _classify_severity(self, description: str, affected_users: str,
business_impact: str, duration: int) -> str:
"""Classify incident severity based on multiple factors."""
scores = {"sev1": 0, "sev2": 0, "sev3": 0, "sev4": 0}
# Keyword analysis
for severity, rules in self.severity_rules.items():
for keyword in rules["keywords"]:
if keyword in description:
scores[severity] += 2
for indicator in rules["impact_indicators"]:
if indicator.lower() in description or indicator.lower() in affected_users.lower():
scores[severity] += 3
# Business impact weighting
if business_impact.lower() in ['critical', 'high', 'severe']:
scores["sev1"] += 5
scores["sev2"] += 3
elif business_impact.lower() in ['medium', 'moderate']:
scores["sev2"] += 3
scores["sev3"] += 2
elif business_impact.lower() in ['low', 'minimal']:
scores["sev3"] += 2
scores["sev4"] += 3
# User impact analysis
if '%' in affected_users:
try:
percentage = float(re.findall(r'\d+', affected_users)[0])
if percentage >= 75:
scores["sev1"] += 4
elif percentage >= 25:
scores["sev2"] += 4
elif percentage >= 5:
scores["sev3"] += 3
else:
scores["sev4"] += 2
except (IndexError, ValueError):
pass
# Duration consideration
if duration > 0:
if duration >= 3600: # 1 hour
scores["sev1"] += 2
scores["sev2"] += 1
elif duration >= 1800: # 30 minutes
scores["sev2"] += 2
scores["sev3"] += 1
# Return highest scoring severity
return max(scores, key=scores.get)
def _determine_teams(self, description: str, service: str) -> List[str]:
"""Determine which teams should respond based on affected systems."""
teams = set()
text_to_analyze = f"{description} {service}".lower()
for component, team_list in self.team_mappings.items():
if component in text_to_analyze:
teams.update(team_list)
# Default teams if no specific match
if not teams:
teams = {"General Engineering", "SRE"}
return list(teams)
def _generate_initial_actions(self, severity: str, incident_data: Dict) -> List[Dict]:
"""Generate prioritized initial actions based on severity."""
        # Copy each action dict so per-incident tweaks don't mutate the shared templates
        base_actions = [action.copy() for action in self.action_templates[severity]]
# Customize actions based on incident details
for action in base_actions:
if severity in ["sev1", "sev2"]:
action["urgency"] = "immediate" if severity == "sev1" else "high"
else:
action["urgency"] = "normal" if severity == "sev3" else "low"
return base_actions
def _generate_communication(self, severity: str, incident_data: Dict) -> Dict:
"""Generate communication template filled with incident data."""
template = self.communication_templates[severity]
# Fill template with incident data
service = incident_data.get('service', 'Unknown Service')
description = incident_data.get('description', 'Incident detected')
communication = {
"subject": template["subject"].format(
service=service,
brief_description=description[:50] + "..." if len(description) > 50 else description
),
"body": template["body"],
"urgency": severity,
"recipients": self._determine_recipients(severity),
"channels": self._determine_channels(severity),
"frequency_minutes": self._get_update_frequency(severity)
}
return communication
def _generate_timeline(self, severity: str) -> Dict:
"""Generate expected response timeline."""
rules = self.severity_rules[severity]
milestones = []
if severity == "sev1":
milestones = [
{"milestone": "Incident Commander assigned", "minutes": 5},
{"milestone": "War room established", "minutes": 10},
{"milestone": "Initial status page update", "minutes": 15},
{"milestone": "Executive notification", "minutes": 15},
{"milestone": "First customer update", "minutes": 30}
]
elif severity == "sev2":
milestones = [
{"milestone": "Response team assembled", "minutes": 15},
{"milestone": "Initial assessment complete", "minutes": 30},
{"milestone": "Stakeholder notification", "minutes": 60},
{"milestone": "Status page update (if needed)", "minutes": 60}
]
elif severity == "sev3":
milestones = [
{"milestone": "Team assignment", "minutes": 120},
{"milestone": "Initial triage complete", "minutes": 240},
{"milestone": "Resolution plan created", "minutes": 480}
]
else: # sev4
milestones = [
{"milestone": "Backlog creation", "minutes": 1440},
{"milestone": "Priority assessment", "minutes": 2880}
]
return {
"response_time_minutes": rules["response_time"] // 60,
"milestones": milestones,
"update_frequency_minutes": self._get_update_frequency(severity)
}
def _determine_escalation(self, severity: str, business_impact: str) -> Dict:
"""Determine escalation requirements and triggers."""
escalation_rules = {
"sev1": {
"immediate": ["Incident Commander", "Engineering Manager"],
"15_minutes": ["VP Engineering", "Customer Success"],
"30_minutes": ["CTO"],
"60_minutes": ["CEO", "All C-Suite"],
"triggers": ["Extended outage", "Revenue impact", "Media attention"]
},
"sev2": {
"immediate": ["Team Lead", "On-call Engineer"],
"30_minutes": ["Engineering Manager"],
"120_minutes": ["VP Engineering"],
"triggers": ["No progress", "Expanding scope", "Customer escalation"]
},
"sev3": {
"immediate": ["Assigned Engineer"],
"240_minutes": ["Team Lead"],
"triggers": ["Issue complexity", "Multiple teams needed"]
},
"sev4": {
"immediate": ["Product Owner"],
"triggers": ["Customer request", "Stakeholder priority"]
}
}
return escalation_rules.get(severity, escalation_rules["sev4"])
def _determine_recipients(self, severity: str) -> List[str]:
"""Determine who should receive notifications."""
recipients = {
"sev1": ["on-call", "engineering-leadership", "executives", "customer-success"],
"sev2": ["on-call", "engineering-leadership", "product-team"],
"sev3": ["assigned-team", "team-lead"],
"sev4": ["assigned-engineer"]
}
return recipients.get(severity, recipients["sev4"])
def _determine_channels(self, severity: str) -> List[str]:
"""Determine communication channels to use."""
channels = {
"sev1": ["pager", "phone", "slack", "email", "status-page"],
"sev2": ["pager", "slack", "email"],
"sev3": ["slack", "email"],
"sev4": ["ticket-system"]
}
return channels.get(severity, channels["sev4"])
def _get_update_frequency(self, severity: str) -> int:
"""Get recommended update frequency in minutes."""
frequencies = {"sev1": 15, "sev2": 30, "sev3": 240, "sev4": 0}
return frequencies.get(severity, 0)
def _calculate_confidence(self, description: str, affected_users: str, business_impact: str) -> float:
"""Calculate confidence score for the classification."""
confidence = 0.5 # Base confidence
# Higher confidence with more specific information
if '%' in affected_users and any(char.isdigit() for char in affected_users):
confidence += 0.2
if business_impact.lower() in ['critical', 'high', 'medium', 'low']:
confidence += 0.15
if len(description.split()) > 5: # Detailed description
confidence += 0.15
return min(confidence, 1.0)
def _explain_classification(self, severity: str, description: str, affected_users: str) -> str:
"""Provide explanation for the classification decision."""
rules = self.severity_rules[severity]
matched_keywords = []
for keyword in rules["keywords"]:
if keyword in description.lower():
matched_keywords.append(keyword)
explanation = f"Classified as {severity.upper()} based on: "
reasons = []
if matched_keywords:
reasons.append(f"keywords: {', '.join(matched_keywords[:3])}")
if '%' in affected_users:
reasons.append(f"user impact: {affected_users}")
if not reasons:
reasons.append("default classification based on available information")
return explanation + "; ".join(reasons)
def format_json_output(result: Dict) -> str:
"""Format result as pretty JSON."""
return json.dumps(result, indent=2, ensure_ascii=False)
def format_text_output(result: Dict) -> str:
"""Format result as human-readable text."""
classification = result["classification"]
response = result["response"]
actions = result["initial_actions"]
communication = result["communication"]
output = []
output.append("=" * 60)
output.append("INCIDENT CLASSIFICATION REPORT")
output.append("=" * 60)
output.append("")
# Classification section
output.append("CLASSIFICATION:")
output.append(f" Severity: {classification['severity']}")
output.append(f" Confidence: {classification['confidence']:.1%}")
output.append(f" Reasoning: {classification['reasoning']}")
output.append(f" Timestamp: {classification['timestamp']}")
output.append("")
# Response section
output.append("RECOMMENDED RESPONSE:")
output.append(f" Primary Team: {response['primary_team']}")
if response['supporting_teams']:
output.append(f" Supporting Teams: {', '.join(response['supporting_teams'])}")
output.append(f" Response Time: {response['response_time_minutes']} minutes")
output.append("")
# Actions section
output.append("INITIAL ACTIONS:")
for i, action in enumerate(actions[:5], 1): # Show first 5 actions
output.append(f" {i}. {action['action']} (Priority {action['priority']})")
output.append(f" Timeout: {action['timeout_minutes']} minutes")
output.append(f" {action['description']}")
output.append("")
# Communication section
output.append("COMMUNICATION:")
output.append(f" Subject: {communication['subject']}")
output.append(f" Urgency: {communication['urgency'].upper()}")
output.append(f" Recipients: {', '.join(communication['recipients'])}")
output.append(f" Channels: {', '.join(communication['channels'])}")
if communication['frequency_minutes'] > 0:
output.append(f" Update Frequency: Every {communication['frequency_minutes']} minutes")
output.append("")
output.append("=" * 60)
return "\n".join(output)
def parse_input_text(text: str) -> Dict[str, Any]:
"""Parse free-form text input into structured incident data."""
# Basic parsing - in a real system, this would be more sophisticated
incident_data = {
"description": text.strip(),
"service": "unknown service",
"affected_users": "unknown",
"business_impact": "unknown"
}
# Try to extract service name
service_patterns = [
r'(?:service|api|database|server|application)\s+(\w+)',
r'(\w+)(?:\s+(?:is|has|service|api|database))',
r'(?:^|\s)(\w+)\s+(?:down|failed|broken)'
]
for pattern in service_patterns:
match = re.search(pattern, text.lower())
if match:
incident_data["service"] = match.group(1)
break
# Try to extract user impact
impact_patterns = [
r'(\d+%)\s+(?:of\s+)?(?:users?|customers?)',
r'(?:all|every|100%)\s+(?:users?|customers?)',
r'(?:some|many|several)\s+(?:users?|customers?)'
]
for pattern in impact_patterns:
match = re.search(pattern, text.lower())
if match:
incident_data["affected_users"] = match.group(1) if match.group(1) else match.group(0)
break
# Try to infer business impact
if any(word in text.lower() for word in ['critical', 'urgent', 'emergency', 'down', 'outage']):
incident_data["business_impact"] = "high"
elif any(word in text.lower() for word in ['slow', 'degraded', 'performance']):
incident_data["business_impact"] = "medium"
elif any(word in text.lower() for word in ['minor', 'cosmetic', 'small']):
incident_data["business_impact"] = "low"
return incident_data
def interactive_mode():
"""Run in interactive mode, prompting user for input."""
classifier = IncidentClassifier()
print("🚨 Incident Classifier - Interactive Mode")
print("=" * 50)
print("Enter incident details (or 'quit' to exit):")
print()
while True:
try:
description = input("Incident description: ").strip()
if description.lower() in ['quit', 'exit', 'q']:
break
if not description:
print("Please provide an incident description.")
continue
service = input("Affected service (optional): ").strip() or "unknown"
affected_users = input("Affected users (e.g., '50%', 'all users'): ").strip() or "unknown"
business_impact = input("Business impact (high/medium/low): ").strip() or "unknown"
incident_data = {
"description": description,
"service": service,
"affected_users": affected_users,
"business_impact": business_impact
}
result = classifier.classify_incident(incident_data)
print("\n" + "=" * 50)
print(format_text_output(result))
print("=" * 50)
print()
except KeyboardInterrupt:
print("\n\nExiting...")
break
except Exception as e:
print(f"Error: {e}")
def main():
"""Main function with argument parsing and execution."""
parser = argparse.ArgumentParser(
description="Classify incidents and provide response recommendations",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
python incident_classifier.py --input incident.json
echo "Database is down" | python incident_classifier.py --format text
python incident_classifier.py --interactive
Input JSON format:
{
"description": "Database connection timeouts",
"service": "user-service",
"affected_users": "80%",
"business_impact": "high"
}
"""
)
parser.add_argument(
"--input", "-i",
help="Input file path (JSON format) or '-' for stdin"
)
parser.add_argument(
"--format", "-f",
choices=["json", "text"],
default="json",
help="Output format (default: json)"
)
parser.add_argument(
"--interactive",
action="store_true",
help="Run in interactive mode"
)
parser.add_argument(
"--output", "-o",
help="Output file path (default: stdout)"
)
args = parser.parse_args()
# Interactive mode
if args.interactive:
interactive_mode()
return
classifier = IncidentClassifier()
try:
# Read input
if args.input == "-" or (not args.input and not sys.stdin.isatty()):
# Read from stdin
input_text = sys.stdin.read().strip()
if not input_text:
parser.error("No input provided")
# Try to parse as JSON first, then as text
try:
incident_data = json.loads(input_text)
except json.JSONDecodeError:
incident_data = parse_input_text(input_text)
elif args.input:
# Read from file
with open(args.input, 'r') as f:
incident_data = json.load(f)
else:
parser.error("No input specified. Use --input, --interactive, or pipe data to stdin.")
# Validate required fields
if not isinstance(incident_data, dict):
parser.error("Input must be a JSON object")
if "description" not in incident_data:
parser.error("Input must contain 'description' field")
# Classify incident
result = classifier.classify_incident(incident_data)
# Format output
if args.format == "json":
output = format_json_output(result)
else:
output = format_text_output(result)
# Write output
if args.output:
with open(args.output, 'w') as f:
f.write(output)
f.write('\n')
else:
print(output)
except FileNotFoundError as e:
print(f"Error: File not found - {e}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON - {e}", file=sys.stderr)
sys.exit(1)
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,680 @@
# Common API Anti-Patterns and How to Avoid Them
## Introduction
This document outlines common anti-patterns in REST API design that can lead to poor developer experience, maintenance nightmares, and scalability issues. Each anti-pattern is accompanied by examples and recommended solutions.
## 1. Verb-Based URLs (The RPC Trap)
### Anti-Pattern
Using verbs in URLs instead of treating endpoints as resources.
```
❌ Bad Examples:
POST /api/getUsers
POST /api/createUser
GET /api/deleteUser/123
POST /api/updateUserPassword
GET /api/calculateOrderTotal/456
```
### Why It's Bad
- Violates REST principles
- Makes the API feel like RPC instead of REST
- HTTP methods lose their semantic meaning
- Reduces cacheability
- Harder to understand resource relationships
### Solution
```
✅ Good Examples:
GET /api/users # Get users
POST /api/users # Create user
DELETE /api/users/123 # Delete user
PATCH /api/users/123/password # Update password
GET /api/orders/456/total # Get order total
```
## 2. Inconsistent Naming Conventions
### Anti-Pattern
Mixed naming conventions across the API.
```json
Bad Examples:
{
"user_id": 123, // snake_case
"firstName": "John", // camelCase
"last-name": "Doe", // kebab-case
"EMAIL": "john@example.com", // UPPER_CASE
"IsActive": true // PascalCase
}
```
### Why It's Bad
- Confuses developers
- Increases cognitive load
- Makes code generation difficult
- Reduces API adoption
### Solution
```json
Choose one convention and stick to it (camelCase recommended):
{
"userId": 123,
"firstName": "John",
"lastName": "Doe",
"email": "john@example.com",
"isActive": true
}
```
## 3. Ignoring HTTP Status Codes
### Anti-Pattern
Always returning HTTP 200 regardless of the actual result.
```json
Bad Example:
HTTP/1.1 200 OK
{
"status": "error",
"code": 404,
"message": "User not found"
}
```
### Why It's Bad
- Breaks HTTP semantics
- Prevents proper error handling by clients
- Breaks caching and proxies
- Makes monitoring and debugging harder
### Solution
```json
Good Example:
HTTP/1.1 404 Not Found
{
"error": {
"code": "USER_NOT_FOUND",
"message": "User with ID 123 not found",
"requestId": "req-abc123"
}
}
```
## 4. Overly Complex Nested Resources
### Anti-Pattern
Creating deeply nested URL structures that are hard to navigate.
```
❌ Bad Example:
/companies/123/departments/456/teams/789/members/012/projects/345/tasks/678/comments/901
```
### Why It's Bad
- URLs become unwieldy
- Creates tight coupling between resources
- Makes independent resource access difficult
- Complicates authorization logic
### Solution
```
✅ Good Examples:
/tasks/678 # Direct access to task
/tasks/678/comments # Task comments
/users/012/tasks # User's tasks
/projects/345?team=789 # Project filtering
```
## 5. Inconsistent Error Response Formats
### Anti-Pattern
Different error response structures across endpoints.
```json
Bad Examples:
# Endpoint 1
{"error": "Invalid email"}
# Endpoint 2
{"success": false, "msg": "User not found", "code": 404}
# Endpoint 3
{"errors": [{"field": "name", "message": "Required"}]}
```
### Why It's Bad
- Makes error handling complex for clients
- Reduces code reusability
- Poor developer experience
### Solution
```json
Standardized Error Format:
{
"error": {
"code": "VALIDATION_ERROR",
"message": "The request contains invalid data",
"details": [
{
"field": "email",
"code": "INVALID_FORMAT",
"message": "Email address is not valid"
}
],
"requestId": "req-123456",
"timestamp": "2024-02-16T13:00:00Z"
}
}
```
## 6. Missing or Poor Pagination
### Anti-Pattern
Returning all results in a single response or inconsistent pagination.
```json
Bad Examples:
# No pagination (returns 10,000 records)
GET /api/users
# Inconsistent pagination parameters
GET /api/users?page=1&size=10
GET /api/orders?offset=0&limit=20
GET /api/products?start=0&count=50
```
### Why It's Bad
- Can cause performance issues
- May overwhelm clients
- Inconsistent pagination parameters confuse developers
- No way to estimate total results
### Solution
```json
Good Example:
GET /api/users?page=1&pageSize=10
{
"data": [...],
"pagination": {
"page": 1,
"pageSize": 10,
"total": 150,
"totalPages": 15,
"hasNext": true,
"hasPrev": false
}
}
```
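Every field in the envelope is derivable from `page`, `pageSize`, and the total count, so it is worth computing in one helper. A minimal sketch in Python (a hypothetical, framework-agnostic helper):

```python
import math
from typing import Any, Dict, List

def paginate(items: List[Any], page: int, page_size: int) -> Dict[str, Any]:
    """Build the standard pagination envelope for an in-memory result set."""
    total = len(items)
    total_pages = max(1, math.ceil(total / page_size))  # assumes page_size >= 1
    page = min(max(1, page), total_pages)               # clamp out-of-range pages
    start = (page - 1) * page_size
    return {
        "data": items[start:start + page_size],
        "pagination": {
            "page": page,
            "pageSize": page_size,
            "total": total,
            "totalPages": total_pages,
            "hasNext": page < total_pages,
            "hasPrev": page > 1,
        },
    }
```

In production the slice would typically be pushed down to the database query rather than applied to a fully loaded list.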
## 7. Exposing Internal Implementation Details
### Anti-Pattern
URLs and field names that reflect database structure or internal architecture.
```
❌ Bad Examples:
/api/user_table/123
/api/db_orders
/api/legacy_customer_data
/api/temp_migration_users
Response fields:
{
"user_id_pk": 123,
"internal_ref_code": "usr_abc",
"db_created_timestamp": 1645123456
}
```
### Why It's Bad
- Couples API to internal implementation
- Makes refactoring difficult
- Exposes unnecessary technical details
- Reduces API longevity
### Solution
```
✅ Good Examples:
/api/users/123
/api/orders
/api/customers
Response fields:
{
"id": 123,
"referenceCode": "usr_abc",
"createdAt": "2024-02-16T13:00:00Z"
}
```
## 8. Overloading Single Endpoint
### Anti-Pattern
Using one endpoint for multiple unrelated operations based on request parameters.
```
❌ Bad Example:
POST /api/user-actions
{
"action": "create_user",
"userData": {...}
}
POST /api/user-actions
{
"action": "delete_user",
"userId": 123
}
POST /api/user-actions
{
"action": "send_email",
"userId": 123,
"emailType": "welcome"
}
```
### Why It's Bad
- Breaks REST principles
- Makes documentation complex
- Complicates client implementation
- Reduces discoverability
### Solution
```
✅ Good Examples:
POST /api/users # Create user
DELETE /api/users/123 # Delete user
POST /api/users/123/emails # Send email to user
```
## 9. Lack of Versioning Strategy
### Anti-Pattern
Making breaking changes without version management.
```
❌ Bad Examples:
# Original API
{
"name": "John Doe",
"age": 30
}
# Later (breaking change with no versioning)
{
"firstName": "John",
"lastName": "Doe",
"birthDate": "1994-02-16"
}
```
### Why It's Bad
- Breaks existing clients
- Forces all clients to update simultaneously
- No graceful migration path
- Reduces API stability
### Solution
```
✅ Good Examples:
# Version 1
GET /api/v1/users/123
{
"name": "John Doe",
"age": 30
}
# Version 2 (with both versions supported)
GET /api/v2/users/123
{
"firstName": "John",
"lastName": "Doe",
"birthDate": "1994-02-16",
"age": 30 // Backwards compatibility
}
```
## 10. Poor Error Messages
### Anti-Pattern
Vague, unhelpful, or technical error messages.
```json
Bad Examples:
{"error": "Something went wrong"}
{"error": "Invalid input"}
{"error": "SQL constraint violation: FK_user_profile_id"}
{"error": "NullPointerException at line 247"}
```
### Why It's Bad
- Doesn't help developers fix issues
- Increases support burden
- Poor developer experience
- May expose sensitive information
### Solution
```json
Good Examples:
{
"error": {
"code": "VALIDATION_ERROR",
"message": "The email address is required and must be in a valid format",
"details": [
{
"field": "email",
"code": "REQUIRED",
"message": "Email address is required"
}
]
}
}
```
## 11. Ignoring Content Negotiation
### Anti-Pattern
Hard-coding response format without considering client preferences.
```
❌ Bad Example:
# Always returns JSON regardless of Accept header
GET /api/users/123
Accept: application/xml
# Returns JSON anyway
```
### Why It's Bad
- Reduces API flexibility
- Ignores HTTP standards
- Makes integration harder for diverse clients
### Solution
```
✅ Good Example:
GET /api/users/123
Accept: application/xml
HTTP/1.1 200 OK
Content-Type: application/xml
<?xml version="1.0"?>
<user>
<id>123</id>
<name>John Doe</name>
</user>
```
## 12. Stateful API Design
### Anti-Pattern
Maintaining session state on the server between requests.
```
❌ Bad Example:
# Step 1: Initialize session
POST /api/session/init
# Step 2: Set context (requires step 1)
POST /api/session/set-user/123
# Step 3: Get data (requires steps 1 & 2)
GET /api/session/user-data
```
### Why It's Bad
- Breaks REST statelessness principle
- Reduces scalability
- Makes caching difficult
- Complicates error recovery
### Solution
```
✅ Good Example:
# Self-contained requests
GET /api/users/123/data
Authorization: Bearer jwt-token-with-context
```
## 13. Inconsistent HTTP Method Usage
### Anti-Pattern
Using HTTP methods inappropriately or inconsistently.
```
❌ Bad Examples:
GET /api/users/123/delete # DELETE operation with GET
POST /api/users/123/get # GET operation with POST
PUT /api/users # Creating with PUT on collection
GET /api/users/search # Search with side effects
```
### Why It's Bad
- Violates HTTP semantics
- Breaks caching and idempotency expectations
- Confuses developers and tools
### Solution
```
✅ Good Examples:
DELETE /api/users/123 # Delete with DELETE
GET /api/users/123 # Get with GET
POST /api/users # Create on collection
GET /api/users?q=search # Safe search with GET
```
## 14. Missing Rate Limiting Information
### Anti-Pattern
Not providing rate limiting information to clients.
```
❌ Bad Example:
HTTP/1.1 429 Too Many Requests
{
"error": "Rate limit exceeded"
}
```
### Why It's Bad
- Clients don't know when to retry
- No information about current limits
- Difficult to implement proper backoff strategies
### Solution
```
✅ Good Example:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 3600
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "API rate limit exceeded",
"retryAfter": 3600
}
}
```
## 15. Chatty API Design
### Anti-Pattern
Requiring multiple API calls to accomplish common tasks.
```
❌ Bad Example:
# Get user profile requires 4 API calls
GET /api/users/123 # Basic info
GET /api/users/123/profile # Profile details
GET /api/users/123/settings # User settings
GET /api/users/123/stats # User statistics
```
### Why It's Bad
- Increases latency
- Creates network overhead
- Makes mobile apps inefficient
- Complicates client implementation
### Solution
```
✅ Good Examples:
# Single call with expansion
GET /api/users/123?include=profile,settings,stats
# Or provide composite endpoints
GET /api/users/123/dashboard
# Or batch operations
POST /api/batch
{
"requests": [
{"method": "GET", "url": "/users/123"},
{"method": "GET", "url": "/users/123/profile"}
]
}
```
## 16. No Input Validation
### Anti-Pattern
Accepting and processing invalid input without proper validation.
```json
Bad Example:
POST /api/users
{
"email": "not-an-email",
"age": -5,
"name": ""
}
# API processes this and fails later or stores invalid data
```
### Why It's Bad
- Leads to data corruption
- Security vulnerabilities
- Difficult to debug issues
- Poor user experience
### Solution
```json
Good Example:
POST /api/users
{
"email": "not-an-email",
"age": -5,
"name": ""
}
HTTP/1.1 400 Bad Request
{
"error": {
"code": "VALIDATION_ERROR",
"message": "The request contains invalid data",
"details": [
{
"field": "email",
"code": "INVALID_FORMAT",
"message": "Email must be a valid email address"
},
{
"field": "age",
"code": "INVALID_RANGE",
"message": "Age must be between 0 and 150"
},
{
"field": "name",
"code": "REQUIRED",
"message": "Name is required and cannot be empty"
}
]
}
}
```
## 17. Synchronous Long-Running Operations
### Anti-Pattern
Blocking the client with long-running operations in synchronous endpoints.
```
❌ Bad Example:
POST /api/reports/generate
# Client waits 30 seconds for response
```
### Why It's Bad
- Poor user experience
- Timeouts and connection issues
- Resource waste on client and server
- Doesn't scale well
### Solution
```
✅ Good Example:
# Async pattern
POST /api/reports
HTTP/1.1 202 Accepted
Location: /api/reports/job-123
{
"jobId": "job-123",
"status": "processing",
"estimatedCompletion": "2024-02-16T13:05:00Z"
}
# Check status
GET /api/reports/job-123
{
"jobId": "job-123",
"status": "completed",
"result": "/api/reports/download/report-456"
}
```
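As one possible implementation of this pattern, here is a compact sketch using FastAPI with an in-memory job store; the framework choice, job model, and endpoint names are illustrative assumptions, not a prescription:

```python
import uuid
from typing import Dict

from fastapi import BackgroundTasks, FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()
jobs: Dict[str, dict] = {}  # in-memory store; real services need a durable queue

def generate_report(job_id: str) -> None:
    # ... the long-running work happens here ...
    jobs[job_id].update(status="completed",
                        result=f"/api/reports/download/{job_id}")

@app.post("/api/reports")
def create_report(background: BackgroundTasks) -> JSONResponse:
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"jobId": job_id, "status": "processing", "result": None}
    background.add_task(generate_report, job_id)  # run after returning 202
    return JSONResponse(jobs[job_id], status_code=202,
                        headers={"Location": f"/api/reports/{job_id}"})

@app.get("/api/reports/{job_id}")
def report_status(job_id: str) -> dict:
    # A real endpoint would return 404 for unknown job IDs
    return jobs.get(job_id, {"status": "not_found"})
```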
## Prevention Strategies
### 1. API Design Reviews
- Implement mandatory design reviews
- Use checklists based on these anti-patterns
- Include multiple stakeholders
### 2. API Style Guides
- Create and enforce API style guides
- Use linting tools for consistency
- Regular training for development teams
### 3. Automated Testing
- Test for common anti-patterns
- Include contract testing
- Monitor API usage patterns
### 4. Documentation Standards
- Require comprehensive API documentation
- Include examples and error scenarios
- Keep documentation up-to-date
### 5. Client Feedback
- Regularly collect feedback from API consumers
- Monitor API usage analytics
- Conduct developer experience surveys
## Conclusion
Avoiding these anti-patterns requires:
- Understanding REST principles
- Consistent design standards
- Regular review and refactoring
- Focus on developer experience
- Proper tooling and automation
Remember: A well-designed API is an asset that grows in value over time, while a poorly designed API becomes a liability that hampers development and adoption.


@@ -0,0 +1,487 @@
# REST API Design Rules Reference
## Core Principles
### 1. Resources, Not Actions
REST APIs should focus on **resources** (nouns) rather than **actions** (verbs). The HTTP methods provide the actions.
```
✅ Good:
GET /users # Get all users
GET /users/123 # Get user 123
POST /users # Create new user
PUT /users/123 # Update user 123
DELETE /users/123 # Delete user 123
❌ Bad:
POST /getUsers
POST /createUser
POST /updateUser/123
POST /deleteUser/123
```
### 2. Hierarchical Resource Structure
Use hierarchical URLs to represent resource relationships:
```
/users/123/orders/456/items/789
```
But avoid excessive nesting (max 3-4 levels):
```
❌ Too deep: /companies/123/departments/456/teams/789/members/012/tasks/345
✅ Better: /tasks/345?member=012&team=789
```
## Resource Naming Conventions
### URLs Should Use Kebab-Case
```
✅ Good:
/user-profiles
/order-items
/shipping-addresses
❌ Bad:
/userProfiles
/user_profiles
/orderItems
```
### Collections vs Individual Resources
```
Collection: /users
Individual: /users/123
Sub-resource: /users/123/orders
```
### Pluralization Rules
- Use **plural nouns** for collections: `/users`, `/orders`
- Use **singular nouns** for single resources: `/user-profile`, `/current-session`
- Be consistent throughout your API
## HTTP Methods Usage
### GET - Safe and Idempotent
- **Purpose**: Retrieve data
- **Safe**: No side effects
- **Idempotent**: Multiple calls return same result
- **Request Body**: Should not have one
- **Cacheable**: Yes
```
GET /users/123
GET /users?status=active&limit=10
```
### POST - Not Idempotent
- **Purpose**: Create resources, non-idempotent operations
- **Safe**: No
- **Idempotent**: No
- **Request Body**: Usually required
- **Cacheable**: Generally no
```
POST /users # Create new user
POST /users/123/activate # Activate user (action)
```
### PUT - Idempotent
- **Purpose**: Create or completely replace a resource
- **Safe**: No
- **Idempotent**: Yes
- **Request Body**: Required (complete resource)
- **Cacheable**: No
```
PUT /users/123 # Replace entire user resource
```
### PATCH - Partial Update
- **Purpose**: Partially update a resource
- **Safe**: No
- **Idempotent**: Not necessarily
- **Request Body**: Required (partial resource)
- **Cacheable**: No
```
PATCH /users/123 # Update only specified fields
```
### DELETE - Idempotent
- **Purpose**: Remove a resource
- **Safe**: No
- **Idempotent**: Yes (same result if called multiple times)
- **Request Body**: Usually not needed
- **Cacheable**: No
```
DELETE /users/123
```
## Status Codes
### Success Codes (2xx)
- **200 OK**: Standard success response
- **201 Created**: Resource created successfully (POST)
- **202 Accepted**: Request accepted for processing (async)
- **204 No Content**: Success with no response body (DELETE, PUT)
### Redirection Codes (3xx)
- **301 Moved Permanently**: Resource permanently moved
- **302 Found**: Temporary redirect
- **304 Not Modified**: Use cached version
### Client Error Codes (4xx)
- **400 Bad Request**: Invalid request syntax or data
- **401 Unauthorized**: Authentication required
- **403 Forbidden**: Access denied (user authenticated but not authorized)
- **404 Not Found**: Resource not found
- **405 Method Not Allowed**: HTTP method not supported
- **409 Conflict**: Resource conflict (duplicates, version mismatch)
- **422 Unprocessable Entity**: Valid syntax but semantic errors
- **429 Too Many Requests**: Rate limit exceeded
### Server Error Codes (5xx)
- **500 Internal Server Error**: Unexpected server error
- **502 Bad Gateway**: Invalid response from upstream server
- **503 Service Unavailable**: Server temporarily unavailable
- **504 Gateway Timeout**: Upstream server timeout
## URL Design Patterns
### Query Parameters for Filtering
```
GET /users?status=active
GET /users?role=admin&department=engineering
GET /orders?created_after=2024-01-01&status=pending
```
### Pagination Parameters
```
# Offset-based
GET /users?offset=20&limit=10
# Cursor-based
GET /users?cursor=eyJpZCI6MTIzfQ&limit=10
# Page-based
GET /users?page=3&page_size=10
```
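The cursor in the example above is just an opaque, URL-safe encoding of a position: `eyJpZCI6MTIzfQ` is base64 for `{"id":123}`. A minimal sketch of encoding and decoding such cursors, assuming JSON payloads:

```python
import base64
import json
from typing import Dict

def encode_cursor(position: Dict) -> str:
    """Encode a position (e.g., {"id": 123}) as an opaque, URL-safe cursor."""
    raw = json.dumps(position, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_cursor(cursor: str) -> Dict:
    """Decode a cursor back into a position; base64 padding is restored first."""
    padded = cursor + "=" * (-len(cursor) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Because clients must treat the cursor as opaque, the server is free to change its contents (for example, adding a sort key) without breaking anyone.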
### Sorting Parameters
```
GET /users?sort=created_at # Ascending
GET /users?sort=-created_at # Descending (prefix with -)
GET /users?sort=last_name,first_name # Multiple fields
```
### Field Selection
```
GET /users?fields=id,name,email
GET /users/123?include=orders,profile
GET /users/123?exclude=internal_notes
```
### Search Parameters
```
GET /users?q=john
GET /products?search=laptop&category=electronics
```
## Response Format Standards
### Consistent Response Structure
```json
{
"data": {
"id": 123,
"name": "John Doe",
"email": "john@example.com"
},
"meta": {
"timestamp": "2024-02-16T13:00:00Z",
"version": "1.0"
}
}
```
### Collection Responses
```json
{
"data": [
{"id": 1, "name": "Item 1"},
{"id": 2, "name": "Item 2"}
],
"pagination": {
"total": 150,
"page": 1,
"pageSize": 10,
"totalPages": 15,
"hasNext": true,
"hasPrev": false
},
"meta": {
"timestamp": "2024-02-16T13:00:00Z"
}
}
```
### Error Response Format
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "The request contains invalid parameters",
"details": [
{
"field": "email",
"code": "INVALID_FORMAT",
"message": "Email address is not valid"
}
],
"requestId": "req-123456",
"timestamp": "2024-02-16T13:00:00Z"
}
}
```
## Field Naming Conventions
### Use camelCase for JSON Fields
```json
Good:
{
"firstName": "John",
"lastName": "Doe",
"createdAt": "2024-02-16T13:00:00Z",
"isActive": true
}
Bad:
{
"first_name": "John",
"LastName": "Doe",
"created-at": "2024-02-16T13:00:00Z"
}
```
### Boolean Fields
Use positive, clear names with "is", "has", "can", or "should" prefixes:
```json
Good:
{
"isActive": true,
"hasPermission": false,
"canEdit": true,
"shouldNotify": false
}
Bad:
{
"active": true,
"disabled": false, // Double negative
"permission": false // Unclear meaning
}
```
### Date/Time Fields
- Use ISO 8601 format: `2024-02-16T13:00:00Z`
- Include timezone information
- Use consistent field naming:
```json
{
"createdAt": "2024-02-16T13:00:00Z",
"updatedAt": "2024-02-16T13:30:00Z",
"deletedAt": null,
"publishedAt": "2024-02-16T14:00:00Z"
}
```
## Content Negotiation
### Accept Headers
```
Accept: application/json
Accept: application/xml
Accept: application/json; version=1
```
### Content-Type Headers
```
Content-Type: application/json
Content-Type: application/json; charset=utf-8
Content-Type: multipart/form-data
```
### Versioning via Headers
```
Accept: application/vnd.myapi.v1+json
API-Version: 1.0
```
## Caching Guidelines
### Cache-Control Headers
```
Cache-Control: public, max-age=3600 # Cache for 1 hour
Cache-Control: private, max-age=0 # Don't cache
Cache-Control: no-cache, must-revalidate # Always validate
```
### ETags for Conditional Requests
```
HTTP/1.1 200 OK
ETag: "123456789"
Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT
# Client subsequent request:
If-None-Match: "123456789"
If-Modified-Since: Wed, 21 Oct 2015 07:28:00 GMT
```
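Server-side, conditional handling reduces to computing the validator and short-circuiting with 304 when it matches. A framework-agnostic sketch with hypothetical helper names:

```python
import hashlib
from typing import Optional, Tuple

def etag_for(body: bytes) -> str:
    """Derive a strong ETag from the response body."""
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def conditional_get(body: bytes, if_none_match: Optional[str]) -> Tuple[int, dict, bytes]:
    """Return (status, headers, body), honoring an If-None-Match header."""
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, {"ETag": tag}, b""  # the client's cached copy is still fresh
    return 200, {"ETag": tag}, body
```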
## Security Headers
### Authentication
```
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Authorization: Basic dXNlcjpwYXNzd29yZA==
Authorization: Api-Key abc123def456
```
### CORS Headers
```
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Authorization
```
## Rate Limiting
### Rate Limit Headers
```
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1640995200
X-RateLimit-Window: 3600
```
### Rate Limit Exceeded Response
```
HTTP/1.1 429 Too Many Requests
Retry-After: 3600
```
```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "API rate limit exceeded",
    "details": {
      "limit": 1000,
      "window": "1 hour",
      "retryAfter": 3600
    }
  }
}
```
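These headers fall out naturally from a fixed-window counter. A deliberately simple in-memory sketch; production systems usually use Redis or a sliding window, and the per-key state here is not thread-safe:
```python
import time
from collections import defaultdict
from typing import Optional

LIMIT, WINDOW = 1000, 3600                 # matches the example headers above
_counters = defaultdict(lambda: [0.0, 0])  # api_key -> [window_start, count]

def check_rate_limit(api_key: str, now: Optional[float] = None):
    """Return (allowed, headers) for one request under a fixed hourly window."""
    now = time.time() if now is None else now
    window_start = int(now // WINDOW) * WINDOW
    state = _counters[api_key]
    if state[0] != window_start:           # a new window resets the count
        state[0], state[1] = window_start, 0
    state[1] += 1
    headers = {
        "X-RateLimit-Limit": str(LIMIT),
        "X-RateLimit-Remaining": str(max(LIMIT - state[1], 0)),
        "X-RateLimit-Reset": str(window_start + WINDOW),
        "X-RateLimit-Window": str(WINDOW),
    }
    if state[1] > LIMIT:
        headers["Retry-After"] = str(window_start + WINDOW - int(now))
        return False, headers
    return True, headers
```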
## Hypermedia (HATEOAS)
### Links in Responses
```json
{
"id": 123,
"name": "John Doe",
"email": "john@example.com",
"_links": {
"self": {
"href": "/users/123"
},
"orders": {
"href": "/users/123/orders"
},
"edit": {
"href": "/users/123",
"method": "PUT"
},
"delete": {
"href": "/users/123",
"method": "DELETE"
}
}
}
```
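Link generation stays consistent when it lives in one helper rather than in each serializer. A sketch with hypothetical route names; the `orders` sub-resource mirrors the example above:
```python
def with_links(resource: dict, base: str) -> dict:
    """Attach standard HATEOAS links to a serialized resource."""
    href = f"{base}/{resource['id']}"
    resource["_links"] = {
        "self": {"href": href},
        "orders": {"href": f"{href}/orders"},
        "edit": {"href": href, "method": "PUT"},
        "delete": {"href": href, "method": "DELETE"},
    }
    return resource

user = with_links({"id": 123, "name": "John Doe"}, base="/users")
assert user["_links"]["self"]["href"] == "/users/123"
```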
### Link Relations
- **self**: Link to the resource itself
- **edit**: Link to edit the resource
- **delete**: Link to delete the resource
- **related**: Link to related resources
- **next/prev**: Pagination links
## Common Anti-Patterns to Avoid
### 1. Verbs in URLs
```
❌ Bad: /api/getUser/123
✅ Good: GET /api/users/123
```
### 2. Inconsistent Naming
```
❌ Bad: /user-profiles and /userAddresses
✅ Good: /user-profiles and /user-addresses
```
### 3. Deep Nesting
```
❌ Bad: /companies/123/departments/456/teams/789/members/012
✅ Good: /team-members/012?team=789
```
### 4. Ignoring HTTP Status Codes
```
❌ Bad: Always return 200 with error info in body
✅ Good: Use appropriate status codes (404, 400, 500, etc.)
```
### 5. Exposing Internal Structure
```
❌ Bad: /api/database_table_users
✅ Good: /api/users
```
### 6. No Versioning Strategy
```
❌ Bad: Breaking changes without version management
✅ Good: /api/v1/users or Accept: application/vnd.api+json;version=1
```
### 7. Inconsistent Error Responses
```
❌ Bad: Different error formats for different endpoints
✅ Good: Standardized error response structure
```
## Best Practices Summary
1. **Use nouns for resources, not verbs**
2. **Leverage HTTP methods correctly**
3. **Maintain consistent naming conventions**
4. **Implement proper error handling**
5. **Use appropriate HTTP status codes**
6. **Design for cacheability**
7. **Implement security from the start**
8. **Plan for versioning**
9. **Provide comprehensive documentation**
10. **Follow HATEOAS principles when applicable**
## Further Reading
- [RFC 7231 - HTTP/1.1 Semantics and Content](https://tools.ietf.org/html/rfc7231)
- [RFC 6570 - URI Template](https://tools.ietf.org/html/rfc6570)
- [OpenAPI Specification](https://swagger.io/specification/)
- [REST API Design Best Practices](https://www.restapitutorial.com/)
- [HTTP Status Code Definitions](https://httpstatuses.com/)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,309 @@
# Interview System Designer
A comprehensive toolkit for designing, optimizing, and calibrating interview processes. This skill provides tools to create role-specific interview loops, generate competency-based question banks, and analyze hiring data for bias and calibration issues.
## Overview
The Interview System Designer skill includes three powerful Python tools and comprehensive reference materials to help you build fair, effective, and scalable hiring processes:
1. **Interview Loop Designer** - Generate calibrated interview loops for any role and level
2. **Question Bank Generator** - Create competency-based interview questions with scoring rubrics
3. **Hiring Calibrator** - Analyze interview data to detect bias and calibration issues
## Tools
### 1. Interview Loop Designer (`loop_designer.py`)
Generates complete interview loops tailored to specific roles, levels, and teams.
**Features:**
- Role-specific competency mapping (SWE, PM, Designer, Data, DevOps, Leadership)
- Level-appropriate interview rounds (junior through principal)
- Optimized scheduling and time allocation
- Interviewer skill requirements
- Standardized scorecard templates
**Usage:**
```bash
# Basic usage
python3 loop_designer.py --role "Senior Software Engineer" --level senior
# With team and custom competencies
python3 loop_designer.py --role "Product Manager" --level mid --team growth --competencies leadership,strategy,analytics
# Using JSON input file
python3 loop_designer.py --input assets/sample_role_definitions.json --output loops/
# Specify output format
python3 loop_designer.py --role "Staff Data Scientist" --level staff --format json --output data_scientist_loop.json
```
**Input Options:**
- `--role`: Job role title (e.g., "Senior Software Engineer", "Product Manager")
- `--level`: Experience level (junior, mid, senior, staff, principal)
- `--team`: Team or department (optional)
- `--competencies`: Comma-separated list of specific competencies to focus on
- `--input`: JSON file with role definition
- `--output`: Output directory or file path
- `--format`: Output format (json, text, both) - default: both
**Example Output:**
```
Interview Loop Design for Senior Software Engineer (Senior Level)
============================================================
Total Duration: 300 minutes (5h 0m)
Total Rounds: 5
INTERVIEW ROUNDS
----------------------------------------
Round 1: Technical Phone Screen
Duration: 45 minutes
Format: Virtual
Focus Areas: Coding Fundamentals, Problem Solving
Round 2: System Design
Duration: 75 minutes
Format: Collaborative Whiteboard
Focus Areas: System Thinking, Architectural Reasoning
...
```
### 2. Question Bank Generator (`question_bank_generator.py`)
Creates comprehensive interview question banks organized by competency area.
**Features:**
- Competency-based question organization
- Level-appropriate difficulty progression
- Multiple question types (technical, behavioral, situational)
- Detailed scoring rubrics with calibration examples
- Follow-up probes and conversation guides
**Usage:**
```bash
# Generate questions for specific competencies
python3 question_bank_generator.py --role "Frontend Engineer" --competencies react,typescript,system-design
# Create behavioral question bank
python3 question_bank_generator.py --role "Product Manager" --question-types behavioral,leadership --num-questions 15
# Generate questions for multiple levels
python3 question_bank_generator.py --role "DevOps Engineer" --levels junior,mid,senior --output questions/
```
**Input Options:**
- `--role`: Job role title
- `--level`: Experience level (default: senior)
- `--competencies`: Comma-separated list of competencies to focus on
- `--question-types`: Types to include (technical, behavioral, situational)
- `--num-questions`: Number of questions to generate (default: 20)
- `--input`: JSON file with role requirements
- `--output`: Output directory or file path
- `--format`: Output format (json, text, both) - default: both
**Question Types:**
- **Technical**: Coding problems, system design, domain-specific challenges
- **Behavioral**: STAR method questions focusing on past experiences
- **Situational**: Hypothetical scenarios testing decision-making
### 3. Hiring Calibrator (`hiring_calibrator.py`)
Analyzes interview scores to detect bias, calibration issues, and provides recommendations.
**Features:**
- Statistical bias detection across demographics
- Interviewer calibration analysis
- Score distribution and trending analysis
- Specific coaching recommendations
- Comprehensive reporting with actionable insights
**Usage:**
```bash
# Comprehensive analysis
python3 hiring_calibrator.py --input assets/sample_interview_results.json --analysis-type comprehensive
# Focus on specific areas
python3 hiring_calibrator.py --input interview_data.json --analysis-type bias --competencies technical,leadership
# Trend analysis over time
python3 hiring_calibrator.py --input historical_data.json --trend-analysis --period quarterly
```
**Input Options:**
- `--input`: JSON file with interview results data (required)
- `--analysis-type`: Type of analysis (comprehensive, bias, calibration, interviewer, scoring)
- `--competencies`: Comma-separated list of competencies to focus on
- `--trend-analysis`: Enable trend analysis over time
- `--period`: Time period for trends (daily, weekly, monthly, quarterly)
- `--output`: Output file path
- `--format`: Output format (json, text, both) - default: both
**Analysis Types:**
- **Comprehensive**: Full analysis including bias, calibration, and recommendations
- **Bias**: Focus on demographic and interviewer bias patterns
- **Calibration**: Interviewer consistency and agreement analysis
- **Interviewer**: Individual interviewer performance and coaching needs
- **Scoring**: Score distribution and pattern analysis
## Data Formats
### Role Definition Input (JSON)
```json
{
"role": "Senior Software Engineer",
"level": "senior",
"team": "platform",
"competencies": ["system_design", "technical_leadership", "mentoring"],
"requirements": {
"years_experience": "5-8",
"technical_skills": ["Python", "AWS", "Kubernetes"],
"leadership_experience": true
}
}
```
### Interview Results Input (JSON)
```json
[
{
"candidate_id": "candidate_001",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_alice",
"date": "2024-01-15T09:00:00Z",
"scores": {
"coding_fundamentals": 3.5,
"system_design": 4.0,
"technical_leadership": 3.0,
"communication": 3.5
},
"overall_recommendation": "Hire",
"gender": "male",
"ethnicity": "asian",
"years_experience": 6
}
]
```
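Because the calibrator rejects incomplete data (see Troubleshooting below), it can help to validate a results file up front. A sketch of such a check, not part of the shipped scripts:
```python
import json

REQUIRED = {"candidate_id", "interviewer_id", "scores", "date"}

def validate_results(path: str) -> list:
    """Return a list of problems found in an interview results file."""
    with open(path) as fh:
        records = json.load(fh)
    problems = []
    for i, record in enumerate(records):
        missing = REQUIRED - record.keys()
        if missing:
            problems.append(f"record {i}: missing fields {sorted(missing)}")
        elif not isinstance(record["scores"], dict):
            problems.append(f"record {i}: 'scores' must map competency -> score")
    if len(records) < 5:
        problems.append("fewer than 5 interviews: below the statistical minimum")
    return problems

print(validate_results("assets/sample_interview_results.json") or "OK")
```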
## Reference Materials
### Competency Matrix Templates (`references/competency_matrix_templates.md`)
- Comprehensive competency matrices for all engineering roles
- Level-specific expectations (junior through principal)
- Assessment criteria and growth paths
- Customization guidelines for different company stages and industries
### Bias Mitigation Checklist (`references/bias_mitigation_checklist.md`)
- Pre-interview preparation checklist
- Interview process bias prevention strategies
- Real-time bias interruption techniques
- Legal compliance reminders
- Emergency response protocols
### Debrief Facilitation Guide (`references/debrief_facilitation_guide.md`)
- Structured debrief meeting frameworks
- Evidence-based discussion techniques
- Bias interruption strategies
- Decision documentation standards
- Common challenges and solutions
## Sample Data
The `assets/` directory contains sample data for testing:
- `sample_role_definitions.json`: Example role definitions for various positions
- `sample_interview_results.json`: Sample interview data with multiple candidates and interviewers
## Expected Outputs
The `expected_outputs/` directory contains examples of tool outputs:
- Interview loop designs in both JSON and human-readable formats
- Question banks with scoring rubrics and calibration examples
- Calibration analysis reports with bias detection and recommendations
## Best Practices
### Interview Loop Design
1. **Competency Focus**: Align interview rounds with role-critical competencies
2. **Level Calibration**: Adjust expectations and question difficulty based on experience level
3. **Time Optimization**: Balance thoroughness with candidate experience
4. **Interviewer Training**: Ensure interviewers are qualified and calibrated
### Question Bank Development
1. **Evidence-Based**: Focus on observable behaviors and concrete examples
2. **Bias Mitigation**: Use structured questions that minimize subjective interpretation
3. **Calibration**: Include examples of different quality responses for consistency
4. **Continuous Improvement**: Regularly update questions based on predictive validity
### Calibration Analysis
1. **Regular Monitoring**: Analyze hiring data quarterly for bias patterns
2. **Prompt Action**: Address calibration issues immediately with targeted coaching
3. **Data Quality**: Ensure complete and consistent data collection
4. **Legal Compliance**: Monitor for discriminatory patterns and document corrections
## Installation & Setup
No external dependencies required - uses Python 3 standard library only.
```bash
# Clone or download the skill directory
cd interview-system-designer/
# Make scripts executable (optional)
chmod +x *.py
# Test with sample data
python3 loop_designer.py --role "Senior Software Engineer" --level senior
python3 question_bank_generator.py --role "Product Manager" --level mid
python3 hiring_calibrator.py --input assets/sample_interview_results.json
```
## Integration
### With Existing Systems
- **ATS Integration**: Export interview loops as structured data for applicant tracking systems
- **Calendar Systems**: Use scheduling outputs to auto-create interview blocks
- **HR Analytics**: Import calibration reports into broader diversity and inclusion dashboards
### Custom Workflows
- **Batch Processing**: Process multiple roles or historical data sets
- **Automated Reporting**: Schedule regular calibration analysis
- **Custom Competencies**: Extend frameworks with company-specific competencies
## Troubleshooting
### Common Issues
**"Role not found" errors:**
- The tool will map common variations (engineer → software_engineer)
- For custom roles, use the closest standard role and specify custom competencies
**"Insufficient data" errors:**
- Minimum 5 interviews required for statistical analysis
- Ensure interview data includes required fields (candidate_id, interviewer_id, scores, date)
**Missing output files:**
- Check file permissions in output directory
- Ensure adequate disk space
- Verify JSON input file format is valid
### Performance Considerations
- Interview loop generation: < 1 second
- Question bank generation: 1-3 seconds for 20 questions
- Calibration analysis: 1-5 seconds for 50 interviews, scales linearly
## Contributing
To extend this skill:
1. **New Roles**: Add competency frameworks in `_init_competency_frameworks()`
2. **New Question Types**: Extend question templates in respective generators
3. **New Analysis Types**: Add analysis methods to hiring calibrator
4. **Custom Outputs**: Modify formatting functions for different output needs
## License & Usage
This skill is designed for internal company use in hiring process optimization. All bias detection and mitigation features should be reviewed with legal counsel to ensure compliance with local employment laws.
For questions or support, refer to the comprehensive documentation in each script's docstring and the reference materials provided.

View File

@@ -0,0 +1,458 @@
---
name: interview-system-designer
description: This skill should be used when the user asks to "design interview processes", "create hiring pipelines", "calibrate interview loops", "generate interview questions", "design competency matrices", "analyze interviewer bias", "create scoring rubrics", "build question banks", or "optimize hiring systems". Use for designing role-specific interview loops, competency assessments, and hiring calibration systems.
---
# Interview System Designer
Comprehensive interview system design, competency assessment, and hiring process optimization.
## Table of Contents
- [Quick Start](#quick-start)
- [Tools Overview](#tools-overview)
- [Interview Loop Designer](#1-interview-loop-designer)
- [Question Bank Generator](#2-question-bank-generator)
- [Hiring Calibrator](#3-hiring-calibrator)
- [Interview System Workflows](#interview-system-workflows)
- [Role-Specific Loop Design](#role-specific-loop-design)
- [Competency Matrix Development](#competency-matrix-development)
- [Question Bank Creation](#question-bank-creation)
- [Bias Mitigation Framework](#bias-mitigation-framework)
- [Hiring Bar Calibration](#hiring-bar-calibration)
- [Competency Frameworks](#competency-frameworks)
- [Scoring & Calibration](#scoring--calibration)
- [Reference Documentation](#reference-documentation)
- [Industry Standards](#industry-standards)
---
## Quick Start
```bash
# Design a complete interview loop for a senior software engineer role
python loop_designer.py --role "Senior Software Engineer" --level senior --team platform --output loops/
# Generate a comprehensive question bank for a product manager position
python question_bank_generator.py --role "Product Manager" --level senior --competencies leadership,strategy,analytics --output questions/
# Analyze interview calibration across multiple candidates and interviewers
python hiring_calibrator.py --input interview_data.json --output calibration_report.json --analysis-type full
```
---
## Tools Overview
### 1. Interview Loop Designer
Generates calibrated interview loops tailored to specific roles, levels, and teams.
**Input:** Role definition (title, level, team, competency requirements)
**Output:** Complete interview loop with rounds, focus areas, time allocation, scorecard templates
**Key Features:**
- Role-specific competency mapping
- Level-appropriate question difficulty
- Interviewer skill requirements
- Time-optimized scheduling
- Standardized scorecards
**Usage:**
```bash
# Design loop for a specific role
python loop_designer.py --role "Staff Data Scientist" --level staff --team ml-platform
# Generate loop with specific focus areas
python loop_designer.py --role "Engineering Manager" --level senior --competencies leadership,technical,strategy
# Create loop for multiple levels
python loop_designer.py --role "Backend Engineer" --levels junior,mid,senior --output loops/backend/
```
### 2. Question Bank Generator
Creates comprehensive, competency-based interview questions with detailed scoring criteria.
**Input:** Role requirements, competency areas, experience level
**Output:** Structured question bank with scoring rubrics, follow-up probes, and calibration examples
**Key Features:**
- Competency-based question organization
- Level-appropriate difficulty progression
- Behavioral and technical question types
- Anti-bias question design
- Calibration examples (poor/good/great answers)
**Usage:**
```bash
# Generate questions for technical competencies
python question_bank_generator.py --role "Frontend Engineer" --competencies react,typescript,system-design
# Create behavioral question bank
python question_bank_generator.py --role "Product Manager" --question-types behavioral,leadership --output pm_questions/
# Generate questions for all levels
python question_bank_generator.py --role "DevOps Engineer" --levels junior,mid,senior,staff
```
### 3. Hiring Calibrator
Analyzes interview scores to detect bias, calibration issues, and recommends improvements.
**Input:** Interview results data (candidate scores, interviewer feedback, demographics)
**Output:** Calibration analysis, bias detection report, interviewer coaching recommendations
**Key Features:**
- Statistical bias detection
- Interviewer calibration analysis
- Score distribution analysis
- Recommendation engine
- Trend tracking over time
**Usage:**
```bash
# Analyze calibration across all interviews
python hiring_calibrator.py --input interview_results.json --analysis-type comprehensive
# Focus on specific competency areas
python hiring_calibrator.py --input data.json --competencies technical,leadership --output bias_report.json
# Track calibration trends over time
python hiring_calibrator.py --input historical_data.json --trend-analysis --period quarterly
```
---
## Interview System Workflows
### Role-Specific Loop Design
#### Software Engineering Roles
**Junior/Mid Software Engineer (2-4 years)**
- **Duration:** 3-4 hours across 3-4 rounds
- **Focus Areas:** Coding fundamentals, debugging, system understanding, growth mindset
- **Rounds:**
1. Technical Phone Screen (45min) - Coding fundamentals, algorithms
2. Coding Deep Dive (60min) - Problem-solving, code quality, testing
3. System Design Basics (45min) - Component interaction, basic scalability
4. Behavioral & Values (30min) - Team collaboration, learning agility
**Senior Software Engineer (5-8 years)**
- **Duration:** 4-5 hours across 4-5 rounds
- **Focus Areas:** System design, technical leadership, mentoring capability, domain expertise
- **Rounds:**
1. Technical Phone Screen (45min) - Advanced algorithms, optimization
2. System Design (60min) - Scalability, trade-offs, architectural decisions
3. Coding Excellence (60min) - Code quality, testing strategies, refactoring
4. Technical Leadership (45min) - Mentoring, technical decisions, cross-team collaboration
5. Behavioral & Culture (30min) - Leadership examples, conflict resolution
**Staff+ Engineer (8+ years)**
- **Duration:** 5-6 hours across 5-6 rounds
- **Focus Areas:** Architectural vision, organizational impact, technical strategy, cross-functional leadership
- **Rounds:**
1. Technical Phone Screen (45min) - System architecture, complex problem-solving
2. Architecture Design (90min) - Large-scale systems, technology choices, evolution patterns
3. Technical Strategy (60min) - Technical roadmaps, technology adoption, risk assessment
4. Leadership & Influence (60min) - Cross-team impact, technical vision, stakeholder management
5. Coding & Best Practices (45min) - Code quality standards, development processes
6. Cultural & Strategic Fit (30min) - Company values, strategic thinking
#### Product Management Roles
**Product Manager (3-6 years)**
- **Duration:** 3-4 hours across 4 rounds
- **Focus Areas:** Product sense, analytical thinking, stakeholder management, execution
- **Rounds:**
1. Product Sense (60min) - Feature prioritization, user empathy, market understanding
2. Analytical Thinking (45min) - Data interpretation, metrics design, experimentation
3. Execution & Process (45min) - Project management, cross-functional collaboration
4. Behavioral & Leadership (30min) - Stakeholder management, conflict resolution
**Senior Product Manager (6-10 years)**
- **Duration:** 4-5 hours across 4-5 rounds
- **Focus Areas:** Product strategy, team leadership, business impact, market analysis
- **Rounds:**
1. Product Strategy (75min) - Market analysis, competitive positioning, roadmap planning
2. Leadership & Influence (60min) - Team building, stakeholder management, decision-making
3. Data & Analytics (45min) - Advanced metrics, experimentation design, business intelligence
4. Technical Collaboration (45min) - Technical trade-offs, engineering partnership
5. Case Study Presentation (45min) - Past impact, lessons learned, strategic thinking
#### Design Roles
**UX Designer (2-5 years)**
- **Duration:** 3-4 hours across 3-4 rounds
- **Focus Areas:** Design process, user research, visual design, collaboration
- **Rounds:**
1. Portfolio Review (60min) - Design process, problem-solving approach, visual skills
2. Design Challenge (90min) - User-centered design, wireframing, iteration
3. Collaboration & Process (45min) - Cross-functional work, feedback incorporation
4. Behavioral & Values (30min) - User advocacy, creative problem-solving
**Senior UX Designer (5+ years)**
- **Duration:** 4-5 hours across 4-5 rounds
- **Focus Areas:** Design leadership, system thinking, research methodology, business impact
- **Rounds:**
1. Portfolio Deep Dive (75min) - Design impact, methodology, leadership examples
2. Design System Challenge (90min) - Systems thinking, scalability, consistency
3. Research & Strategy (60min) - User research methods, data-driven design decisions
4. Leadership & Mentoring (45min) - Design team leadership, process improvement
5. Business & Strategy (30min) - Design's business impact, stakeholder management
### Competency Matrix Development
#### Technical Competencies
**Software Engineering**
- **Coding Proficiency:** Algorithm design, data structures, language expertise
- **System Design:** Architecture patterns, scalability, performance optimization
- **Testing & Quality:** Unit testing, integration testing, code review practices
- **DevOps & Tools:** CI/CD, monitoring, debugging, development workflows
**Data Science & Analytics**
- **Statistical Analysis:** Statistical methods, hypothesis testing, experimental design
- **Machine Learning:** Algorithm selection, model evaluation, feature engineering
- **Data Engineering:** ETL processes, data pipeline design, data quality
- **Business Intelligence:** Metrics design, dashboard creation, stakeholder communication
**Product Management**
- **Product Strategy:** Market analysis, competitive research, roadmap planning
- **User Research:** User interviews, usability testing, persona development
- **Data Analysis:** Metrics interpretation, A/B testing, cohort analysis
- **Technical Understanding:** API design, database concepts, system architecture
#### Behavioral Competencies
**Leadership & Influence**
- **Team Building:** Hiring, onboarding, team culture development
- **Mentoring & Coaching:** Skill development, career guidance, feedback delivery
- **Strategic Thinking:** Long-term planning, vision setting, decision-making frameworks
- **Change Management:** Process improvement, organizational change, resistance handling
**Communication & Collaboration**
- **Stakeholder Management:** Expectation setting, conflict resolution, alignment building
- **Cross-Functional Partnership:** Engineering-Product-Design collaboration
- **Presentation Skills:** Technical communication, executive briefings, documentation
- **Active Listening:** Empathy, question asking, perspective taking
**Problem-Solving & Innovation**
- **Analytical Thinking:** Problem decomposition, root cause analysis, hypothesis formation
- **Creative Problem-Solving:** Alternative solution generation, constraint navigation
- **Learning Agility:** Skill acquisition, adaptation to change, knowledge transfer
- **Risk Assessment:** Uncertainty navigation, trade-off analysis, mitigation planning
### Question Bank Creation
#### Technical Questions by Level
**Junior Level Questions**
- **Coding:** "Implement a function to find the second largest element in an array"
- **System Design:** "How would you design a simple URL shortener for 1000 users?"
- **Debugging:** "Walk through how you would debug a slow-loading web page"
**Senior Level Questions**
- **Architecture:** "Design a real-time chat system supporting 1M concurrent users"
- **Leadership:** "Describe how you would onboard a new team member in your area"
- **Trade-offs:** "Compare microservices vs monolith for a rapidly scaling startup"
**Staff+ Level Questions**
- **Strategy:** "How would you evaluate and introduce a new programming language to the organization?"
- **Influence:** "Describe a time you drove technical consensus across multiple teams"
- **Vision:** "How do you balance technical debt against feature development?"
#### Behavioral Questions Framework
**STAR Method Implementation**
- **Situation:** Context and background of the scenario
- **Task:** Specific challenge or goal that needed to be addressed
- **Action:** Concrete steps taken to address the challenge
- **Result:** Measurable outcomes and lessons learned
**Sample Questions:**
- "Tell me about a time you had to influence a decision without formal authority"
- "Describe a situation where you had to deliver difficult feedback to a colleague"
- "Give an example of when you had to adapt your communication style for different audiences"
- "Walk me through a time when you had to make a decision with incomplete information"
### Bias Mitigation Framework
#### Structural Bias Prevention
**Interview Panel Composition**
- Diverse interviewer panels (gender, ethnicity, experience level)
- Rotating panel assignments to prevent pattern bias
- Anonymous resume screening for initial phone screens
- Standardized question sets to ensure consistency
**Process Standardization**
- Structured interview guides with required probing questions
- Consistent time allocation across all candidates
- Standardized evaluation criteria and scoring rubrics
- Required justification for all scoring decisions
#### Cognitive Bias Recognition
**Common Interview Biases**
- **Halo Effect:** One strong impression influences overall assessment
- **Confirmation Bias:** Seeking information that confirms initial impressions
- **Similarity Bias:** Favoring candidates with similar backgrounds/experiences
- **Contrast Effect:** Comparing candidates against each other rather than standard
- **Anchoring Bias:** Over-relying on first piece of information received
**Mitigation Strategies**
- Pre-interview bias awareness training for all interviewers
- Structured debrief sessions with independent score recording
- Regular calibration sessions with example candidate discussions
- Statistical monitoring of scoring patterns by interviewer and demographic
### Hiring Bar Calibration
#### Calibration Methodology
**Regular Calibration Sessions**
- Monthly interviewer calibration meetings
- Shadow interviewing for new interviewers (minimum 5 sessions)
- Quarterly cross-team calibration reviews
- Annual hiring bar review and adjustment process
**Performance Tracking**
- New hire performance correlation with interview scores
- Interviewer accuracy tracking (prediction vs actual performance)
- False positive/negative analysis
- Offer acceptance rate analysis by interviewer
**Feedback Loops**
- Six-month new hire performance reviews
- Manager feedback on interview process effectiveness
- Candidate experience surveys and feedback integration
- Continuous process improvement based on data analysis
---
## Competency Frameworks
### Engineering Competency Levels
#### Level 1-2: Individual Contributor (Junior/Mid)
- **Technical Skills:** Language proficiency, testing basics, code review participation
- **Problem Solving:** Structured approach to debugging, logical thinking
- **Communication:** Clear status updates, effective question asking
- **Learning:** Proactive skill development, mentorship seeking
#### Level 3-4: Senior Individual Contributor
- **Technical Leadership:** Architecture decisions, code quality advocacy
- **Mentoring:** Junior developer guidance, knowledge sharing
- **Project Ownership:** End-to-end feature delivery, stakeholder communication
- **Innovation:** Process improvement, technology evaluation
#### Level 5-6: Staff+ Engineer
- **Organizational Impact:** Cross-team technical leadership, strategic planning
- **Technical Vision:** Long-term architectural planning, technology roadmap
- **People Development:** Team growth, hiring contribution, culture building
- **External Influence:** Industry contribution, thought leadership
### Product Management Competency Levels
#### Level 1-2: Associate/Product Manager
- **Product Execution:** Feature specification, requirements gathering
- **User Focus:** User research participation, feedback collection
- **Data Analysis:** Basic metrics analysis, experiment interpretation
- **Stakeholder Management:** Cross-functional collaboration, communication
#### Level 3-4: Senior Product Manager
- **Strategic Thinking:** Market analysis, competitive positioning
- **Leadership:** Cross-functional team leadership, decision making
- **Business Impact:** Revenue impact, market share growth
- **Process Innovation:** Product development process improvement
#### Level 5-6: Principal Product Manager
- **Vision Setting:** Product strategy, market direction
- **Organizational Influence:** Executive communication, team building
- **Innovation Leadership:** New market creation, disruptive thinking
- **Talent Development:** PM team growth, hiring leadership
---
## Scoring & Calibration
### Scoring Rubric Framework
#### 4-Point Scoring Scale
- **4 - Exceeds Expectations:** Demonstrates mastery beyond required level
- **3 - Meets Expectations:** Solid performance meeting all requirements
- **2 - Partially Meets:** Shows potential but has development areas
- **1 - Does Not Meet:** Significant gaps in required competencies
#### Competency-Specific Scoring
**Technical Competencies**
- Code Quality (4): Clean, maintainable, well-tested code with excellent documentation
- Code Quality (3): Functional code with good structure and basic testing
- Code Quality (2): Working code with some structural issues or missing tests
- Code Quality (1): Non-functional or poorly structured code with significant issues
**Leadership Competencies**
- Team Influence (4): Drives team success, develops others, creates lasting positive change
- Team Influence (3): Contributes positively to team dynamics and outcomes
- Team Influence (2): Shows leadership potential with some effective examples
- Team Influence (1): Limited evidence of leadership ability or negative team impact
### Calibration Standards
#### Statistical Benchmarks
- Target score distribution: 20% (4s), 40% (3s), 30% (2s), 10% (1s)
- Interviewer consistency target: <0.5 standard deviation from team average
- Pass rate target: 15-25% for most roles (varies by level and market conditions)
- Time to hire target: 2-3 weeks from first interview to offer
#### Quality Metrics
- New hire 6-month performance correlation: >0.6 with interview scores
- Interviewer agreement rate: >80% within 1 point on final recommendations
- Candidate experience satisfaction: >4.0/5.0 average rating
- Offer acceptance rate: >85% for preferred candidates
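As one concrete illustration, the interviewer-consistency target above can be checked by comparing each interviewer's mean score against the team mean, in standard-deviation units. This is a sketch of the idea only, not the actual implementation inside `hiring_calibrator.py`:
```python
from collections import defaultdict
from statistics import mean, pstdev

def interviewer_consistency(results: list) -> dict:
    """Map interviewer_id to deviation from the team mean, in std-dev units."""
    per_interview = [(r["interviewer_id"], mean(r["scores"].values())) for r in results]
    team_scores = [score for _, score in per_interview]
    team_mean, team_sd = mean(team_scores), pstdev(team_scores) or 1.0
    by_interviewer = defaultdict(list)
    for interviewer, score in per_interview:
        by_interviewer[interviewer].append(score)
    return {i: abs(mean(s) - team_mean) / team_sd for i, s in by_interviewer.items()}

# Flag anyone outside the <0.5 standard-deviation consistency target:
# flagged = {i: d for i, d in interviewer_consistency(data).items() if d >= 0.5}
```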
---
## Reference Documentation
### Interview Templates
- Role-specific interview guides and question banks
- Scorecard templates for consistent evaluation
- Debrief facilitation guides for effective team discussions
### Bias Mitigation Resources
- Unconscious bias training materials and exercises
- Structured interviewing best practices checklist
- Demographic diversity tracking and reporting templates
### Calibration Tools
- Interview performance correlation analysis templates
- Interviewer coaching and development frameworks
- Hiring pipeline metrics and dashboard specifications
---
## Industry Standards
### Best Practices Integration
- Google's structured interviewing methodology
- Amazon's Leadership Principles assessment framework
- Microsoft's competency-based evaluation system
- Netflix's culture fit assessment approach
### Compliance & Legal Considerations
- EEOC compliance requirements and documentation
- ADA accommodation procedures and guidelines
- International hiring law considerations
- Privacy and data protection requirements (GDPR, CCPA)
### Continuous Improvement Framework
- Regular process auditing and refinement cycles
- Industry benchmarking and comparative analysis
- Technology integration for interview optimization
- Candidate experience enhancement initiatives
This comprehensive interview system design framework provides the structure and tools necessary to build fair, effective, and scalable hiring processes that consistently identify top talent while minimizing bias and maximizing candidate experience.

View File

@@ -0,0 +1,382 @@
[
{
"candidate_id": "candidate_001",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_alice",
"date": "2024-01-15T09:00:00Z",
"scores": {
"coding_fundamentals": 3.5,
"system_design": 4.0,
"technical_leadership": 3.0,
"communication": 3.5,
"problem_solving": 4.0
},
"overall_recommendation": "Hire",
"gender": "male",
"ethnicity": "asian",
"years_experience": 6,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_001",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_bob",
"date": "2024-01-15T11:00:00Z",
"scores": {
"system_design": 3.5,
"technical_leadership": 3.5,
"mentoring": 3.0,
"cross_team_collaboration": 4.0,
"strategic_thinking": 3.5
},
"overall_recommendation": "Hire",
"gender": "male",
"ethnicity": "asian",
"years_experience": 6,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_002",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_alice",
"date": "2024-01-16T09:00:00Z",
"scores": {
"coding_fundamentals": 2.5,
"system_design": 3.0,
"technical_leadership": 2.0,
"communication": 3.0,
"problem_solving": 3.0
},
"overall_recommendation": "No Hire",
"gender": "female",
"ethnicity": "hispanic",
"years_experience": 5,
"university_tier": "tier_2",
"previous_company_size": "startup"
},
{
"candidate_id": "candidate_002",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_charlie",
"date": "2024-01-16T11:00:00Z",
"scores": {
"system_design": 2.0,
"technical_leadership": 2.5,
"mentoring": 2.0,
"cross_team_collaboration": 3.0,
"strategic_thinking": 2.5
},
"overall_recommendation": "No Hire",
"gender": "female",
"ethnicity": "hispanic",
"years_experience": 5,
"university_tier": "tier_2",
"previous_company_size": "startup"
},
{
"candidate_id": "candidate_003",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_david",
"date": "2024-01-17T14:00:00Z",
"scores": {
"coding_fundamentals": 4.0,
"system_design": 3.5,
"technical_leadership": 4.0,
"communication": 4.0,
"problem_solving": 3.5
},
"overall_recommendation": "Strong Hire",
"gender": "male",
"ethnicity": "white",
"years_experience": 8,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_003",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_alice",
"date": "2024-01-17T16:00:00Z",
"scores": {
"system_design": 4.0,
"technical_leadership": 4.0,
"mentoring": 3.5,
"cross_team_collaboration": 4.0,
"strategic_thinking": 3.5
},
"overall_recommendation": "Hire",
"gender": "male",
"ethnicity": "white",
"years_experience": 8,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_004",
"role": "Product Manager",
"interviewer_id": "interviewer_emma",
"date": "2024-01-18T10:00:00Z",
"scores": {
"product_strategy": 3.0,
"user_research": 3.5,
"data_analysis": 4.0,
"stakeholder_management": 3.0,
"communication": 3.5
},
"overall_recommendation": "Hire",
"gender": "female",
"ethnicity": "black",
"years_experience": 4,
"university_tier": "tier_2",
"previous_company_size": "medium"
},
{
"candidate_id": "candidate_005",
"role": "Product Manager",
"interviewer_id": "interviewer_frank",
"date": "2024-01-19T13:00:00Z",
"scores": {
"product_strategy": 2.5,
"user_research": 2.0,
"data_analysis": 3.0,
"stakeholder_management": 2.5,
"communication": 3.0
},
"overall_recommendation": "No Hire",
"gender": "male",
"ethnicity": "white",
"years_experience": 3,
"university_tier": "tier_3",
"previous_company_size": "startup"
},
{
"candidate_id": "candidate_006",
"role": "Junior Software Engineer",
"interviewer_id": "interviewer_alice",
"date": "2024-01-20T09:00:00Z",
"scores": {
"coding_fundamentals": 3.0,
"debugging": 3.5,
"testing_basics": 3.0,
"collaboration": 4.0,
"learning_agility": 3.5
},
"overall_recommendation": "Hire",
"gender": "female",
"ethnicity": "asian",
"years_experience": 1,
"university_tier": "bootcamp",
"previous_company_size": "none"
},
{
"candidate_id": "candidate_007",
"role": "Junior Software Engineer",
"interviewer_id": "interviewer_bob",
"date": "2024-01-21T10:30:00Z",
"scores": {
"coding_fundamentals": 2.0,
"debugging": 2.5,
"testing_basics": 2.0,
"collaboration": 3.0,
"learning_agility": 3.0
},
"overall_recommendation": "No Hire",
"gender": "male",
"ethnicity": "hispanic",
"years_experience": 0,
"university_tier": "tier_2",
"previous_company_size": "none"
},
{
"candidate_id": "candidate_008",
"role": "Staff Frontend Engineer",
"interviewer_id": "interviewer_grace",
"date": "2024-01-22T14:00:00Z",
"scores": {
"frontend_architecture": 4.0,
"system_design": 4.0,
"technical_leadership": 4.0,
"team_building": 3.5,
"strategic_thinking": 3.5
},
"overall_recommendation": "Strong Hire",
"gender": "female",
"ethnicity": "white",
"years_experience": 9,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_008",
"role": "Staff Frontend Engineer",
"interviewer_id": "interviewer_henry",
"date": "2024-01-22T16:00:00Z",
"scores": {
"frontend_architecture": 3.5,
"technical_leadership": 4.0,
"team_building": 4.0,
"cross_functional_collaboration": 4.0,
"organizational_impact": 3.5
},
"overall_recommendation": "Hire",
"gender": "female",
"ethnicity": "white",
"years_experience": 9,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_009",
"role": "Data Scientist",
"interviewer_id": "interviewer_ivan",
"date": "2024-01-23T11:00:00Z",
"scores": {
"statistical_analysis": 3.5,
"machine_learning": 4.0,
"data_engineering": 3.0,
"business_acumen": 3.5,
"communication": 3.0
},
"overall_recommendation": "Hire",
"gender": "male",
"ethnicity": "indian",
"years_experience": 5,
"university_tier": "tier_1",
"previous_company_size": "medium"
},
{
"candidate_id": "candidate_010",
"role": "DevOps Engineer",
"interviewer_id": "interviewer_jane",
"date": "2024-01-24T15:00:00Z",
"scores": {
"infrastructure_automation": 3.5,
"ci_cd_design": 4.0,
"monitoring_observability": 3.0,
"security_implementation": 3.5,
"incident_management": 4.0
},
"overall_recommendation": "Hire",
"gender": "female",
"ethnicity": "black",
"years_experience": 6,
"university_tier": "tier_2",
"previous_company_size": "startup"
},
{
"candidate_id": "candidate_011",
"role": "UX Designer",
"interviewer_id": "interviewer_karl",
"date": "2024-01-25T10:00:00Z",
"scores": {
"design_process": 4.0,
"user_research": 3.5,
"design_systems": 4.0,
"cross_functional_collaboration": 3.5,
"design_leadership": 3.0
},
"overall_recommendation": "Hire",
"gender": "non_binary",
"ethnicity": "white",
"years_experience": 7,
"university_tier": "tier_1",
"previous_company_size": "medium"
},
{
"candidate_id": "candidate_012",
"role": "Engineering Manager",
"interviewer_id": "interviewer_lisa",
"date": "2024-01-26T13:30:00Z",
"scores": {
"people_leadership": 4.0,
"technical_background": 3.5,
"strategic_thinking": 3.5,
"performance_management": 4.0,
"cross_functional_leadership": 3.5
},
"overall_recommendation": "Hire",
"gender": "male",
"ethnicity": "white",
"years_experience": 8,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_013",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_alice",
"date": "2024-01-27T09:00:00Z",
"scores": {
"coding_fundamentals": 4.0,
"system_design": 4.0,
"technical_leadership": 4.0,
"communication": 4.0,
"problem_solving": 4.0
},
"overall_recommendation": "Strong Hire",
"gender": "female",
"ethnicity": "asian",
"years_experience": 7,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_013",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_charlie",
"date": "2024-01-27T11:00:00Z",
"scores": {
"system_design": 3.5,
"technical_leadership": 3.5,
"mentoring": 4.0,
"cross_team_collaboration": 4.0,
"strategic_thinking": 3.5
},
"overall_recommendation": "Hire",
"gender": "female",
"ethnicity": "asian",
"years_experience": 7,
"university_tier": "tier_1",
"previous_company_size": "large"
},
{
"candidate_id": "candidate_014",
"role": "Senior Software Engineer",
"interviewer_id": "interviewer_david",
"date": "2024-01-28T14:00:00Z",
"scores": {
"coding_fundamentals": 1.5,
"system_design": 2.0,
"technical_leadership": 1.0,
"communication": 2.0,
"problem_solving": 2.0
},
"overall_recommendation": "Strong No Hire",
"gender": "male",
"ethnicity": "white",
"years_experience": 4,
"university_tier": "tier_3",
"previous_company_size": "startup"
},
{
"candidate_id": "candidate_015",
"role": "Product Manager",
"interviewer_id": "interviewer_emma",
"date": "2024-01-29T11:00:00Z",
"scores": {
"product_strategy": 4.0,
"user_research": 3.5,
"data_analysis": 4.0,
"stakeholder_management": 4.0,
"communication": 3.5
},
"overall_recommendation": "Strong Hire",
"gender": "male",
"ethnicity": "black",
"years_experience": 5,
"university_tier": "tier_2",
"previous_company_size": "medium"
}
]

View File

@@ -0,0 +1,170 @@
[
{
"role": "Senior Software Engineer",
"level": "senior",
"team": "platform",
"department": "engineering",
"competencies": [
"system_design",
"coding_fundamentals",
"technical_leadership",
"mentoring",
"cross_team_collaboration"
],
"requirements": {
"years_experience": "5-8",
"technical_skills": ["Python", "Java", "Docker", "Kubernetes", "AWS"],
"leadership_experience": true,
"mentoring_required": true
},
"hiring_bar": "high",
"interview_focus": ["technical_depth", "system_architecture", "leadership_potential"]
},
{
"role": "Product Manager",
"level": "mid",
"team": "growth",
"department": "product",
"competencies": [
"product_strategy",
"user_research",
"data_analysis",
"stakeholder_management",
"cross_functional_leadership"
],
"requirements": {
"years_experience": "3-5",
"domain_knowledge": ["user_analytics", "experimentation", "product_metrics"],
"leadership_experience": false,
"technical_background": "preferred"
},
"hiring_bar": "medium-high",
"interview_focus": ["product_sense", "analytical_thinking", "execution_ability"]
},
{
"role": "Staff Frontend Engineer",
"level": "staff",
"team": "consumer",
"department": "engineering",
"competencies": [
"frontend_architecture",
"system_design",
"technical_leadership",
"team_building",
"cross_functional_collaboration"
],
"requirements": {
"years_experience": "8+",
"technical_skills": ["React", "TypeScript", "GraphQL", "Webpack", "Performance Optimization"],
"leadership_experience": true,
"architecture_experience": true
},
"hiring_bar": "very-high",
"interview_focus": ["architectural_vision", "technical_strategy", "organizational_impact"]
},
{
"role": "Data Scientist",
"level": "mid",
"team": "ml_platform",
"department": "data",
"competencies": [
"statistical_analysis",
"machine_learning",
"data_engineering",
"business_acumen",
"communication"
],
"requirements": {
"years_experience": "3-6",
"technical_skills": ["Python", "SQL", "TensorFlow", "Spark", "Statistics"],
"domain_knowledge": ["ML algorithms", "experimentation", "data_pipelines"],
"leadership_experience": false
},
"hiring_bar": "high",
"interview_focus": ["technical_depth", "problem_solving", "business_impact"]
},
{
"role": "DevOps Engineer",
"level": "senior",
"team": "infrastructure",
"department": "engineering",
"competencies": [
"infrastructure_automation",
"ci_cd_design",
"monitoring_observability",
"security_implementation",
"incident_management"
],
"requirements": {
"years_experience": "5-7",
"technical_skills": ["Kubernetes", "Terraform", "AWS", "Docker", "Monitoring"],
"security_background": "required",
"leadership_experience": "preferred"
},
"hiring_bar": "high",
"interview_focus": ["system_reliability", "automation_expertise", "operational_excellence"]
},
{
"role": "UX Designer",
"level": "senior",
"team": "design_systems",
"department": "design",
"competencies": [
"design_process",
"user_research",
"design_systems",
"cross_functional_collaboration",
"design_leadership"
],
"requirements": {
"years_experience": "5-8",
"portfolio_quality": "high",
"research_experience": true,
"systems_thinking": true
},
"hiring_bar": "high",
"interview_focus": ["design_process", "systems_thinking", "user_advocacy"]
},
{
"role": "Engineering Manager",
"level": "senior",
"team": "backend",
"department": "engineering",
"competencies": [
"people_leadership",
"technical_background",
"strategic_thinking",
"performance_management",
"cross_functional_leadership"
],
"requirements": {
"years_experience": "6-10",
"management_experience": "2+ years",
"technical_background": "required",
"hiring_experience": true
},
"hiring_bar": "very-high",
"interview_focus": ["people_leadership", "technical_judgment", "organizational_impact"]
},
{
"role": "Junior Software Engineer",
"level": "junior",
"team": "web",
"department": "engineering",
"competencies": [
"coding_fundamentals",
"debugging",
"testing_basics",
"collaboration",
"learning_agility"
],
"requirements": {
"years_experience": "0-2",
"technical_skills": ["JavaScript", "HTML/CSS", "Git", "Basic Algorithms"],
"education": "CS degree or bootcamp",
"growth_mindset": true
},
"hiring_bar": "medium",
"interview_focus": ["coding_ability", "problem_solving", "potential_assessment"]
}
]

View File

@@ -0,0 +1,622 @@
{
"role": "Product Manager",
"level": "senior",
"competencies": [
"strategy",
"analytics",
"business_strategy",
"product_strategy",
"stakeholder_management",
"p&l_responsibility",
"leadership",
"team_leadership",
"user_research",
"data_analysis"
],
"question_types": [
"technical",
"behavioral",
"situational"
],
"generated_at": "2026-02-16T13:27:41.303329",
"total_questions": 20,
"questions": [
{
"question": "What challenges have you faced related to p&l responsibility and how did you overcome them?",
"competency": "p&l_responsibility",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
},
{
"question": "Analyze conversion funnel data to identify the biggest drop-off point and propose solutions.",
"competency": "data_analysis",
"type": "analytical",
"difficulty": "medium",
"time_limit": 45,
"key_concepts": [
"funnel_analysis",
"conversion_optimization",
"statistical_significance"
]
},
{
"question": "What challenges have you faced related to team leadership and how did you overcome them?",
"competency": "team_leadership",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
},
{
"question": "Design a go-to-market strategy for a new B2B SaaS product entering a competitive market.",
"competency": "product_strategy",
"type": "strategic",
"difficulty": "hard",
"time_limit": 60,
"key_concepts": [
"market_analysis",
"competitive_positioning",
"pricing_strategy",
"channel_strategy"
]
},
{
"question": "What challenges have you faced related to business strategy and how did you overcome them?",
"competency": "business_strategy",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
},
{
"question": "Describe your experience with business strategy in your current or previous role.",
"competency": "business_strategy",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "Describe your experience with team leadership in your current or previous role.",
"competency": "team_leadership",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "Describe a situation where you had to influence someone without having direct authority over them.",
"competency": "leadership",
"type": "behavioral",
"method": "STAR",
"focus_areas": [
"influence",
"persuasion",
"stakeholder_management"
]
},
{
"question": "Given a dataset of user activities, calculate the daily active users for the past month.",
"competency": "data_analysis",
"type": "analytical",
"difficulty": "easy",
"time_limit": 30,
"key_concepts": [
"sql_basics",
"date_functions",
"aggregation"
]
},
{
"question": "Describe your experience with analytics in your current or previous role.",
"competency": "analytics",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "How would you prioritize features for a mobile app with limited engineering resources?",
"competency": "product_strategy",
"type": "case_study",
"difficulty": "medium",
"time_limit": 45,
"key_concepts": [
"prioritization_frameworks",
"resource_allocation",
"impact_estimation"
]
},
{
"question": "Describe your experience with stakeholder management in your current or previous role.",
"competency": "stakeholder_management",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "What challenges have you faced related to stakeholder management and how did you overcome them?",
"competency": "stakeholder_management",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
},
{
"question": "What challenges have you faced related to user research and how did you overcome them?",
"competency": "user_research",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
},
{
"question": "What challenges have you faced related to strategy and how did you overcome them?",
"competency": "strategy",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
},
{
"question": "Describe your experience with user research in your current or previous role.",
"competency": "user_research",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "Describe your experience with p&l responsibility in your current or previous role.",
"competency": "p&l_responsibility",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "Describe your experience with strategy in your current or previous role.",
"competency": "strategy",
"type": "experience",
"focus_areas": [
"experience_depth",
"practical_application"
]
},
{
"question": "Tell me about a time when you had to lead a team through a significant change or challenge.",
"competency": "leadership",
"type": "behavioral",
"method": "STAR",
"focus_areas": [
"change_management",
"team_motivation",
"communication"
]
},
{
"question": "What challenges have you faced related to analytics and how did you overcome them?",
"competency": "analytics",
"type": "challenge_based",
"focus_areas": [
"problem_solving",
"learning_from_experience"
]
}
],
"scoring_rubrics": {
"question_8": {
"question": "Describe a situation where you had to influence someone without having direct authority over them.",
"competency": "leadership",
"type": "behavioral",
"scoring_criteria": {
"situation_clarity": {
"4": "Clear, specific situation with relevant context and stakes",
"3": "Good situation description with adequate context",
"2": "Situation described but lacks some specifics",
"1": "Vague or unclear situation description"
},
"action_quality": {
"4": "Specific, thoughtful actions showing strong competency",
"3": "Good actions demonstrating competency",
"2": "Adequate actions but could be stronger",
"1": "Weak or inappropriate actions"
},
"result_impact": {
"4": "Significant positive impact with measurable results",
"3": "Good positive impact with clear outcomes",
"2": "Some positive impact demonstrated",
"1": "Little or no positive impact shown"
},
"self_awareness": {
"4": "Excellent self-reflection, learns from experience, acknowledges growth areas",
"3": "Good self-awareness and learning orientation",
"2": "Some self-reflection demonstrated",
"1": "Limited self-awareness or reflection"
}
},
"weight": "high",
"time_limit": 30
},
"question_19": {
"question": "Tell me about a time when you had to lead a team through a significant change or challenge.",
"competency": "leadership",
"type": "behavioral",
"scoring_criteria": {
"situation_clarity": {
"4": "Clear, specific situation with relevant context and stakes",
"3": "Good situation description with adequate context",
"2": "Situation described but lacks some specifics",
"1": "Vague or unclear situation description"
},
"action_quality": {
"4": "Specific, thoughtful actions showing strong competency",
"3": "Good actions demonstrating competency",
"2": "Adequate actions but could be stronger",
"1": "Weak or inappropriate actions"
},
"result_impact": {
"4": "Significant positive impact with measurable results",
"3": "Good positive impact with clear outcomes",
"2": "Some positive impact demonstrated",
"1": "Little or no positive impact shown"
},
"self_awareness": {
"4": "Excellent self-reflection, learns from experience, acknowledges growth areas",
"3": "Good self-awareness and learning orientation",
"2": "Some self-reflection demonstrated",
"1": "Limited self-awareness or reflection"
}
},
"weight": "high",
"time_limit": 30
}
},
"follow_up_probes": {
"question_1": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_2": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_3": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_4": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_5": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_6": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_7": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_8": [
"What would you do differently if you faced this situation again?",
"How did you handle team members who were resistant to the change?",
"What metrics did you use to measure success?",
"How did you communicate progress to stakeholders?",
"What did you learn from this experience?"
],
"question_9": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_10": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_11": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_12": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_13": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_14": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_15": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_16": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_17": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_18": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
],
"question_19": [
"What would you do differently if you faced this situation again?",
"How did you handle team members who were resistant to the change?",
"What metrics did you use to measure success?",
"How did you communicate progress to stakeholders?",
"What did you learn from this experience?"
],
"question_20": [
"Can you provide more specific details about your approach?",
"What would you do differently if you had to do this again?",
"What challenges did you face and how did you overcome them?"
]
},
"calibration_examples": {
"question_1": {
"question": "What challenges have you faced related to p&l responsibility and how did you overcome them?",
"competency": "p&l_responsibility",
"sample_answers": {
"poor_answer": {
"answer": "Sample poor answer for p&l_responsibility question - lacks detail, specificity, or demonstrates weak competency",
"score": "1-2",
"issues": [
"Vague response",
"Limited evidence of competency",
"Poor structure"
]
},
"good_answer": {
"answer": "Sample good answer for p&l_responsibility question - adequate detail, demonstrates competency clearly",
"score": "3",
"strengths": [
"Clear structure",
"Demonstrates competency",
"Adequate detail"
]
},
"great_answer": {
"answer": "Sample excellent answer for p&l_responsibility question - exceptional detail, strong evidence, goes above and beyond",
"score": "4",
"strengths": [
"Exceptional detail",
"Strong evidence",
"Strategic thinking",
"Goes beyond requirements"
]
}
},
"scoring_rationale": {
"key_indicators": "Look for evidence of p&l responsibility competency",
"red_flags": "Vague answers, lack of specifics, negative outcomes without learning",
"green_flags": "Specific examples, clear impact, demonstrates growth and learning"
}
},
"question_2": {
"question": "Analyze conversion funnel data to identify the biggest drop-off point and propose solutions.",
"competency": "data_analysis",
"sample_answers": {
"poor_answer": {
"answer": "Sample poor answer for data_analysis question - lacks detail, specificity, or demonstrates weak competency",
"score": "1-2",
"issues": [
"Vague response",
"Limited evidence of competency",
"Poor structure"
]
},
"good_answer": {
"answer": "Sample good answer for data_analysis question - adequate detail, demonstrates competency clearly",
"score": "3",
"strengths": [
"Clear structure",
"Demonstrates competency",
"Adequate detail"
]
},
"great_answer": {
"answer": "Sample excellent answer for data_analysis question - exceptional detail, strong evidence, goes above and beyond",
"score": "4",
"strengths": [
"Exceptional detail",
"Strong evidence",
"Strategic thinking",
"Goes beyond requirements"
]
}
},
"scoring_rationale": {
"key_indicators": "Look for evidence of data analysis competency",
"red_flags": "Vague answers, lack of specifics, negative outcomes without learning",
"green_flags": "Specific examples, clear impact, demonstrates growth and learning"
}
},
"question_3": {
"question": "What challenges have you faced related to team leadership and how did you overcome them?",
"competency": "team_leadership",
"sample_answers": {
"poor_answer": {
"answer": "Sample poor answer for team_leadership question - lacks detail, specificity, or demonstrates weak competency",
"score": "1-2",
"issues": [
"Vague response",
"Limited evidence of competency",
"Poor structure"
]
},
"good_answer": {
"answer": "Sample good answer for team_leadership question - adequate detail, demonstrates competency clearly",
"score": "3",
"strengths": [
"Clear structure",
"Demonstrates competency",
"Adequate detail"
]
},
"great_answer": {
"answer": "Sample excellent answer for team_leadership question - exceptional detail, strong evidence, goes above and beyond",
"score": "4",
"strengths": [
"Exceptional detail",
"Strong evidence",
"Strategic thinking",
"Goes beyond requirements"
]
}
},
"scoring_rationale": {
"key_indicators": "Look for evidence of team leadership competency",
"red_flags": "Vague answers, lack of specifics, negative outcomes without learning",
"green_flags": "Specific examples, clear impact, demonstrates growth and learning"
}
},
"question_4": {
"question": "Design a go-to-market strategy for a new B2B SaaS product entering a competitive market.",
"competency": "product_strategy",
"sample_answers": {
"poor_answer": {
"answer": "Sample poor answer for product_strategy question - lacks detail, specificity, or demonstrates weak competency",
"score": "1-2",
"issues": [
"Vague response",
"Limited evidence of competency",
"Poor structure"
]
},
"good_answer": {
"answer": "Sample good answer for product_strategy question - adequate detail, demonstrates competency clearly",
"score": "3",
"strengths": [
"Clear structure",
"Demonstrates competency",
"Adequate detail"
]
},
"great_answer": {
"answer": "Sample excellent answer for product_strategy question - exceptional detail, strong evidence, goes above and beyond",
"score": "4",
"strengths": [
"Exceptional detail",
"Strong evidence",
"Strategic thinking",
"Goes beyond requirements"
]
}
},
"scoring_rationale": {
"key_indicators": "Look for evidence of product strategy competency",
"red_flags": "Vague answers, lack of specifics, negative outcomes without learning",
"green_flags": "Specific examples, clear impact, demonstrates growth and learning"
}
},
"question_5": {
"question": "What challenges have you faced related to business strategy and how did you overcome them?",
"competency": "business_strategy",
"sample_answers": {
"poor_answer": {
"answer": "Sample poor answer for business_strategy question - lacks detail, specificity, or demonstrates weak competency",
"score": "1-2",
"issues": [
"Vague response",
"Limited evidence of competency",
"Poor structure"
]
},
"good_answer": {
"answer": "Sample good answer for business_strategy question - adequate detail, demonstrates competency clearly",
"score": "3",
"strengths": [
"Clear structure",
"Demonstrates competency",
"Adequate detail"
]
},
"great_answer": {
"answer": "Sample excellent answer for business_strategy question - exceptional detail, strong evidence, goes above and beyond",
"score": "4",
"strengths": [
"Exceptional detail",
"Strong evidence",
"Strategic thinking",
"Goes beyond requirements"
]
}
},
"scoring_rationale": {
"key_indicators": "Look for evidence of business strategy competency",
"red_flags": "Vague answers, lack of specifics, negative outcomes without learning",
"green_flags": "Specific examples, clear impact, demonstrates growth and learning"
}
}
},
"usage_guidelines": {
"interview_flow": {
"warm_up": "Start with 1-2 easier questions to build rapport",
"core_assessment": "Focus majority of time on core competency questions",
"closing": "End with questions about candidate's questions/interests"
},
"time_management": {
"technical_questions": "Allow extra time for coding/design questions",
"behavioral_questions": "Keep to time limits but allow for follow-ups",
"total_recommendation": "45-75 minutes per interview round"
},
"question_selection": {
"variety": "Mix question types within each competency area",
"difficulty": "Adjust based on candidate responses and energy",
"customization": "Adapt questions based on candidate's background"
},
"common_mistakes": [
"Don't ask all questions mechanically",
"Don't skip follow-up questions",
"Don't forget to assess cultural fit alongside competencies",
"Don't let one strong/weak area bias overall assessment"
],
"calibration_reminders": [
"Compare against role standard, not other candidates",
"Focus on evidence demonstrated, not potential",
"Consider level-appropriate expectations",
"Document specific examples in feedback"
]
}
}

View File

@@ -0,0 +1,177 @@
Interview Question Bank: Product Manager (Senior Level)
======================================================================
Generated: 2026-02-16T13:27:41.303329
Total Questions: 20
Question Types: technical, behavioral, situational
Target Competencies: strategy, analytics, business_strategy, product_strategy, stakeholder_management, p&l_responsibility, leadership, team_leadership, user_research, data_analysis
INTERVIEW QUESTIONS
--------------------------------------------------
1. What challenges have you faced related to p&l responsibility and how did you overcome them?
Competency: P&L Responsibility
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
2. Analyze conversion funnel data to identify the biggest drop-off point and propose solutions.
Competency: Data Analysis
Type: Analytical
Time Limit: 45 minutes
3. What challenges have you faced related to team leadership and how did you overcome them?
Competency: Team Leadership
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
4. Design a go-to-market strategy for a new B2B SaaS product entering a competitive market.
Competency: Product Strategy
Type: Strategic
Time Limit: 60 minutes
5. What challenges have you faced related to business strategy and how did you overcome them?
Competency: Business Strategy
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
6. Describe your experience with business strategy in your current or previous role.
Competency: Business Strategy
Type: Experience
Focus Areas: experience_depth, practical_application
7. Describe your experience with team leadership in your current or previous role.
Competency: Team Leadership
Type: Experience
Focus Areas: experience_depth, practical_application
8. Describe a situation where you had to influence someone without having direct authority over them.
Competency: Leadership
Type: Behavioral
Focus Areas: influence, persuasion, stakeholder_management
9. Given a dataset of user activities, calculate the daily active users for the past month.
Competency: Data Analysis
Type: Analytical
Time Limit: 30 minutes
10. Describe your experience with analytics in your current or previous role.
Competency: Analytics
Type: Experience
Focus Areas: experience_depth, practical_application
11. How would you prioritize features for a mobile app with limited engineering resources?
Competency: Product Strategy
Type: Case_Study
Time Limit: 45 minutes
12. Describe your experience with stakeholder management in your current or previous role.
Competency: Stakeholder Management
Type: Experience
Focus Areas: experience_depth, practical_application
13. What challenges have you faced related to stakeholder management and how did you overcome them?
Competency: Stakeholder Management
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
14. What challenges have you faced related to user research and how did you overcome them?
Competency: User Research
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
15. What challenges have you faced related to strategy and how did you overcome them?
Competency: Strategy
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
16. Describe your experience with user research in your current or previous role.
Competency: User Research
Type: Experience
Focus Areas: experience_depth, practical_application
17. Describe your experience with p&l responsibility in your current or previous role.
Competency: P&L Responsibility
Type: Experience
Focus Areas: experience_depth, practical_application
18. Describe your experience with strategy in your current or previous role.
Competency: Strategy
Type: Experience
Focus Areas: experience_depth, practical_application
19. Tell me about a time when you had to lead a team through a significant change or challenge.
Competency: Leadership
Type: Behavioral
Focus Areas: change_management, team_motivation, communication
20. What challenges have you faced related to analytics and how did you overcome them?
Competency: Analytics
Type: Challenge_Based
Focus Areas: problem_solving, learning_from_experience
SCORING RUBRICS
--------------------------------------------------
Sample Scoring Criteria (behavioral questions):
Situation Clarity:
4: Clear, specific situation with relevant context and stakes
3: Good situation description with adequate context
2: Situation described but lacks some specifics
1: Vague or unclear situation description
Action Quality:
4: Specific, thoughtful actions showing strong competency
3: Good actions demonstrating competency
2: Adequate actions but could be stronger
1: Weak or inappropriate actions
Result Impact:
4: Significant positive impact with measurable results
3: Good positive impact with clear outcomes
2: Some positive impact demonstrated
1: Little or no positive impact shown
Self Awareness:
4: Excellent self-reflection, learns from experience, acknowledges growth areas
3: Good self-awareness and learning orientation
2: Some self-reflection demonstrated
1: Limited self-awareness or reflection
FOLLOW-UP PROBE EXAMPLES
--------------------------------------------------
Sample follow-up questions:
• Can you provide more specific details about your approach?
• What would you do differently if you had to do this again?
• What challenges did you face and how did you overcome them?
USAGE GUIDELINES
--------------------------------------------------
Interview Flow:
• Warm Up: Start with 1-2 easier questions to build rapport
• Core Assessment: Focus majority of time on core competency questions
• Closing: End with questions about candidate's questions/interests
Time Management:
• Technical Questions: Allow extra time for coding/design questions
• Behavioral Questions: Keep to time limits but allow for follow-ups
• Total Recommendation: 45-75 minutes per interview round
Common Mistakes to Avoid:
• Don't ask all questions mechanically
• Don't skip follow-up questions
• Don't forget to assess cultural fit alongside competencies
CALIBRATION EXAMPLES
--------------------------------------------------
Question: What challenges have you faced related to p&l responsibility and how did you overcome them?
Sample Answer Quality Levels:
Poor Answer (Score 1-2):
Issues: Vague response, Limited evidence of competency, Poor structure
Good Answer (Score 3):
Strengths: Clear structure, Demonstrates competency, Adequate detail
Great Answer (Score 4):
Strengths: Exceptional detail, Strong evidence, Strategic thinking, Goes beyond requirements

View File

@@ -0,0 +1,435 @@
{
"role": "Senior Software Engineer",
"level": "senior",
"team": "platform",
"generated_at": "2026-02-16T13:27:37.925680",
"total_duration_minutes": 300,
"total_rounds": 5,
"rounds": {
"round_1_technical_phone_screen": {
"name": "Technical Phone Screen",
"duration_minutes": 45,
"format": "virtual",
"objectives": [
"Assess coding fundamentals",
"Evaluate problem-solving approach",
"Screen for basic technical competency"
],
"question_types": [
"coding_problems",
"technical_concepts",
"experience_questions"
],
"evaluation_criteria": [
"technical_accuracy",
"problem_solving_process",
"communication_clarity"
],
"order": 1,
"focus_areas": [
"coding_fundamentals",
"problem_solving",
"technical_leadership",
"system_architecture",
"people_development"
]
},
"round_2_coding_deep_dive": {
"name": "Coding Deep Dive",
"duration_minutes": 75,
"format": "in_person_or_virtual",
"objectives": [
"Evaluate coding skills in depth",
"Assess code quality and testing",
"Review debugging approach"
],
"question_types": [
"complex_coding_problems",
"code_review",
"testing_strategy"
],
"evaluation_criteria": [
"code_quality",
"testing_approach",
"debugging_skills",
"optimization_thinking"
],
"order": 2,
"focus_areas": [
"technical_execution",
"code_quality",
"technical_leadership",
"system_architecture",
"people_development"
]
},
"round_3_system_design": {
"name": "System Design",
"duration_minutes": 75,
"format": "collaborative_whiteboard",
"objectives": [
"Assess architectural thinking",
"Evaluate scalability considerations",
"Review trade-off analysis"
],
"question_types": [
"system_architecture",
"scalability_design",
"trade_off_analysis"
],
"evaluation_criteria": [
"architectural_thinking",
"scalability_awareness",
"trade_off_reasoning"
],
"order": 3,
"focus_areas": [
"system_thinking",
"architectural_reasoning",
"technical_leadership",
"system_architecture",
"people_development"
]
},
"round_4_behavioral": {
"name": "Behavioral Interview",
"duration_minutes": 45,
"format": "conversational",
"objectives": [
"Assess cultural fit",
"Evaluate past experiences",
"Review leadership examples"
],
"question_types": [
"star_method_questions",
"situational_scenarios",
"values_alignment"
],
"evaluation_criteria": [
"communication_skills",
"leadership_examples",
"cultural_alignment"
],
"order": 4,
"focus_areas": [
"cultural_fit",
"communication",
"teamwork",
"technical_leadership",
"system_architecture"
]
},
"round_5_technical_leadership": {
"name": "Technical Leadership",
"duration_minutes": 60,
"format": "discussion_based",
"objectives": [
"Evaluate mentoring capability",
"Assess technical decision making",
"Review cross-team collaboration"
],
"question_types": [
"leadership_scenarios",
"technical_decisions",
"mentoring_examples"
],
"evaluation_criteria": [
"leadership_potential",
"technical_judgment",
"influence_skills"
],
"order": 5,
"focus_areas": [
"leadership",
"mentoring",
"influence",
"technical_leadership",
"system_architecture"
]
}
},
"suggested_schedule": {
"type": "multi_day",
"total_duration_minutes": 300,
"recommended_breaks": [
{
"type": "short_break",
"duration": 15,
"after_minutes": 90
},
{
"type": "lunch_break",
"duration": 60,
"after_minutes": 180
}
],
"day_structure": {
"day_1": {
"date": "TBD",
"start_time": "09:00",
"end_time": "12:45",
"rounds": [
{
"type": "interview",
"round_name": "round_1_technical_phone_screen",
"title": "Technical Phone Screen",
"start_time": "09:00",
"end_time": "09:45",
"duration_minutes": 45,
"format": "virtual"
},
{
"type": "interview",
"round_name": "round_2_coding_deep_dive",
"title": "Coding Deep Dive",
"start_time": "10:00",
"end_time": "11:15",
"duration_minutes": 75,
"format": "in_person_or_virtual"
},
{
"type": "interview",
"round_name": "round_3_system_design",
"title": "System Design",
"start_time": "11:30",
"end_time": "12:45",
"duration_minutes": 75,
"format": "collaborative_whiteboard"
}
]
},
"day_2": {
"date": "TBD",
"start_time": "09:00",
"end_time": "11:00",
"rounds": [
{
"type": "interview",
"round_name": "round_4_behavioral",
"title": "Behavioral Interview",
"start_time": "09:00",
"end_time": "09:45",
"duration_minutes": 45,
"format": "conversational"
},
{
"type": "interview",
"round_name": "round_5_technical_leadership",
"title": "Technical Leadership",
"start_time": "10:00",
"end_time": "11:00",
"duration_minutes": 60,
"format": "discussion_based"
}
]
}
},
"logistics_notes": [
"Coordinate interviewer availability before scheduling",
"Ensure all interviewers have access to job description and competency requirements",
"Prepare interview rooms/virtual links for all rounds",
"Share candidate resume and application with all interviewers",
"Test video conferencing setup before virtual interviews",
"Share virtual meeting links with candidate 24 hours in advance",
"Prepare whiteboard or collaborative online tool for design sessions"
]
},
"scorecard_template": {
"scoring_scale": {
"4": "Exceeds Expectations - Demonstrates mastery beyond required level",
"3": "Meets Expectations - Solid performance meeting all requirements",
"2": "Partially Meets - Shows potential but has development areas",
"1": "Does Not Meet - Significant gaps in required competencies"
},
"dimensions": [
{
"dimension": "system_architecture",
"weight": "high",
"scale": "1-4",
"description": "Assessment of system architecture competency"
},
{
"dimension": "technical_leadership",
"weight": "high",
"scale": "1-4",
"description": "Assessment of technical leadership competency"
},
{
"dimension": "mentoring",
"weight": "high",
"scale": "1-4",
"description": "Assessment of mentoring competency"
},
{
"dimension": "cross_team_collab",
"weight": "high",
"scale": "1-4",
"description": "Assessment of cross team collab competency"
},
{
"dimension": "technology_evaluation",
"weight": "medium",
"scale": "1-4",
"description": "Assessment of technology evaluation competency"
},
{
"dimension": "process_improvement",
"weight": "medium",
"scale": "1-4",
"description": "Assessment of process improvement competency"
},
{
"dimension": "hiring_contribution",
"weight": "medium",
"scale": "1-4",
"description": "Assessment of hiring contribution competency"
},
{
"dimension": "communication",
"weight": "high",
"scale": "1-4"
},
{
"dimension": "cultural_fit",
"weight": "medium",
"scale": "1-4"
},
{
"dimension": "learning_agility",
"weight": "medium",
"scale": "1-4"
}
],
"overall_recommendation": {
"options": [
"Strong Hire",
"Hire",
"No Hire",
"Strong No Hire"
],
"criteria": "Based on weighted average and minimum thresholds"
},
"calibration_notes": {
"required": true,
"min_length": 100,
"sections": [
"strengths",
"areas_for_development",
"specific_examples"
]
}
},
"interviewer_requirements": {
"round_1_technical_phone_screen": {
"required_skills": [
"technical_assessment",
"coding_evaluation"
],
"preferred_experience": [
"same_domain",
"senior_level"
],
"calibration_level": "standard",
"suggested_interviewers": [
"senior_engineer",
"tech_lead"
]
},
"round_2_coding_deep_dive": {
"required_skills": [
"advanced_technical",
"code_quality_assessment"
],
"preferred_experience": [
"senior_engineer",
"system_design"
],
"calibration_level": "high",
"suggested_interviewers": [
"senior_engineer",
"staff_engineer"
]
},
"round_3_system_design": {
"required_skills": [
"architecture_design",
"scalability_assessment"
],
"preferred_experience": [
"senior_architect",
"large_scale_systems"
],
"calibration_level": "high",
"suggested_interviewers": [
"senior_architect",
"staff_engineer"
]
},
"round_4_behavioral": {
"required_skills": [
"behavioral_interviewing",
"competency_assessment"
],
"preferred_experience": [
"hiring_manager",
"people_leadership"
],
"calibration_level": "standard",
"suggested_interviewers": [
"hiring_manager",
"people_manager"
]
},
"round_5_technical_leadership": {
"required_skills": [
"leadership_assessment",
"technical_mentoring"
],
"preferred_experience": [
"engineering_manager",
"tech_lead"
],
"calibration_level": "high",
"suggested_interviewers": [
"engineering_manager",
"senior_staff"
]
}
},
"competency_framework": {
"required": [
"system_architecture",
"technical_leadership",
"mentoring",
"cross_team_collab"
],
"preferred": [
"technology_evaluation",
"process_improvement",
"hiring_contribution"
],
"focus_areas": [
"technical_leadership",
"system_architecture",
"people_development"
]
},
"calibration_notes": {
"hiring_bar_notes": "Calibrated for senior level software engineer role",
"common_pitfalls": [
"Avoid comparing candidates to each other rather than to the role standard",
"Don't let one strong/weak area overshadow overall assessment",
"Ensure consistent application of evaluation criteria"
],
"calibration_checkpoints": [
"Review score distribution after every 5 candidates",
"Conduct monthly interviewer calibration sessions",
"Track correlation with 6-month performance reviews"
],
"escalation_criteria": [
"Any candidate receiving all 4s or all 1s",
"Significant disagreement between interviewers (>1.5 point spread)",
"Unusual circumstances or accommodations needed"
]
}
}

View File

@@ -0,0 +1,151 @@
Interview Loop Design for Senior Software Engineer (Senior Level)
============================================================
Team: platform
Generated: 2026-02-16T13:27:37.925680
Total Duration: 300 minutes (5h 0m)
Total Rounds: 5
INTERVIEW ROUNDS
----------------------------------------
Round 1: Technical Phone Screen
Duration: 45 minutes
Format: Virtual
Objectives:
• Assess coding fundamentals
• Evaluate problem-solving approach
• Screen for basic technical competency
Focus Areas:
• Coding Fundamentals
• Problem Solving
• Technical Leadership
• System Architecture
• People Development
Round 2: Coding Deep Dive
Duration: 75 minutes
Format: In Person Or Virtual
Objectives:
• Evaluate coding skills in depth
• Assess code quality and testing
• Review debugging approach
Focus Areas:
• Technical Execution
• Code Quality
• Technical Leadership
• System Architecture
• People Development
Round 3: System Design
Duration: 75 minutes
Format: Collaborative Whiteboard
Objectives:
• Assess architectural thinking
• Evaluate scalability considerations
• Review trade-off analysis
Focus Areas:
• System Thinking
• Architectural Reasoning
• Technical Leadership
• System Architecture
• People Development
Round 4: Behavioral Interview
Duration: 45 minutes
Format: Conversational
Objectives:
• Assess cultural fit
• Evaluate past experiences
• Review leadership examples
Focus Areas:
• Cultural Fit
• Communication
• Teamwork
• Technical Leadership
• System Architecture
Round 5: Technical Leadership
Duration: 60 minutes
Format: Discussion Based
Objectives:
• Evaluate mentoring capability
• Assess technical decision making
• Review cross-team collaboration
Focus Areas:
• Leadership
• Mentoring
• Influence
• Technical Leadership
• System Architecture
SUGGESTED SCHEDULE
----------------------------------------
Schedule Type: Multi Day
Day 1:
Time: 09:00 - 12:45
09:00-09:45: Technical Phone Screen (45min)
10:00-11:15: Coding Deep Dive (75min)
11:30-12:45: System Design (75min)
Day 2:
Time: 09:00 - 11:00
09:00-09:45: Behavioral Interview (45min)
10:00-11:00: Technical Leadership (60min)
INTERVIEWER REQUIREMENTS
----------------------------------------
Technical Phone Screen:
Required Skills: technical_assessment, coding_evaluation
Suggested Interviewers: senior_engineer, tech_lead
Calibration Level: Standard
Coding Deep Dive:
Required Skills: advanced_technical, code_quality_assessment
Suggested Interviewers: senior_engineer, staff_engineer
Calibration Level: High
System Design:
Required Skills: architecture_design, scalability_assessment
Suggested Interviewers: senior_architect, staff_engineer
Calibration Level: High
Behavioral:
Required Skills: behavioral_interviewing, competency_assessment
Suggested Interviewers: hiring_manager, people_manager
Calibration Level: Standard
Technical Leadership:
Required Skills: leadership_assessment, technical_mentoring
Suggested Interviewers: engineering_manager, senior_staff
Calibration Level: High
SCORECARD TEMPLATE
----------------------------------------
Scoring Scale:
4: Exceeds Expectations - Demonstrates mastery beyond required level
3: Meets Expectations - Solid performance meeting all requirements
2: Partially Meets - Shows potential but has development areas
1: Does Not Meet - Significant gaps in required competencies
Evaluation Dimensions:
• System Architecture (Weight: high)
• Technical Leadership (Weight: high)
• Mentoring (Weight: high)
• Cross Team Collab (Weight: high)
• Technology Evaluation (Weight: medium)
• Process Improvement (Weight: medium)
• Hiring Contribution (Weight: medium)
• Communication (Weight: high)
• Cultural Fit (Weight: medium)
• Learning Agility (Weight: medium)
CALIBRATION NOTES
----------------------------------------
Hiring Bar: Calibrated for senior level software engineer role
Common Pitfalls:
• Avoid comparing candidates to each other rather than to the role standard
• Don't let one strong/weak area overshadow overall assessment
• Ensure consistent application of evaluation criteria

File diff suppressed because it is too large

View File

@@ -0,0 +1,908 @@
#!/usr/bin/env python3
"""
Interview Loop Designer
Generates calibrated interview loops tailored to specific roles, levels, and teams.
Creates complete interview loops with rounds, focus areas, time allocation,
interviewer skill requirements, and scorecard templates.
Usage:
python loop_designer.py --role "Senior Software Engineer" --level senior --team platform
python loop_designer.py --role "Product Manager" --level mid --competencies leadership,strategy
python loop_designer.py --input role_definition.json --output loops/
"""
import os
import sys
import json
import argparse
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any, Tuple
from collections import defaultdict
class InterviewLoopDesigner:
"""Designs comprehensive interview loops based on role requirements."""
def __init__(self):
self.competency_frameworks = self._init_competency_frameworks()
self.role_templates = self._init_role_templates()
self.interviewer_skills = self._init_interviewer_skills()
def _init_competency_frameworks(self) -> Dict[str, Dict]:
"""Initialize competency frameworks for different roles."""
return {
"software_engineer": {
"junior": {
"required": ["coding_fundamentals", "debugging", "testing_basics", "version_control"],
"preferred": ["system_understanding", "code_review", "collaboration"],
"focus_areas": ["technical_execution", "learning_agility", "team_collaboration"]
},
"mid": {
"required": ["advanced_coding", "system_design_basics", "testing_strategy", "debugging_complex"],
"preferred": ["mentoring_basics", "technical_communication", "project_ownership"],
"focus_areas": ["technical_depth", "system_thinking", "ownership"]
},
"senior": {
"required": ["system_architecture", "technical_leadership", "mentoring", "cross_team_collab"],
"preferred": ["technology_evaluation", "process_improvement", "hiring_contribution"],
"focus_areas": ["technical_leadership", "system_architecture", "people_development"]
},
"staff": {
"required": ["architectural_vision", "organizational_impact", "technical_strategy", "team_building"],
"preferred": ["industry_influence", "innovation_leadership", "executive_communication"],
"focus_areas": ["organizational_impact", "technical_vision", "strategic_influence"]
},
"principal": {
"required": ["company_wide_impact", "technical_vision", "talent_development", "strategic_planning"],
"preferred": ["industry_leadership", "board_communication", "market_influence"],
"focus_areas": ["strategic_leadership", "organizational_transformation", "external_influence"]
}
},
"product_manager": {
"junior": {
"required": ["product_execution", "user_research", "data_analysis", "stakeholder_comm"],
"preferred": ["market_awareness", "technical_understanding", "project_management"],
"focus_areas": ["execution_excellence", "user_focus", "analytical_thinking"]
},
"mid": {
"required": ["product_strategy", "cross_functional_leadership", "metrics_design", "market_analysis"],
"preferred": ["team_building", "technical_collaboration", "competitive_analysis"],
"focus_areas": ["strategic_thinking", "leadership", "business_impact"]
},
"senior": {
"required": ["business_strategy", "team_leadership", "p&l_ownership", "market_positioning"],
"preferred": ["hiring_leadership", "board_communication", "partnership_development"],
"focus_areas": ["business_leadership", "market_strategy", "organizational_impact"]
},
"staff": {
"required": ["portfolio_management", "organizational_leadership", "strategic_planning", "market_creation"],
"preferred": ["executive_presence", "investor_relations", "acquisition_strategy"],
"focus_areas": ["strategic_leadership", "market_innovation", "organizational_transformation"]
}
},
"designer": {
"junior": {
"required": ["design_fundamentals", "user_research", "prototyping", "design_tools"],
"preferred": ["user_empathy", "visual_design", "collaboration"],
"focus_areas": ["design_execution", "user_research", "creative_problem_solving"]
},
"mid": {
"required": ["design_systems", "user_testing", "cross_functional_collab", "design_strategy"],
"preferred": ["mentoring", "process_improvement", "business_understanding"],
"focus_areas": ["design_leadership", "system_thinking", "business_impact"]
},
"senior": {
"required": ["design_leadership", "team_building", "strategic_design", "stakeholder_management"],
"preferred": ["design_culture", "hiring_leadership", "executive_communication"],
"focus_areas": ["design_strategy", "team_leadership", "organizational_impact"]
}
},
"data_scientist": {
"junior": {
"required": ["statistical_analysis", "python_r", "data_visualization", "sql"],
"preferred": ["machine_learning", "business_understanding", "communication"],
"focus_areas": ["analytical_skills", "technical_execution", "business_impact"]
},
"mid": {
"required": ["advanced_ml", "experiment_design", "data_engineering", "stakeholder_comm"],
"preferred": ["mentoring", "project_leadership", "product_collaboration"],
"focus_areas": ["advanced_analytics", "project_leadership", "cross_functional_impact"]
},
"senior": {
"required": ["data_strategy", "team_leadership", "ml_systems", "business_strategy"],
"preferred": ["hiring_leadership", "executive_communication", "technology_evaluation"],
"focus_areas": ["strategic_leadership", "technical_vision", "organizational_impact"]
}
},
"devops_engineer": {
"junior": {
"required": ["infrastructure_basics", "scripting", "monitoring", "troubleshooting"],
"preferred": ["automation", "cloud_platforms", "security_awareness"],
"focus_areas": ["operational_excellence", "automation_mindset", "problem_solving"]
},
"mid": {
"required": ["ci_cd_design", "infrastructure_as_code", "security_implementation", "performance_optimization"],
"preferred": ["team_collaboration", "incident_management", "capacity_planning"],
"focus_areas": ["system_reliability", "automation_leadership", "cross_team_collaboration"]
},
"senior": {
"required": ["platform_architecture", "team_leadership", "security_strategy", "organizational_impact"],
"preferred": ["hiring_contribution", "technology_evaluation", "executive_communication"],
"focus_areas": ["platform_leadership", "strategic_thinking", "organizational_transformation"]
}
},
"engineering_manager": {
"junior": {
"required": ["team_leadership", "technical_background", "people_management", "project_coordination"],
"preferred": ["hiring_experience", "performance_management", "technical_mentoring"],
"focus_areas": ["people_leadership", "team_building", "execution_excellence"]
},
"senior": {
"required": ["organizational_leadership", "strategic_planning", "talent_development", "cross_functional_leadership"],
"preferred": ["technical_vision", "culture_building", "executive_communication"],
"focus_areas": ["organizational_impact", "strategic_leadership", "talent_development"]
},
"staff": {
"required": ["multi_team_leadership", "organizational_strategy", "executive_presence", "cultural_transformation"],
"preferred": ["board_communication", "market_understanding", "acquisition_integration"],
"focus_areas": ["organizational_transformation", "strategic_leadership", "cultural_evolution"]
}
}
}
def _init_role_templates(self) -> Dict[str, Dict]:
"""Initialize role-specific interview templates."""
return {
"software_engineer": {
"core_rounds": ["technical_phone_screen", "coding_deep_dive", "system_design", "behavioral"],
"optional_rounds": ["technical_leadership", "domain_expertise", "culture_fit"],
"total_duration_range": (180, 360), # 3-6 hours
"required_competencies": ["coding", "problem_solving", "communication"]
},
"product_manager": {
"core_rounds": ["product_sense", "analytical_thinking", "execution_process", "behavioral"],
"optional_rounds": ["strategic_thinking", "technical_collaboration", "leadership"],
"total_duration_range": (180, 300), # 3-5 hours
"required_competencies": ["product_strategy", "analytical_thinking", "stakeholder_management"]
},
"designer": {
"core_rounds": ["portfolio_review", "design_challenge", "collaboration_process", "behavioral"],
"optional_rounds": ["design_system_thinking", "research_methodology", "leadership"],
"total_duration_range": (180, 300), # 3-5 hours
"required_competencies": ["design_process", "user_empathy", "visual_communication"]
},
"data_scientist": {
"core_rounds": ["technical_assessment", "case_study", "statistical_thinking", "behavioral"],
"optional_rounds": ["ml_systems", "business_strategy", "technical_leadership"],
"total_duration_range": (210, 330), # 3.5-5.5 hours
"required_competencies": ["statistical_analysis", "programming", "business_acumen"]
},
"devops_engineer": {
"core_rounds": ["technical_assessment", "system_design", "troubleshooting", "behavioral"],
"optional_rounds": ["security_assessment", "automation_design", "leadership"],
"total_duration_range": (180, 300), # 3-5 hours
"required_competencies": ["infrastructure", "automation", "problem_solving"]
},
"engineering_manager": {
"core_rounds": ["leadership_assessment", "technical_background", "people_management", "behavioral"],
"optional_rounds": ["strategic_thinking", "hiring_assessment", "culture_building"],
"total_duration_range": (240, 360), # 4-6 hours
"required_competencies": ["people_leadership", "technical_understanding", "strategic_thinking"]
}
}
def _init_interviewer_skills(self) -> Dict[str, Dict]:
"""Initialize interviewer skill requirements for different round types."""
return {
"technical_phone_screen": {
"required_skills": ["technical_assessment", "coding_evaluation"],
"preferred_experience": ["same_domain", "senior_level"],
"calibration_level": "standard"
},
"coding_deep_dive": {
"required_skills": ["advanced_technical", "code_quality_assessment"],
"preferred_experience": ["senior_engineer", "system_design"],
"calibration_level": "high"
},
"system_design": {
"required_skills": ["architecture_design", "scalability_assessment"],
"preferred_experience": ["senior_architect", "large_scale_systems"],
"calibration_level": "high"
},
"behavioral": {
"required_skills": ["behavioral_interviewing", "competency_assessment"],
"preferred_experience": ["hiring_manager", "people_leadership"],
"calibration_level": "standard"
},
"technical_leadership": {
"required_skills": ["leadership_assessment", "technical_mentoring"],
"preferred_experience": ["engineering_manager", "tech_lead"],
"calibration_level": "high"
},
"product_sense": {
"required_skills": ["product_evaluation", "market_analysis"],
"preferred_experience": ["product_manager", "product_leadership"],
"calibration_level": "high"
},
"analytical_thinking": {
"required_skills": ["data_analysis", "metrics_evaluation"],
"preferred_experience": ["data_analyst", "product_manager"],
"calibration_level": "standard"
},
"design_challenge": {
"required_skills": ["design_evaluation", "user_experience"],
"preferred_experience": ["senior_designer", "design_manager"],
"calibration_level": "high"
}
}
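    # Public entry point: normalizes the role/level strings, resolves the
    # closest competency framework, then assembles rounds, schedule,
    # scorecard, and interviewer requirements into one loop definition.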
def generate_interview_loop(self, role: str, level: str, team: Optional[str] = None,
competencies: Optional[List[str]] = None) -> Dict[str, Any]:
"""Generate a complete interview loop for the specified role and level."""
# Normalize inputs
role_key = role.lower().replace(" ", "_").replace("-", "_")
level_key = level.lower()
# Get role template and competency requirements
if role_key not in self.competency_frameworks:
role_key = self._find_closest_role(role_key)
if level_key not in self.competency_frameworks[role_key]:
level_key = self._find_closest_level(role_key, level_key)
competency_req = self.competency_frameworks[role_key][level_key]
role_template = self.role_templates.get(role_key, self.role_templates["software_engineer"])
# Design the interview loop
rounds = self._design_rounds(role_key, level_key, competency_req, role_template, competencies)
schedule = self._create_schedule(rounds)
scorecard = self._generate_scorecard(role_key, level_key, competency_req)
interviewer_requirements = self._define_interviewer_requirements(rounds)
return {
"role": role,
"level": level,
"team": team,
"generated_at": datetime.now().isoformat(),
"total_duration_minutes": sum(round_info["duration_minutes"] for round_info in rounds.values()),
"total_rounds": len(rounds),
"rounds": rounds,
"suggested_schedule": schedule,
"scorecard_template": scorecard,
"interviewer_requirements": interviewer_requirements,
"competency_framework": competency_req,
"calibration_notes": self._generate_calibration_notes(role_key, level_key)
}
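    # The two lookup helpers below map free-form role and level strings onto
    # the known frameworks via keyword heuristics, falling back to a sensible
    # default when nothing matches.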
def _find_closest_role(self, role_key: str) -> str:
"""Find the closest matching role template."""
role_mappings = {
"engineer": "software_engineer",
"developer": "software_engineer",
"swe": "software_engineer",
"backend": "software_engineer",
"frontend": "software_engineer",
"fullstack": "software_engineer",
"pm": "product_manager",
"product": "product_manager",
"ux": "designer",
"ui": "designer",
"graphic": "designer",
"data": "data_scientist",
"analyst": "data_scientist",
"ml": "data_scientist",
"ops": "devops_engineer",
"sre": "devops_engineer",
"infrastructure": "devops_engineer",
"manager": "engineering_manager",
"lead": "engineering_manager"
}
for key_part in role_key.split("_"):
if key_part in role_mappings:
return role_mappings[key_part]
return "software_engineer" # Default fallback
def _find_closest_level(self, role_key: str, level_key: str) -> str:
"""Find the closest matching level for the role."""
available_levels = list(self.competency_frameworks[role_key].keys())
level_mappings = {
"entry": "junior",
"associate": "junior",
"jr": "junior",
"mid": "mid",
"middle": "mid",
"sr": "senior",
"senior": "senior",
"staff": "staff",
"principal": "principal",
"lead": "senior",
"manager": "senior"
}
mapped_level = level_mappings.get(level_key, level_key)
if mapped_level in available_levels:
return mapped_level
elif "senior" in available_levels:
return "senior"
else:
return available_levels[0]
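    # Round selection starts from the role template's core rounds and promotes
    # optional rounds (e.g. technical_leadership) for senior and above before
    # attaching per-round focus areas.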
def _design_rounds(self, role_key: str, level_key: str, competency_req: Dict,
role_template: Dict, custom_competencies: Optional[List[str]]) -> Dict[str, Dict]:
"""Design the specific interview rounds based on role and level."""
rounds = {}
# Determine which rounds to include
core_rounds = role_template["core_rounds"].copy()
optional_rounds = role_template["optional_rounds"].copy()
# Add optional rounds based on level
if level_key in ["senior", "staff", "principal"]:
if "technical_leadership" in optional_rounds and role_key in ["software_engineer", "engineering_manager"]:
core_rounds.append("technical_leadership")
if "strategic_thinking" in optional_rounds and role_key in ["product_manager", "engineering_manager"]:
core_rounds.append("strategic_thinking")
if "design_system_thinking" in optional_rounds and role_key == "designer":
core_rounds.append("design_system_thinking")
if level_key in ["staff", "principal"]:
if "domain_expertise" in optional_rounds:
core_rounds.append("domain_expertise")
# Define round details
round_definitions = self._get_round_definitions()
for i, round_type in enumerate(core_rounds, 1):
if round_type in round_definitions:
round_def = round_definitions[round_type].copy()
round_def["order"] = i
round_def["focus_areas"] = self._customize_focus_areas(round_type, competency_req, custom_competencies)
rounds[f"round_{i}_{round_type}"] = round_def
return rounds
def _get_round_definitions(self) -> Dict[str, Dict]:
"""Get predefined round definitions with standard durations and formats."""
return {
"technical_phone_screen": {
"name": "Technical Phone Screen",
"duration_minutes": 45,
"format": "virtual",
"objectives": ["Assess coding fundamentals", "Evaluate problem-solving approach", "Screen for basic technical competency"],
"question_types": ["coding_problems", "technical_concepts", "experience_questions"],
"evaluation_criteria": ["technical_accuracy", "problem_solving_process", "communication_clarity"]
},
"coding_deep_dive": {
"name": "Coding Deep Dive",
"duration_minutes": 75,
"format": "in_person_or_virtual",
"objectives": ["Evaluate coding skills in depth", "Assess code quality and testing", "Review debugging approach"],
"question_types": ["complex_coding_problems", "code_review", "testing_strategy"],
"evaluation_criteria": ["code_quality", "testing_approach", "debugging_skills", "optimization_thinking"]
},
"system_design": {
"name": "System Design",
"duration_minutes": 75,
"format": "collaborative_whiteboard",
"objectives": ["Assess architectural thinking", "Evaluate scalability considerations", "Review trade-off analysis"],
"question_types": ["system_architecture", "scalability_design", "trade_off_analysis"],
"evaluation_criteria": ["architectural_thinking", "scalability_awareness", "trade_off_reasoning"]
},
"behavioral": {
"name": "Behavioral Interview",
"duration_minutes": 45,
"format": "conversational",
"objectives": ["Assess cultural fit", "Evaluate past experiences", "Review leadership examples"],
"question_types": ["star_method_questions", "situational_scenarios", "values_alignment"],
"evaluation_criteria": ["communication_skills", "leadership_examples", "cultural_alignment"]
},
"technical_leadership": {
"name": "Technical Leadership",
"duration_minutes": 60,
"format": "discussion_based",
"objectives": ["Evaluate mentoring capability", "Assess technical decision making", "Review cross-team collaboration"],
"question_types": ["leadership_scenarios", "technical_decisions", "mentoring_examples"],
"evaluation_criteria": ["leadership_potential", "technical_judgment", "influence_skills"]
},
"product_sense": {
"name": "Product Sense",
"duration_minutes": 75,
"format": "case_study",
"objectives": ["Assess product intuition", "Evaluate user empathy", "Review market understanding"],
"question_types": ["product_scenarios", "feature_prioritization", "user_journey_analysis"],
"evaluation_criteria": ["product_intuition", "user_empathy", "analytical_thinking"]
},
"analytical_thinking": {
"name": "Analytical Thinking",
"duration_minutes": 60,
"format": "data_analysis",
"objectives": ["Evaluate data interpretation", "Assess metric design", "Review experiment planning"],
"question_types": ["data_interpretation", "metric_design", "experiment_analysis"],
"evaluation_criteria": ["analytical_rigor", "metric_intuition", "experimental_thinking"]
},
"design_challenge": {
"name": "Design Challenge",
"duration_minutes": 90,
"format": "hands_on_design",
"objectives": ["Assess design process", "Evaluate user-centered thinking", "Review iteration approach"],
"question_types": ["design_problems", "user_research", "design_critique"],
"evaluation_criteria": ["design_process", "user_focus", "visual_communication"]
},
"portfolio_review": {
"name": "Portfolio Review",
"duration_minutes": 75,
"format": "presentation_discussion",
"objectives": ["Review past work", "Assess design thinking", "Evaluate impact measurement"],
"question_types": ["portfolio_walkthrough", "design_decisions", "impact_stories"],
"evaluation_criteria": ["design_quality", "process_thinking", "business_impact"]
}
}
def _customize_focus_areas(self, round_type: str, competency_req: Dict,
custom_competencies: Optional[List[str]]) -> List[str]:
"""Customize focus areas based on role competency requirements."""
base_focus_areas = competency_req.get("focus_areas", [])
round_focus_mapping = {
"technical_phone_screen": ["coding_fundamentals", "problem_solving"],
"coding_deep_dive": ["technical_execution", "code_quality"],
"system_design": ["system_thinking", "architectural_reasoning"],
"behavioral": ["cultural_fit", "communication", "teamwork"],
"technical_leadership": ["leadership", "mentoring", "influence"],
"product_sense": ["product_intuition", "user_empathy"],
"analytical_thinking": ["data_analysis", "metric_design"],
"design_challenge": ["design_process", "user_focus"]
}
focus_areas = round_focus_mapping.get(round_type, [])
# Add custom competencies if specified
if custom_competencies:
focus_areas.extend([comp for comp in custom_competencies if comp not in focus_areas])
# Add role-specific focus areas
focus_areas.extend([area for area in base_focus_areas if area not in focus_areas])
return focus_areas[:5] # Limit to top 5 focus areas
def _create_schedule(self, rounds: Dict[str, Dict]) -> Dict[str, Any]:
"""Create a suggested interview schedule."""
sorted_rounds = sorted(rounds.items(), key=lambda x: x[1]["order"])
# Calculate optimal scheduling
total_duration = sum(round_info["duration_minutes"] for _, round_info in sorted_rounds)
if total_duration <= 240: # 4 hours or less - single day
schedule_type = "single_day"
day_structure = self._create_single_day_schedule(sorted_rounds)
else: # Multi-day schedule
schedule_type = "multi_day"
day_structure = self._create_multi_day_schedule(sorted_rounds)
return {
"type": schedule_type,
"total_duration_minutes": total_duration,
"recommended_breaks": self._calculate_breaks(total_duration),
"day_structure": day_structure,
"logistics_notes": self._generate_logistics_notes(sorted_rounds)
}
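    # Single-day scheduling starts at 09:00 and inserts a 15-minute break
    # once roughly 90 minutes of interview time have accumulated.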
def _create_single_day_schedule(self, rounds: List[Tuple[str, Dict]]) -> Dict[str, Any]:
"""Create a single-day interview schedule."""
start_time = datetime.strptime("09:00", "%H:%M")
current_time = start_time
schedule = []
for round_name, round_info in rounds:
            # Insert a 15-minute break before the next round once at least 90 minutes of interview time have accumulated
if schedule and sum(item.get("duration_minutes", 0) for item in schedule if "break" not in item.get("type", "")) >= 90:
schedule.append({
"type": "break",
"start_time": current_time.strftime("%H:%M"),
"duration_minutes": 15,
"end_time": (current_time + timedelta(minutes=15)).strftime("%H:%M")
})
current_time += timedelta(minutes=15)
# Add the interview round
end_time = current_time + timedelta(minutes=round_info["duration_minutes"])
schedule.append({
"type": "interview",
"round_name": round_name,
"title": round_info["name"],
"start_time": current_time.strftime("%H:%M"),
"end_time": end_time.strftime("%H:%M"),
"duration_minutes": round_info["duration_minutes"],
"format": round_info["format"]
})
current_time = end_time
return {
"day_1": {
"date": "TBD",
"start_time": start_time.strftime("%H:%M"),
"end_time": current_time.strftime("%H:%M"),
"rounds": schedule
}
}
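    # Multi-day scheduling packs rounds greedily into days of at most
    # 240 minutes, counting a 15-minute buffer after each round.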
def _create_multi_day_schedule(self, rounds: List[Tuple[str, Dict]]) -> Dict[str, Any]:
"""Create a multi-day interview schedule."""
# Split rounds across days (max 4 hours per day)
max_daily_minutes = 240
days = {}
current_day = 1
current_day_duration = 0
current_day_rounds = []
for round_name, round_info in rounds:
duration = round_info["duration_minutes"] + 15 # Add buffer time
if current_day_duration + duration > max_daily_minutes and current_day_rounds:
# Finalize current day
days[f"day_{current_day}"] = self._finalize_day_schedule(current_day_rounds)
current_day += 1
current_day_duration = 0
current_day_rounds = []
current_day_rounds.append((round_name, round_info))
current_day_duration += duration
# Finalize last day
if current_day_rounds:
days[f"day_{current_day}"] = self._finalize_day_schedule(current_day_rounds)
return days
def _finalize_day_schedule(self, day_rounds: List[Tuple[str, Dict]]) -> Dict[str, Any]:
"""Finalize the schedule for a specific day."""
start_time = datetime.strptime("09:00", "%H:%M")
current_time = start_time
schedule = []
for round_name, round_info in day_rounds:
end_time = current_time + timedelta(minutes=round_info["duration_minutes"])
schedule.append({
"type": "interview",
"round_name": round_name,
"title": round_info["name"],
"start_time": current_time.strftime("%H:%M"),
"end_time": end_time.strftime("%H:%M"),
"duration_minutes": round_info["duration_minutes"],
"format": round_info["format"]
})
current_time = end_time + timedelta(minutes=15) # 15-min buffer
return {
"date": "TBD",
"start_time": start_time.strftime("%H:%M"),
"end_time": (current_time - timedelta(minutes=15)).strftime("%H:%M"),
"rounds": schedule
}
def _calculate_breaks(self, total_duration: int) -> List[Dict[str, Any]]:
"""Calculate recommended breaks based on total duration."""
breaks = []
if total_duration >= 120: # 2+ hours
breaks.append({"type": "short_break", "duration": 15, "after_minutes": 90})
if total_duration >= 240: # 4+ hours
breaks.append({"type": "lunch_break", "duration": 60, "after_minutes": 180})
if total_duration >= 360: # 6+ hours
breaks.append({"type": "short_break", "duration": 15, "after_minutes": 300})
return breaks
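    # Scorecard dimensions mirror the competency framework: required
    # competencies are weighted "high", preferred ones "medium", plus a fixed
    # set of standard dimensions (communication, cultural fit, learning agility).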
def _generate_scorecard(self, role_key: str, level_key: str, competency_req: Dict) -> Dict[str, Any]:
"""Generate a scorecard template for the interview loop."""
scoring_dimensions = []
# Add competency-based scoring dimensions
for competency in competency_req["required"]:
scoring_dimensions.append({
"dimension": competency,
"weight": "high",
"scale": "1-4",
"description": f"Assessment of {competency.replace('_', ' ')} competency"
})
for competency in competency_req.get("preferred", []):
scoring_dimensions.append({
"dimension": competency,
"weight": "medium",
"scale": "1-4",
"description": f"Assessment of {competency.replace('_', ' ')} competency"
})
# Add standard dimensions
standard_dimensions = [
{"dimension": "communication", "weight": "high", "scale": "1-4"},
{"dimension": "cultural_fit", "weight": "medium", "scale": "1-4"},
{"dimension": "learning_agility", "weight": "medium", "scale": "1-4"}
]
scoring_dimensions.extend(standard_dimensions)
return {
"scoring_scale": {
"4": "Exceeds Expectations - Demonstrates mastery beyond required level",
"3": "Meets Expectations - Solid performance meeting all requirements",
"2": "Partially Meets - Shows potential but has development areas",
"1": "Does Not Meet - Significant gaps in required competencies"
},
"dimensions": scoring_dimensions,
"overall_recommendation": {
"options": ["Strong Hire", "Hire", "No Hire", "Strong No Hire"],
"criteria": "Based on weighted average and minimum thresholds"
},
"calibration_notes": {
"required": True,
"min_length": 100,
"sections": ["strengths", "areas_for_development", "specific_examples"]
}
}
def _define_interviewer_requirements(self, rounds: Dict[str, Dict]) -> Dict[str, Dict]:
"""Define interviewer skill requirements for each round."""
requirements = {}
for round_name, round_info in rounds.items():
            round_type = round_name.split("_", 2)[-1]  # e.g. "round_1_system_design" -> "system_design"
if round_type in self.interviewer_skills:
skill_req = self.interviewer_skills[round_type].copy()
skill_req["suggested_interviewers"] = self._suggest_interviewer_profiles(round_type)
requirements[round_name] = skill_req
else:
# Default requirements
requirements[round_name] = {
"required_skills": ["interviewing_basics", "evaluation_skills"],
"preferred_experience": ["relevant_domain"],
"calibration_level": "standard",
"suggested_interviewers": ["experienced_interviewer"]
}
return requirements
def _suggest_interviewer_profiles(self, round_type: str) -> List[str]:
"""Suggest specific interviewer profiles for different round types."""
profile_mapping = {
"technical_phone_screen": ["senior_engineer", "tech_lead"],
"coding_deep_dive": ["senior_engineer", "staff_engineer"],
"system_design": ["senior_architect", "staff_engineer"],
"behavioral": ["hiring_manager", "people_manager"],
"technical_leadership": ["engineering_manager", "senior_staff"],
"product_sense": ["senior_pm", "product_leader"],
"analytical_thinking": ["senior_analyst", "data_scientist"],
"design_challenge": ["senior_designer", "design_manager"]
}
return profile_mapping.get(round_type, ["experienced_interviewer"])
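    # Calibration notes are static best-practice guidance, parameterized only
    # by the role and level in the hiring-bar line.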
def _generate_calibration_notes(self, role_key: str, level_key: str) -> Dict[str, Any]:
"""Generate calibration notes and best practices."""
return {
"hiring_bar_notes": f"Calibrated for {level_key} level {role_key.replace('_', ' ')} role",
"common_pitfalls": [
"Avoid comparing candidates to each other rather than to the role standard",
"Don't let one strong/weak area overshadow overall assessment",
"Ensure consistent application of evaluation criteria"
],
"calibration_checkpoints": [
"Review score distribution after every 5 candidates",
"Conduct monthly interviewer calibration sessions",
"Track correlation with 6-month performance reviews"
],
"escalation_criteria": [
"Any candidate receiving all 4s or all 1s",
"Significant disagreement between interviewers (>1.5 point spread)",
"Unusual circumstances or accommodations needed"
]
}
def _generate_logistics_notes(self, rounds: List[Tuple[str, Dict]]) -> List[str]:
"""Generate logistics and coordination notes."""
notes = [
"Coordinate interviewer availability before scheduling",
"Ensure all interviewers have access to job description and competency requirements",
"Prepare interview rooms/virtual links for all rounds",
"Share candidate resume and application with all interviewers"
]
# Add format-specific notes
formats_used = {round_info["format"] for _, round_info in rounds}
if "virtual" in formats_used:
notes.append("Test video conferencing setup before virtual interviews")
notes.append("Share virtual meeting links with candidate 24 hours in advance")
if "collaborative_whiteboard" in formats_used:
notes.append("Prepare whiteboard or collaborative online tool for design sessions")
if "hands_on_design" in formats_used:
notes.append("Provide design tools access or ensure candidate can screen share their preferred tools")
return notes
def format_human_readable(loop_data: Dict[str, Any]) -> str:
"""Format the interview loop data in a human-readable format."""
output = []
# Header
output.append(f"Interview Loop Design for {loop_data['role']} ({loop_data['level'].title()} Level)")
output.append("=" * 60)
if loop_data.get('team'):
output.append(f"Team: {loop_data['team']}")
output.append(f"Generated: {loop_data['generated_at']}")
output.append(f"Total Duration: {loop_data['total_duration_minutes']} minutes ({loop_data['total_duration_minutes']//60}h {loop_data['total_duration_minutes']%60}m)")
output.append(f"Total Rounds: {loop_data['total_rounds']}")
output.append("")
# Interview Rounds
output.append("INTERVIEW ROUNDS")
output.append("-" * 40)
sorted_rounds = sorted(loop_data['rounds'].items(), key=lambda x: x[1]['order'])
for round_name, round_info in sorted_rounds:
output.append(f"\nRound {round_info['order']}: {round_info['name']}")
output.append(f"Duration: {round_info['duration_minutes']} minutes")
output.append(f"Format: {round_info['format'].replace('_', ' ').title()}")
output.append("Objectives:")
for obj in round_info['objectives']:
output.append(f"{obj}")
output.append("Focus Areas:")
for area in round_info['focus_areas']:
output.append(f"{area.replace('_', ' ').title()}")
# Suggested Schedule
output.append("\nSUGGESTED SCHEDULE")
output.append("-" * 40)
schedule = loop_data['suggested_schedule']
output.append(f"Schedule Type: {schedule['type'].replace('_', ' ').title()}")
for day_name, day_info in schedule['day_structure'].items():
output.append(f"\n{day_name.replace('_', ' ').title()}:")
output.append(f"Time: {day_info['start_time']} - {day_info['end_time']}")
for item in day_info['rounds']:
if item['type'] == 'interview':
output.append(f" {item['start_time']}-{item['end_time']}: {item['title']} ({item['duration_minutes']}min)")
else:
output.append(f" {item['start_time']}-{item['end_time']}: {item['type'].title()} ({item['duration_minutes']}min)")
# Interviewer Requirements
output.append("\nINTERVIEWER REQUIREMENTS")
output.append("-" * 40)
for round_name, requirements in loop_data['interviewer_requirements'].items():
round_display = round_name.split("_", 2)[-1].replace("_", " ").title()
output.append(f"\n{round_display}:")
output.append(f"Required Skills: {', '.join(requirements['required_skills'])}")
output.append(f"Suggested Interviewers: {', '.join(requirements['suggested_interviewers'])}")
output.append(f"Calibration Level: {requirements['calibration_level'].title()}")
# Scorecard Overview
output.append("\nSCORECARD TEMPLATE")
output.append("-" * 40)
scorecard = loop_data['scorecard_template']
output.append("Scoring Scale:")
for score, description in scorecard['scoring_scale'].items():
output.append(f" {score}: {description}")
output.append("\nEvaluation Dimensions:")
for dim in scorecard['dimensions']:
output.append(f"{dim['dimension'].replace('_', ' ').title()} (Weight: {dim['weight']})")
# Calibration Notes
output.append("\nCALIBRATION NOTES")
output.append("-" * 40)
calibration = loop_data['calibration_notes']
output.append(f"Hiring Bar: {calibration['hiring_bar_notes']}")
output.append("\nCommon Pitfalls:")
for pitfall in calibration['common_pitfalls']:
output.append(f"{pitfall}")
return "\n".join(output)
def main():
parser = argparse.ArgumentParser(description="Generate calibrated interview loops for specific roles and levels")
parser.add_argument("--role", type=str, help="Job role title (e.g., 'Senior Software Engineer')")
parser.add_argument("--level", type=str, help="Experience level (junior, mid, senior, staff, principal)")
parser.add_argument("--team", type=str, help="Team or department (optional)")
parser.add_argument("--competencies", type=str, help="Comma-separated list of specific competencies to focus on")
parser.add_argument("--input", type=str, help="Input JSON file with role definition")
parser.add_argument("--output", type=str, help="Output directory or file path")
parser.add_argument("--format", choices=["json", "text", "both"], default="both", help="Output format")
args = parser.parse_args()
designer = InterviewLoopDesigner()
# Handle input
if args.input:
try:
with open(args.input, 'r') as f:
role_data = json.load(f)
role = role_data.get('role') or role_data.get('title', '')
level = role_data.get('level', 'senior')
team = role_data.get('team')
competencies = role_data.get('competencies')
except Exception as e:
print(f"Error reading input file: {e}")
sys.exit(1)
else:
if not args.role or not args.level:
print("Error: --role and --level are required when not using --input")
sys.exit(1)
role = args.role
level = args.level
team = args.team
        competencies = [c.strip() for c in args.competencies.split(',')] if args.competencies else None
# Generate interview loop
try:
loop_data = designer.generate_interview_loop(role, level, team, competencies)
# Handle output
if args.output:
output_path = args.output
if os.path.isdir(output_path):
safe_role = "".join(c for c in role.lower() if c.isalnum() or c in (' ', '-', '_')).replace(' ', '_')
base_filename = f"{safe_role}_{level}_interview_loop"
json_path = os.path.join(output_path, f"{base_filename}.json")
text_path = os.path.join(output_path, f"{base_filename}.txt")
else:
# Use provided path as base
json_path = output_path if output_path.endswith('.json') else f"{output_path}.json"
            text_path = f"{output_path[:-5]}.txt" if output_path.endswith('.json') else f"{output_path}.txt"
else:
safe_role = "".join(c for c in role.lower() if c.isalnum() or c in (' ', '-', '_')).replace(' ', '_')
base_filename = f"{safe_role}_{level}_interview_loop"
json_path = f"{base_filename}.json"
text_path = f"{base_filename}.txt"
# Write outputs
if args.format in ["json", "both"]:
with open(json_path, 'w') as f:
json.dump(loop_data, f, indent=2, default=str)
print(f"JSON output written to: {json_path}")
if args.format in ["text", "both"]:
with open(text_path, 'w') as f:
f.write(format_human_readable(loop_data))
print(f"Text output written to: {text_path}")
# Always print summary to stdout
print("\nInterview Loop Summary:")
print(f"Role: {loop_data['role']} ({loop_data['level'].title()})")
print(f"Total Duration: {loop_data['total_duration_minutes']} minutes")
print(f"Number of Rounds: {loop_data['total_rounds']}")
print(f"Schedule Type: {loop_data['suggested_schedule']['type'].replace('_', ' ').title()}")
except Exception as e:
print(f"Error generating interview loop: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

File diff suppressed because it is too large

View File

@@ -0,0 +1,308 @@
# Interview Bias Mitigation Checklist
This comprehensive checklist helps identify, prevent, and mitigate various forms of bias in the interview process. Use this as a systematic guide to ensure fair and equitable hiring practices.
## Pre-Interview Phase
### Job Description & Requirements
- [ ] **Remove unnecessary requirements** that don't directly relate to job performance
- [ ] **Avoid gendered language** (masculine-coded terms such as "competitive" or "aggressive" vs. feminine-coded terms such as "collaborative" or "detail-oriented")
- [ ] **Remove university prestige requirements** unless absolutely necessary for role
- [ ] **Focus on skills and outcomes** rather than years of experience in specific technologies
- [ ] **Use inclusive language** and avoid cultural assumptions
- [ ] **Specify only essential requirements** vs. nice-to-have qualifications
- [ ] **Remove location/commute assumptions** for remote-eligible positions
- [ ] **Review requirements for unconscious bias** (e.g., assuming continuous work history)
### Sourcing & Pipeline
- [ ] **Diversify sourcing channels** beyond traditional networks
- [ ] **Partner with diverse professional organizations** and communities
- [ ] **Use bias-minimizing sourcing tools** and platforms
- [ ] **Track sourcing effectiveness** by demographic groups
- [ ] **Train recruiters on bias awareness** and inclusive outreach
- [ ] **Review referral patterns** for potential network bias
- [ ] **Expand university partnerships** beyond elite institutions
- [ ] **Use structured outreach messages** to reduce individual bias
### Resume Screening
- [ ] **Implement blind resume review** (remove names, photos, university names initially)
- [ ] **Use standardized screening criteria** applied consistently
- [ ] **Multiple screeners for each resume** with independent scoring
- [ ] **Focus on relevant skills and achievements** over pedigree indicators
- [ ] **Avoid assumptions about career gaps** or non-traditional backgrounds
- [ ] **Consider alternative paths to skills** (bootcamps, self-taught, career changes)
- [ ] **Track screening pass rates** by demographic groups
- [ ] **Regular screener calibration sessions** on bias awareness
## Interview Panel Composition
### Diversity Requirements
- [ ] **Ensure diverse interview panels** (gender, ethnicity, seniority levels)
- [ ] **Include at least one underrepresented interviewer** when possible
- [ ] **Rotate panel assignments** to prevent bias patterns
- [ ] **Balance seniority levels** on panels (not all senior or all junior)
- [ ] **Include cross-functional perspectives** when relevant
- [ ] **Avoid panels of only one demographic group** when possible
- [ ] **Consider panel member unconscious bias training** status
- [ ] **Document panel composition rationale** for future review
### Interviewer Selection
- [ ] **Choose interviewers based on relevant competency assessment ability**
- [ ] **Ensure interviewers have completed bias training** within last 12 months
- [ ] **Select interviewers with consistent calibration history**
- [ ] **Avoid interviewers with known bias patterns** (flagged in previous analyses)
- [ ] **Include at least one interviewer familiar with candidate's background type**
- [ ] **Balance perspectives** (technical depth, cultural fit, growth potential)
- [ ] **Consider interviewer availability for proper preparation time**
- [ ] **Ensure interviewers understand role requirements and standards**
## Interview Process Design
### Question Standardization
- [ ] **Use standardized question sets** for each competency area
- [ ] **Develop questions that assess skills, not culture fit stereotypes**
- [ ] **Avoid questions about personal background** unless directly job-relevant
- [ ] **Remove questions that could reveal protected characteristics**
- [ ] **Focus on behavioral examples** using STAR method
- [ ] **Include scenario-based questions** with clear evaluation criteria
- [ ] **Test questions for potential bias** with diverse interviewers
- [ ] **Regularly update question bank** based on effectiveness data
### Structured Interview Protocol
- [ ] **Define clear time allocations** for each question/section
- [ ] **Establish consistent interview flow** across all candidates
- [ ] **Create standardized intro/outro** processes
- [ ] **Use identical technical setup and tools** for all candidates
- [ ] **Provide same background information** to all interviewers
- [ ] **Standardize note-taking format** and requirements
- [ ] **Define clear handoff procedures** between interviewers
- [ ] **Document any deviations** from standard protocol
### Accommodation Preparation
- [ ] **Proactively offer accommodations** without requiring disclosure
- [ ] **Provide multiple interview format options** (phone, video, in-person)
- [ ] **Ensure accessibility of interview locations and tools**
- [ ] **Allow extended time** when requested or needed
- [ ] **Provide materials in advance** when helpful
- [ ] **Train interviewers on accommodation protocols**
- [ ] **Test all technology** for accessibility compliance
- [ ] **Have backup plans** for technical issues
## During the Interview
### Interviewer Behavior
- [ ] **Use welcoming, professional tone** with all candidates
- [ ] **Avoid assumptions based on appearance or background**
- [ ] **Give equal encouragement and support** to all candidates
- [ ] **Allow equal time for candidate questions**
- [ ] **Avoid leading questions** that suggest desired answers
- [ ] **Listen actively** without interrupting unnecessarily
- [ ] **Take detailed notes** focusing on responses, not impressions
- [ ] **Avoid small talk** that could reveal irrelevant personal information
### Question Delivery
- [ ] **Ask questions as written** without improvisation that could introduce bias
- [ ] **Provide equal clarification** when candidates ask for it
- [ ] **Use consistent follow-up probing** across candidates
- [ ] **Allow reasonable thinking time** before expecting responses
- [ ] **Avoid rephrasing questions** in ways that give hints
- [ ] **Stay focused on defined competencies** being assessed
- [ ] **Give equal encouragement** for elaboration when needed
- [ ] **Maintain professional demeanor** regardless of candidate background
### Real-time Bias Checking
- [ ] **Notice first impressions** but don't let them drive assessment
- [ ] **Question gut reactions** - are they based on competency evidence?
- [ ] **Focus on specific examples** and evidence provided
- [ ] **Avoid pattern matching** to existing successful employees
- [ ] **Notice cultural assumptions** in interpretation of responses
- [ ] **Check for confirmation bias** - seeking evidence to support initial impressions
- [ ] **Consider alternative explanations** for candidate responses
- [ ] **Stay aware of fatigue effects** on judgment throughout the day
## Evaluation & Scoring
### Scoring Consistency
- [ ] **Use defined rubrics consistently** across all candidates
- [ ] **Score immediately after interview** while details are fresh
- [ ] **Focus scoring on demonstrated competencies** not potential or personality
- [ ] **Provide specific evidence** for each score given
- [ ] **Avoid comparative scoring** (comparing candidates to each other)
- [ ] **Use calibrated examples** of each score level
- [ ] **Score independently** before discussing with other interviewers
- [ ] **Document reasoning** for all scores, especially extreme ones (1s and 4s)
### Bias Check Questions
- [ ] **"Would I score this differently if the candidate looked different?"**
- [ ] **"Am I basing this on evidence or assumptions?"**
- [ ] **"Would this response get the same score from a different demographic?"**
- [ ] **"Am I penalizing non-traditional backgrounds or approaches?"**
- [ ] **"Is my scoring consistent with the defined rubric?"**
- [ ] **"Am I letting one strong/weak area bias overall assessment?"**
- [ ] **"Are my cultural assumptions affecting interpretation?"**
- [ ] **"Would I want to work with this person?" (Check if this is biasing assessment)**
### Documentation Requirements
- [ ] **Record specific examples** supporting each competency score
- [ ] **Avoid subjective language** like "seems like," "appears to be"
- [ ] **Focus on observable behaviors** and concrete responses
- [ ] **Note exact quotes** when relevant to assessment
- [ ] **Distinguish between facts and interpretations**
- [ ] **Provide improvement suggestions** that are skill-based, not person-based
- [ ] **Avoid comparative language** to other candidates or employees
- [ ] **Use neutral language** free from cultural assumptions
## Debrief Process
### Structured Discussion
- [ ] **Start with independent score sharing** before discussion
- [ ] **Focus discussion on evidence** not impressions or feelings
- [ ] **Address significant score discrepancies** with evidence review
- [ ] **Challenge biased language** or assumptions in discussion
- [ ] **Ensure all voices are heard** in group decision making
- [ ] **Document reasons for final decision** with specific evidence
- [ ] **Avoid personality-based discussions** ("culture fit" should be evidence-based)
- [ ] **Consider multiple perspectives** on candidate responses
### Decision-Making Process
- [ ] **Use weighted scoring system** based on role requirements
- [ ] **Require minimum scores** in critical competency areas
- [ ] **Avoid veto power** unless based on clear, documented evidence
- [ ] **Consider growth potential** fairly across all candidates
- [ ] **Document dissenting opinions** and reasoning
- [ ] **Use tie-breaking criteria** that are predetermined and fair
- [ ] **Consider additional data collection** if team is split
- [ ] **Make final decision based on role requirements**, not team preferences
### Final Recommendations
- [ ] **Provide specific, actionable feedback** for development areas
- [ ] **Focus recommendations on skills and competencies**
- [ ] **Avoid language that could reflect bias** in written feedback
- [ ] **Consider onboarding needs** based on actual skill gaps, not assumptions
- [ ] **Provide coaching recommendations** that are evidence-based
- [ ] **Avoid personal judgments** about candidate character or personality
- [ ] **Make hiring recommendation** based solely on job-relevant criteria
- [ ] **Document any concerns** with specific, observable evidence
## Post-Interview Monitoring
### Data Collection
- [ ] **Track interviewer scoring patterns** for consistency analysis
- [ ] **Monitor pass rates** by demographic groups (see the sketch after this list)
- [ ] **Collect candidate experience feedback** on interview fairness
- [ ] **Analyze score distributions** for potential bias indicators
- [ ] **Track time-to-decision** across different candidate types
- [ ] **Monitor offer acceptance rates** by demographics
- [ ] **Collect new hire performance data** for process validation
- [ ] **Document any bias incidents** or concerns raised
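For teams that export this data from an ATS, a short pandas pass can surface pass-rate gaps; the file name and column names below are assumptions about that export, not a fixed format:

```python
import pandas as pd

# Assumed export: one row per candidate with stage, demographic_group, passed (0/1).
df = pd.read_csv("interview_outcomes.csv")

# Pass rate and candidate count per demographic group at each stage.
rates = (df.groupby(["stage", "demographic_group"])["passed"]
           .agg(pass_rate="mean", candidates="size")
           .reset_index())

# Flag groups trailing their stage's overall pass rate by more than 10 points.
stage_avg = df.groupby("stage")["passed"].mean().rename("stage_avg")
flagged = rates.join(stage_avg, on="stage")
print(flagged[flagged["pass_rate"] < flagged["stage_avg"] - 0.10])
```

Small sample sizes can make single-quarter gaps noisy, so flagged groups warrant investigation rather than automatic conclusions.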
### Regular Analysis
- [ ] **Conduct quarterly bias audits** of interview data
- [ ] **Review interviewer calibration** and identify outliers
- [ ] **Analyze demographic trends** in hiring outcomes
- [ ] **Compare candidate experience surveys** across groups
- [ ] **Track correlation between interview scores and job performance**
- [ ] **Review and update bias mitigation strategies** based on data
- [ ] **Share findings with interview teams** for continuous improvement
- [ ] **Update training programs** based on identified bias patterns
## Bias Types to Watch For
### Affinity Bias
- **Definition**: Favoring candidates similar to yourself
- **Watch for**: Over-positive response to shared backgrounds, interests, or experiences
- **Mitigation**: Focus on job-relevant competencies, diversify interview panels
### Halo/Horn Effect
- **Definition**: One positive/negative trait influencing overall assessment
- **Watch for**: Strong performance in one area affecting scores in unrelated areas
- **Mitigation**: Score each competency independently, use structured evaluation
### Confirmation Bias
- **Definition**: Seeking information that confirms initial impressions
- **Watch for**: Asking follow-ups that lead candidate toward expected responses
- **Mitigation**: Use standardized questions, consider alternative interpretations
### Attribution Bias
- **Definition**: Attributing success/failure to different causes based on candidate demographics
- **Watch for**: Assuming women are "lucky" while men are "skilled" for the same achievements
- **Mitigation**: Focus on candidate's role in achievements, avoid assumptions
### Cultural Bias
- **Definition**: Judging candidates based on cultural differences rather than job performance
- **Watch for**: Penalizing communication styles, work approaches, or values that differ from the team norm
- **Mitigation**: Define job-relevant criteria clearly, consider diverse perspectives valuable
### Educational Bias
- **Definition**: Over-weighting prestigious educational credentials
- **Watch for**: Assuming higher capability based on school rank rather than demonstrated skills
- **Mitigation**: Focus on skills demonstration, consider alternative learning paths
### Experience Bias
- **Definition**: Requiring specific company or industry experience unnecessarily
- **Watch for**: Discounting transferable skills from different industries or company sizes
- **Mitigation**: Define core skills needed, assess adaptability and learning ability
## Emergency Bias Response Protocol
### During Interview
1. **Pause the interview** if significant bias is observed
2. **Privately address** bias with interviewer if possible
3. **Document the incident** for review
4. **Continue with fair assessment** of candidate
5. **Flag for debrief discussion** if interview continues
### Post-Interview
1. **Report bias incidents** to hiring manager/HR immediately
2. **Document specific behaviors** observed
3. **Consider additional interviewer** for second opinion
4. **Review candidate assessment** for bias impact
5. **Implement corrective actions** for future interviews
### Interviewer Coaching
1. **Provide immediate feedback** on bias observed
2. **Schedule bias training refresher** if needed
3. **Monitor future interviews** for improvement
4. **Consider removing from interview rotation** if bias persists
5. **Document coaching provided** for performance management
## Legal Compliance Reminders
### Protected Characteristics
- Age, race, color, religion, sex, national origin, disability status, veteran status
- Pregnancy, genetic information, sexual orientation, gender identity
- Any other characteristics protected by local/state/federal law
### Prohibited Questions
- Questions about family planning, marital status, pregnancy
- Age-related questions (unless age is a bona fide occupational qualification, BFOQ)
- Religious or political affiliations
- Disability status (unless voluntary disclosure for accommodation)
- Arrest records (without conviction relevance)
- Financial status or credit (unless job-relevant)
### Documentation Requirements
- Keep all interview materials for required retention period
- Ensure consistent documentation across all candidates
- Avoid documenting protected characteristic observations
- Focus documentation on job-relevant observations only
## Training & Certification
### Required Training Topics
- Unconscious bias awareness and mitigation
- Structured interviewing techniques
- Legal compliance in hiring
- Company-specific bias mitigation protocols
- Role-specific competency assessment
- Accommodation and accessibility requirements
### Ongoing Development
- Annual bias training refresher
- Quarterly calibration sessions
- Regular updates on legal requirements
- Peer feedback and coaching
- Industry best practice updates
- Data-driven process improvements
This checklist should be reviewed and updated regularly based on legal requirements, industry best practices, and internal bias analysis results.

View File

@@ -0,0 +1,171 @@
# Competency Matrix Templates
This document provides comprehensive competency matrix templates for different engineering roles and levels. Use these matrices to design role-specific interview loops and evaluation criteria.
## Software Engineering Competency Matrix
### Technical Competencies
| Competency | Junior (L1-L2) | Mid (L3-L4) | Senior (L5-L6) | Staff+ (L7+) |
|------------|----------------|-------------|----------------|--------------|
| **Coding & Algorithms** | Basic data structures, simple algorithms, language syntax | Advanced algorithms, complexity analysis, optimization | Complex problem solving, algorithm design, performance tuning | Architecture-level algorithmic decisions, novel approach design |
| **System Design** | Component interactions, basic scalability concepts | Service design, database modeling, API design | Distributed systems, scalability patterns, trade-off analysis | Large-scale architecture, cross-system design, technology strategy |
| **Code Quality** | Readable code, basic testing, follows conventions | Maintainable code, comprehensive testing, design patterns | Code reviews, quality standards, refactoring leadership | Engineering standards, quality culture, technical debt management |
| **Debugging & Problem Solving** | Basic debugging, structured problem approach | Complex debugging, root cause analysis, performance issues | System-wide debugging, production issues, incident response | Cross-system troubleshooting, preventive measures, tooling design |
| **Domain Knowledge** | Learning role-specific technologies | Proficiency in domain tools/frameworks | Deep domain expertise, technology evaluation | Domain leadership, technology roadmap, innovation |
### Behavioral Competencies
| Competency | Junior (L1-L2) | Mid (L3-L4) | Senior (L5-L6) | Staff+ (L7+) |
|------------|----------------|-------------|----------------|--------------|
| **Communication** | Clear status updates, asks good questions | Technical explanations, stakeholder updates | Cross-functional communication, technical writing | Executive communication, external representation, thought leadership |
| **Collaboration** | Team participation, code reviews | Cross-team projects, knowledge sharing | Team leadership, conflict resolution | Cross-org collaboration, culture building, strategic partnerships |
| **Leadership & Influence** | Peer mentoring, positive attitude | Junior mentoring, project ownership | Team guidance, technical decisions, hiring | Org-wide influence, vision setting, culture change |
| **Growth & Learning** | Skill development, feedback receptivity | Proactive learning, teaching others | Continuous improvement, trend awareness | Learning culture, industry leadership, innovation adoption |
| **Ownership & Initiative** | Task completion, quality focus | Project ownership, process improvement | Feature/service ownership, strategic thinking | Product/platform ownership, business impact, market influence |
## Product Management Competency Matrix
### Product Competencies
| Competency | Associate PM (L1-L2) | PM (L3-L4) | Senior PM (L5-L6) | Principal PM (L7+) |
|------------|---------------------|------------|-------------------|-------------------|
| **Product Strategy** | Feature requirements, user stories | Product roadmaps, market analysis | Business strategy, competitive positioning | Portfolio strategy, market creation, platform vision |
| **User Research & Analytics** | Basic user interviews, metrics tracking | Research design, data interpretation | Research strategy, advanced analytics | Research culture, measurement frameworks, insight generation |
| **Technical Understanding** | Basic tech concepts, API awareness | System architecture, technical trade-offs | Technical strategy, platform decisions | Technology vision, architectural influence, innovation leadership |
| **Execution & Process** | Feature delivery, stakeholder coordination | Project management, cross-functional leadership | Process optimization, team scaling | Operational excellence, org design, strategic execution |
| **Business Acumen** | Revenue awareness, customer understanding | P&L understanding, business case development | Business strategy, market dynamics | Corporate strategy, board communication, investor relations |
### Leadership Competencies
| Competency | Associate PM (L1-L2) | PM (L3-L4) | Senior PM (L5-L6) | Principal PM (L7+) |
|------------|---------------------|------------|-------------------|-------------------|
| **Stakeholder Management** | Team collaboration, clear communication | Cross-functional alignment, expectation management | Executive communication, influence without authority | Board interaction, external partnerships, industry influence |
| **Team Development** | Peer learning, feedback sharing | Junior mentoring, knowledge transfer | Team building, hiring, performance management | Talent development, culture building, org leadership |
| **Decision Making** | Data-driven decisions, priority setting | Complex trade-offs, strategic choices | Ambiguous situations, high-stakes decisions | Strategic vision, transformational decisions, risk management |
| **Innovation & Vision** | Creative problem solving, user empathy | Market opportunity identification, feature innovation | Product vision, market strategy | Industry vision, disruptive thinking, platform creation |
## Design Competency Matrix
### Design Competencies
| Competency | Junior Designer (L1-L2) | Mid Designer (L3-L4) | Senior Designer (L5-L6) | Principal Designer (L7+) |
|------------|-------------------------|---------------------|-------------------------|-------------------------|
| **Visual Design** | UI components, typography, color theory | Design systems, visual hierarchy | Brand integration, advanced layouts | Visual strategy, brand evolution, design innovation |
| **User Experience** | User flows, wireframing, prototyping | Interaction design, usability testing | Experience strategy, journey mapping | UX vision, service design, behavioral insights |
| **Research & Validation** | User interviews, usability tests | Research planning, data synthesis | Research strategy, methodology design | Research culture, insight frameworks, market research |
| **Design Systems** | Component usage, style guides | System contribution, pattern creation | System architecture, governance | System strategy, scalable design, platform thinking |
| **Tools & Craft** | Design software proficiency, asset creation | Advanced techniques, workflow optimization | Tool evaluation, process design | Technology integration, future tooling, craft evolution |
### Collaboration Competencies
| Competency | Junior Designer (L1-L2) | Mid Designer (L3-L4) | Senior Designer (L5-L6) | Principal Designer (L7+) |
|------------|-------------------------|---------------------|-------------------------|-------------------------|
| **Cross-functional Partnership** | Engineering collaboration, handoff quality | Product partnership, stakeholder alignment | Leadership collaboration, strategic alignment | Executive partnership, business strategy integration |
| **Communication & Advocacy** | Design rationale, feedback integration | Design presentations, user advocacy | Executive communication, design thinking evangelism | Industry thought leadership, external representation |
| **Mentorship & Growth** | Peer learning, skill sharing | Junior mentoring, critique facilitation | Team development, hiring, career guidance | Design culture, talent strategy, industry leadership |
| **Business Impact** | User-centered thinking, design quality | Feature success, user satisfaction | Business metrics, strategic impact | Market influence, competitive advantage, innovation leadership |
## Data Science Competency Matrix
### Technical Competencies
| Competency | Junior DS (L1-L2) | Mid DS (L3-L4) | Senior DS (L5-L6) | Principal DS (L7+) |
|------------|-------------------|----------------|-------------------|-------------------|
| **Statistical Analysis** | Descriptive stats, hypothesis testing | Advanced statistics, experimental design | Causal inference, advanced modeling | Statistical strategy, methodology innovation |
| **Machine Learning** | Basic ML algorithms, model training | Advanced ML, feature engineering | ML systems, model deployment | ML strategy, AI platform, research direction |
| **Data Engineering** | SQL, basic ETL, data cleaning | Pipeline design, data modeling | Platform architecture, scalable systems | Data strategy, infrastructure vision, governance |
| **Programming & Tools** | Python/R proficiency, visualization | Advanced programming, tool integration | Software engineering, system design | Technology strategy, platform development, innovation |
| **Domain Expertise** | Business understanding, metric interpretation | Domain modeling, insight generation | Strategic analysis, business integration | Market expertise, competitive intelligence, thought leadership |
### Impact & Leadership Competencies
| Competency | Junior DS (L1-L2) | Mid DS (L3-L4) | Senior DS (L5-L6) | Principal DS (L7+) |
|------------|-------------------|----------------|-------------------|-------------------|
| **Business Impact** | Metric improvement, insight delivery | Project leadership, business case development | Strategic initiatives, P&L impact | Business transformation, market advantage, innovation |
| **Communication** | Technical reporting, visualization | Stakeholder presentations, executive briefings | Board communication, external representation | Industry leadership, thought leadership, market influence |
| **Team Leadership** | Peer collaboration, knowledge sharing | Junior mentoring, project management | Team building, hiring, culture development | Organizational leadership, talent strategy, vision setting |
| **Innovation & Research** | Algorithm implementation, experimentation | Research projects, publication | Research strategy, academic partnerships | Research vision, industry influence, breakthrough innovation |
## DevOps Engineering Competency Matrix
### Technical Competencies
| Competency | Junior DevOps (L1-L2) | Mid DevOps (L3-L4) | Senior DevOps (L5-L6) | Principal DevOps (L7+) |
|------------|----------------------|-------------------|----------------------|----------------------|
| **Infrastructure** | Basic cloud services, server management | Infrastructure automation, containerization | Platform architecture, multi-cloud strategy | Infrastructure vision, emerging technologies, industry standards |
| **CI/CD & Automation** | Pipeline basics, script writing | Advanced pipelines, deployment automation | Platform design, workflow optimization | Automation strategy, developer experience, productivity platforms |
| **Monitoring & Observability** | Basic monitoring, log analysis | Advanced monitoring, alerting systems | Observability strategy, SLA/SLI design | Monitoring vision, reliability engineering, performance culture |
| **Security & Compliance** | Security basics, access management | Security automation, compliance frameworks | Security architecture, risk management | Security strategy, governance, industry leadership |
| **Performance & Scalability** | Performance monitoring, basic optimization | Capacity planning, performance tuning | Scalability architecture, cost optimization | Performance strategy, efficiency platforms, innovation |
### Leadership & Impact Competencies
| Competency | Junior DevOps (L1-L2) | Mid DevOps (L3-L4) | Senior DevOps (L5-L6) | Principal DevOps (L7+) |
|------------|----------------------|-------------------|----------------------|----------------------|
| **Developer Experience** | Tool support, documentation | Platform development, self-service tools | Developer productivity, workflow design | Developer platform vision, industry best practices |
| **Incident Management** | Incident response, troubleshooting | Incident coordination, root cause analysis | Incident strategy, prevention systems | Reliability culture, organizational resilience |
| **Team Collaboration** | Cross-team support, knowledge sharing | Process improvement, training delivery | Culture building, practice evangelism | Organizational transformation, industry influence |
| **Strategic Impact** | Operational excellence, cost awareness | Efficiency improvements, platform adoption | Strategic initiatives, business enablement | Technology strategy, competitive advantage, market leadership |
## Engineering Management Competency Matrix
### People Leadership Competencies
| Competency | Manager (L1-L2) | Senior Manager (L3-L4) | Director (L5-L6) | VP+ (L7+) |
|------------|-----------------|------------------------|------------------|----------|
| **Team Building** | Hiring, onboarding, 1:1s | Team culture, performance management | Multi-team coordination, org design | Organizational culture, talent strategy |
| **Performance Management** | Individual development, feedback | Performance systems, coaching | Calibration across teams, promotion standards | Talent development, succession planning |
| **Communication** | Team updates, stakeholder management | Executive communication, cross-functional alignment | Board updates, external communication | Industry representation, thought leadership |
| **Conflict Resolution** | Team conflicts, process improvements | Cross-team issues, organizational friction | Strategic alignment, cultural challenges | Corporate-level conflicts, crisis management |
### Technical Leadership Competencies
| Competency | Manager (L1-L2) | Senior Manager (L3-L4) | Director (L5-L6) | VP+ (L7+) |
|------------|-----------------|------------------------|------------------|----------|
| **Technical Vision** | Team technical decisions, architecture input | Platform strategy, technology choices | Technical roadmap, innovation strategy | Technology vision, industry standards |
| **System Ownership** | Feature/service ownership, quality standards | Platform ownership, scalability planning | System portfolio, technical debt management | Technology strategy, competitive advantage |
| **Process & Practice** | Team processes, development practices | Engineering standards, quality systems | Process innovation, best practices | Engineering culture, industry influence |
| **Technology Strategy** | Tool evaluation, team technology choices | Platform decisions, technical investments | Technology portfolio, strategic architecture | Corporate technology strategy, market leadership |
## Usage Guidelines
### Assessment Approach
1. **Level Calibration**: Use these matrices to calibrate expectations for each level within your organization
2. **Interview Design**: Select competencies most relevant to the specific role and level being hired for
3. **Evaluation Consistency**: Ensure all interviewers understand and apply the same competency standards
4. **Growth Planning**: Use matrices for career development and promotion discussions
### Customization Tips
1. **Industry Adaptation**: Modify competencies based on your industry (fintech, healthcare, etc.)
2. **Company Stage**: Adjust expectations based on startup vs. enterprise environment
3. **Team Needs**: Emphasize competencies most critical for current team challenges
4. **Cultural Fit**: Add company-specific values and cultural competencies
### Common Pitfalls
1. **Unrealistic Expectations**: Don't expect senior-level competencies from junior candidates
2. **One-Size-Fits-All**: Customize competency emphasis based on role requirements
3. **Static Assessment**: Regularly update matrices based on changing business needs
4. **Bias Introduction**: Ensure competencies are measurable and don't introduce unconscious bias
## Matrix Validation Process
### Regular Review Cycle
- **Quarterly**: Review competency relevance and adjust weights
- **Semi-annually**: Update level expectations based on market standards
- **Annually**: Comprehensive review with stakeholder feedback
### Stakeholder Input
- **Hiring Managers**: Validate role-specific competency requirements
- **Current Team Members**: Confirm level expectations match reality
- **Recent Hires**: Gather feedback on assessment accuracy
- **HR Partners**: Ensure legal compliance and bias mitigation
### Continuous Improvement
- **Performance Correlation**: Track new hire performance against competency assessments
- **Market Benchmarking**: Compare standards with industry peers
- **Feedback Integration**: Incorporate interviewer and candidate feedback
- **Bias Monitoring**: Regular analysis of assessment patterns across demographics

View File

@@ -0,0 +1,319 @@
# Interview Debrief Facilitation Guide
This guide provides a comprehensive framework for conducting effective, unbiased interview debriefs that lead to consistent hiring decisions. Use this to facilitate productive discussions that focus on evidence-based evaluation.
## Pre-Debrief Preparation
### Facilitator Responsibilities
- [ ] **Review all interviewer feedback** before the meeting
- [ ] **Identify significant score discrepancies** that need discussion
- [ ] **Prepare discussion agenda** with time allocations
- [ ] **Gather role requirements** and competency framework
- [ ] **Review any flags or special considerations** noted during interviews
- [ ] **Ensure all required materials** are available (scorecards, rubrics, candidate resume)
- [ ] **Set up meeting logistics** (room, video conference, screen sharing)
- [ ] **Send agenda to participants** at least 30 minutes before the meeting
### Required Materials Checklist
- [ ] Candidate resume and application materials
- [ ] Job description and competency requirements
- [ ] Individual interviewer scorecards
- [ ] Scoring rubrics and competency definitions
- [ ] Interview notes and documentation
- [ ] Any technical assessments or work samples
- [ ] Company hiring standards and calibration examples
- [ ] Bias mitigation reminders and prompts
### Participant Preparation Requirements
- [ ] All interviewers must **complete independent scoring** before debrief
- [ ] **Submit written feedback** with specific evidence for each competency
- [ ] **Review scoring rubrics** to ensure consistent interpretation
- [ ] **Prepare specific examples** to support scoring decisions
- [ ] **Flag any concerns or unusual circumstances** that affected assessment
- [ ] **Avoid discussing candidate** with other interviewers before debrief
- [ ] **Come prepared to defend scores** with concrete evidence
- [ ] **Be ready to adjust scores** based on additional evidence shared
## Debrief Meeting Structure
### Opening (5 minutes)
1. **State meeting purpose**: Make hiring decision based on evidence
2. **Review agenda and time limits**: Keep discussion focused and productive
3. **Remind of bias mitigation principles**: Focus on competencies, not personality
4. **Confirm confidentiality**: Discussion stays within hiring team
5. **Establish ground rules**: One person speaks at a time, evidence-based discussion
### Individual Score Sharing (10-15 minutes)
- **Go around the room systematically** - each interviewer shares scores independently
- **No discussion or challenges yet** - just data collection
- **Record scores on shared document** visible to all participants
- **Note any abstentions** or "insufficient data" responses
- **Identify clear patterns** and discrepancies without commentary
- **Flag any scores requiring explanation** (1s or 4s typically need strong evidence)
### Competency-by-Competency Discussion (30-40 minutes)
#### For Each Core Competency:
**1. Present Score Distribution (2 minutes)**
- Display all scores for this competency
- Note range and any outliers
- Identify if consensus exists or discussion needed
**2. Evidence Sharing (5-8 minutes per competency)**
- Start with interviewers who assessed this competency directly
- Share specific examples and observations
- Focus on what candidate said/did, not interpretations
- Allow questions for clarification (not challenges yet)
**3. Discussion and Calibration (3-5 minutes)**
- Address significant discrepancies (>1 point difference)
- Challenge vague or potentially biased language
- Seek additional evidence if needed
- Allow score adjustments based on new information
- Reach consensus or note dissenting views
#### Structured Discussion Questions:
- **"What specific evidence supports this score?"**
- **"Can you provide the exact example or quote?"**
- **"How does this compare to our rubric definition?"**
- **"Would this response receive the same score regardless of who gave it?"**
- **"Are we evaluating the competency or making assumptions?"**
- **"What would need to change for this to be the next level up/down?"**
### Overall Recommendation Discussion (10-15 minutes)
#### Weighted Score Calculation
1. **Apply competency weights** based on role requirements
2. **Calculate overall weighted average** (a minimal sketch follows this list)
3. **Check minimum threshold requirements**
4. **Consider any veto criteria** (critical competency failures)
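To make the arithmetic concrete, here is a minimal Python sketch of the weighted-average and veto check described above; the weight values, thresholds, and dimension names are illustrative assumptions, not part of the shipped scripts:

```python
# Illustrative weight values and thresholds -- calibrate these to your own rubric.
WEIGHT_VALUES = {"high": 3, "medium": 2, "low": 1}

def weighted_recommendation(scores, dimensions, minimums=None):
    """scores: {dimension: 1-4 rating}; dimensions: scorecard dimension dicts."""
    total = weight_sum = 0
    for dim in dimensions:
        name = dim["dimension"]
        if name not in scores:
            continue  # abstention / insufficient data
        weight = WEIGHT_VALUES[dim["weight"]]
        total += weight * scores[name]
        weight_sum += weight
    average = total / weight_sum if weight_sum else 0.0
    # Veto criteria: a critical competency below its floor forces "No Hire".
    for name, floor in (minimums or {}).items():
        if scores.get(name, 0) < floor:
            return average, "No Hire"
    if average >= 3.5:
        return average, "Strong Hire"
    return average, ("Hire" if average >= 3.0 else "No Hire")

avg, rec = weighted_recommendation(
    {"system_design": 3, "communication": 4, "coding_algorithms": 2},
    [{"dimension": "system_design", "weight": "high"},
     {"dimension": "communication", "weight": "high"},
     {"dimension": "coding_algorithms", "weight": "medium"}],
    minimums={"coding_algorithms": 2},
)
print(f"weighted average {avg:.2f} -> {rec}")  # weighted average 3.12 -> Hire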
#### Final Recommendation Options
- **Strong Hire**: Exceeds requirements in most areas, clear value-add
- **Hire**: Meets requirements with growth potential
- **No Hire**: Doesn't meet minimum requirements for success
- **Strong No Hire**: Significant gaps that would impact team/company
#### Decision Rationale Documentation
- **Summarize key strengths** with specific evidence
- **Identify development areas** with specific examples
- **Explain final recommendation** with competency-based reasoning
- **Note any dissenting opinions** and reasoning
- **Document onboarding considerations** if hiring
### Closing and Next Steps (5 minutes)
- **Confirm final decision** and documentation
- **Assign follow-up actions** (feedback delivery, offer preparation, etc.)
- **Schedule any additional interviews** if needed
- **Review timeline** for candidate communication
- **Remind confidentiality** of discussion and decision
## Facilitation Best Practices
### Creating Psychological Safety
- **Encourage honest feedback** without fear of judgment
- **Validate different perspectives** and assessment approaches
- **Address power dynamics** - ensure junior voices are heard
- **Model vulnerability** - admit when evidence changes your mind
- **Focus on learning** and calibration, not winning arguments
- **Thank participants** for thorough preparation and thoughtful input
### Managing Difficult Conversations
#### When Scores Vary Significantly
1. **Acknowledge the discrepancy** without judgment
2. **Ask for specific evidence** from each scorer
3. **Look for different interpretations** of the same data
4. **Consider if different questions** revealed different competency levels
5. **Check for bias patterns** in reasoning
6. **Allow time for reflection** and potential score adjustments
#### When Someone Uses Biased Language
1. **Pause the conversation** gently but firmly
2. **Ask for specific evidence** behind the assessment
3. **Reframe in competency terms** - "What specific skills did this demonstrate?"
4. **Challenge assumptions** - "Help me understand how we know that"
5. **Redirect to rubric** - "How does this align with our scoring criteria?"
6. **Document and follow up** privately if bias persists
#### When the Discussion Gets Off Track
- **Redirect to competencies**: "Let's focus on the technical skills demonstrated"
- **Ask for evidence**: "What specific example supports that assessment?"
- **Reference rubrics**: "How does this align with our level 3 definition?"
- **Manage time**: "We have 5 minutes left on this competency"
- **Table unrelated issues**: "That's important but separate from this hire decision"
### Encouraging Evidence-Based Discussion
#### Good Evidence Examples
- **Direct quotes**: "When asked about debugging, they said..."
- **Specific behaviors**: "They organized their approach by first..."
- **Observable outcomes**: "Their code compiled on first run and handled edge cases"
- **Process descriptions**: "They walked through their problem-solving step by step"
- **Measurable results**: "They identified 3 optimization opportunities"
#### Poor Evidence Examples
- **Gut feelings**: "They just seemed off"
- **Comparisons**: "Not as strong as our last hire"
- **Assumptions**: "Probably wouldn't fit our culture"
- **Vague impressions**: "Didn't seem passionate"
- **Irrelevant factors**: "Their background is different from ours"
### Managing Group Dynamics
#### Ensuring Equal Participation
- **Direct questions** to quieter participants
- **Prevent interrupting** and ensure everyone finishes thoughts
- **Balance speaking time** across all interviewers
- **Validate minority opinions** even if not adopted
- **Check for unheard perspectives** before finalizing decisions
#### Handling Strong Personalities
- **Set time limits** for individual speaking
- **Redirect monopolizers**: "Let's hear from others on this"
- **Challenge confidently stated opinions** that lack evidence
- **Support less assertive voices** in expressing dissenting views
- **Focus on data**, not personality or seniority in decision making
## Bias Interruption Strategies
### Affinity Bias Interruption
- **Notice pattern**: Positive assessment seems based on shared background/interests
- **Interrupt with**: "Let's focus on the job-relevant skills they demonstrated"
- **Redirect to**: Specific competency evidence and measurable outcomes
- **Document**: Note if personal connection affected professional assessment
### Halo/Horn Effect Interruption
- **Notice pattern**: One area strongly influencing assessment of unrelated areas
- **Interrupt with**: "Let's score each competency independently"
- **Redirect to**: Specific evidence for each individual competency area
- **Recalibrate**: Ask for separate examples supporting each score
### Confirmation Bias Interruption
- **Notice pattern**: Only seeking/discussing evidence that supports initial impression
- **Interrupt with**: "What evidence might suggest a different assessment?"
- **Redirect to**: Consider alternative interpretations of the same data
- **Challenge**: "How might we be wrong about this assessment?"
### Attribution Bias Interruption
- **Notice pattern**: Attributing success to luck/help for some demographics, skill for others
- **Interrupt with**: "What role did the candidate play in achieving this outcome?"
- **Redirect to**: Candidate's specific contributions and decision-making
- **Standardize**: Apply same attribution standards across all candidates
## Decision Documentation Framework
### Required Documentation Elements
1. **Final scores** for each assessed competency
2. **Overall recommendation** with supporting rationale
3. **Key strengths** with specific evidence
4. **Development areas** with specific examples
5. **Dissenting opinions** if any, with reasoning
6. **Special considerations** or accommodation needs
7. **Next steps** and timeline for decision communication
### Evidence Quality Standards
- **Specific and observable**: What exactly did the candidate do or say?
- **Job-relevant**: How does this relate to success in the role?
- **Measurable**: Can this be quantified or clearly described?
- **Unbiased**: Would this evidence be interpreted the same way regardless of candidate demographics?
- **Complete**: Does this represent the full picture of their performance in this area?
### Writing Guidelines
- **Use active voice** and specific language
- **Avoid assumptions** about motivations or personality
- **Focus on behaviors** demonstrated during the interview
- **Provide context** for any unusual circumstances
- **Be constructive** in describing development areas
- **Maintain professionalism** and respect for candidate
## Common Debrief Challenges and Solutions
### Challenge: "I just don't think they'd fit our culture"
**Solution**:
- Ask for specific, observable evidence
- Define what "culture fit" means in job-relevant terms
- Challenge assumptions about cultural requirements
- Focus on ability to collaborate and contribute effectively
### Challenge: Scores vary widely with no clear explanation
**Solution**:
- Review if different interviewers assessed different competencies
- Look for question differences that might explain variance
- Consider if candidate performance varied across interviews
- May need additional data gathering or interview
### Challenge: Everyone loved/hated the candidate but can't articulate why
**Solution**:
- Push for specific evidence supporting emotional reactions
- Review competency rubrics together
- Look for halo/horn effects influencing overall impression
- Consider unconscious bias training for team
### Challenge: Technical vs. non-technical interviewers disagree
**Solution**:
- Clarify which competencies each interviewer was assessing
- Ensure technical assessments carry appropriate weight
- Look for different perspectives on same evidence
- Consider specialist input for technical decisions
### Challenge: Senior interviewer dominates decision making
**Solution**:
- Structure discussion to hear from all levels first
- Ask direct questions to junior interviewers
- Challenge opinions that lack supporting evidence
- Remember that assessment ability doesn't correlate with seniority
### Challenge: Team wants to hire but scores don't support it
**Solution**:
- Review if rubrics match actual job requirements
- Check for consistent application of scoring standards
- Consider if additional competencies need assessment
- May indicate need for rubric calibration or role requirement review
## Post-Debrief Actions
### Immediate Actions (Same Day)
- [ ] **Finalize decision documentation** with all evidence
- [ ] **Communicate decision** to recruiting team
- [ ] **Schedule candidate feedback** delivery if applicable
- [ ] **Update interview scheduling** based on decision
- [ ] **Note any process improvements** needed for future
### Follow-up Actions (Within 1 Week)
- [ ] **Deliver candidate feedback** (internal or external)
- [ ] **Update interview feedback** in tracking system
- [ ] **Schedule any additional interviews** if needed
- [ ] **Begin offer process** if hiring
- [ ] **Document lessons learned** for process improvement
### Long-term Actions (Monthly/Quarterly)
- [ ] **Analyze debrief effectiveness** and decision quality
- [ ] **Review interviewer calibration** based on decisions
- [ ] **Update rubrics** based on debrief insights
- [ ] **Provide additional training** if bias patterns identified
- [ ] **Share successful practices** with other hiring teams
## Continuous Improvement Framework
### Debrief Effectiveness Metrics
- **Decision consistency**: Are similar candidates receiving similar decisions?
- **Time to decision**: Are debriefs completing within planned time?
- **Participation quality**: Are all interviewers contributing evidence-based input?
- **Bias incidents**: How often are bias interruptions needed?
- **Decision satisfaction**: Do participants feel good about the process and outcome?
### Regular Review Process
- **Monthly**: Review debrief facilitation effectiveness and interviewer feedback
- **Quarterly**: Analyze decision patterns and potential bias indicators
- **Semi-annually**: Update debrief processes based on hiring outcome data
- **Annually**: Comprehensive review of debrief framework and training needs
### Training and Calibration
- **New facilitators**: Shadow 3-5 debriefs before leading independently
- **All facilitators**: Quarterly calibration sessions on bias interruption
- **Interviewer training**: Include debrief participation expectations
- **Leadership training**: Ensure hiring managers can facilitate effectively
This guide should be adapted to your organization's specific needs while maintaining focus on evidence-based, unbiased decision making.

View File

@@ -0,0 +1,382 @@
# Migration Architect
**Tier:** POWERFUL
**Category:** Engineering - Migration Strategy
**Purpose:** Zero-downtime migration planning, compatibility validation, and rollback strategy generation
## Overview
The Migration Architect skill provides comprehensive tools and methodologies for planning, executing, and validating complex system migrations with minimal business impact. This skill combines proven migration patterns with automated planning tools to ensure successful transitions between systems, databases, and infrastructure.
## Components
### Core Scripts
1. **migration_planner.py** - Automated migration plan generation
2. **compatibility_checker.py** - Schema and API compatibility analysis
3. **rollback_generator.py** - Comprehensive rollback procedure generation
### Reference Documentation
- **migration_patterns_catalog.md** - Detailed catalog of proven migration patterns
- **zero_downtime_techniques.md** - Comprehensive zero-downtime migration techniques
- **data_reconciliation_strategies.md** - Advanced data consistency and reconciliation strategies
### Sample Assets
- **sample_database_migration.json** - Example database migration specification
- **sample_service_migration.json** - Example service migration specification
- **database_schema_before.json** - Sample "before" database schema
- **database_schema_after.json** - Sample "after" database schema
## Quick Start
### 1. Generate a Migration Plan
```bash
python3 scripts/migration_planner.py \
--input assets/sample_database_migration.json \
--output migration_plan.json \
--format both
```
**Input:** Migration specification with source, target, constraints, and requirements
**Output:** Detailed phased migration plan with risk assessment, timeline, and validation gates
### 2. Check Compatibility
```bash
python3 scripts/compatibility_checker.py \
--before assets/database_schema_before.json \
--after assets/database_schema_after.json \
--type database \
--output compatibility_report.json \
--format both
```
**Input:** Before and after schema definitions
**Output:** Compatibility report with breaking changes, migration scripts, and recommendations
### 3. Generate Rollback Procedures
```bash
python3 scripts/rollback_generator.py \
--input migration_plan.json \
--output rollback_runbook.json \
--format both
```
**Input:** Migration plan from step 1
**Output:** Comprehensive rollback runbook with procedures, triggers, and communication templates
## Script Details
### Migration Planner (`migration_planner.py`)
Generates comprehensive migration plans with:
- **Phased approach** with dependencies and validation gates
- **Risk assessment** with mitigation strategies
- **Timeline estimation** based on complexity and constraints
- **Rollback triggers** and success criteria
- **Stakeholder communication** templates
**Usage:**
```bash
python3 scripts/migration_planner.py [OPTIONS]
Options:
--input, -i Input migration specification file (JSON) [required]
--output, -o Output file for migration plan (JSON)
--format, -f Output format: json, text, both (default: both)
--validate Validate migration specification only
```
**Input Format:**
```json
{
"type": "database|service|infrastructure",
"pattern": "schema_change|strangler_fig|blue_green",
"source": "Source system description",
"target": "Target system description",
"constraints": {
"max_downtime_minutes": 30,
"data_volume_gb": 2500,
"dependencies": ["service1", "service2"],
"compliance_requirements": ["GDPR", "SOX"]
}
}
```
### Compatibility Checker (`compatibility_checker.py`)
Analyzes compatibility between schema versions:
- **Breaking change detection** (removed fields, type changes, constraint additions)
- **Data migration requirements** identification
- **Suggested migration scripts** generation
- **Risk assessment** for each change
**Usage:**
```bash
python3 scripts/compatibility_checker.py [OPTIONS]
Options:
--before Before schema file (JSON) [required]
--after After schema file (JSON) [required]
--type Schema type: database, api (default: database)
--output, -o Output file for compatibility report (JSON)
--format, -f Output format: json, text, both (default: both)
```
**Exit Codes:**
- `0`: No compatibility issues
- `1`: Potentially breaking changes found
- `2`: Breaking changes found
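These exit codes make the checker easy to gate on in automation. A minimal sketch (reusing the invocation from above; the gating policy itself is an assumption, not part of the tool):
```python
import subprocess
import sys

# Run the compatibility checker and branch on its documented exit codes.
result = subprocess.run([
    "python3", "scripts/compatibility_checker.py",
    "--before", "assets/database_schema_before.json",
    "--after", "assets/database_schema_after.json",
    "--type", "database",
])

if result.returncode == 2:
    print("Breaking changes found - blocking merge")
    sys.exit(1)
if result.returncode == 1:
    print("Potentially breaking changes - manual review required")
sys.exit(0)
```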
### Rollback Generator (`rollback_generator.py`)
Creates comprehensive rollback procedures:
- **Phase-by-phase rollback** steps
- **Automated trigger conditions** for rollback
- **Data recovery procedures**
- **Communication templates** for different audiences
- **Validation checklists** for rollback success
**Usage:**
```bash
python3 scripts/rollback_generator.py [OPTIONS]
Options:
--input, -i Input migration plan file (JSON) [required]
--output, -o Output file for rollback runbook (JSON)
--format, -f Output format: json, text, both (default: both)
```
## Migration Patterns Supported
### Database Migrations
- **Expand-Contract Pattern** - Zero-downtime schema evolution
- **Parallel Schema Pattern** - Side-by-side schema migration
- **Event Sourcing Migration** - Event-driven data migration
### Service Migrations
- **Strangler Fig Pattern** - Gradual legacy system replacement
- **Parallel Run Pattern** - Risk mitigation through dual execution
- **Blue-Green Deployment** - Zero-downtime service updates
### Infrastructure Migrations
- **Lift and Shift** - Quick cloud migration with minimal changes
- **Hybrid Cloud Migration** - Gradual cloud adoption
- **Multi-Cloud Migration** - Distribution across multiple providers
## Sample Workflow
### 1. Database Schema Migration
```bash
# Generate migration plan
python3 scripts/migration_planner.py \
--input assets/sample_database_migration.json \
--output db_migration_plan.json
# Check schema compatibility
python3 scripts/compatibility_checker.py \
--before assets/database_schema_before.json \
--after assets/database_schema_after.json \
--type database \
--output schema_compatibility.json
# Generate rollback procedures
python3 scripts/rollback_generator.py \
--input db_migration_plan.json \
--output db_rollback_runbook.json
```
### 2. Service Migration
```bash
# Generate service migration plan
python3 scripts/migration_planner.py \
--input assets/sample_service_migration.json \
--output service_migration_plan.json
# Generate rollback procedures
python3 scripts/rollback_generator.py \
--input service_migration_plan.json \
--output service_rollback_runbook.json
```
## Output Examples
### Migration Plan Structure
```json
{
"migration_id": "abc123def456",
"source_system": "Legacy User Service",
"target_system": "New User Service",
"migration_type": "service",
"complexity": "medium",
"estimated_duration_hours": 72,
"phases": [
{
"name": "preparation",
"description": "Prepare systems and teams for migration",
"duration_hours": 8,
"validation_criteria": ["All backups completed successfully"],
"rollback_triggers": ["Critical system failure"],
"risk_level": "medium"
}
],
"risks": [
{
"category": "technical",
"description": "Service compatibility issues",
"severity": "high",
"mitigation": "Comprehensive integration testing"
}
]
}
```
### Compatibility Report Structure
```json
{
"overall_compatibility": "potentially_incompatible",
"breaking_changes_count": 2,
"potentially_breaking_count": 3,
"issues": [
{
"type": "required_column_added",
"severity": "breaking",
"description": "Required column 'email_verified_at' added",
"suggested_migration": "Add default value initially"
}
],
"migration_scripts": [
{
"script_type": "sql",
"description": "Add email verification columns",
"script_content": "ALTER TABLE users ADD COLUMN email_verified_at TIMESTAMP;",
"rollback_script": "ALTER TABLE users DROP COLUMN email_verified_at;"
}
]
}
```
## Best Practices
### Planning Phase
1. **Start with risk assessment** - Identify failure modes before planning
2. **Design for rollback** - Every step should have a tested rollback procedure
3. **Validate in staging** - Execute full migration in production-like environment
4. **Plan gradual rollout** - Use feature flags and traffic routing
### Execution Phase
1. **Monitor continuously** - Track technical and business metrics
2. **Communicate proactively** - Keep stakeholders informed
3. **Document everything** - Maintain detailed logs for analysis
4. **Stay flexible** - Be prepared to adjust based on real-world performance
### Validation Phase
1. **Automate validation** - Use automated consistency and performance checks
2. **Test business logic** - Validate critical business processes end-to-end
3. **Load test** - Verify performance under expected production load
4. **Security validation** - Ensure security controls function properly
## Integration
### CI/CD Pipeline Integration
```yaml
# Example GitHub Actions workflow
name: Migration Validation
on: [push, pull_request]
jobs:
validate-migration:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Validate Migration Plan
run: |
python3 scripts/migration_planner.py \
--input migration_spec.json \
--validate
- name: Check Compatibility
run: |
python3 scripts/compatibility_checker.py \
--before schema_before.json \
--after schema_after.json \
--type database
```
### Monitoring Integration
The tools generate metrics and alerts that can be integrated with:
- **Prometheus** - For metrics collection
- **Grafana** - For visualization and dashboards
- **PagerDuty** - For incident management
- **Slack** - For team notifications
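As one illustration, a sketch of exposing migration progress to Prometheus via the `prometheus_client` library; the metric names are illustrative assumptions, not metrics the bundled scripts emit:
```python
from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metric names; choose ones that match your dashboards.
MIGRATED_ROWS = Counter("migration_rows_migrated_total",
                        "Rows copied to the target system")
PROGRESS = Gauge("migration_progress_ratio",
                 "Fraction of the migration completed (0-1)")

def report_batch(rows_in_batch, rows_done, total_rows):
    """Call after each migrated batch to update the scrape endpoint."""
    MIGRATED_ROWS.inc(rows_in_batch)
    PROGRESS.set(rows_done / total_rows)

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    report_batch(1000, 1000, 1_500_000)
```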
## Advanced Features
### Machine Learning Integration
- Anomaly detection for data consistency issues
- Predictive analysis for migration success probability
- Automated pattern recognition for migration optimization
### Performance Optimization
- Parallel processing for large-scale migrations
- Incremental reconciliation strategies
- Statistical sampling for validation
### Compliance Support
- GDPR compliance tracking
- SOX audit trail generation
- HIPAA security validation
## Troubleshooting
### Common Issues
**"Migration plan validation failed"**
- Check JSON syntax in migration specification
- Ensure all required fields are present
- Validate constraint values are realistic
**"Compatibility checker reports false positives"**
- Review excluded fields configuration
- Check data type mapping compatibility
- Adjust tolerance settings for numerical comparisons
**"Rollback procedures seem incomplete"**
- Ensure migration plan includes all phases
- Verify database backup locations are specified
- Check that all dependencies are documented
### Getting Help
1. **Review documentation** - Check reference docs for patterns and techniques
2. **Examine sample files** - Use provided assets as templates
3. **Check expected outputs** - Compare your results with sample outputs
4. **Validate inputs** - Ensure input files match expected format
## Contributing
To extend or modify the Migration Architect skill:
1. **Add new patterns** - Extend pattern templates in migration_planner.py
2. **Enhance compatibility checks** - Add new validation rules in compatibility_checker.py
3. **Improve rollback procedures** - Add specialized rollback steps in rollback_generator.py
4. **Update documentation** - Keep reference docs current with new patterns
## License
This skill is part of the claude-skills repository and follows the same license terms.

View File

@@ -0,0 +1,473 @@
# Migration Architect
**Tier:** POWERFUL
**Category:** Engineering - Migration Strategy
**Purpose:** Zero-downtime migration planning, compatibility validation, and rollback strategy generation
## Overview
The Migration Architect skill provides comprehensive tools and methodologies for planning, executing, and validating complex system migrations with minimal business impact. This skill combines proven migration patterns with automated planning tools to ensure successful transitions between systems, databases, and infrastructure.
## Core Capabilities
### 1. Migration Strategy Planning
- **Phased Migration Planning:** Break complex migrations into manageable phases with clear validation gates
- **Risk Assessment:** Identify potential failure points and mitigation strategies before execution
- **Timeline Estimation:** Generate realistic timelines based on migration complexity and resource constraints
- **Stakeholder Communication:** Create communication templates and progress dashboards
### 2. Compatibility Analysis
- **Schema Evolution:** Analyze database schema changes for backward compatibility issues
- **API Versioning:** Detect breaking changes in REST/GraphQL APIs and microservice interfaces
- **Data Type Validation:** Identify data format mismatches and conversion requirements
- **Constraint Analysis:** Validate referential integrity and business rule changes
### 3. Rollback Strategy Generation
- **Automated Rollback Plans:** Generate comprehensive rollback procedures for each migration phase
- **Data Recovery Scripts:** Create point-in-time data restoration procedures
- **Service Rollback:** Plan service version rollbacks with traffic management
- **Validation Checkpoints:** Define success criteria and rollback triggers
## Migration Patterns
### Database Migrations
#### Schema Evolution Patterns
1. **Expand-Contract Pattern**
- **Expand:** Add new columns/tables alongside existing schema
- **Dual Write:** Application writes to both old and new schema
- **Migration:** Backfill historical data to new schema
- **Contract:** Remove old columns/tables after validation
2. **Parallel Schema Pattern**
- Run new schema in parallel with existing schema
- Use feature flags to route traffic between schemas
- Validate data consistency between parallel systems
- Cutover when confidence is high
3. **Event Sourcing Migration**
- Capture all changes as events during migration window
- Apply events to new schema for consistency
- Enable replay capability for rollback scenarios
#### Data Migration Strategies
1. **Bulk Data Migration**
- **Snapshot Approach:** Full data copy during maintenance window
- **Incremental Sync:** Continuous data synchronization with change tracking
- **Stream Processing:** Real-time data transformation pipelines
2. **Dual-Write Pattern** (see the sketch after this list)
- Write to both source and target systems during migration
- Implement compensation patterns for write failures
- Use distributed transactions where consistency is critical
3. **Change Data Capture (CDC)**
- Stream database changes to target system
- Maintain eventual consistency during migration
- Enable zero-downtime migrations for large datasets
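The dual-write pattern is the easiest of these to get subtly wrong. A minimal sketch with a compensation step, assuming `source_db` and `target_db` are placeholder clients exposing a simple `write()` call:
```python
class DualWriter:
    """Write to the system of record first, then mirror to the target."""

    def __init__(self, source_db, target_db):
        self.source_db = source_db  # placeholder client objects
        self.target_db = target_db

    def write(self, record):
        self.source_db.write(record)  # source stays authoritative
        try:
            self.target_db.write(record)
        except Exception:
            # Compensation: do not fail the user-facing write once the
            # source succeeded; replay the record later instead.
            self.queue_for_replay(record)

    def queue_for_replay(self, record):
        # In practice this appends to a durable retry queue.
        print(f"queued for replay: {record!r}")
```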
### Service Migrations
#### Strangler Fig Pattern
1. **Intercept Requests:** Route traffic through proxy/gateway
2. **Gradually Replace:** Implement new service functionality incrementally
3. **Legacy Retirement:** Remove old service components as new ones prove stable
4. **Monitoring:** Track performance and error rates throughout transition
```mermaid
graph TD
A[Client Requests] --> B[API Gateway]
B --> C{Route Decision}
C -->|Legacy Path| D[Legacy Service]
C -->|New Path| E[New Service]
D --> F[Legacy Database]
E --> G[New Database]
```
#### Parallel Run Pattern
1. **Dual Execution:** Run both old and new services simultaneously
2. **Shadow Traffic:** Route production traffic to both systems
3. **Result Comparison:** Compare outputs to validate correctness
4. **Gradual Cutover:** Shift traffic percentage based on confidence
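A sketch of the shadow-traffic step, assuming `legacy_service` and `new_service` are interchangeable callables (both names are placeholders):
```python
def handle_request(request, legacy_service, new_service, mismatch_log):
    """Serve from legacy, shadow the same request to new, log mismatches."""
    legacy_response = legacy_service(request)  # authoritative answer
    try:
        new_response = new_service(request)    # shadow call
        if new_response != legacy_response:
            mismatch_log.append((request, legacy_response, new_response))
    except Exception as exc:
        mismatch_log.append((request, legacy_response, f"error: {exc}"))
    return legacy_response  # users never see the shadow result
```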
#### Canary Deployment Pattern
1. **Limited Rollout:** Deploy new service to small percentage of users
2. **Monitoring:** Track key metrics (latency, errors, business KPIs)
3. **Gradual Increase:** Increase traffic percentage as confidence grows
4. **Full Rollout:** Complete migration once validation passes
### Infrastructure Migrations
#### Cloud-to-Cloud Migration
1. **Assessment Phase**
- Inventory existing resources and dependencies
- Map services to target cloud equivalents
- Identify vendor-specific features requiring refactoring
2. **Pilot Migration**
- Migrate non-critical workloads first
- Validate performance and cost models
- Refine migration procedures
3. **Production Migration**
- Use infrastructure as code for consistency
- Implement cross-cloud networking during transition
- Maintain disaster recovery capabilities
#### On-Premises to Cloud Migration
1. **Lift and Shift**
- Minimal changes to existing applications
- Quick migration with optimization later
- Use cloud migration tools and services
2. **Re-architecture**
- Redesign applications for cloud-native patterns
- Adopt microservices, containers, and serverless
- Implement cloud security and scaling practices
3. **Hybrid Approach**
- Keep sensitive data on-premises
- Migrate compute workloads to cloud
- Implement secure connectivity between environments
## Feature Flags for Migrations
### Progressive Feature Rollout
```python
# Example feature flag implementation
import hashlib

class MigrationFeatureFlag:
    def __init__(self, flag_name, rollout_percentage=0):
        self.flag_name = flag_name
        self.rollout_percentage = rollout_percentage

    def is_enabled_for_user(self, user_id):
        # Use a stable hash so a user's bucket survives restarts;
        # Python's built-in hash() is salted per process.
        key = f"{self.flag_name}:{user_id}".encode("utf-8")
        bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
        return bucket < self.rollout_percentage

    def gradual_rollout(self, target_percentage, step_size=10):
        # Yield each intermediate percentage so callers can pause
        # and validate metrics between rollout steps.
        while self.rollout_percentage < target_percentage:
            self.rollout_percentage = min(
                self.rollout_percentage + step_size,
                target_percentage
            )
            yield self.rollout_percentage
```
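For example, stepping the flag above toward full rollout (a usage sketch; the validation between steps is whatever your monitoring exposes):
```python
flag = MigrationFeatureFlag("enable_new_user_service", rollout_percentage=5)

for pct in flag.gradual_rollout(target_percentage=100, step_size=25):
    print(f"rollout now at {pct}% - validate error rates before continuing")
```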
### Circuit Breaker Pattern
Implement automatic fallback to legacy systems when new systems show degraded performance:
```python
import time

class MigrationCircuitBreaker:
    def __init__(self, new_service, legacy_service,
                 failure_threshold=5, timeout=60):
        self.new_service = new_service        # callable: request -> response
        self.legacy_service = legacy_service  # fallback callable
        self.failure_count = 0
        self.failure_threshold = failure_threshold
        self.timeout = timeout  # seconds before retrying the new service
        self.last_failure_time = None
        self.state = 'CLOSED'  # CLOSED, OPEN, HALF_OPEN

    def call_new_service(self, request):
        if self.state == 'OPEN':
            if self.should_attempt_reset():
                self.state = 'HALF_OPEN'
            else:
                return self.legacy_service(request)
        try:
            response = self.new_service(request)
            self.on_success()
            return response
        except Exception:
            self.on_failure()
            return self.legacy_service(request)

    def should_attempt_reset(self):
        # Allow a trial request once the cool-down window has elapsed.
        return (self.last_failure_time is not None
                and time.time() - self.last_failure_time >= self.timeout)

    def on_success(self):
        self.failure_count = 0
        self.state = 'CLOSED'

    def on_failure(self):
        self.failure_count += 1
        self.last_failure_time = time.time()
        if self.failure_count >= self.failure_threshold:
            self.state = 'OPEN'
```
## Data Validation and Reconciliation
### Validation Strategies
1. **Row Count Validation**
- Compare record counts between source and target
- Account for soft deletes and filtered records
- Implement threshold-based alerting
2. **Checksums and Hashing** (see the sketch after this list)
- Generate checksums for critical data subsets
- Compare hash values to detect data drift
- Use sampling for large datasets
3. **Business Logic Validation**
- Run critical business queries on both systems
- Compare aggregate results (sums, counts, averages)
- Validate derived data and calculations
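A sketch of the checksum approach over a stable row ordering, assuming DB-API connections to both systems; the table and key column names are illustrative:
```python
import hashlib

def table_checksum(conn, table, key_column):
    """Hash all rows in a deterministic order.

    Identifiers must come from trusted config, since table and column
    names cannot be bound as query parameters.
    """
    digest = hashlib.sha256()
    cur = conn.cursor()
    cur.execute(f"SELECT * FROM {table} ORDER BY {key_column}")
    for row in cur:
        digest.update(repr(row).encode("utf-8"))
    return digest.hexdigest()

def validate_table(source_conn, target_conn, table="users", key_column="id"):
    src = table_checksum(source_conn, table, key_column)
    tgt = table_checksum(target_conn, table, key_column)
    if src != tgt:
        raise RuntimeError(f"data drift detected in {table}")
```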
### Reconciliation Patterns
1. **Delta Detection**
```sql
-- Example delta query for reconciliation
SELECT 'missing_in_target' AS issue_type, s.id
FROM source_table s
WHERE NOT EXISTS (
    SELECT 1 FROM target_table t
    WHERE t.id = s.id
)
UNION ALL
SELECT 'extra_in_target' AS issue_type, t.id
FROM target_table t
WHERE NOT EXISTS (
    SELECT 1 FROM source_table s
    WHERE s.id = t.id
);
```
2. **Automated Correction**
- Implement data repair scripts for common issues
- Use idempotent operations for safe re-execution
- Log all correction actions for audit trails
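A sketch of an idempotent repair pass for the `missing_in_target` rows found above, using PostgreSQL's `ON CONFLICT DO NOTHING` so re-runs are safe; the connections, table, and column list are assumptions based on the sample schema:
```python
def repair_missing_rows(source_conn, target_conn, missing_ids):
    """Copy rows present in the source but absent from the target."""
    src = source_conn.cursor()
    tgt = target_conn.cursor()
    for row_id in missing_ids:
        src.execute(
            "SELECT id, username, email FROM users WHERE id = %s",
            (row_id,),
        )
        row = src.fetchone()
        if row is None:
            continue  # row vanished from the source; skip and move on
        tgt.execute(
            "INSERT INTO users (id, username, email) VALUES (%s, %s, %s) "
            "ON CONFLICT (id) DO NOTHING",  # idempotent: safe to re-run
            row,
        )
        print(f"repair: ensured user {row_id} exists in target")  # audit log
    target_conn.commit()
```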
## Rollback Strategies
### Database Rollback
1. **Schema Rollback**
- Maintain schema version control
- Use backward-compatible migrations when possible
- Keep rollback scripts for each migration step
2. **Data Rollback**
- Point-in-time recovery using database backups
- Transaction log replay for precise rollback points
- Maintain data snapshots at migration checkpoints
### Service Rollback
1. **Blue-Green Deployment**
- Keep previous service version running during migration
- Switch traffic back to blue environment if issues arise
- Maintain parallel infrastructure during migration window
2. **Rolling Rollback**
- Gradually shift traffic back to previous version
- Monitor system health during rollback process
- Implement automated rollback triggers
### Infrastructure Rollback
1. **Infrastructure as Code**
- Version control all infrastructure definitions
- Maintain rollback terraform/CloudFormation templates
- Test rollback procedures in staging environments
2. **Data Persistence**
- Preserve data in original location during migration
- Implement data sync back to original systems
- Maintain backup strategies across both environments
## Risk Assessment Framework
### Risk Categories
1. **Technical Risks**
- Data loss or corruption
- Service downtime or degraded performance
- Integration failures with dependent systems
- Scalability issues under production load
2. **Business Risks**
- Revenue impact from service disruption
- Customer experience degradation
- Compliance and regulatory concerns
- Brand reputation impact
3. **Operational Risks**
- Team knowledge gaps
- Insufficient testing coverage
- Inadequate monitoring and alerting
- Communication breakdowns
### Risk Mitigation Strategies
1. **Technical Mitigations**
- Comprehensive testing (unit, integration, load, chaos)
- Gradual rollout with automated rollback triggers
- Data validation and reconciliation processes
- Performance monitoring and alerting
2. **Business Mitigations**
- Stakeholder communication plans
- Business continuity procedures
- Customer notification strategies
- Revenue protection measures
3. **Operational Mitigations**
- Team training and documentation
- Runbook creation and testing
- On-call rotation planning
- Post-migration review processes
## Migration Runbooks
### Pre-Migration Checklist
- [ ] Migration plan reviewed and approved
- [ ] Rollback procedures tested and validated
- [ ] Monitoring and alerting configured
- [ ] Team roles and responsibilities defined
- [ ] Stakeholder communication plan activated
- [ ] Backup and recovery procedures verified
- [ ] Test environment validation complete
- [ ] Performance benchmarks established
- [ ] Security review completed
- [ ] Compliance requirements verified
### During Migration
- [ ] Execute migration phases in planned order
- [ ] Monitor key performance indicators continuously
- [ ] Validate data consistency at each checkpoint
- [ ] Communicate progress to stakeholders
- [ ] Document any deviations from plan
- [ ] Execute rollback if success criteria not met
- [ ] Coordinate with dependent teams
- [ ] Maintain detailed execution logs
### Post-Migration
- [ ] Validate all success criteria met
- [ ] Perform comprehensive system health checks
- [ ] Execute data reconciliation procedures
- [ ] Monitor system performance over 72 hours
- [ ] Update documentation and runbooks
- [ ] Decommission legacy systems (if applicable)
- [ ] Conduct post-migration retrospective
- [ ] Archive migration artifacts
- [ ] Update disaster recovery procedures
## Communication Templates
### Executive Summary Template
```
Migration Status: [IN_PROGRESS | COMPLETED | ROLLED_BACK]
Start Time: [YYYY-MM-DD HH:MM UTC]
Current Phase: [X of Y]
Overall Progress: [X%]
Key Metrics:
- System Availability: [X.XX%]
- Data Migration Progress: [X.XX%]
- Performance Impact: [+/-X%]
- Issues Encountered: [X]
Next Steps:
1. [Action item 1]
2. [Action item 2]
Risk Assessment: [LOW | MEDIUM | HIGH]
Rollback Status: [AVAILABLE | NOT_AVAILABLE]
```
### Technical Team Update Template
```
Phase: [Phase Name] - [Status]
Duration: [Started] - [Expected End]
Completed Tasks:
✓ [Task 1]
✓ [Task 2]
In Progress:
🔄 [Task 3] - [X% complete]
Upcoming:
⏳ [Task 4] - [Expected start time]
Issues:
⚠️ [Issue description] - [Severity] - [ETA resolution]
Metrics:
- Migration Rate: [X records/minute]
- Error Rate: [X.XX%]
- System Load: [CPU/Memory/Disk]
```
## Success Metrics
### Technical Metrics
- **Migration Completion Rate:** Percentage of data/services successfully migrated
- **Downtime Duration:** Total system unavailability during migration
- **Data Consistency Score:** Percentage of data validation checks passing
- **Performance Delta:** Performance change compared to baseline
- **Error Rate:** Percentage of failed operations during migration
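Most of these technical metrics reduce to simple ratios over raw counts; a minimal sketch (the field names are illustrative):
```python
from dataclasses import dataclass

@dataclass
class MigrationStats:
    rows_total: int
    rows_migrated: int
    checks_total: int
    checks_passed: int
    operations_total: int
    operations_failed: int

    @property
    def completion_rate(self) -> float:
        return self.rows_migrated / self.rows_total

    @property
    def consistency_score(self) -> float:
        return self.checks_passed / self.checks_total

    @property
    def error_rate(self) -> float:
        return self.operations_failed / self.operations_total
```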
### Business Metrics
- **Customer Impact Score:** Measure of customer experience degradation
- **Revenue Protection:** Percentage of revenue maintained during migration
- **Time to Value:** Duration from migration start to business value realization
- **Stakeholder Satisfaction:** Post-migration stakeholder feedback scores
### Operational Metrics
- **Plan Adherence:** Percentage of migration executed according to plan
- **Issue Resolution Time:** Average time to resolve migration issues
- **Team Efficiency:** Resource utilization and productivity metrics
- **Knowledge Transfer Score:** Team readiness for post-migration operations
## Tools and Technologies
### Migration Planning Tools
- **migration_planner.py:** Automated migration plan generation
- **compatibility_checker.py:** Schema and API compatibility analysis
- **rollback_generator.py:** Comprehensive rollback procedure generation
### Validation Tools
- Database comparison utilities (schema and data)
- API contract testing frameworks
- Performance benchmarking tools
- Data quality validation pipelines
### Monitoring and Alerting
- Real-time migration progress dashboards
- Automated rollback trigger systems
- Business metric monitoring
- Stakeholder notification systems
## Best Practices
### Planning Phase
1. **Start with Risk Assessment:** Identify all potential failure modes before planning
2. **Design for Rollback:** Every migration step should have a tested rollback procedure
3. **Validate in Staging:** Execute full migration process in production-like environment
4. **Plan for Gradual Rollout:** Use feature flags and traffic routing for controlled migration
### Execution Phase
1. **Monitor Continuously:** Track both technical and business metrics throughout
2. **Communicate Proactively:** Keep all stakeholders informed of progress and issues
3. **Document Everything:** Maintain detailed logs for post-migration analysis
4. **Stay Flexible:** Be prepared to adjust timeline based on real-world performance
### Validation Phase
1. **Automate Validation:** Use automated tools for data consistency and performance checks
2. **Business Logic Testing:** Validate critical business processes end-to-end
3. **Load Testing:** Verify system performance under expected production load
4. **Security Validation:** Ensure security controls function properly in new environment
## Integration with Development Lifecycle
### CI/CD Integration
```yaml
# Example migration pipeline stage
migration_validation:
stage: test
script:
- python scripts/compatibility_checker.py --before=old_schema.json --after=new_schema.json
- python scripts/migration_planner.py --config=migration_config.json --validate
artifacts:
reports:
- compatibility_report.json
- migration_plan.json
```
### Infrastructure as Code
```terraform
# Example Terraform for blue-green infrastructure
resource "aws_instance" "blue_environment" {
count = var.migration_phase == "preparation" ? var.instance_count : 0
# Blue environment configuration
}
resource "aws_instance" "green_environment" {
count = var.migration_phase == "execution" ? var.instance_count : 0
# Green environment configuration
}
```
This Migration Architect skill provides a comprehensive framework for planning, executing, and validating complex system migrations while minimizing business impact and technical risk. The combination of automated tools, proven patterns, and detailed procedures enables organizations to confidently undertake even the most complex migration projects.

View File

@@ -0,0 +1,367 @@
{
"schema_version": "2.0",
"database": "user_management_v2",
"tables": {
"users": {
"columns": {
"id": {
"type": "bigint",
"nullable": false,
"primary_key": true,
"auto_increment": true
},
"username": {
"type": "varchar",
"length": 50,
"nullable": false,
"unique": true
},
"email": {
"type": "varchar",
"length": 320,
"nullable": false,
"unique": true
},
"password_hash": {
"type": "varchar",
"length": 255,
"nullable": false
},
"first_name": {
"type": "varchar",
"length": 100,
"nullable": true
},
"last_name": {
"type": "varchar",
"length": 100,
"nullable": true
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"updated_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
},
"is_active": {
"type": "boolean",
"nullable": false,
"default": true
},
"phone": {
"type": "varchar",
"length": 20,
"nullable": true
},
"email_verified_at": {
"type": "timestamp",
"nullable": true,
"comment": "When email was verified"
},
"phone_verified_at": {
"type": "timestamp",
"nullable": true,
"comment": "When phone was verified"
},
"two_factor_enabled": {
"type": "boolean",
"nullable": false,
"default": false
},
"last_login_at": {
"type": "timestamp",
"nullable": true
}
},
"constraints": {
"primary_key": ["id"],
"unique": [
"username",
"email"
],
"foreign_key": [],
"check": [
"email LIKE '%@%'",
"LENGTH(password_hash) >= 60",
"phone IS NULL OR LENGTH(phone) >= 10"
]
},
"indexes": [
{
"name": "idx_users_email",
"columns": ["email"],
"unique": true
},
{
"name": "idx_users_username",
"columns": ["username"],
"unique": true
},
{
"name": "idx_users_created_at",
"columns": ["created_at"]
},
{
"name": "idx_users_email_verified",
"columns": ["email_verified_at"]
},
{
"name": "idx_users_last_login",
"columns": ["last_login_at"]
}
]
},
"user_profiles": {
"columns": {
"id": {
"type": "bigint",
"nullable": false,
"primary_key": true,
"auto_increment": true
},
"user_id": {
"type": "bigint",
"nullable": false
},
"bio": {
"type": "text",
"nullable": true
},
"avatar_url": {
"type": "varchar",
"length": 500,
"nullable": true
},
"birth_date": {
"type": "date",
"nullable": true
},
"location": {
"type": "varchar",
"length": 100,
"nullable": true
},
"website": {
"type": "varchar",
"length": 255,
"nullable": true
},
"privacy_level": {
"type": "varchar",
"length": 20,
"nullable": false,
"default": "public"
},
"timezone": {
"type": "varchar",
"length": 50,
"nullable": true,
"default": "UTC"
},
"language": {
"type": "varchar",
"length": 10,
"nullable": false,
"default": "en"
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"updated_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
}
},
"constraints": {
"primary_key": ["id"],
"unique": [],
"foreign_key": [
{
"columns": ["user_id"],
"references": "users(id)",
"on_delete": "CASCADE"
}
],
"check": [
"privacy_level IN ('public', 'private', 'friends_only')",
"bio IS NULL OR LENGTH(bio) <= 2000",
"language IN ('en', 'es', 'fr', 'de', 'it', 'pt', 'ru', 'ja', 'ko', 'zh')"
]
},
"indexes": [
{
"name": "idx_user_profiles_user_id",
"columns": ["user_id"],
"unique": true
},
{
"name": "idx_user_profiles_privacy",
"columns": ["privacy_level"]
},
{
"name": "idx_user_profiles_language",
"columns": ["language"]
}
]
},
"user_sessions": {
"columns": {
"id": {
"type": "varchar",
"length": 128,
"nullable": false,
"primary_key": true
},
"user_id": {
"type": "bigint",
"nullable": false
},
"ip_address": {
"type": "varchar",
"length": 45,
"nullable": true
},
"user_agent": {
"type": "text",
"nullable": true
},
"expires_at": {
"type": "timestamp",
"nullable": false
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"last_activity": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
},
"session_type": {
"type": "varchar",
"length": 20,
"nullable": false,
"default": "web"
},
"is_mobile": {
"type": "boolean",
"nullable": false,
"default": false
}
},
"constraints": {
"primary_key": ["id"],
"unique": [],
"foreign_key": [
{
"columns": ["user_id"],
"references": "users(id)",
"on_delete": "CASCADE"
}
],
"check": [
"session_type IN ('web', 'mobile', 'api', 'admin')"
]
},
"indexes": [
{
"name": "idx_user_sessions_user_id",
"columns": ["user_id"]
},
{
"name": "idx_user_sessions_expires",
"columns": ["expires_at"]
},
{
"name": "idx_user_sessions_type",
"columns": ["session_type"]
}
]
},
"user_preferences": {
"columns": {
"id": {
"type": "bigint",
"nullable": false,
"primary_key": true,
"auto_increment": true
},
"user_id": {
"type": "bigint",
"nullable": false
},
"preference_key": {
"type": "varchar",
"length": 100,
"nullable": false
},
"preference_value": {
"type": "json",
"nullable": true
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"updated_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
}
},
"constraints": {
"primary_key": ["id"],
"unique": [
["user_id", "preference_key"]
],
"foreign_key": [
{
"columns": ["user_id"],
"references": "users(id)",
"on_delete": "CASCADE"
}
],
"check": []
},
"indexes": [
{
"name": "idx_user_preferences_user_key",
"columns": ["user_id", "preference_key"],
"unique": true
}
]
}
},
"views": {
"active_users": {
"definition": "SELECT u.id, u.username, u.email, u.first_name, u.last_name, u.email_verified_at, u.last_login_at FROM users u WHERE u.is_active = true",
"columns": ["id", "username", "email", "first_name", "last_name", "email_verified_at", "last_login_at"]
},
"verified_users": {
"definition": "SELECT u.id, u.username, u.email FROM users u WHERE u.is_active = true AND u.email_verified_at IS NOT NULL",
"columns": ["id", "username", "email"]
}
},
"procedures": [
{
"name": "cleanup_expired_sessions",
"parameters": [],
"definition": "DELETE FROM user_sessions WHERE expires_at < NOW()"
},
{
"name": "get_user_with_profile",
"parameters": ["user_id BIGINT"],
"definition": "SELECT u.*, p.bio, p.avatar_url, p.privacy_level FROM users u LEFT JOIN user_profiles p ON u.id = p.user_id WHERE u.id = user_id"
}
]
}

View File

@@ -0,0 +1,243 @@
{
"schema_version": "1.0",
"database": "user_management",
"tables": {
"users": {
"columns": {
"id": {
"type": "bigint",
"nullable": false,
"primary_key": true,
"auto_increment": true
},
"username": {
"type": "varchar",
"length": 50,
"nullable": false,
"unique": true
},
"email": {
"type": "varchar",
"length": 255,
"nullable": false,
"unique": true
},
"password_hash": {
"type": "varchar",
"length": 255,
"nullable": false
},
"first_name": {
"type": "varchar",
"length": 100,
"nullable": true
},
"last_name": {
"type": "varchar",
"length": 100,
"nullable": true
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"updated_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
},
"is_active": {
"type": "boolean",
"nullable": false,
"default": true
},
"phone": {
"type": "varchar",
"length": 20,
"nullable": true
}
},
"constraints": {
"primary_key": ["id"],
"unique": [
"username",
"email"
],
"foreign_key": [],
"check": [
"email LIKE '%@%'",
"LENGTH(password_hash) >= 60"
]
},
"indexes": [
{
"name": "idx_users_email",
"columns": ["email"],
"unique": true
},
{
"name": "idx_users_username",
"columns": ["username"],
"unique": true
},
{
"name": "idx_users_created_at",
"columns": ["created_at"]
}
]
},
"user_profiles": {
"columns": {
"id": {
"type": "bigint",
"nullable": false,
"primary_key": true,
"auto_increment": true
},
"user_id": {
"type": "bigint",
"nullable": false
},
"bio": {
"type": "varchar",
"length": 255,
"nullable": true
},
"avatar_url": {
"type": "varchar",
"length": 500,
"nullable": true
},
"birth_date": {
"type": "date",
"nullable": true
},
"location": {
"type": "varchar",
"length": 100,
"nullable": true
},
"website": {
"type": "varchar",
"length": 255,
"nullable": true
},
"privacy_level": {
"type": "varchar",
"length": 20,
"nullable": false,
"default": "public"
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"updated_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
}
},
"constraints": {
"primary_key": ["id"],
"unique": [],
"foreign_key": [
{
"columns": ["user_id"],
"references": "users(id)",
"on_delete": "CASCADE"
}
],
"check": [
"privacy_level IN ('public', 'private', 'friends_only')"
]
},
"indexes": [
{
"name": "idx_user_profiles_user_id",
"columns": ["user_id"],
"unique": true
},
{
"name": "idx_user_profiles_privacy",
"columns": ["privacy_level"]
}
]
},
"user_sessions": {
"columns": {
"id": {
"type": "varchar",
"length": 128,
"nullable": false,
"primary_key": true
},
"user_id": {
"type": "bigint",
"nullable": false
},
"ip_address": {
"type": "varchar",
"length": 45,
"nullable": true
},
"user_agent": {
"type": "varchar",
"length": 500,
"nullable": true
},
"expires_at": {
"type": "timestamp",
"nullable": false
},
"created_at": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
},
"last_activity": {
"type": "timestamp",
"nullable": false,
"default": "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
}
},
"constraints": {
"primary_key": ["id"],
"unique": [],
"foreign_key": [
{
"columns": ["user_id"],
"references": "users(id)",
"on_delete": "CASCADE"
}
],
"check": []
},
"indexes": [
{
"name": "idx_user_sessions_user_id",
"columns": ["user_id"]
},
{
"name": "idx_user_sessions_expires",
"columns": ["expires_at"]
}
]
}
},
"views": {
"active_users": {
"definition": "SELECT u.id, u.username, u.email, u.first_name, u.last_name FROM users u WHERE u.is_active = true",
"columns": ["id", "username", "email", "first_name", "last_name"]
}
},
"procedures": [
{
"name": "cleanup_expired_sessions",
"parameters": [],
"definition": "DELETE FROM user_sessions WHERE expires_at < NOW()"
}
]
}

View File

@@ -0,0 +1,106 @@
{
"type": "database",
"pattern": "schema_change",
"source": "PostgreSQL 13 Production Database",
"target": "PostgreSQL 15 Cloud Database",
"description": "Migrate user management system from on-premises PostgreSQL to cloud with schema updates",
"constraints": {
"max_downtime_minutes": 30,
"data_volume_gb": 2500,
"dependencies": [
"user_service_api",
"authentication_service",
"notification_service",
"analytics_pipeline",
"backup_service"
],
"compliance_requirements": [
"GDPR",
"SOX"
],
"special_requirements": [
"zero_data_loss",
"referential_integrity",
"performance_baseline_maintained"
]
},
"tables_to_migrate": [
{
"name": "users",
"row_count": 1500000,
"size_mb": 450,
"critical": true
},
{
"name": "user_profiles",
"row_count": 1500000,
"size_mb": 890,
"critical": true
},
{
"name": "user_sessions",
"row_count": 25000000,
"size_mb": 1200,
"critical": false
},
{
"name": "audit_logs",
"row_count": 50000000,
"size_mb": 2800,
"critical": false
}
],
"schema_changes": [
{
"table": "users",
"changes": [
{
"type": "add_column",
"column": "email_verified_at",
"data_type": "timestamp",
"nullable": true
},
{
"type": "add_column",
"column": "phone_verified_at",
"data_type": "timestamp",
"nullable": true
}
]
},
{
"table": "user_profiles",
"changes": [
{
"type": "modify_column",
"column": "bio",
"old_type": "varchar(255)",
"new_type": "text"
},
{
"type": "add_constraint",
"constraint_type": "check",
"constraint_name": "bio_length_check",
"definition": "LENGTH(bio) <= 2000"
}
]
}
],
"performance_requirements": {
"max_query_response_time_ms": 100,
"concurrent_connections": 500,
"transactions_per_second": 1000
},
"business_continuity": {
"critical_business_hours": {
"start": "08:00",
"end": "18:00",
"timezone": "UTC"
},
"preferred_migration_window": {
"start": "02:00",
"end": "06:00",
"timezone": "UTC"
}
}
}

View File

@@ -0,0 +1,175 @@
{
"type": "service",
"pattern": "strangler_fig",
"source": "Legacy User Service (Java Spring Boot 2.x)",
"target": "New User Service (Node.js + TypeScript)",
"description": "Migrate legacy user management service to modern microservices architecture",
"constraints": {
"max_downtime_minutes": 0,
"data_volume_gb": 50,
"dependencies": [
"payment_service",
"order_service",
"notification_service",
"analytics_service",
"mobile_app_v1",
"mobile_app_v2",
"web_frontend",
"admin_dashboard"
],
"compliance_requirements": [
"PCI_DSS",
"GDPR"
],
"special_requirements": [
"api_backward_compatibility",
"session_continuity",
"rate_limit_preservation"
]
},
"service_details": {
"legacy_service": {
"endpoints": [
"GET /api/v1/users/{id}",
"POST /api/v1/users",
"PUT /api/v1/users/{id}",
"DELETE /api/v1/users/{id}",
"GET /api/v1/users/{id}/profile",
"PUT /api/v1/users/{id}/profile",
"POST /api/v1/users/{id}/verify-email",
"POST /api/v1/users/login",
"POST /api/v1/users/logout"
],
"current_load": {
"requests_per_second": 850,
"peak_requests_per_second": 2000,
"average_response_time_ms": 120,
"p95_response_time_ms": 300
},
"infrastructure": {
"instances": 4,
"cpu_cores_per_instance": 4,
"memory_gb_per_instance": 8,
"load_balancer": "AWS ELB Classic"
}
},
"new_service": {
"endpoints": [
"GET /api/v2/users/{id}",
"POST /api/v2/users",
"PUT /api/v2/users/{id}",
"DELETE /api/v2/users/{id}",
"GET /api/v2/users/{id}/profile",
"PUT /api/v2/users/{id}/profile",
"POST /api/v2/users/{id}/verify-email",
"POST /api/v2/users/{id}/verify-phone",
"POST /api/v2/auth/login",
"POST /api/v2/auth/logout",
"POST /api/v2/auth/refresh"
],
"target_performance": {
"requests_per_second": 1500,
"peak_requests_per_second": 3000,
"average_response_time_ms": 80,
"p95_response_time_ms": 200
},
"infrastructure": {
"container_platform": "Kubernetes",
"initial_replicas": 3,
"max_replicas": 10,
"cpu_request_millicores": 500,
"cpu_limit_millicores": 1000,
"memory_request_mb": 512,
"memory_limit_mb": 1024,
"load_balancer": "AWS ALB"
}
}
},
"migration_phases": [
{
"phase": "preparation",
"description": "Deploy new service and configure routing",
"estimated_duration_hours": 8
},
{
"phase": "intercept",
"description": "Configure API gateway to route to new service",
"estimated_duration_hours": 2
},
{
"phase": "gradual_migration",
"description": "Gradually increase traffic to new service",
"estimated_duration_hours": 48
},
{
"phase": "validation",
"description": "Validate new service performance and functionality",
"estimated_duration_hours": 24
},
{
"phase": "decommission",
"description": "Remove legacy service after validation",
"estimated_duration_hours": 4
}
],
"feature_flags": [
{
"name": "enable_new_user_service",
"description": "Route user service requests to new implementation",
"initial_percentage": 5,
"rollout_schedule": [
{"percentage": 5, "duration_hours": 24},
{"percentage": 25, "duration_hours": 24},
{"percentage": 50, "duration_hours": 24},
{"percentage": 100, "duration_hours": 0}
]
},
{
"name": "enable_new_auth_endpoints",
"description": "Enable new authentication endpoints",
"initial_percentage": 0,
"rollout_schedule": [
{"percentage": 10, "duration_hours": 12},
{"percentage": 50, "duration_hours": 12},
{"percentage": 100, "duration_hours": 0}
]
}
],
"monitoring": {
"critical_metrics": [
"request_rate",
"error_rate",
"response_time_p95",
"response_time_p99",
"cpu_utilization",
"memory_utilization",
"database_connection_pool"
],
"alert_thresholds": {
"error_rate": 0.05,
"response_time_p95": 250,
"cpu_utilization": 0.80,
"memory_utilization": 0.85
}
},
"rollback_triggers": [
{
"metric": "error_rate",
"threshold": 0.10,
"duration_minutes": 5,
"action": "automatic_rollback"
},
{
"metric": "response_time_p95",
"threshold": 500,
"duration_minutes": 10,
"action": "alert_team"
},
{
"metric": "cpu_utilization",
"threshold": 0.95,
"duration_minutes": 5,
"action": "scale_up"
}
]
}

View File

@@ -0,0 +1,577 @@
{
"runbook_id": "rb_921c0bca",
"migration_id": "23a52ed1507f",
"created_at": "2026-02-16T13:47:31.108500",
"rollback_phases": [
{
"phase_name": "rollback_cleanup",
"description": "Rollback changes made during cleanup phase",
"urgency_level": "medium",
"estimated_duration_minutes": 570,
"prerequisites": [
"Incident commander assigned and briefed",
"All team members notified of rollback initiation",
"Monitoring systems confirmed operational",
"Backup systems verified and accessible"
],
"steps": [
{
"step_id": "rb_validate_0_final",
"name": "Validate rollback completion",
"description": "Comprehensive validation that cleanup rollback completed successfully",
"script_type": "manual",
"script_content": "Execute validation checklist for this phase",
"estimated_duration_minutes": 10,
"dependencies": [],
"validation_commands": [
"SELECT COUNT(*) FROM {table_name};",
"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = '{table_name}';",
"SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '{table_name}' AND column_name = '{column_name}';",
"SELECT COUNT(DISTINCT {primary_key}) FROM {table_name};",
"SELECT MAX({timestamp_column}) FROM {table_name};"
],
"success_criteria": [
"cleanup fully rolled back",
"All validation checks pass"
],
"failure_escalation": "Investigate cleanup rollback failures",
"rollback_order": 99
}
],
"validation_checkpoints": [
"cleanup rollback steps completed",
"System health checks passing",
"No critical errors in logs",
"Key metrics within acceptable ranges",
"Validation command passed: SELECT COUNT(*) FROM {table_name};...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH..."
],
"communication_requirements": [
"Notify incident commander of phase start/completion",
"Update rollback status dashboard",
"Log all actions and decisions"
],
"risk_level": "medium"
},
{
"phase_name": "rollback_contract",
"description": "Rollback changes made during contract phase",
"urgency_level": "medium",
"estimated_duration_minutes": 570,
"prerequisites": [
"Incident commander assigned and briefed",
"All team members notified of rollback initiation",
"Monitoring systems confirmed operational",
"Backup systems verified and accessible",
"Previous rollback phase completed successfully"
],
"steps": [
{
"step_id": "rb_validate_1_final",
"name": "Validate rollback completion",
"description": "Comprehensive validation that contract rollback completed successfully",
"script_type": "manual",
"script_content": "Execute validation checklist for this phase",
"estimated_duration_minutes": 10,
"dependencies": [],
"validation_commands": [
"SELECT COUNT(*) FROM {table_name};",
"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = '{table_name}';",
"SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '{table_name}' AND column_name = '{column_name}';",
"SELECT COUNT(DISTINCT {primary_key}) FROM {table_name};",
"SELECT MAX({timestamp_column}) FROM {table_name};"
],
"success_criteria": [
"contract fully rolled back",
"All validation checks pass"
],
"failure_escalation": "Investigate contract rollback failures",
"rollback_order": 99
}
],
"validation_checkpoints": [
"contract rollback steps completed",
"System health checks passing",
"No critical errors in logs",
"Key metrics within acceptable ranges",
"Validation command passed: SELECT COUNT(*) FROM {table_name};...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH..."
],
"communication_requirements": [
"Notify incident commander of phase start/completion",
"Update rollback status dashboard",
"Log all actions and decisions"
],
"risk_level": "medium"
},
{
"phase_name": "rollback_migrate",
"description": "Rollback changes made during migrate phase",
"urgency_level": "medium",
"estimated_duration_minutes": 570,
"prerequisites": [
"Incident commander assigned and briefed",
"All team members notified of rollback initiation",
"Monitoring systems confirmed operational",
"Backup systems verified and accessible",
"Previous rollback phase completed successfully"
],
"steps": [
{
"step_id": "rb_validate_2_final",
"name": "Validate rollback completion",
"description": "Comprehensive validation that migrate rollback completed successfully",
"script_type": "manual",
"script_content": "Execute validation checklist for this phase",
"estimated_duration_minutes": 10,
"dependencies": [],
"validation_commands": [
"SELECT COUNT(*) FROM {table_name};",
"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = '{table_name}';",
"SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '{table_name}' AND column_name = '{column_name}';",
"SELECT COUNT(DISTINCT {primary_key}) FROM {table_name};",
"SELECT MAX({timestamp_column}) FROM {table_name};"
],
"success_criteria": [
"migrate fully rolled back",
"All validation checks pass"
],
"failure_escalation": "Investigate migrate rollback failures",
"rollback_order": 99
}
],
"validation_checkpoints": [
"migrate rollback steps completed",
"System health checks passing",
"No critical errors in logs",
"Key metrics within acceptable ranges",
"Validation command passed: SELECT COUNT(*) FROM {table_name};...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH..."
],
"communication_requirements": [
"Notify incident commander of phase start/completion",
"Update rollback status dashboard",
"Log all actions and decisions"
],
"risk_level": "medium"
},
{
"phase_name": "rollback_expand",
"description": "Rollback changes made during expand phase",
"urgency_level": "medium",
"estimated_duration_minutes": 570,
"prerequisites": [
"Incident commander assigned and briefed",
"All team members notified of rollback initiation",
"Monitoring systems confirmed operational",
"Backup systems verified and accessible",
"Previous rollback phase completed successfully"
],
"steps": [
{
"step_id": "rb_validate_3_final",
"name": "Validate rollback completion",
"description": "Comprehensive validation that expand rollback completed successfully",
"script_type": "manual",
"script_content": "Execute validation checklist for this phase",
"estimated_duration_minutes": 10,
"dependencies": [],
"validation_commands": [
"SELECT COUNT(*) FROM {table_name};",
"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = '{table_name}';",
"SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '{table_name}' AND column_name = '{column_name}';",
"SELECT COUNT(DISTINCT {primary_key}) FROM {table_name};",
"SELECT MAX({timestamp_column}) FROM {table_name};"
],
"success_criteria": [
"expand fully rolled back",
"All validation checks pass"
],
"failure_escalation": "Investigate expand rollback failures",
"rollback_order": 99
}
],
"validation_checkpoints": [
"expand rollback steps completed",
"System health checks passing",
"No critical errors in logs",
"Key metrics within acceptable ranges",
"Validation command passed: SELECT COUNT(*) FROM {table_name};...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH..."
],
"communication_requirements": [
"Notify incident commander of phase start/completion",
"Update rollback status dashboard",
"Log all actions and decisions"
],
"risk_level": "medium"
},
{
"phase_name": "rollback_preparation",
"description": "Rollback changes made during preparation phase",
"urgency_level": "medium",
"estimated_duration_minutes": 570,
"prerequisites": [
"Incident commander assigned and briefed",
"All team members notified of rollback initiation",
"Monitoring systems confirmed operational",
"Backup systems verified and accessible",
"Previous rollback phase completed successfully"
],
"steps": [
{
"step_id": "rb_schema_4_01",
"name": "Drop migration artifacts",
"description": "Remove temporary migration tables and procedures",
"script_type": "sql",
"script_content": "-- Drop migration artifacts\nDROP TABLE IF EXISTS migration_log;\nDROP PROCEDURE IF EXISTS migrate_data();",
"estimated_duration_minutes": 5,
"dependencies": [],
"validation_commands": [
"SELECT COUNT(*) FROM information_schema.tables WHERE table_name LIKE '%migration%';"
],
"success_criteria": [
"No migration artifacts remain"
],
"failure_escalation": "Manual cleanup required",
"rollback_order": 1
},
{
"step_id": "rb_validate_4_final",
"name": "Validate rollback completion",
"description": "Comprehensive validation that preparation rollback completed successfully",
"script_type": "manual",
"script_content": "Execute validation checklist for this phase",
"estimated_duration_minutes": 10,
"dependencies": [
"rb_schema_4_01"
],
"validation_commands": [
"SELECT COUNT(*) FROM {table_name};",
"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = '{table_name}';",
"SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '{table_name}' AND column_name = '{column_name}';",
"SELECT COUNT(DISTINCT {primary_key}) FROM {table_name};",
"SELECT MAX({timestamp_column}) FROM {table_name};"
],
"success_criteria": [
"preparation fully rolled back",
"All validation checks pass"
],
"failure_escalation": "Investigate preparation rollback failures",
"rollback_order": 99
}
],
"validation_checkpoints": [
"preparation rollback steps completed",
"System health checks passing",
"No critical errors in logs",
"Key metrics within acceptable ranges",
"Validation command passed: SELECT COUNT(*) FROM {table_name};...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...",
"Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH..."
],
"communication_requirements": [
"Notify incident commander of phase start/completion",
"Update rollback status dashboard",
"Log all actions and decisions"
],
"risk_level": "medium"
}
],
"trigger_conditions": [
{
"trigger_id": "error_rate_spike",
"name": "Error Rate Spike",
"condition": "error_rate > baseline * 5 for 5 minutes",
"metric_threshold": {
"metric": "error_rate",
"operator": "greater_than",
"value": "baseline_error_rate * 5",
"duration_minutes": 5
},
"evaluation_window_minutes": 5,
"auto_execute": true,
"escalation_contacts": [
"on_call_engineer",
"migration_lead"
]
},
{
"trigger_id": "response_time_degradation",
"name": "Response Time Degradation",
"condition": "p95_response_time > baseline * 3 for 10 minutes",
"metric_threshold": {
"metric": "p95_response_time",
"operator": "greater_than",
"value": "baseline_p95 * 3",
"duration_minutes": 10
},
"evaluation_window_minutes": 10,
"auto_execute": false,
"escalation_contacts": [
"performance_team",
"migration_lead"
]
},
{
"trigger_id": "availability_drop",
"name": "Service Availability Drop",
"condition": "availability < 95% for 2 minutes",
"metric_threshold": {
"metric": "availability",
"operator": "less_than",
"value": 0.95,
"duration_minutes": 2
},
"evaluation_window_minutes": 2,
"auto_execute": true,
"escalation_contacts": [
"sre_team",
"incident_commander"
]
},
{
"trigger_id": "data_integrity_failure",
"name": "Data Integrity Check Failure",
"condition": "data_validation_failures > 0",
"metric_threshold": {
"metric": "data_validation_failures",
"operator": "greater_than",
"value": 0,
"duration_minutes": 1
},
"evaluation_window_minutes": 1,
"auto_execute": true,
"escalation_contacts": [
"dba_team",
"data_team"
]
},
{
"trigger_id": "migration_progress_stalled",
"name": "Migration Progress Stalled",
"condition": "migration_progress unchanged for 30 minutes",
"metric_threshold": {
"metric": "migration_progress_rate",
"operator": "equals",
"value": 0,
"duration_minutes": 30
},
"evaluation_window_minutes": 30,
"auto_execute": false,
"escalation_contacts": [
"migration_team",
"dba_team"
]
}
],
"data_recovery_plan": {
"recovery_method": "point_in_time",
"backup_location": "/backups/pre_migration_{migration_id}_{timestamp}.sql",
"recovery_scripts": [
"pg_restore -d production -c /backups/pre_migration_backup.sql",
"SELECT pg_create_restore_point('rollback_point');",
"VACUUM ANALYZE; -- Refresh statistics after restore"
],
"data_validation_queries": [
"SELECT COUNT(*) FROM critical_business_table;",
"SELECT MAX(created_at) FROM audit_log;",
"SELECT COUNT(DISTINCT user_id) FROM user_sessions;",
"SELECT SUM(amount) FROM financial_transactions WHERE date = CURRENT_DATE;"
],
"estimated_recovery_time_minutes": 45,
"recovery_dependencies": [
"database_instance_running",
"backup_file_accessible"
]
},
"communication_templates": [
{
"template_type": "rollback_start",
"audience": "technical",
"subject": "ROLLBACK INITIATED: {migration_name}",
"body": "Team,\n\nWe have initiated rollback for migration: {migration_name}\nRollback ID: {rollback_id}\nStart Time: {start_time}\nEstimated Duration: {estimated_duration}\n\nReason: {rollback_reason}\n\nCurrent Status: Rolling back phase {current_phase}\n\nNext Updates: Every 15 minutes or upon phase completion\n\nActions Required:\n- Monitor system health dashboards\n- Stand by for escalation if needed\n- Do not make manual changes during rollback\n\nIncident Commander: {incident_commander}\n",
"urgency": "medium",
"delivery_methods": [
"email",
"slack"
]
},
{
"template_type": "rollback_start",
"audience": "business",
"subject": "System Rollback In Progress - {system_name}",
"body": "Business Stakeholders,\n\nWe are currently performing a planned rollback of the {system_name} migration due to {rollback_reason}.\n\nImpact: {business_impact}\nExpected Resolution: {estimated_completion_time}\nAffected Services: {affected_services}\n\nWe will provide updates every 30 minutes.\n\nContact: {business_contact}\n",
"urgency": "medium",
"delivery_methods": [
"email"
]
},
{
"template_type": "rollback_start",
"audience": "executive",
"subject": "EXEC ALERT: Critical System Rollback - {system_name}",
"body": "Executive Team,\n\nA critical rollback is in progress for {system_name}.\n\nSummary:\n- Rollback Reason: {rollback_reason}\n- Business Impact: {business_impact}\n- Expected Resolution: {estimated_completion_time}\n- Customer Impact: {customer_impact}\n\nWe are following established procedures and will update hourly.\n\nEscalation: {escalation_contact}\n",
"urgency": "high",
"delivery_methods": [
"email"
]
},
{
"template_type": "rollback_complete",
"audience": "technical",
"subject": "ROLLBACK COMPLETED: {migration_name}",
"body": "Team,\n\nRollback has been successfully completed for migration: {migration_name}\n\nSummary:\n- Start Time: {start_time}\n- End Time: {end_time}\n- Duration: {actual_duration}\n- Phases Completed: {completed_phases}\n\nValidation Results:\n{validation_results}\n\nSystem Status: {system_status}\n\nNext Steps:\n- Continue monitoring for 24 hours\n- Post-rollback review scheduled for {review_date}\n- Root cause analysis to begin\n\nAll clear to resume normal operations.\n\nIncident Commander: {incident_commander}\n",
"urgency": "medium",
"delivery_methods": [
"email",
"slack"
]
},
{
"template_type": "emergency_escalation",
"audience": "executive",
"subject": "CRITICAL: Rollback Emergency - {migration_name}",
"body": "CRITICAL SITUATION - IMMEDIATE ATTENTION REQUIRED\n\nMigration: {migration_name}\nIssue: Rollback procedure has encountered critical failures\n\nCurrent Status: {current_status}\nFailed Components: {failed_components}\nBusiness Impact: {business_impact}\nCustomer Impact: {customer_impact}\n\nImmediate Actions:\n1. Emergency response team activated\n2. {emergency_action_1}\n3. {emergency_action_2}\n\nWar Room: {war_room_location}\nBridge Line: {conference_bridge}\n\nNext Update: {next_update_time}\n\nIncident Commander: {incident_commander}\nExecutive On-Call: {executive_on_call}\n",
"urgency": "emergency",
"delivery_methods": [
"email",
"sms",
"phone_call"
]
}
],
"escalation_matrix": {
"level_1": {
"trigger": "Single component failure",
"response_time_minutes": 5,
"contacts": [
"on_call_engineer",
"migration_lead"
],
"actions": [
"Investigate issue",
"Attempt automated remediation",
"Monitor closely"
]
},
"level_2": {
"trigger": "Multiple component failures or single critical failure",
"response_time_minutes": 2,
"contacts": [
"senior_engineer",
"team_lead",
"devops_lead"
],
"actions": [
"Initiate rollback",
"Establish war room",
"Notify stakeholders"
]
},
"level_3": {
"trigger": "System-wide failure or data corruption",
"response_time_minutes": 1,
"contacts": [
"engineering_manager",
"cto",
"incident_commander"
],
"actions": [
"Emergency rollback",
"All hands on deck",
"Executive notification"
]
},
"emergency": {
"trigger": "Business-critical failure with customer impact",
"response_time_minutes": 0,
"contacts": [
"ceo",
"cto",
"head_of_operations"
],
"actions": [
"Emergency procedures",
"Customer communication",
"Media preparation if needed"
]
}
},
"validation_checklist": [
"Verify system is responding to health checks",
"Confirm error rates are within normal parameters",
"Validate response times meet SLA requirements",
"Check all critical business processes are functioning",
"Verify monitoring and alerting systems are operational",
"Confirm no data corruption has occurred",
"Validate security controls are functioning properly",
"Check backup systems are working correctly",
"Verify integration points with downstream systems",
"Confirm user authentication and authorization working",
"Validate database schema matches expected state",
"Confirm referential integrity constraints",
"Check database performance metrics",
"Verify data consistency across related tables",
"Validate indexes and statistics are optimal",
"Confirm transaction logs are clean",
"Check database connections and connection pooling"
],
"post_rollback_procedures": [
"Monitor system stability for 24-48 hours post-rollback",
"Conduct thorough post-rollback testing of all critical paths",
"Review and analyze rollback metrics and timing",
"Document lessons learned and rollback procedure improvements",
"Schedule post-mortem meeting with all stakeholders",
"Update rollback procedures based on actual experience",
"Communicate rollback completion to all stakeholders",
"Archive rollback logs and artifacts for future reference",
"Review and update monitoring thresholds if needed",
"Plan for next migration attempt with improved procedures",
"Conduct security review to ensure no vulnerabilities introduced",
"Update disaster recovery procedures if affected by rollback",
"Review capacity planning based on rollback resource usage",
"Update documentation with rollback experience and timings"
],
"emergency_contacts": [
{
"role": "Incident Commander",
"name": "TBD - Assigned during migration",
"primary_phone": "+1-XXX-XXX-XXXX",
"email": "incident.commander@company.com",
"backup_contact": "backup.commander@company.com"
},
{
"role": "Technical Lead",
"name": "TBD - Migration technical owner",
"primary_phone": "+1-XXX-XXX-XXXX",
"email": "tech.lead@company.com",
"backup_contact": "senior.engineer@company.com"
},
{
"role": "Business Owner",
"name": "TBD - Business stakeholder",
"primary_phone": "+1-XXX-XXX-XXXX",
"email": "business.owner@company.com",
"backup_contact": "product.manager@company.com"
},
{
"role": "On-Call Engineer",
"name": "Current on-call rotation",
"primary_phone": "+1-XXX-XXX-XXXX",
"email": "oncall@company.com",
"backup_contact": "backup.oncall@company.com"
},
{
"role": "Executive Escalation",
"name": "CTO/VP Engineering",
"primary_phone": "+1-XXX-XXX-XXXX",
"email": "cto@company.com",
"backup_contact": "vp.engineering@company.com"
}
]
}

View File

@@ -0,0 +1,282 @@
================================================================================
ROLLBACK RUNBOOK: rb_921c0bca
================================================================================
Migration ID: 23a52ed1507f
Created: 2026-02-16T13:47:31.108500
EMERGENCY CONTACTS
----------------------------------------
Incident Commander: TBD - Assigned during migration
Phone: +1-XXX-XXX-XXXX
Email: incident.commander@company.com
Backup: backup.commander@company.com
Technical Lead: TBD - Migration technical owner
Phone: +1-XXX-XXX-XXXX
Email: tech.lead@company.com
Backup: senior.engineer@company.com
Business Owner: TBD - Business stakeholder
Phone: +1-XXX-XXX-XXXX
Email: business.owner@company.com
Backup: product.manager@company.com
On-Call Engineer: Current on-call rotation
Phone: +1-XXX-XXX-XXXX
Email: oncall@company.com
Backup: backup.oncall@company.com
Executive Escalation: CTO/VP Engineering
Phone: +1-XXX-XXX-XXXX
Email: cto@company.com
Backup: vp.engineering@company.com
ESCALATION MATRIX
----------------------------------------
LEVEL_1:
Trigger: Single component failure
Response Time: 5 minutes
Contacts: on_call_engineer, migration_lead
Actions: Investigate issue, Attempt automated remediation, Monitor closely
LEVEL_2:
Trigger: Multiple component failures or single critical failure
Response Time: 2 minutes
Contacts: senior_engineer, team_lead, devops_lead
Actions: Initiate rollback, Establish war room, Notify stakeholders
LEVEL_3:
Trigger: System-wide failure or data corruption
Response Time: 1 minute
Contacts: engineering_manager, cto, incident_commander
Actions: Emergency rollback, All hands on deck, Executive notification
EMERGENCY:
Trigger: Business-critical failure with customer impact
Response Time: 0 minutes
Contacts: ceo, cto, head_of_operations
Actions: Emergency procedures, Customer communication, Media preparation if needed
AUTOMATIC ROLLBACK TRIGGERS
----------------------------------------
• Error Rate Spike
Condition: error_rate > baseline * 5 for 5 minutes
Auto-Execute: Yes
Evaluation Window: 5 minutes
Contacts: on_call_engineer, migration_lead
• Response Time Degradation
Condition: p95_response_time > baseline * 3 for 10 minutes
Auto-Execute: No
Evaluation Window: 10 minutes
Contacts: performance_team, migration_lead
• Service Availability Drop
Condition: availability < 95% for 2 minutes
Auto-Execute: Yes
Evaluation Window: 2 minutes
Contacts: sre_team, incident_commander
• Data Integrity Check Failure
Condition: data_validation_failures > 0
Auto-Execute: Yes
Evaluation Window: 1 minute
Contacts: dba_team, data_team
• Migration Progress Stalled
Condition: migration_progress unchanged for 30 minutes
Auto-Execute: No
Evaluation Window: 30 minutes
Contacts: migration_team, dba_team
ROLLBACK PHASES
----------------------------------------
1. ROLLBACK_CLEANUP
Description: Rollback changes made during cleanup phase
Urgency: MEDIUM
Duration: 570 minutes
Risk Level: MEDIUM
Prerequisites:
✓ Incident commander assigned and briefed
✓ All team members notified of rollback initiation
✓ Monitoring systems confirmed operational
✓ Backup systems verified and accessible
Steps:
99. Validate rollback completion
Duration: 10 min
Type: manual
Success Criteria: cleanup fully rolled back, All validation checks pass
Validation Checkpoints:
☐ cleanup rollback steps completed
☐ System health checks passing
☐ No critical errors in logs
☐ Key metrics within acceptable ranges
☐ Validation command passed: SELECT COUNT(*) FROM {table_name};...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH...
2. ROLLBACK_CONTRACT
Description: Rollback changes made during contract phase
Urgency: MEDIUM
Duration: 570 minutes
Risk Level: MEDIUM
Prerequisites:
✓ Incident commander assigned and briefed
✓ All team members notified of rollback initiation
✓ Monitoring systems confirmed operational
✓ Backup systems verified and accessible
✓ Previous rollback phase completed successfully
Steps:
99. Validate rollback completion
Duration: 10 min
Type: manual
Success Criteria: contract fully rolled back, All validation checks pass
Validation Checkpoints:
☐ contract rollback steps completed
☐ System health checks passing
☐ No critical errors in logs
☐ Key metrics within acceptable ranges
☐ Validation command passed: SELECT COUNT(*) FROM {table_name};...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH...
3. ROLLBACK_MIGRATE
Description: Rollback changes made during migrate phase
Urgency: MEDIUM
Duration: 570 minutes
Risk Level: MEDIUM
Prerequisites:
✓ Incident commander assigned and briefed
✓ All team members notified of rollback initiation
✓ Monitoring systems confirmed operational
✓ Backup systems verified and accessible
✓ Previous rollback phase completed successfully
Steps:
99. Validate rollback completion
Duration: 10 min
Type: manual
Success Criteria: migrate fully rolled back, All validation checks pass
Validation Checkpoints:
☐ migrate rollback steps completed
☐ System health checks passing
☐ No critical errors in logs
☐ Key metrics within acceptable ranges
☐ Validation command passed: SELECT COUNT(*) FROM {table_name};...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH...
4. ROLLBACK_EXPAND
Description: Rollback changes made during expand phase
Urgency: MEDIUM
Duration: 570 minutes
Risk Level: MEDIUM
Prerequisites:
✓ Incident commander assigned and briefed
✓ All team members notified of rollback initiation
✓ Monitoring systems confirmed operational
✓ Backup systems verified and accessible
✓ Previous rollback phase completed successfully
Steps:
99. Validate rollback completion
Duration: 10 min
Type: manual
Success Criteria: expand fully rolled back, All validation checks pass
Validation Checkpoints:
☐ expand rollback steps completed
☐ System health checks passing
☐ No critical errors in logs
☐ Key metrics within acceptable ranges
☐ Validation command passed: SELECT COUNT(*) FROM {table_name};...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH...
5. ROLLBACK_PREPARATION
Description: Rollback changes made during preparation phase
Urgency: MEDIUM
Duration: 570 minutes
Risk Level: MEDIUM
Prerequisites:
✓ Incident commander assigned and briefed
✓ All team members notified of rollback initiation
✓ Monitoring systems confirmed operational
✓ Backup systems verified and accessible
✓ Previous rollback phase completed successfully
Steps:
1. Drop migration artifacts
Duration: 5 min
Type: sql
Script:
-- Drop migration artifacts
DROP TABLE IF EXISTS migration_log;
DROP PROCEDURE IF EXISTS migrate_data();
Success Criteria: No migration artifacts remain
99. Validate rollback completion
Duration: 10 min
Type: manual
Success Criteria: preparation fully rolled back, All validation checks pass
Validation Checkpoints:
☐ preparation rollback steps completed
☐ System health checks passing
☐ No critical errors in logs
☐ Key metrics within acceptable ranges
☐ Validation command passed: SELECT COUNT(*) FROM {table_name};...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.tables WHE...
☐ Validation command passed: SELECT COUNT(*) FROM information_schema.columns WH...
DATA RECOVERY PLAN
----------------------------------------
Recovery Method: point_in_time
Backup Location: /backups/pre_migration_{migration_id}_{timestamp}.sql
Estimated Recovery Time: 45 minutes
Recovery Scripts:
• psql -d production -f /backups/pre_migration_backup.sql
• SELECT pg_create_restore_point('rollback_point');
• VACUUM ANALYZE; -- Refresh statistics after restore
Validation Queries:
• SELECT COUNT(*) FROM critical_business_table;
• SELECT MAX(created_at) FROM audit_log;
• SELECT COUNT(DISTINCT user_id) FROM user_sessions;
• SELECT SUM(amount) FROM financial_transactions WHERE date = CURRENT_DATE;
POST-ROLLBACK VALIDATION CHECKLIST
----------------------------------------
1. ☐ Verify system is responding to health checks
2. ☐ Confirm error rates are within normal parameters
3. ☐ Validate response times meet SLA requirements
4. ☐ Check all critical business processes are functioning
5. ☐ Verify monitoring and alerting systems are operational
6. ☐ Confirm no data corruption has occurred
7. ☐ Validate security controls are functioning properly
8. ☐ Check backup systems are working correctly
9. ☐ Verify integration points with downstream systems
10. ☐ Confirm user authentication and authorization working
11. ☐ Validate database schema matches expected state
12. ☐ Confirm referential integrity constraints
13. ☐ Check database performance metrics
14. ☐ Verify data consistency across related tables
15. ☐ Validate indexes and statistics are optimal
16. ☐ Confirm transaction logs are clean
17. ☐ Check database connections and connection pooling
POST-ROLLBACK PROCEDURES
----------------------------------------
1. Monitor system stability for 24-48 hours post-rollback
2. Conduct thorough post-rollback testing of all critical paths
3. Review and analyze rollback metrics and timing
4. Document lessons learned and rollback procedure improvements
5. Schedule post-mortem meeting with all stakeholders
6. Update rollback procedures based on actual experience
7. Communicate rollback completion to all stakeholders
8. Archive rollback logs and artifacts for future reference
9. Review and update monitoring thresholds if needed
10. Plan for next migration attempt with improved procedures
11. Conduct security review to ensure no vulnerabilities introduced
12. Update disaster recovery procedures if affected by rollback
13. Review capacity planning based on rollback resource usage
14. Update documentation with rollback experience and timings

View File

@@ -0,0 +1,317 @@
{
"migration_id": "23a52ed1507f",
"source_system": "PostgreSQL 13 Production Database",
"target_system": "PostgreSQL 15 Cloud Database",
"migration_type": "database",
"complexity": "critical",
"estimated_duration_hours": 95,
"phases": [
{
"name": "preparation",
"description": "Prepare systems and teams for migration",
"duration_hours": 19,
"dependencies": [],
"validation_criteria": [
"All backups completed successfully",
"Monitoring systems operational",
"Team members briefed and ready",
"Rollback procedures tested"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Backup source system",
"Set up monitoring and alerting",
"Prepare rollback procedures",
"Communicate migration timeline",
"Validate prerequisites"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "expand",
"description": "Execute expand phase",
"duration_hours": 19,
"dependencies": [
"preparation"
],
"validation_criteria": [
"Expand phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete expand activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "migrate",
"description": "Execute migrate phase",
"duration_hours": 19,
"dependencies": [
"expand"
],
"validation_criteria": [
"Migrate phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete migrate activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "contract",
"description": "Execute contract phase",
"duration_hours": 19,
"dependencies": [
"migrate"
],
"validation_criteria": [
"Contract phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete contract activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "cleanup",
"description": "Execute cleanup phase",
"duration_hours": 19,
"dependencies": [
"contract"
],
"validation_criteria": [
"Cleanup phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete cleanup activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
}
],
"risks": [
{
"category": "technical",
"description": "Data corruption during migration",
"probability": "low",
"impact": "critical",
"severity": "high",
"mitigation": "Implement comprehensive backup and validation procedures",
"owner": "DBA Team"
},
{
"category": "technical",
"description": "Extended downtime due to migration complexity",
"probability": "medium",
"impact": "high",
"severity": "high",
"mitigation": "Use blue-green deployment and phased migration approach",
"owner": "DevOps Team"
},
{
"category": "business",
"description": "Business process disruption",
"probability": "medium",
"impact": "high",
"severity": "high",
"mitigation": "Communicate timeline and provide alternate workflows",
"owner": "Business Owner"
},
{
"category": "operational",
"description": "Insufficient rollback testing",
"probability": "high",
"impact": "critical",
"severity": "critical",
"mitigation": "Execute full rollback procedures in staging environment",
"owner": "QA Team"
},
{
"category": "business",
"description": "Zero-downtime requirement increases complexity",
"probability": "high",
"impact": "medium",
"severity": "high",
"mitigation": "Implement blue-green deployment or rolling update strategy",
"owner": "DevOps Team"
},
{
"category": "compliance",
"description": "Regulatory compliance requirements",
"probability": "medium",
"impact": "high",
"severity": "high",
"mitigation": "Ensure all compliance checks are integrated into migration process",
"owner": "Compliance Team"
}
],
"success_criteria": [
"All data successfully migrated with 100% integrity",
"System performance meets or exceeds baseline",
"All business processes functioning normally",
"No critical security vulnerabilities introduced",
"Stakeholder acceptance criteria met",
"Documentation and runbooks updated"
],
"rollback_plan": {
"rollback_phases": [
{
"phase": "cleanup",
"rollback_actions": [
"Revert cleanup changes",
"Restore pre-cleanup state",
"Validate cleanup rollback success"
],
"validation_criteria": [
"System restored to pre-cleanup state",
"All cleanup changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 285
},
{
"phase": "contract",
"rollback_actions": [
"Revert contract changes",
"Restore pre-contract state",
"Validate contract rollback success"
],
"validation_criteria": [
"System restored to pre-contract state",
"All contract changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 285
},
{
"phase": "migrate",
"rollback_actions": [
"Revert migrate changes",
"Restore pre-migrate state",
"Validate migrate rollback success"
],
"validation_criteria": [
"System restored to pre-migrate state",
"All migrate changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 285
},
{
"phase": "expand",
"rollback_actions": [
"Revert expand changes",
"Restore pre-expand state",
"Validate expand rollback success"
],
"validation_criteria": [
"System restored to pre-expand state",
"All expand changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 285
},
{
"phase": "preparation",
"rollback_actions": [
"Revert preparation changes",
"Restore pre-preparation state",
"Validate preparation rollback success"
],
"validation_criteria": [
"System restored to pre-preparation state",
"All preparation changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 285
}
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Migration timeline exceeded by > 50%",
"Business-critical functionality unavailable",
"Security breach detected",
"Stakeholder decision to abort"
],
"rollback_decision_matrix": {
"low_severity": "Continue with monitoring",
"medium_severity": "Assess and decide within 15 minutes",
"high_severity": "Immediate rollback initiation",
"critical_severity": "Emergency rollback - all hands"
},
"rollback_contacts": [
"Migration Lead",
"Technical Lead",
"Business Owner",
"On-call Engineer"
]
},
"stakeholders": [
"Business Owner",
"Technical Lead",
"DevOps Team",
"QA Team",
"Security Team",
"End Users"
],
"created_at": "2026-02-16T13:47:23.704502"
}

View File

@@ -0,0 +1,161 @@
================================================================================
MIGRATION PLAN: 23a52ed1507f
================================================================================
Source System: PostgreSQL 13 Production Database
Target System: PostgreSQL 15 Cloud Database
Migration Type: DATABASE
Complexity Level: CRITICAL
Estimated Duration: 95 hours (4.0 days)
Created: 2026-02-16T13:47:23.704502
MIGRATION PHASES
----------------------------------------
1. PREPARATION (19h)
Description: Prepare systems and teams for migration
Risk Level: MEDIUM
Tasks:
• Backup source system
• Set up monitoring and alerting
• Prepare rollback procedures
• Communicate migration timeline
• Validate prerequisites
Success Criteria:
✓ All backups completed successfully
✓ Monitoring systems operational
✓ Team members briefed and ready
✓ Rollback procedures tested
2. EXPAND (19h)
Description: Execute expand phase
Risk Level: MEDIUM
Dependencies: preparation
Tasks:
• Complete expand activities
Success Criteria:
✓ Expand phase completed successfully
3. MIGRATE (19h)
Description: Execute migrate phase
Risk Level: MEDIUM
Dependencies: expand
Tasks:
• Complete migrate activities
Success Criteria:
✓ Migrate phase completed successfully
4. CONTRACT (19h)
Description: Execute contract phase
Risk Level: MEDIUM
Dependencies: migrate
Tasks:
• Complete contract activities
Success Criteria:
✓ Contract phase completed successfully
5. CLEANUP (19h)
Description: Execute cleanup phase
Risk Level: MEDIUM
Dependencies: contract
Tasks:
• Complete cleanup activities
Success Criteria:
✓ Cleanup phase completed successfully
RISK ASSESSMENT
----------------------------------------
CRITICAL SEVERITY RISKS:
• Insufficient rollback testing
Category: operational
Probability: high | Impact: critical
Mitigation: Execute full rollback procedures in staging environment
Owner: QA Team
HIGH SEVERITY RISKS:
• Data corruption during migration
Category: technical
Probability: low | Impact: critical
Mitigation: Implement comprehensive backup and validation procedures
Owner: DBA Team
• Extended downtime due to migration complexity
Category: technical
Probability: medium | Impact: high
Mitigation: Use blue-green deployment and phased migration approach
Owner: DevOps Team
• Business process disruption
Category: business
Probability: medium | Impact: high
Mitigation: Communicate timeline and provide alternate workflows
Owner: Business Owner
• Zero-downtime requirement increases complexity
Category: business
Probability: high | Impact: medium
Mitigation: Implement blue-green deployment or rolling update strategy
Owner: DevOps Team
• Regulatory compliance requirements
Category: compliance
Probability: medium | Impact: high
Mitigation: Ensure all compliance checks are integrated into migration process
Owner: Compliance Team
ROLLBACK STRATEGY
----------------------------------------
Rollback Triggers:
• Critical system failure
• Data corruption detected
• Migration timeline exceeded by > 50%
• Business-critical functionality unavailable
• Security breach detected
• Stakeholder decision to abort
Rollback Phases:
CLEANUP:
- Revert cleanup changes
- Restore pre-cleanup state
- Validate cleanup rollback success
Estimated Time: 285 minutes
CONTRACT:
- Revert contract changes
- Restore pre-contract state
- Validate contract rollback success
Estimated Time: 285 minutes
MIGRATE:
- Revert migrate changes
- Restore pre-migrate state
- Validate migrate rollback success
Estimated Time: 285 minutes
EXPAND:
- Revert expand changes
- Restore pre-expand state
- Validate expand rollback success
Estimated Time: 285 minutes
PREPARATION:
- Revert preparation changes
- Restore pre-preparation state
- Validate preparation rollback success
Estimated Time: 285 minutes
SUCCESS CRITERIA
----------------------------------------
✓ All data successfully migrated with 100% integrity
✓ System performance meets or exceeds baseline
✓ All business processes functioning normally
✓ No critical security vulnerabilities introduced
✓ Stakeholder acceptance criteria met
✓ Documentation and runbooks updated
STAKEHOLDERS
----------------------------------------
• Business Owner
• Technical Lead
• DevOps Team
• QA Team
• Security Team
• End Users

View File

@@ -0,0 +1,310 @@
{
"migration_id": "21031930da18",
"source_system": "Legacy User Service (Java Spring Boot 2.x)",
"target_system": "New User Service (Node.js + TypeScript)",
"migration_type": "service",
"complexity": "critical",
"estimated_duration_hours": 500,
"phases": [
{
"name": "intercept",
"description": "Execute intercept phase",
"duration_hours": 100,
"dependencies": [],
"validation_criteria": [
"Intercept phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete intercept activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "implement",
"description": "Execute implement phase",
"duration_hours": 100,
"dependencies": [
"intercept"
],
"validation_criteria": [
"Implement phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete implement activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "redirect",
"description": "Execute redirect phase",
"duration_hours": 100,
"dependencies": [
"implement"
],
"validation_criteria": [
"Redirect phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete redirect activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "validate",
"description": "Execute validate phase",
"duration_hours": 100,
"dependencies": [
"redirect"
],
"validation_criteria": [
"Validate phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete validate activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
},
{
"name": "retire",
"description": "Execute retire phase",
"duration_hours": 100,
"dependencies": [
"validate"
],
"validation_criteria": [
"Retire phase completed successfully"
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
],
"tasks": [
"Complete retire activities"
],
"risk_level": "medium",
"resources_required": [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
}
],
"risks": [
{
"category": "technical",
"description": "Service compatibility issues",
"probability": "medium",
"impact": "high",
"severity": "high",
"mitigation": "Implement comprehensive integration testing",
"owner": "Development Team"
},
{
"category": "technical",
"description": "Performance degradation",
"probability": "medium",
"impact": "medium",
"severity": "medium",
"mitigation": "Conduct load testing and performance benchmarking",
"owner": "DevOps Team"
},
{
"category": "business",
"description": "Feature parity gaps",
"probability": "high",
"impact": "high",
"severity": "high",
"mitigation": "Document feature mapping and acceptance criteria",
"owner": "Product Owner"
},
{
"category": "operational",
"description": "Monitoring gap during transition",
"probability": "medium",
"impact": "medium",
"severity": "medium",
"mitigation": "Set up dual monitoring and alerting systems",
"owner": "SRE Team"
},
{
"category": "business",
"description": "Zero-downtime requirement increases complexity",
"probability": "high",
"impact": "medium",
"severity": "high",
"mitigation": "Implement blue-green deployment or rolling update strategy",
"owner": "DevOps Team"
},
{
"category": "compliance",
"description": "Regulatory compliance requirements",
"probability": "medium",
"impact": "high",
"severity": "high",
"mitigation": "Ensure all compliance checks are integrated into migration process",
"owner": "Compliance Team"
}
],
"success_criteria": [
"All data successfully migrated with 100% integrity",
"System performance meets or exceeds baseline",
"All business processes functioning normally",
"No critical security vulnerabilities introduced",
"Stakeholder acceptance criteria met",
"Documentation and runbooks updated"
],
"rollback_plan": {
"rollback_phases": [
{
"phase": "retire",
"rollback_actions": [
"Revert retire changes",
"Restore pre-retire state",
"Validate retire rollback success"
],
"validation_criteria": [
"System restored to pre-retire state",
"All retire changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 1500
},
{
"phase": "validate",
"rollback_actions": [
"Revert validate changes",
"Restore pre-validate state",
"Validate validate rollback success"
],
"validation_criteria": [
"System restored to pre-validate state",
"All validate changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 1500
},
{
"phase": "redirect",
"rollback_actions": [
"Revert redirect changes",
"Restore pre-redirect state",
"Validate redirect rollback success"
],
"validation_criteria": [
"System restored to pre-redirect state",
"All redirect changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 1500
},
{
"phase": "implement",
"rollback_actions": [
"Revert implement changes",
"Restore pre-implement state",
"Validate implement rollback success"
],
"validation_criteria": [
"System restored to pre-implement state",
"All implement changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 1500
},
{
"phase": "intercept",
"rollback_actions": [
"Revert intercept changes",
"Restore pre-intercept state",
"Validate intercept rollback success"
],
"validation_criteria": [
"System restored to pre-intercept state",
"All intercept changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": 1500
}
],
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Migration timeline exceeded by > 50%",
"Business-critical functionality unavailable",
"Security breach detected",
"Stakeholder decision to abort"
],
"rollback_decision_matrix": {
"low_severity": "Continue with monitoring",
"medium_severity": "Assess and decide within 15 minutes",
"high_severity": "Immediate rollback initiation",
"critical_severity": "Emergency rollback - all hands"
},
"rollback_contacts": [
"Migration Lead",
"Technical Lead",
"Business Owner",
"On-call Engineer"
]
},
"stakeholders": [
"Business Owner",
"Technical Lead",
"DevOps Team",
"QA Team",
"Security Team",
"End Users"
],
"created_at": "2026-02-16T13:47:34.565896"
}

View File

@@ -0,0 +1,154 @@
================================================================================
MIGRATION PLAN: 21031930da18
================================================================================
Source System: Legacy User Service (Java Spring Boot 2.x)
Target System: New User Service (Node.js + TypeScript)
Migration Type: SERVICE
Complexity Level: CRITICAL
Estimated Duration: 500 hours (20.8 days)
Created: 2026-02-16T13:47:34.565896
MIGRATION PHASES
----------------------------------------
1. INTERCEPT (100h)
Description: Execute intercept phase
Risk Level: MEDIUM
Tasks:
• Complete intercept activities
Success Criteria:
✓ Intercept phase completed successfully
2. IMPLEMENT (100h)
Description: Execute implement phase
Risk Level: MEDIUM
Dependencies: intercept
Tasks:
• Complete implement activities
Success Criteria:
✓ Implement phase completed successfully
3. REDIRECT (100h)
Description: Execute redirect phase
Risk Level: MEDIUM
Dependencies: implement
Tasks:
• Complete redirect activities
Success Criteria:
✓ Redirect phase completed successfully
4. VALIDATE (100h)
Description: Execute validate phase
Risk Level: MEDIUM
Dependencies: redirect
Tasks:
• Complete validate activities
Success Criteria:
✓ Validate phase completed successfully
5. RETIRE (100h)
Description: Execute retire phase
Risk Level: MEDIUM
Dependencies: validate
Tasks:
• Complete retire activities
Success Criteria:
✓ Retire phase completed successfully
RISK ASSESSMENT
----------------------------------------
HIGH SEVERITY RISKS:
• Service compatibility issues
Category: technical
Probability: medium | Impact: high
Mitigation: Implement comprehensive integration testing
Owner: Development Team
• Feature parity gaps
Category: business
Probability: high | Impact: high
Mitigation: Document feature mapping and acceptance criteria
Owner: Product Owner
• Zero-downtime requirement increases complexity
Category: business
Probability: high | Impact: medium
Mitigation: Implement blue-green deployment or rolling update strategy
Owner: DevOps Team
• Regulatory compliance requirements
Category: compliance
Probability: medium | Impact: high
Mitigation: Ensure all compliance checks are integrated into migration process
Owner: Compliance Team
MEDIUM SEVERITY RISKS:
• Performance degradation
Category: technical
Probability: medium | Impact: medium
Mitigation: Conduct load testing and performance benchmarking
Owner: DevOps Team
• Monitoring gap during transition
Category: operational
Probability: medium | Impact: medium
Mitigation: Set up dual monitoring and alerting systems
Owner: SRE Team
ROLLBACK STRATEGY
----------------------------------------
Rollback Triggers:
• Critical system failure
• Data corruption detected
• Migration timeline exceeded by > 50%
• Business-critical functionality unavailable
• Security breach detected
• Stakeholder decision to abort
Rollback Phases:
RETIRE:
- Revert retire changes
- Restore pre-retire state
- Validate retire rollback success
Estimated Time: 1500 minutes
VALIDATE:
- Revert validate changes
- Restore pre-validate state
- Validate validate rollback success
Estimated Time: 1500 minutes
REDIRECT:
- Revert redirect changes
- Restore pre-redirect state
- Validate redirect rollback success
Estimated Time: 1500 minutes
IMPLEMENT:
- Revert implement changes
- Restore pre-implement state
- Validate implement rollback success
Estimated Time: 1500 minutes
INTERCEPT:
- Revert intercept changes
- Restore pre-intercept state
- Validate intercept rollback success
Estimated Time: 1500 minutes
SUCCESS CRITERIA
----------------------------------------
✓ All data successfully migrated with 100% integrity
✓ System performance meets or exceeds baseline
✓ All business processes functioning normally
✓ No critical security vulnerabilities introduced
✓ Stakeholder acceptance criteria met
✓ Documentation and runbooks updated
STAKEHOLDERS
----------------------------------------
• Business Owner
• Technical Lead
• DevOps Team
• QA Team
• Security Team
• End Users

View File

@@ -0,0 +1,192 @@
{
"schema_before": "{\n \"schema_version\": \"1.0\",\n \"database\": \"user_management\",\n \"tables\": {\n \"users\": {\n \"columns\": {\n \"id\": {\n \"type\": \"bigint\",\n \"nullable\": false,\n \"primary_key\": true,\n \"auto_increment\": true\n },\n \"username\": {\n \"type\": \"varchar\",\n \"length\": 50,\n \"nullable\": false,\n \"unique\": true\n },\n \"email\": {\n \"type\": \"varchar\",\n \"length\": 255,\n \"nullable\": false,\n...",
"schema_after": "{\n \"schema_version\": \"2.0\",\n \"database\": \"user_management_v2\",\n \"tables\": {\n \"users\": {\n \"columns\": {\n \"id\": {\n \"type\": \"bigint\",\n \"nullable\": false,\n \"primary_key\": true,\n \"auto_increment\": true\n },\n \"username\": {\n \"type\": \"varchar\",\n \"length\": 50,\n \"nullable\": false,\n \"unique\": true\n },\n \"email\": {\n \"type\": \"varchar\",\n \"length\": 320,\n \"nullable\": fals...",
"analysis_date": "2026-02-16T13:47:27.050459",
"overall_compatibility": "potentially_incompatible",
"breaking_changes_count": 0,
"potentially_breaking_count": 4,
"non_breaking_changes_count": 0,
"additive_changes_count": 0,
"issues": [
{
"type": "check_added",
"severity": "potentially_breaking",
"description": "New check constraint 'phone IS NULL OR LENGTH(phone) >= 10' added to table 'users'",
"field_path": "tables.users.constraints.check",
"old_value": null,
"new_value": "phone IS NULL OR LENGTH(phone) >= 10",
"impact": "New check constraint may reject existing data",
"suggested_migration": "Validate existing data complies with new constraint",
"affected_operations": [
"INSERT",
"UPDATE"
]
},
{
"type": "check_added",
"severity": "potentially_breaking",
"description": "New check constraint 'bio IS NULL OR LENGTH(bio) <= 2000' added to table 'user_profiles'",
"field_path": "tables.user_profiles.constraints.check",
"old_value": null,
"new_value": "bio IS NULL OR LENGTH(bio) <= 2000",
"impact": "New check constraint may reject existing data",
"suggested_migration": "Validate existing data complies with new constraint",
"affected_operations": [
"INSERT",
"UPDATE"
]
},
{
"type": "check_added",
"severity": "potentially_breaking",
"description": "New check constraint 'language IN ('en', 'es', 'fr', 'de', 'it', 'pt', 'ru', 'ja', 'ko', 'zh')' added to table 'user_profiles'",
"field_path": "tables.user_profiles.constraints.check",
"old_value": null,
"new_value": "language IN ('en', 'es', 'fr', 'de', 'it', 'pt', 'ru', 'ja', 'ko', 'zh')",
"impact": "New check constraint may reject existing data",
"suggested_migration": "Validate existing data complies with new constraint",
"affected_operations": [
"INSERT",
"UPDATE"
]
},
{
"type": "check_added",
"severity": "potentially_breaking",
"description": "New check constraint 'session_type IN ('web', 'mobile', 'api', 'admin')' added to table 'user_sessions'",
"field_path": "tables.user_sessions.constraints.check",
"old_value": null,
"new_value": "session_type IN ('web', 'mobile', 'api', 'admin')",
"impact": "New check constraint may reject existing data",
"suggested_migration": "Validate existing data complies with new constraint",
"affected_operations": [
"INSERT",
"UPDATE"
]
}
],
"migration_scripts": [
{
"script_type": "sql",
"description": "Create new table user_preferences",
"script_content": "CREATE TABLE user_preferences (\n id bigint NOT NULL,\n user_id bigint NOT NULL,\n preference_key varchar NOT NULL,\n preference_value json,\n created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP\n);",
"rollback_script": "DROP TABLE IF EXISTS user_preferences;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.tables WHERE table_name = 'user_preferences';"
},
{
"script_type": "sql",
"description": "Add column email_verified_at to table users",
"script_content": "ALTER TABLE users ADD COLUMN email_verified_at timestamp;",
"rollback_script": "ALTER TABLE users DROP COLUMN email_verified_at;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'users' AND column_name = 'email_verified_at';"
},
{
"script_type": "sql",
"description": "Add column phone_verified_at to table users",
"script_content": "ALTER TABLE users ADD COLUMN phone_verified_at timestamp;",
"rollback_script": "ALTER TABLE users DROP COLUMN phone_verified_at;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'users' AND column_name = 'phone_verified_at';"
},
{
"script_type": "sql",
"description": "Add column two_factor_enabled to table users",
"script_content": "ALTER TABLE users ADD COLUMN two_factor_enabled boolean NOT NULL DEFAULT False;",
"rollback_script": "ALTER TABLE users DROP COLUMN two_factor_enabled;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'users' AND column_name = 'two_factor_enabled';"
},
{
"script_type": "sql",
"description": "Add column last_login_at to table users",
"script_content": "ALTER TABLE users ADD COLUMN last_login_at timestamp;",
"rollback_script": "ALTER TABLE users DROP COLUMN last_login_at;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'users' AND column_name = 'last_login_at';"
},
{
"script_type": "sql",
"description": "Add check constraint to users",
"script_content": "ALTER TABLE users ADD CONSTRAINT check_users CHECK (phone IS NULL OR LENGTH(phone) >= 10);",
"rollback_script": "ALTER TABLE users DROP CONSTRAINT check_users;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.table_constraints WHERE table_name = 'users' AND constraint_type = 'CHECK';"
},
{
"script_type": "sql",
"description": "Add column timezone to table user_profiles",
"script_content": "ALTER TABLE user_profiles ADD COLUMN timezone varchar DEFAULT UTC;",
"rollback_script": "ALTER TABLE user_profiles DROP COLUMN timezone;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'user_profiles' AND column_name = 'timezone';"
},
{
"script_type": "sql",
"description": "Add column language to table user_profiles",
"script_content": "ALTER TABLE user_profiles ADD COLUMN language varchar NOT NULL DEFAULT en;",
"rollback_script": "ALTER TABLE user_profiles DROP COLUMN language;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'user_profiles' AND column_name = 'language';"
},
{
"script_type": "sql",
"description": "Add check constraint to user_profiles",
"script_content": "ALTER TABLE user_profiles ADD CONSTRAINT check_user_profiles CHECK (bio IS NULL OR LENGTH(bio) <= 2000);",
"rollback_script": "ALTER TABLE user_profiles DROP CONSTRAINT check_user_profiles;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.table_constraints WHERE table_name = 'user_profiles' AND constraint_type = 'CHECK';"
},
{
"script_type": "sql",
"description": "Add check constraint to user_profiles",
"script_content": "ALTER TABLE user_profiles ADD CONSTRAINT check_user_profiles CHECK (language IN ('en', 'es', 'fr', 'de', 'it', 'pt', 'ru', 'ja', 'ko', 'zh'));",
"rollback_script": "ALTER TABLE user_profiles DROP CONSTRAINT check_user_profiles;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.table_constraints WHERE table_name = 'user_profiles' AND constraint_type = 'CHECK';"
},
{
"script_type": "sql",
"description": "Add column session_type to table user_sessions",
"script_content": "ALTER TABLE user_sessions ADD COLUMN session_type varchar NOT NULL DEFAULT web;",
"rollback_script": "ALTER TABLE user_sessions DROP COLUMN session_type;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'user_sessions' AND column_name = 'session_type';"
},
{
"script_type": "sql",
"description": "Add column is_mobile to table user_sessions",
"script_content": "ALTER TABLE user_sessions ADD COLUMN is_mobile boolean NOT NULL DEFAULT False;",
"rollback_script": "ALTER TABLE user_sessions DROP COLUMN is_mobile;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'user_sessions' AND column_name = 'is_mobile';"
},
{
"script_type": "sql",
"description": "Add check constraint to user_sessions",
"script_content": "ALTER TABLE user_sessions ADD CONSTRAINT check_user_sessions CHECK (session_type IN ('web', 'mobile', 'api', 'admin'));",
"rollback_script": "ALTER TABLE user_sessions DROP CONSTRAINT check_user_sessions;",
"dependencies": [],
"validation_query": "SELECT COUNT(*) FROM information_schema.table_constraints WHERE table_name = 'user_sessions' AND constraint_type = 'CHECK';"
}
],
"risk_assessment": {
"overall_risk": "medium",
"deployment_risk": "safe_independent_deployment",
"rollback_complexity": "low",
"testing_requirements": [
"integration_testing",
"regression_testing",
"data_migration_testing"
]
},
"recommendations": [
"Conduct thorough testing with realistic data volumes",
"Implement monitoring for migration success metrics",
"Test all migration scripts in staging environment",
"Implement migration progress monitoring",
"Create detailed communication plan for stakeholders",
"Implement feature flags for gradual rollout"
]
}

View File

@@ -0,0 +1,129 @@
================================================================================
COMPATIBILITY ANALYSIS REPORT
================================================================================
Analysis Date: 2026-02-16T13:47:27.050459
Overall Compatibility: POTENTIALLY_INCOMPATIBLE
SUMMARY
----------------------------------------
Breaking Changes: 0
Potentially Breaking: 4
Non-Breaking Changes: 0
Additive Changes: 0
Total Issues Found: 4
RISK ASSESSMENT
----------------------------------------
Overall Risk: medium
Deployment Risk: safe_independent_deployment
Rollback Complexity: low
Testing Requirements: integration_testing, regression_testing, data_migration_testing
POTENTIALLY BREAKING ISSUES
----------------------------------------
• New check constraint 'phone IS NULL OR LENGTH(phone) >= 10' added to table 'users'
Field: tables.users.constraints.check
Impact: New check constraint may reject existing data
Migration: Validate existing data complies with new constraint
Affected Operations: INSERT, UPDATE
• New check constraint 'bio IS NULL OR LENGTH(bio) <= 2000' added to table 'user_profiles'
Field: tables.user_profiles.constraints.check
Impact: New check constraint may reject existing data
Migration: Validate existing data complies with new constraint
Affected Operations: INSERT, UPDATE
• New check constraint 'language IN ('en', 'es', 'fr', 'de', 'it', 'pt', 'ru', 'ja', 'ko', 'zh')' added to table 'user_profiles'
Field: tables.user_profiles.constraints.check
Impact: New check constraint may reject existing data
Migration: Validate existing data complies with new constraint
Affected Operations: INSERT, UPDATE
• New check constraint 'session_type IN ('web', 'mobile', 'api', 'admin')' added to table 'user_sessions'
Field: tables.user_sessions.constraints.check
Impact: New check constraint may reject existing data
Migration: Validate existing data complies with new constraint
Affected Operations: INSERT, UPDATE
SUGGESTED MIGRATION SCRIPTS
----------------------------------------
1. Create new table user_preferences
Type: sql
Script:
CREATE TABLE user_preferences (
id bigint NOT NULL,
user_id bigint NOT NULL,
preference_key varchar NOT NULL,
preference_value json,
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
2. Add column email_verified_at to table users
Type: sql
Script:
ALTER TABLE users ADD COLUMN email_verified_at timestamp;
3. Add column phone_verified_at to table users
Type: sql
Script:
ALTER TABLE users ADD COLUMN phone_verified_at timestamp;
4. Add column two_factor_enabled to table users
Type: sql
Script:
ALTER TABLE users ADD COLUMN two_factor_enabled boolean NOT NULL DEFAULT FALSE;
5. Add column last_login_at to table users
Type: sql
Script:
ALTER TABLE users ADD COLUMN last_login_at timestamp;
6. Add check constraint to users
Type: sql
Script:
ALTER TABLE users ADD CONSTRAINT check_users CHECK (phone IS NULL OR LENGTH(phone) >= 10);
7. Add column timezone to table user_profiles
Type: sql
Script:
ALTER TABLE user_profiles ADD COLUMN timezone varchar DEFAULT 'UTC';
8. Add column language to table user_profiles
Type: sql
Script:
ALTER TABLE user_profiles ADD COLUMN language varchar NOT NULL DEFAULT 'en';
9. Add check constraint to user_profiles
Type: sql
Script:
ALTER TABLE user_profiles ADD CONSTRAINT check_user_profiles_bio CHECK (bio IS NULL OR LENGTH(bio) <= 2000);
10. Add check constraint to user_profiles
Type: sql
Script:
ALTER TABLE user_profiles ADD CONSTRAINT check_user_profiles_language CHECK (language IN ('en', 'es', 'fr', 'de', 'it', 'pt', 'ru', 'ja', 'ko', 'zh'));
11. Add column session_type to table user_sessions
Type: sql
Script:
ALTER TABLE user_sessions ADD COLUMN session_type varchar NOT NULL DEFAULT 'web';
12. Add column is_mobile to table user_sessions
Type: sql
Script:
ALTER TABLE user_sessions ADD COLUMN is_mobile boolean NOT NULL DEFAULT FALSE;
13. Add check constraint to user_sessions
Type: sql
Script:
ALTER TABLE user_sessions ADD CONSTRAINT check_user_sessions CHECK (session_type IN ('web', 'mobile', 'api', 'admin'));
RECOMMENDATIONS
----------------------------------------
1. Conduct thorough testing with realistic data volumes
2. Implement monitoring for migration success metrics
3. Test all migration scripts in staging environment
4. Implement migration progress monitoring
5. Create detailed communication plan for stakeholders
6. Implement feature flags for gradual rollout

File diff suppressed because it is too large

View File

@@ -0,0 +1,705 @@
# Migration Patterns Catalog
## Overview
This catalog provides detailed descriptions of proven migration patterns, their use cases, implementation guidelines, and best practices. Each pattern includes code examples, diagrams, and lessons learned from real-world implementations.
## Database Migration Patterns
### 1. Expand-Contract Pattern
- **Use Case:** Schema evolution with zero downtime
- **Complexity:** Medium
- **Risk Level:** Low-Medium
#### Description
The Expand-Contract pattern allows for schema changes without downtime by following a three-phase approach:
1. **Expand:** Add new schema elements alongside existing ones
2. **Migrate:** Dual-write to both old and new schema during transition
3. **Contract:** Remove old schema elements after validation
#### Implementation Steps
```sql
-- Phase 1: Expand
ALTER TABLE users ADD COLUMN email_new VARCHAR(255);
CREATE INDEX CONCURRENTLY idx_users_email_new ON users(email_new);
-- Phase 2: Migrate (Application Code)
-- Write to both columns during transition period
INSERT INTO users (name, email, email_new) VALUES (?, ?, ?);
-- Backfill existing data
UPDATE users SET email_new = email WHERE email_new IS NULL;
-- Phase 3: Contract (after validation)
ALTER TABLE users DROP COLUMN email;
ALTER TABLE users RENAME COLUMN email_new TO email;
```
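For large tables, the single-statement backfill above can hold locks for a long time; running it in batches keeps each lock window short. A minimal sketch, assuming PostgreSQL accessed via psycopg2 and an integer `id` key; the connection string, batch size, and sleep interval are illustrative:
```python
import time
import psycopg2

def backfill_email(dsn: str, batch_size: int = 10_000) -> None:
    """Copy email -> email_new in small batches (Expand-phase backfill)."""
    conn = psycopg2.connect(dsn)
    try:
        while True:
            with conn.cursor() as cur:
                # Update one batch of rows that still lack the new value.
                cur.execute(
                    """
                    UPDATE users SET email_new = email
                    WHERE id IN (
                        SELECT id FROM users
                        WHERE email_new IS NULL
                        LIMIT %s
                    )
                    """,
                    (batch_size,),
                )
                updated = cur.rowcount
            conn.commit()        # Commit per batch to keep locks short.
            if updated == 0:
                break            # Nothing left to backfill.
            time.sleep(0.1)      # Small pause to limit load on the primary.
    finally:
        conn.close()
```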
#### Pros and Cons
**Pros:**
- Zero downtime deployments
- Safe rollback at any point
- Gradual transition with validation
**Cons:**
- Increased storage during transition
- More complex application logic
- Extended migration timeline
### 2. Parallel Schema Pattern
- **Use Case:** Major database restructuring
- **Complexity:** High
- **Risk Level:** Medium
#### Description
Run new and old schemas in parallel, using feature flags to gradually route traffic to the new schema while maintaining the ability to rollback quickly.
#### Implementation Example
```python
class DatabaseRouter:
def __init__(self, feature_flag_service):
self.feature_flags = feature_flag_service
self.old_db = OldDatabaseConnection()
self.new_db = NewDatabaseConnection()
def route_query(self, user_id, query_type):
if self.feature_flags.is_enabled("new_schema", user_id):
return self.new_db.execute(query_type)
else:
return self.old_db.execute(query_type)
def dual_write(self, data):
# Write to both databases for consistency
success_old = self.old_db.write(data)
success_new = self.new_db.write(transform_data(data))
if not (success_old and success_new):
# Handle partial failures
self.handle_dual_write_failure(data, success_old, success_new)
```
#### Best Practices
- Implement data consistency checks between schemas (see the sketch after this list)
- Use circuit breakers for automatic failover
- Monitor performance impact of dual writes
- Plan for data reconciliation processes
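A hedged sketch of the consistency check mentioned above, comparing cheap aggregates between the two schemas. The connections are any DB-API 2.0 compatible connections, and the table and key names are illustrative (they should come from trusted configuration, not user input):
```python
def _scalar(conn, sql):
    """Run a query that returns a single value and fetch it."""
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]

def check_consistency(old_conn, new_conn, table: str, key: str = "id"):
    """Compare simple aggregates between the old and new schemas."""
    checks = {
        "row_count": f"SELECT COUNT(*) FROM {table}",
        "max_key": f"SELECT MAX({key}) FROM {table}",
    }
    mismatches = {}
    for name, sql in checks.items():
        old_val, new_val = _scalar(old_conn, sql), _scalar(new_conn, sql)
        if old_val != new_val:
            mismatches[name] = {"old": old_val, "new": new_val}
    return mismatches  # Empty dict means the schemas agree on these checks.
```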
### 3. Event Sourcing Migration
- **Use Case:** Migrating systems with complex business logic
- **Complexity:** High
- **Risk Level:** Medium-High
#### Description
Capture all changes as events during migration, enabling replay and reconciliation capabilities.
#### Event Store Schema
```sql
CREATE TABLE migration_events (
event_id UUID PRIMARY KEY,
aggregate_id UUID NOT NULL,
event_type VARCHAR(100) NOT NULL,
event_data JSONB NOT NULL,
event_version INTEGER NOT NULL,
occurred_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
processed_at TIMESTAMP WITH TIME ZONE
);
```
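A minimal sketch of appending an event to this store, assuming psycopg2 (its `Json` adapter handles the JSONB payload); connection handling and argument types are illustrative:
```python
import uuid
from psycopg2.extras import Json

def append_event(conn, aggregate_id, event_type, event_data, version):
    """Insert one migration event; occurred_at defaults to NOW() in the schema."""
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO migration_events
                (event_id, aggregate_id, event_type, event_data, event_version)
            VALUES (%s, %s, %s, %s, %s)
            """,
            (str(uuid.uuid4()), str(aggregate_id),
             event_type, Json(event_data), version),
        )
    conn.commit()
```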
#### Migration Event Handler
```python
from datetime import datetime

class MigrationEventHandler:
def __init__(self, old_store, new_store):
self.old_store = old_store
self.new_store = new_store
self.event_log = []
def handle_update(self, entity_id, old_data, new_data):
# Log the change as an event
event = MigrationEvent(
entity_id=entity_id,
event_type="entity_migrated",
old_data=old_data,
new_data=new_data,
timestamp=datetime.now()
)
self.event_log.append(event)
# Apply to new store
success = self.new_store.update(entity_id, new_data)
if not success:
# Mark for retry
event.status = "failed"
self.schedule_retry(event)
return success
def replay_events(self, from_timestamp=None):
"""Replay events for reconciliation"""
events = self.get_events_since(from_timestamp)
for event in events:
self.apply_event(event)
```
## Service Migration Patterns
### 1. Strangler Fig Pattern
- **Use Case:** Legacy system replacement
- **Complexity:** Medium-High
- **Risk Level:** Medium
#### Description
Gradually replace legacy functionality by intercepting calls and routing them to new services, eventually "strangling" the legacy system.
#### Implementation Architecture
```yaml
# API Gateway Configuration
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: user-service-migration
spec:
http:
- match:
- headers:
migration-flag:
exact: "new"
route:
- destination:
host: user-service-v2
- route:
- destination:
host: user-service-v1
```
#### Strangler Proxy Implementation
```python
class StranglerProxy:
def __init__(self):
self.legacy_service = LegacyUserService()
self.new_service = NewUserService()
self.feature_flags = FeatureFlagService()
def handle_request(self, request):
route = self.determine_route(request)
if route == "new":
return self.handle_with_new_service(request)
elif route == "both":
return self.handle_with_both_services(request)
else:
return self.handle_with_legacy_service(request)
def determine_route(self, request):
user_id = request.get('user_id')
if self.feature_flags.is_enabled("new_user_service", user_id):
if self.feature_flags.is_enabled("dual_write", user_id):
return "both"
else:
return "new"
else:
return "legacy"
```
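The proxy above assumes a `FeatureFlagService`; one common shape is stable hash-bucketing, so each user keeps the same route as the rollout percentage grows. A sketch under that assumption (flag names and percentages are illustrative, not part of the original example):
```python
import hashlib

class FeatureFlagService:
    def __init__(self, rollout_percentages):
        # e.g. {"new_user_service": 10, "dual_write": 100}
        self.rollout = rollout_percentages

    def is_enabled(self, flag: str, user_id: str) -> bool:
        percent = self.rollout.get(flag, 0)
        # Derive a stable bucket in [0, 100) from flag + user id, so a
        # given user sees a consistent route across requests.
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 100
        return bucket < percent
```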
### 2. Parallel Run Pattern
- **Use Case:** Risk mitigation for critical services
- **Complexity:** Medium
- **Risk Level:** Low-Medium
#### Description
Run both old and new services simultaneously, comparing outputs to validate correctness before switching traffic.
#### Implementation
```python
import asyncio

class ParallelRunManager:
def __init__(self):
self.primary_service = PrimaryService()
self.candidate_service = CandidateService()
self.comparator = ResponseComparator()
self.metrics = MetricsCollector()
async def parallel_execute(self, request):
# Execute both services concurrently
primary_task = asyncio.create_task(
self.primary_service.process(request)
)
candidate_task = asyncio.create_task(
self.candidate_service.process(request)
)
# Always wait for primary
primary_result = await primary_task
try:
# Wait for candidate with timeout
candidate_result = await asyncio.wait_for(
candidate_task, timeout=5.0
)
# Compare results
comparison = self.comparator.compare(
primary_result, candidate_result
)
# Record metrics
self.metrics.record_comparison(comparison)
except asyncio.TimeoutError:
self.metrics.record_timeout("candidate")
except Exception as e:
self.metrics.record_error("candidate", str(e))
# Always return primary result
return primary_result
```
### 3. Blue-Green Deployment Pattern
**Use Case:** Zero-downtime service updates
**Complexity:** Low-Medium
**Risk Level:** Low
#### Description
Maintain two identical production environments (blue and green), switching traffic between them for deployments.
#### Kubernetes Implementation
```yaml
# Blue Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-blue
labels:
version: blue
spec:
replicas: 3
selector:
matchLabels:
app: myapp
version: blue
template:
metadata:
labels:
app: myapp
version: blue
spec:
containers:
- name: app
image: myapp:v1.0.0
---
# Green Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-green
labels:
version: green
spec:
replicas: 3
selector:
matchLabels:
app: myapp
version: green
template:
metadata:
labels:
app: myapp
version: green
spec:
containers:
- name: app
image: myapp:v2.0.0
---
# Service (switches between blue and green)
apiVersion: v1
kind: Service
metadata:
name: app-service
spec:
selector:
app: myapp
version: blue # Change to green for deployment
ports:
- port: 80
targetPort: 8080
```
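The cutover itself is a one-line selector patch on the Service. A sketch using the official `kubernetes` Python client (assumes kubeconfig access; resource names match the manifests above):

```python
from kubernetes import client, config

def switch_traffic(target_version: str, namespace: str = "default"):
    """Point app-service at the blue or the green deployment."""
    config.load_kube_config()  # use load_incluster_config() inside a pod
    core = client.CoreV1Api()
    patch = {"spec": {"selector": {"app": "myapp", "version": target_version}}}
    core.patch_namespaced_service("app-service", namespace, patch)

# switch_traffic("green")  # cut over
# switch_traffic("blue")   # roll back
```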
## Infrastructure Migration Patterns
### 1. Lift and Shift Pattern
**Use Case:** Quick cloud migration with minimal changes
**Complexity:** Low-Medium
**Risk Level:** Low
#### Description
Migrate applications to cloud infrastructure with minimal or no code changes, focusing on infrastructure compatibility.
#### Migration Checklist
```yaml
Pre-Migration Assessment:
- inventory_current_infrastructure:
- servers_and_specifications
- network_configuration
- storage_requirements
- security_configurations
- identify_dependencies:
- database_connections
- external_service_integrations
- file_system_dependencies
- assess_compatibility:
- operating_system_versions
- runtime_dependencies
- license_requirements
Migration Execution:
- provision_target_infrastructure:
- compute_instances
- storage_volumes
- network_configuration
- security_groups
- migrate_data:
- database_backup_restore
- file_system_replication
- configuration_files
- update_configurations:
- connection_strings
- environment_variables
- dns_records
- validate_functionality:
- application_health_checks
- end_to_end_testing
- performance_validation
```
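The `validate_functionality` step lends itself to scripting. A minimal health-check sweep, assuming the migrated applications expose HTTP health endpoints (the URLs are placeholders):

```python
import requests

HEALTH_ENDPOINTS = [
    "https://app.example.com/healthz",  # placeholder endpoints
    "https://api.example.com/healthz",
]

def validate_health(timeout: float = 5.0) -> bool:
    """Return True only if every endpoint answers 200 within the timeout."""
    healthy = True
    for url in HEALTH_ENDPOINTS:
        try:
            response = requests.get(url, timeout=timeout)
            if response.status_code != 200:
                print(f"FAIL {url}: HTTP {response.status_code}")
                healthy = False
        except requests.RequestException as exc:
            print(f"FAIL {url}: {exc}")
            healthy = False
    return healthy
```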
### 2. Hybrid Cloud Migration
**Use Case:** Gradual cloud adoption with on-premises integration
**Complexity:** High
**Risk Level:** Medium-High
#### Description
Maintain some components on-premises while migrating others to cloud, requiring secure connectivity and data synchronization.
#### Network Architecture
```hcl
# Terraform configuration for hybrid connectivity
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_vpn_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "hybrid-vpn-gateway"
}
}
resource "aws_customer_gateway" "main" {
bgp_asn = 65000
ip_address = var.on_premises_public_ip
type = "ipsec.1"
tags = {
Name = "on-premises-gateway"
}
}
resource "aws_vpn_connection" "main" {
vpn_gateway_id = aws_vpn_gateway.main.id
customer_gateway_id = aws_customer_gateway.main.id
type = "ipsec.1"
static_routes_only = true
}
```
#### Data Synchronization Pattern
```python
class HybridDataSync:
def __init__(self):
self.on_prem_db = OnPremiseDatabase()
self.cloud_db = CloudDatabase()
self.sync_log = SyncLogManager()
async def bidirectional_sync(self):
"""Synchronize data between on-premises and cloud"""
# Get last sync timestamp
last_sync = self.sync_log.get_last_sync_time()
# Sync on-prem changes to cloud
on_prem_changes = self.on_prem_db.get_changes_since(last_sync)
for change in on_prem_changes:
await self.apply_change_to_cloud(change)
# Sync cloud changes to on-prem
cloud_changes = self.cloud_db.get_changes_since(last_sync)
for change in cloud_changes:
await self.apply_change_to_on_prem(change)
# Handle conflicts
conflicts = self.detect_conflicts(on_prem_changes, cloud_changes)
for conflict in conflicts:
await self.resolve_conflict(conflict)
# Update sync timestamp
self.sync_log.record_sync_completion()
async def apply_change_to_cloud(self, change):
"""Apply on-premises change to cloud database"""
try:
if change.operation == "INSERT":
await self.cloud_db.insert(change.table, change.data)
elif change.operation == "UPDATE":
await self.cloud_db.update(change.table, change.key, change.data)
elif change.operation == "DELETE":
await self.cloud_db.delete(change.table, change.key)
self.sync_log.record_success(change.id, "cloud")
except Exception as e:
self.sync_log.record_failure(change.id, "cloud", str(e))
raise
```
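`detect_conflicts` and `resolve_conflict` are left abstract above. A common default is last-writer-wins on the change timestamp; a sketch, assuming each change record carries `table`, `key`, and `modified_at` attributes:

```python
def detect_conflicts(self, on_prem_changes, cloud_changes):
    """Pair up changes that touched the same (table, key) on both sides."""
    cloud_index = {(c.table, c.key): c for c in cloud_changes}
    return [
        (change, cloud_index[(change.table, change.key)])
        for change in on_prem_changes
        if (change.table, change.key) in cloud_index
    ]

async def resolve_conflict(self, conflict):
    """Last-writer-wins: replay the newer change over the older side."""
    on_prem_change, cloud_change = conflict
    if on_prem_change.modified_at >= cloud_change.modified_at:
        await self.apply_change_to_cloud(on_prem_change)
    else:
        await self.apply_change_to_on_prem(cloud_change)
```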
### 3. Multi-Cloud Migration
**Use Case:** Avoiding vendor lock-in or regulatory requirements
**Complexity:** Very High
**Risk Level:** High
#### Description
Distribute workloads across multiple cloud providers for resilience, compliance, or cost optimization.
#### Service Mesh Configuration
```yaml
# Istio configuration for multi-cloud service mesh
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: aws-service
spec:
hosts:
- aws-service.company.com
ports:
- number: 443
name: https
protocol: HTTPS
location: MESH_EXTERNAL
resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: multi-cloud-routing
spec:
hosts:
- user-service
http:
- match:
- headers:
region:
exact: "us-east"
route:
- destination:
host: aws-service.company.com
weight: 100
- match:
- headers:
region:
exact: "eu-west"
route:
- destination:
host: gcp-service.company.com
weight: 100
- route: # Default routing
- destination:
host: user-service
subset: local
weight: 80
- destination:
host: aws-service.company.com
weight: 20
```
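On the client side, the same routing can degrade gracefully across providers. A sketch that walks the endpoints in preference order (hostnames reuse the illustrative ones from the mesh configuration):

```python
import requests

PROVIDER_ENDPOINTS = [
    "https://aws-service.company.com",
    "https://gcp-service.company.com",
]

def fetch_user(user_id: str, timeout: float = 2.0):
    """Try each cloud in order; raise only if every provider fails."""
    last_error = None
    for base in PROVIDER_ENDPOINTS:
        try:
            response = requests.get(f"{base}/users/{user_id}", timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc  # fall through to the next provider
    raise RuntimeError(f"All providers failed: {last_error}")
```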
## Feature Flag Patterns
### 1. Progressive Rollout Pattern
**Use Case:** Gradual feature deployment with risk mitigation
**Implementation:**
```python
import hashlib
import time

class ProgressiveRollout:
def __init__(self, feature_name):
self.feature_name = feature_name
self.rollout_percentage = 0
self.user_buckets = {}
def is_enabled_for_user(self, user_id):
# Consistent user bucketing
user_hash = hashlib.md5(f"{self.feature_name}:{user_id}".encode()).hexdigest()
bucket = int(user_hash, 16) % 100
return bucket < self.rollout_percentage
def increase_rollout(self, target_percentage, step_size=10):
"""Gradually increase rollout percentage"""
while self.rollout_percentage < target_percentage:
self.rollout_percentage = min(
self.rollout_percentage + step_size,
target_percentage
)
# Monitor metrics before next increase
yield self.rollout_percentage
time.sleep(300) # Wait 5 minutes between increases
```
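Driving the generator ties each step to a metrics gate, for example:

```python
rollout = ProgressiveRollout("new_checkout")  # hypothetical flag name
for pct in rollout.increase_rollout(target_percentage=50, step_size=10):
    print(f"Rollout now at {pct}%")
    # A real gate would inspect error rates here and break on regression,
    # e.g. `if error_rate_elevated(): break` (hypothetical guard)
```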
### 2. Circuit Breaker Pattern
**Use Case:** Automatic fallback during migration issues
```python
import time

class MigrationCircuitBreaker:
    def __init__(self, new_service, legacy_service, failure_threshold=5, timeout=60):
        # Injected clients: the breaker needs both to route and to fall back
        self.new_service = new_service
        self.legacy_service = legacy_service
        self.failure_count = 0
self.failure_threshold = failure_threshold
self.timeout = timeout
self.last_failure_time = None
self.state = 'CLOSED' # CLOSED, OPEN, HALF_OPEN
def call_new_service(self, request):
if self.state == 'OPEN':
if self.should_attempt_reset():
self.state = 'HALF_OPEN'
else:
return self.fallback_to_legacy(request)
try:
response = self.new_service.process(request)
self.on_success()
return response
except Exception as e:
self.on_failure()
return self.fallback_to_legacy(request)
def on_success(self):
self.failure_count = 0
self.state = 'CLOSED'
def on_failure(self):
self.failure_count += 1
self.last_failure_time = time.time()
if self.failure_count >= self.failure_threshold:
self.state = 'OPEN'
def should_attempt_reset(self):
return (time.time() - self.last_failure_time) >= self.timeout
```
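`fallback_to_legacy` is referenced but not shown; with the services injected in `__init__` it reduces to a delegation, e.g.:

```python
def fallback_to_legacy(self, request):
    """Serve the request from the legacy service while the breaker is open."""
    return self.legacy_service.process(request)

# breaker = MigrationCircuitBreaker(new_service, legacy_service)
# response = breaker.call_new_service(request)
```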
## Migration Anti-Patterns
### 1. Big Bang Migration (Anti-Pattern)
**Why to Avoid:**
- High risk of complete system failure
- Difficult to rollback
- Extended downtime
- All-or-nothing deployment
**Better Alternative:** Use incremental migration patterns like Strangler Fig or Parallel Run.
### 2. No Rollback Plan (Anti-Pattern)
**Why to Avoid:**
- Cannot recover from failures
- Increases business risk
- Panic-driven decisions during issues
**Better Alternative:** Always implement comprehensive rollback procedures before migration.
### 3. Insufficient Testing (Anti-Pattern)
**Why to Avoid:**
- Unknown compatibility issues
- Performance degradation
- Data corruption risks
**Better Alternative:** Implement comprehensive testing at each migration phase.
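Even a small automated parity check run at each phase catches most of these issues early. A pytest-style sketch (`all_keys`, `fetch_row`, and `transform_data` are assumed fixtures and helpers from the migration itself):

```python
import random

def test_migrated_rows_match_source(old_db, new_db):
    """Spot-check a random sample of rows after each migration phase."""
    keys = random.sample(old_db.all_keys("users"), k=100)
    for key in keys:
        expected = transform_data(old_db.fetch_row("users", key))
        actual = new_db.fetch_row("users", key)
        assert actual == expected, f"Row {key} diverged after migration"
```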
## Pattern Selection Matrix
| Migration Type | Complexity | Downtime Tolerance | Recommended Pattern |
|---------------|------------|-------------------|-------------------|
| Schema Change | Low | Zero | Expand-Contract |
| Schema Change | High | Zero | Parallel Schema |
| Service Replace | Medium | Zero | Strangler Fig |
| Service Update | Low | Zero | Blue-Green |
| Data Migration | High | Some | Event Sourcing |
| Infrastructure | Low | Some | Lift and Shift |
| Infrastructure | High | Zero | Hybrid Cloud |
## Success Metrics
### Technical Metrics
- Migration completion rate
- System availability during migration
- Performance impact (response time, throughput)
- Error rate changes
- Rollback execution time
### Business Metrics
- Customer impact score
- Revenue protection
- Time to value realization
- Stakeholder satisfaction
### Operational Metrics
- Team efficiency
- Knowledge transfer effectiveness
- Post-migration support requirements
- Documentation completeness
## Lessons Learned
### Common Pitfalls
1. **Underestimating data dependencies** - Always map all data relationships
2. **Insufficient monitoring** - Implement comprehensive observability before migration
3. **Poor communication** - Keep all stakeholders informed throughout the process
4. **Rushed timelines** - Allow adequate time for testing and validation
5. **Ignoring performance impact** - Benchmark before and after migration
### Best Practices
1. **Start with low-risk migrations** - Build confidence and experience
2. **Automate everything possible** - Reduce human error and increase repeatability
3. **Test rollback procedures** - Ensure you can recover from any failure
4. **Monitor continuously** - Use real-time dashboards and alerting
5. **Document everything** - Create comprehensive runbooks and documentation
This catalog serves as a reference for selecting appropriate migration patterns based on specific requirements, risk tolerance, and technical constraints.

File diff suppressed because it is too large

View File

@@ -0,0 +1,883 @@
#!/usr/bin/env python3
"""
Compatibility Checker - Analyze schema and API compatibility between versions
This tool analyzes schema and API changes between versions and identifies backward
compatibility issues including breaking changes, data type mismatches, missing fields,
constraint violations, and generates migration scripts suggestions.
Author: Migration Architect Skill
Version: 1.0.0
License: MIT
"""
import json
import argparse
import sys
import re
import datetime
from typing import Dict, List, Any, Optional, Tuple, Set
from dataclasses import dataclass, asdict
from enum import Enum
class ChangeType(Enum):
"""Types of changes detected"""
BREAKING = "breaking"
POTENTIALLY_BREAKING = "potentially_breaking"
NON_BREAKING = "non_breaking"
ADDITIVE = "additive"
class CompatibilityLevel(Enum):
"""Compatibility assessment levels"""
FULLY_COMPATIBLE = "fully_compatible"
BACKWARD_COMPATIBLE = "backward_compatible"
POTENTIALLY_INCOMPATIBLE = "potentially_incompatible"
BREAKING_CHANGES = "breaking_changes"
@dataclass
class CompatibilityIssue:
"""Individual compatibility issue"""
type: str
severity: str
description: str
field_path: str
old_value: Any
new_value: Any
impact: str
suggested_migration: str
affected_operations: List[str]
@dataclass
class MigrationScript:
"""Migration script suggestion"""
script_type: str # sql, api, config
description: str
script_content: str
rollback_script: str
dependencies: List[str]
validation_query: str
@dataclass
class CompatibilityReport:
"""Complete compatibility analysis report"""
schema_before: str
schema_after: str
analysis_date: str
overall_compatibility: str
breaking_changes_count: int
potentially_breaking_count: int
non_breaking_changes_count: int
additive_changes_count: int
issues: List[CompatibilityIssue]
migration_scripts: List[MigrationScript]
risk_assessment: Dict[str, Any]
recommendations: List[str]
class SchemaCompatibilityChecker:
"""Main schema compatibility checker class"""
def __init__(self):
self.type_compatibility_matrix = self._build_type_compatibility_matrix()
self.constraint_implications = self._build_constraint_implications()
def _build_type_compatibility_matrix(self) -> Dict[str, Dict[str, str]]:
"""Build data type compatibility matrix"""
return {
# SQL data types compatibility
"varchar": {
"text": "compatible",
"char": "potentially_breaking", # length might be different
"nvarchar": "compatible",
"int": "breaking",
"bigint": "breaking",
"decimal": "breaking",
"datetime": "breaking",
"boolean": "breaking"
},
"int": {
"bigint": "compatible",
"smallint": "potentially_breaking", # range reduction
"decimal": "compatible",
"float": "potentially_breaking", # precision loss
"varchar": "breaking",
"boolean": "breaking"
},
"bigint": {
"int": "potentially_breaking", # range reduction
"decimal": "compatible",
"varchar": "breaking",
"boolean": "breaking"
},
"decimal": {
"float": "potentially_breaking", # precision loss
"int": "potentially_breaking", # precision loss
"bigint": "potentially_breaking", # precision loss
"varchar": "breaking",
"boolean": "breaking"
},
"datetime": {
"timestamp": "compatible",
"date": "potentially_breaking", # time component lost
"varchar": "breaking",
"int": "breaking"
},
"boolean": {
"tinyint": "compatible",
"varchar": "breaking",
"int": "breaking"
},
# JSON/API field types
"string": {
"number": "breaking",
"boolean": "breaking",
"array": "breaking",
"object": "breaking",
"null": "potentially_breaking"
},
"number": {
"string": "breaking",
"boolean": "breaking",
"array": "breaking",
"object": "breaking",
"null": "potentially_breaking"
},
"boolean": {
"string": "breaking",
"number": "breaking",
"array": "breaking",
"object": "breaking",
"null": "potentially_breaking"
},
"array": {
"string": "breaking",
"number": "breaking",
"boolean": "breaking",
"object": "breaking",
"null": "potentially_breaking"
},
"object": {
"string": "breaking",
"number": "breaking",
"boolean": "breaking",
"array": "breaking",
"null": "potentially_breaking"
}
}
def _build_constraint_implications(self) -> Dict[str, Dict[str, str]]:
"""Build constraint change implications"""
return {
"required": {
"added": "breaking", # Previously optional field now required
"removed": "non_breaking" # Previously required field now optional
},
"not_null": {
"added": "breaking", # Previously nullable now NOT NULL
"removed": "non_breaking" # Previously NOT NULL now nullable
},
"unique": {
"added": "potentially_breaking", # May fail if duplicates exist
"removed": "non_breaking" # No longer enforcing uniqueness
},
"primary_key": {
"added": "breaking", # Major structural change
"removed": "breaking", # Major structural change
"modified": "breaking" # Primary key change is always breaking
},
"foreign_key": {
"added": "potentially_breaking", # May fail if referential integrity violated
"removed": "potentially_breaking", # May allow orphaned records
"modified": "breaking" # Reference change is breaking
},
"check": {
"added": "potentially_breaking", # May fail if existing data violates check
"removed": "non_breaking", # No longer enforcing check
"modified": "potentially_breaking" # Different validation rules
},
"index": {
"added": "non_breaking", # Performance improvement
"removed": "non_breaking", # Performance impact only
"modified": "non_breaking" # Performance impact only
}
}
def analyze_database_schema(self, before_schema: Dict[str, Any],
after_schema: Dict[str, Any]) -> CompatibilityReport:
"""Analyze database schema compatibility"""
issues = []
migration_scripts = []
before_tables = before_schema.get("tables", {})
after_tables = after_schema.get("tables", {})
# Check for removed tables
for table_name in before_tables:
if table_name not in after_tables:
issues.append(CompatibilityIssue(
type="table_removed",
severity="breaking",
description=f"Table '{table_name}' has been removed",
field_path=f"tables.{table_name}",
old_value=before_tables[table_name],
new_value=None,
impact="All operations on this table will fail",
suggested_migration=f"CREATE VIEW {table_name} AS SELECT * FROM replacement_table;",
affected_operations=["SELECT", "INSERT", "UPDATE", "DELETE"]
))
# Check for added tables
for table_name in after_tables:
if table_name not in before_tables:
migration_scripts.append(MigrationScript(
script_type="sql",
description=f"Create new table {table_name}",
script_content=self._generate_create_table_sql(table_name, after_tables[table_name]),
rollback_script=f"DROP TABLE IF EXISTS {table_name};",
dependencies=[],
validation_query=f"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = '{table_name}';"
))
# Check for modified tables
for table_name in set(before_tables.keys()) & set(after_tables.keys()):
table_issues, table_scripts = self._analyze_table_changes(
table_name, before_tables[table_name], after_tables[table_name]
)
issues.extend(table_issues)
migration_scripts.extend(table_scripts)
return self._build_compatibility_report(
before_schema, after_schema, issues, migration_scripts
)
def analyze_api_schema(self, before_schema: Dict[str, Any],
after_schema: Dict[str, Any]) -> CompatibilityReport:
"""Analyze REST API schema compatibility"""
issues = []
migration_scripts = []
# Analyze endpoints
before_paths = before_schema.get("paths", {})
after_paths = after_schema.get("paths", {})
# Check for removed endpoints
for path in before_paths:
if path not in after_paths:
for method in before_paths[path]:
issues.append(CompatibilityIssue(
type="endpoint_removed",
severity="breaking",
description=f"Endpoint {method.upper()} {path} has been removed",
field_path=f"paths.{path}.{method}",
old_value=before_paths[path][method],
new_value=None,
impact="Client requests to this endpoint will fail with 404",
suggested_migration=f"Implement redirect to replacement endpoint or maintain backward compatibility stub",
affected_operations=[f"{method.upper()} {path}"]
))
# Check for modified endpoints
for path in set(before_paths.keys()) & set(after_paths.keys()):
path_issues, path_scripts = self._analyze_endpoint_changes(
path, before_paths[path], after_paths[path]
)
issues.extend(path_issues)
migration_scripts.extend(path_scripts)
# Analyze data models
before_components = before_schema.get("components", {}).get("schemas", {})
after_components = after_schema.get("components", {}).get("schemas", {})
for model_name in set(before_components.keys()) & set(after_components.keys()):
model_issues, model_scripts = self._analyze_model_changes(
model_name, before_components[model_name], after_components[model_name]
)
issues.extend(model_issues)
migration_scripts.extend(model_scripts)
return self._build_compatibility_report(
before_schema, after_schema, issues, migration_scripts
)
def _analyze_table_changes(self, table_name: str, before_table: Dict[str, Any],
after_table: Dict[str, Any]) -> Tuple[List[CompatibilityIssue], List[MigrationScript]]:
"""Analyze changes to a specific table"""
issues = []
scripts = []
before_columns = before_table.get("columns", {})
after_columns = after_table.get("columns", {})
# Check for removed columns
for col_name in before_columns:
if col_name not in after_columns:
issues.append(CompatibilityIssue(
type="column_removed",
severity="breaking",
description=f"Column '{col_name}' removed from table '{table_name}'",
field_path=f"tables.{table_name}.columns.{col_name}",
old_value=before_columns[col_name],
new_value=None,
impact="SELECT statements including this column will fail",
suggested_migration=f"ALTER TABLE {table_name} ADD COLUMN {col_name}_deprecated AS computed_value;",
affected_operations=["SELECT", "INSERT", "UPDATE"]
))
# Check for added columns
for col_name in after_columns:
if col_name not in before_columns:
col_def = after_columns[col_name]
                is_required = not col_def.get("nullable", True) and col_def.get("default") is None
if is_required:
issues.append(CompatibilityIssue(
type="required_column_added",
severity="breaking",
description=f"Required column '{col_name}' added to table '{table_name}'",
field_path=f"tables.{table_name}.columns.{col_name}",
old_value=None,
new_value=col_def,
impact="INSERT statements without this column will fail",
suggested_migration=f"Add default value or make column nullable initially",
affected_operations=["INSERT"]
))
scripts.append(MigrationScript(
script_type="sql",
description=f"Add column {col_name} to table {table_name}",
script_content=f"ALTER TABLE {table_name} ADD COLUMN {self._generate_column_definition(col_name, col_def)};",
rollback_script=f"ALTER TABLE {table_name} DROP COLUMN {col_name};",
dependencies=[],
validation_query=f"SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '{table_name}' AND column_name = '{col_name}';"
))
# Check for modified columns
for col_name in set(before_columns.keys()) & set(after_columns.keys()):
col_issues, col_scripts = self._analyze_column_changes(
table_name, col_name, before_columns[col_name], after_columns[col_name]
)
issues.extend(col_issues)
scripts.extend(col_scripts)
# Check constraint changes
before_constraints = before_table.get("constraints", {})
after_constraints = after_table.get("constraints", {})
constraint_issues, constraint_scripts = self._analyze_constraint_changes(
table_name, before_constraints, after_constraints
)
issues.extend(constraint_issues)
scripts.extend(constraint_scripts)
return issues, scripts
def _analyze_column_changes(self, table_name: str, col_name: str,
before_col: Dict[str, Any], after_col: Dict[str, Any]) -> Tuple[List[CompatibilityIssue], List[MigrationScript]]:
"""Analyze changes to a specific column"""
issues = []
scripts = []
# Check data type changes
before_type = before_col.get("type", "").lower()
after_type = after_col.get("type", "").lower()
if before_type != after_type:
compatibility = self.type_compatibility_matrix.get(before_type, {}).get(after_type, "breaking")
if compatibility == "breaking":
issues.append(CompatibilityIssue(
type="incompatible_type_change",
severity="breaking",
description=f"Column '{col_name}' type changed from {before_type} to {after_type}",
field_path=f"tables.{table_name}.columns.{col_name}.type",
old_value=before_type,
new_value=after_type,
impact="Data conversion may fail or lose precision",
suggested_migration=f"Add conversion logic and validate data integrity",
affected_operations=["SELECT", "INSERT", "UPDATE", "WHERE clauses"]
))
scripts.append(MigrationScript(
script_type="sql",
description=f"Convert column {col_name} from {before_type} to {after_type}",
script_content=f"ALTER TABLE {table_name} ALTER COLUMN {col_name} TYPE {after_type} USING {col_name}::{after_type};",
rollback_script=f"ALTER TABLE {table_name} ALTER COLUMN {col_name} TYPE {before_type};",
dependencies=[f"backup_{table_name}"],
validation_query=f"SELECT COUNT(*) FROM {table_name} WHERE {col_name} IS NOT NULL;"
))
elif compatibility == "potentially_breaking":
issues.append(CompatibilityIssue(
type="risky_type_change",
severity="potentially_breaking",
description=f"Column '{col_name}' type changed from {before_type} to {after_type} - may lose data",
field_path=f"tables.{table_name}.columns.{col_name}.type",
old_value=before_type,
new_value=after_type,
impact="Potential data loss or precision reduction",
suggested_migration=f"Validate all existing data can be converted safely",
affected_operations=["Data integrity"]
))
# Check nullability changes
before_nullable = before_col.get("nullable", True)
after_nullable = after_col.get("nullable", True)
if before_nullable != after_nullable:
if before_nullable and not after_nullable: # null -> not null
issues.append(CompatibilityIssue(
type="nullability_restriction",
severity="breaking",
description=f"Column '{col_name}' changed from nullable to NOT NULL",
field_path=f"tables.{table_name}.columns.{col_name}.nullable",
old_value=before_nullable,
new_value=after_nullable,
impact="Existing NULL values will cause constraint violations",
suggested_migration=f"Update NULL values to valid defaults before applying NOT NULL constraint",
affected_operations=["INSERT", "UPDATE"]
))
scripts.append(MigrationScript(
script_type="sql",
description=f"Make column {col_name} NOT NULL",
script_content=f"""
-- Update NULL values first
UPDATE {table_name} SET {col_name} = 'DEFAULT_VALUE' WHERE {col_name} IS NULL;
-- Add NOT NULL constraint
ALTER TABLE {table_name} ALTER COLUMN {col_name} SET NOT NULL;
""",
rollback_script=f"ALTER TABLE {table_name} ALTER COLUMN {col_name} DROP NOT NULL;",
dependencies=[],
validation_query=f"SELECT COUNT(*) FROM {table_name} WHERE {col_name} IS NULL;"
))
# Check length/precision changes
before_length = before_col.get("length")
after_length = after_col.get("length")
if before_length and after_length and before_length != after_length:
if after_length < before_length:
issues.append(CompatibilityIssue(
type="length_reduction",
severity="potentially_breaking",
description=f"Column '{col_name}' length reduced from {before_length} to {after_length}",
field_path=f"tables.{table_name}.columns.{col_name}.length",
old_value=before_length,
new_value=after_length,
impact="Data truncation may occur for values exceeding new length",
suggested_migration=f"Validate no existing data exceeds new length limit",
affected_operations=["INSERT", "UPDATE"]
))
return issues, scripts
def _analyze_constraint_changes(self, table_name: str, before_constraints: Dict[str, Any],
after_constraints: Dict[str, Any]) -> Tuple[List[CompatibilityIssue], List[MigrationScript]]:
"""Analyze constraint changes"""
issues = []
scripts = []
for constraint_type in ["primary_key", "foreign_key", "unique", "check"]:
before_constraint = before_constraints.get(constraint_type, [])
after_constraint = after_constraints.get(constraint_type, [])
# Convert to sets for comparison
before_set = set(str(c) for c in before_constraint) if isinstance(before_constraint, list) else {str(before_constraint)} if before_constraint else set()
after_set = set(str(c) for c in after_constraint) if isinstance(after_constraint, list) else {str(after_constraint)} if after_constraint else set()
# Check for removed constraints
for constraint in before_set - after_set:
implication = self.constraint_implications.get(constraint_type, {}).get("removed", "non_breaking")
issues.append(CompatibilityIssue(
type=f"{constraint_type}_removed",
severity=implication,
description=f"{constraint_type.replace('_', ' ').title()} constraint '{constraint}' removed from table '{table_name}'",
field_path=f"tables.{table_name}.constraints.{constraint_type}",
old_value=constraint,
new_value=None,
impact=f"No longer enforcing {constraint_type} constraint",
suggested_migration=f"Consider application-level validation for removed constraint",
affected_operations=["INSERT", "UPDATE", "DELETE"]
))
# Check for added constraints
for constraint in after_set - before_set:
implication = self.constraint_implications.get(constraint_type, {}).get("added", "potentially_breaking")
issues.append(CompatibilityIssue(
type=f"{constraint_type}_added",
severity=implication,
description=f"New {constraint_type.replace('_', ' ')} constraint '{constraint}' added to table '{table_name}'",
field_path=f"tables.{table_name}.constraints.{constraint_type}",
old_value=None,
new_value=constraint,
impact=f"New {constraint_type} constraint may reject existing data",
suggested_migration=f"Validate existing data complies with new constraint",
affected_operations=["INSERT", "UPDATE"]
))
scripts.append(MigrationScript(
script_type="sql",
description=f"Add {constraint_type} constraint to {table_name}",
script_content=f"ALTER TABLE {table_name} ADD CONSTRAINT {constraint_type}_{table_name} {constraint_type.upper()} ({constraint});",
rollback_script=f"ALTER TABLE {table_name} DROP CONSTRAINT {constraint_type}_{table_name};",
dependencies=[],
validation_query=f"SELECT COUNT(*) FROM information_schema.table_constraints WHERE table_name = '{table_name}' AND constraint_type = '{constraint_type.upper()}';"
))
return issues, scripts
def _analyze_endpoint_changes(self, path: str, before_endpoint: Dict[str, Any],
after_endpoint: Dict[str, Any]) -> Tuple[List[CompatibilityIssue], List[MigrationScript]]:
"""Analyze changes to an API endpoint"""
issues = []
scripts = []
for method in set(before_endpoint.keys()) & set(after_endpoint.keys()):
before_method = before_endpoint[method]
after_method = after_endpoint[method]
# Check parameter changes
before_params = before_method.get("parameters", [])
after_params = after_method.get("parameters", [])
before_param_names = {p["name"] for p in before_params}
after_param_names = {p["name"] for p in after_params}
# Check for removed required parameters
for param_name in before_param_names - after_param_names:
param = next(p for p in before_params if p["name"] == param_name)
if param.get("required", False):
issues.append(CompatibilityIssue(
type="required_parameter_removed",
severity="breaking",
description=f"Required parameter '{param_name}' removed from {method.upper()} {path}",
field_path=f"paths.{path}.{method}.parameters",
old_value=param,
new_value=None,
impact="Client requests with this parameter will fail",
suggested_migration="Implement parameter validation with backward compatibility",
affected_operations=[f"{method.upper()} {path}"]
))
# Check for added required parameters
for param_name in after_param_names - before_param_names:
param = next(p for p in after_params if p["name"] == param_name)
if param.get("required", False):
issues.append(CompatibilityIssue(
type="required_parameter_added",
severity="breaking",
description=f"New required parameter '{param_name}' added to {method.upper()} {path}",
field_path=f"paths.{path}.{method}.parameters",
old_value=None,
new_value=param,
impact="Client requests without this parameter will fail",
suggested_migration="Provide default value or make parameter optional initially",
affected_operations=[f"{method.upper()} {path}"]
))
# Check response schema changes
before_responses = before_method.get("responses", {})
after_responses = after_method.get("responses", {})
for status_code in before_responses:
if status_code in after_responses:
before_schema = before_responses[status_code].get("content", {}).get("application/json", {}).get("schema", {})
after_schema = after_responses[status_code].get("content", {}).get("application/json", {}).get("schema", {})
if before_schema != after_schema:
issues.append(CompatibilityIssue(
type="response_schema_changed",
severity="potentially_breaking",
description=f"Response schema changed for {method.upper()} {path} (status {status_code})",
field_path=f"paths.{path}.{method}.responses.{status_code}",
old_value=before_schema,
new_value=after_schema,
impact="Client response parsing may fail",
suggested_migration="Implement versioned API responses",
affected_operations=[f"{method.upper()} {path}"]
))
return issues, scripts
def _analyze_model_changes(self, model_name: str, before_model: Dict[str, Any],
after_model: Dict[str, Any]) -> Tuple[List[CompatibilityIssue], List[MigrationScript]]:
"""Analyze changes to an API data model"""
issues = []
scripts = []
before_props = before_model.get("properties", {})
after_props = after_model.get("properties", {})
before_required = set(before_model.get("required", []))
after_required = set(after_model.get("required", []))
# Check for removed properties
for prop_name in set(before_props.keys()) - set(after_props.keys()):
issues.append(CompatibilityIssue(
type="property_removed",
severity="breaking",
description=f"Property '{prop_name}' removed from model '{model_name}'",
field_path=f"components.schemas.{model_name}.properties.{prop_name}",
old_value=before_props[prop_name],
new_value=None,
impact="Client code expecting this property will fail",
suggested_migration="Use API versioning to maintain backward compatibility",
affected_operations=["Serialization", "Deserialization"]
))
# Check for newly required properties
for prop_name in after_required - before_required:
issues.append(CompatibilityIssue(
type="property_made_required",
severity="breaking",
description=f"Property '{prop_name}' is now required in model '{model_name}'",
field_path=f"components.schemas.{model_name}.required",
old_value=list(before_required),
new_value=list(after_required),
impact="Client requests without this property will fail validation",
suggested_migration="Provide default values or implement gradual rollout",
affected_operations=["Request validation"]
))
# Check for property type changes
for prop_name in set(before_props.keys()) & set(after_props.keys()):
before_type = before_props[prop_name].get("type")
after_type = after_props[prop_name].get("type")
if before_type != after_type:
compatibility = self.type_compatibility_matrix.get(before_type, {}).get(after_type, "breaking")
issues.append(CompatibilityIssue(
type="property_type_changed",
severity=compatibility,
description=f"Property '{prop_name}' type changed from {before_type} to {after_type} in model '{model_name}'",
field_path=f"components.schemas.{model_name}.properties.{prop_name}.type",
old_value=before_type,
new_value=after_type,
impact="Client serialization/deserialization may fail",
suggested_migration="Implement type coercion or API versioning",
affected_operations=["Serialization", "Deserialization"]
))
return issues, scripts
def _build_compatibility_report(self, before_schema: Dict[str, Any], after_schema: Dict[str, Any],
issues: List[CompatibilityIssue], migration_scripts: List[MigrationScript]) -> CompatibilityReport:
"""Build the final compatibility report"""
# Count issues by severity
breaking_count = sum(1 for issue in issues if issue.severity == "breaking")
potentially_breaking_count = sum(1 for issue in issues if issue.severity == "potentially_breaking")
non_breaking_count = sum(1 for issue in issues if issue.severity == "non_breaking")
additive_count = sum(1 for issue in issues if issue.type == "additive")
# Determine overall compatibility
if breaking_count > 0:
overall_compatibility = "breaking_changes"
elif potentially_breaking_count > 0:
overall_compatibility = "potentially_incompatible"
elif non_breaking_count > 0:
overall_compatibility = "backward_compatible"
else:
overall_compatibility = "fully_compatible"
# Generate risk assessment
risk_assessment = {
"overall_risk": "high" if breaking_count > 0 else "medium" if potentially_breaking_count > 0 else "low",
"deployment_risk": "requires_coordinated_deployment" if breaking_count > 0 else "safe_independent_deployment",
"rollback_complexity": "high" if breaking_count > 3 else "medium" if breaking_count > 0 else "low",
"testing_requirements": ["integration_testing", "regression_testing"] +
(["data_migration_testing"] if any(s.script_type == "sql" for s in migration_scripts) else [])
}
# Generate recommendations
recommendations = []
if breaking_count > 0:
recommendations.append("Implement API versioning to maintain backward compatibility")
recommendations.append("Plan for coordinated deployment with all clients")
recommendations.append("Implement comprehensive rollback procedures")
if potentially_breaking_count > 0:
recommendations.append("Conduct thorough testing with realistic data volumes")
recommendations.append("Implement monitoring for migration success metrics")
if migration_scripts:
recommendations.append("Test all migration scripts in staging environment")
recommendations.append("Implement migration progress monitoring")
recommendations.append("Create detailed communication plan for stakeholders")
recommendations.append("Implement feature flags for gradual rollout")
return CompatibilityReport(
schema_before=json.dumps(before_schema, indent=2)[:500] + "..." if len(json.dumps(before_schema)) > 500 else json.dumps(before_schema, indent=2),
schema_after=json.dumps(after_schema, indent=2)[:500] + "..." if len(json.dumps(after_schema)) > 500 else json.dumps(after_schema, indent=2),
analysis_date=datetime.datetime.now().isoformat(),
overall_compatibility=overall_compatibility,
breaking_changes_count=breaking_count,
potentially_breaking_count=potentially_breaking_count,
non_breaking_changes_count=non_breaking_count,
additive_changes_count=additive_count,
issues=issues,
migration_scripts=migration_scripts,
risk_assessment=risk_assessment,
recommendations=recommendations
)
def _generate_create_table_sql(self, table_name: str, table_def: Dict[str, Any]) -> str:
"""Generate CREATE TABLE SQL statement"""
columns = []
for col_name, col_def in table_def.get("columns", {}).items():
columns.append(self._generate_column_definition(col_name, col_def))
return f"CREATE TABLE {table_name} (\n " + ",\n ".join(columns) + "\n);"
def _generate_column_definition(self, col_name: str, col_def: Dict[str, Any]) -> str:
"""Generate column definition for SQL"""
col_type = col_def.get("type", "VARCHAR(255)")
nullable = "" if col_def.get("nullable", True) else " NOT NULL"
default = f" DEFAULT {col_def.get('default')}" if col_def.get("default") is not None else ""
return f"{col_name} {col_type}{nullable}{default}"
def generate_human_readable_report(self, report: CompatibilityReport) -> str:
"""Generate human-readable compatibility report"""
output = []
output.append("=" * 80)
output.append("COMPATIBILITY ANALYSIS REPORT")
output.append("=" * 80)
output.append(f"Analysis Date: {report.analysis_date}")
output.append(f"Overall Compatibility: {report.overall_compatibility.upper()}")
output.append("")
# Summary
output.append("SUMMARY")
output.append("-" * 40)
output.append(f"Breaking Changes: {report.breaking_changes_count}")
output.append(f"Potentially Breaking: {report.potentially_breaking_count}")
output.append(f"Non-Breaking Changes: {report.non_breaking_changes_count}")
output.append(f"Additive Changes: {report.additive_changes_count}")
output.append(f"Total Issues Found: {len(report.issues)}")
output.append("")
# Risk Assessment
output.append("RISK ASSESSMENT")
output.append("-" * 40)
for key, value in report.risk_assessment.items():
output.append(f"{key.replace('_', ' ').title()}: {value}")
output.append("")
# Issues by Severity
issues_by_severity = {}
for issue in report.issues:
if issue.severity not in issues_by_severity:
issues_by_severity[issue.severity] = []
issues_by_severity[issue.severity].append(issue)
for severity in ["breaking", "potentially_breaking", "non_breaking"]:
if severity in issues_by_severity:
output.append(f"{severity.upper().replace('_', ' ')} ISSUES")
output.append("-" * 40)
for issue in issues_by_severity[severity]:
output.append(f"{issue.description}")
output.append(f" Field: {issue.field_path}")
output.append(f" Impact: {issue.impact}")
output.append(f" Migration: {issue.suggested_migration}")
if issue.affected_operations:
output.append(f" Affected Operations: {', '.join(issue.affected_operations)}")
output.append("")
# Migration Scripts
if report.migration_scripts:
output.append("SUGGESTED MIGRATION SCRIPTS")
output.append("-" * 40)
for i, script in enumerate(report.migration_scripts, 1):
output.append(f"{i}. {script.description}")
output.append(f" Type: {script.script_type}")
output.append(" Script:")
for line in script.script_content.split('\n'):
output.append(f" {line}")
output.append("")
# Recommendations
output.append("RECOMMENDATIONS")
output.append("-" * 40)
for i, rec in enumerate(report.recommendations, 1):
output.append(f"{i}. {rec}")
output.append("")
return "\n".join(output)
def main():
"""Main function with command line interface"""
parser = argparse.ArgumentParser(description="Analyze schema and API compatibility between versions")
parser.add_argument("--before", required=True, help="Before schema file (JSON)")
parser.add_argument("--after", required=True, help="After schema file (JSON)")
parser.add_argument("--type", choices=["database", "api"], default="database", help="Schema type to analyze")
parser.add_argument("--output", "-o", help="Output file for compatibility report (JSON)")
parser.add_argument("--format", "-f", choices=["json", "text", "both"], default="both", help="Output format")
args = parser.parse_args()
try:
# Load schemas
with open(args.before, 'r') as f:
before_schema = json.load(f)
with open(args.after, 'r') as f:
after_schema = json.load(f)
# Analyze compatibility
checker = SchemaCompatibilityChecker()
if args.type == "database":
report = checker.analyze_database_schema(before_schema, after_schema)
else: # api
report = checker.analyze_api_schema(before_schema, after_schema)
# Output results
if args.format in ["json", "both"]:
report_dict = asdict(report)
if args.output:
with open(args.output, 'w') as f:
json.dump(report_dict, f, indent=2)
print(f"Compatibility report saved to {args.output}")
else:
print(json.dumps(report_dict, indent=2))
if args.format in ["text", "both"]:
human_report = checker.generate_human_readable_report(report)
text_output = args.output.replace('.json', '.txt') if args.output else None
if text_output:
with open(text_output, 'w') as f:
f.write(human_report)
print(f"Human-readable report saved to {text_output}")
else:
print("\n" + "="*80)
print("HUMAN-READABLE COMPATIBILITY REPORT")
print("="*80)
print(human_report)
# Return exit code based on compatibility
if report.breaking_changes_count > 0:
return 2 # Breaking changes found
elif report.potentially_breaking_count > 0:
return 1 # Potentially breaking changes found
else:
return 0 # No compatibility issues
except FileNotFoundError as e:
print(f"Error: File not found: {e}", file=sys.stderr)
return 1
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON: {e}", file=sys.stderr)
return 1
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
return 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -0,0 +1,661 @@
#!/usr/bin/env python3
"""
Migration Planner - Generate comprehensive migration plans with risk assessment
This tool analyzes migration specifications and generates detailed, phased migration plans
including pre-migration checklists, validation gates, rollback triggers, timeline estimates,
and risk matrices.
Author: Migration Architect Skill
Version: 1.0.0
License: MIT
"""
import json
import argparse
import sys
import datetime
import hashlib
import math
from typing import Dict, List, Any, Optional, Tuple
from dataclasses import dataclass, asdict
from enum import Enum
class MigrationType(Enum):
"""Migration type enumeration"""
DATABASE = "database"
SERVICE = "service"
INFRASTRUCTURE = "infrastructure"
DATA = "data"
API = "api"
class MigrationComplexity(Enum):
"""Migration complexity levels"""
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
class RiskLevel(Enum):
"""Risk assessment levels"""
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
@dataclass
class MigrationConstraint:
"""Migration constraint definition"""
type: str
description: str
impact: str
mitigation: str
@dataclass
class MigrationPhase:
"""Individual migration phase"""
name: str
description: str
duration_hours: int
dependencies: List[str]
validation_criteria: List[str]
rollback_triggers: List[str]
tasks: List[str]
risk_level: str
resources_required: List[str]
@dataclass
class RiskItem:
"""Individual risk assessment item"""
category: str
description: str
probability: str # low, medium, high
impact: str # low, medium, high
severity: str # low, medium, high, critical
mitigation: str
owner: str
@dataclass
class MigrationPlan:
"""Complete migration plan structure"""
migration_id: str
source_system: str
target_system: str
migration_type: str
complexity: str
estimated_duration_hours: int
phases: List[MigrationPhase]
risks: List[RiskItem]
success_criteria: List[str]
rollback_plan: Dict[str, Any]
stakeholders: List[str]
created_at: str
class MigrationPlanner:
"""Main migration planner class"""
def __init__(self):
self.migration_patterns = self._load_migration_patterns()
self.risk_templates = self._load_risk_templates()
def _load_migration_patterns(self) -> Dict[str, Any]:
"""Load predefined migration patterns"""
return {
"database": {
"schema_change": {
"phases": ["preparation", "expand", "migrate", "contract", "cleanup"],
"base_duration": 24,
"complexity_multiplier": {"low": 1.0, "medium": 1.5, "high": 2.5, "critical": 4.0}
},
"data_migration": {
"phases": ["assessment", "setup", "bulk_copy", "delta_sync", "validation", "cutover"],
"base_duration": 48,
"complexity_multiplier": {"low": 1.2, "medium": 2.0, "high": 3.0, "critical": 5.0}
}
},
"service": {
"strangler_fig": {
"phases": ["intercept", "implement", "redirect", "validate", "retire"],
"base_duration": 168, # 1 week
"complexity_multiplier": {"low": 0.8, "medium": 1.0, "high": 1.8, "critical": 3.0}
},
"parallel_run": {
"phases": ["setup", "deploy", "shadow", "compare", "cutover", "cleanup"],
"base_duration": 72,
"complexity_multiplier": {"low": 1.0, "medium": 1.3, "high": 2.0, "critical": 3.5}
}
},
"infrastructure": {
"cloud_migration": {
"phases": ["assessment", "design", "pilot", "migration", "optimization", "decommission"],
"base_duration": 720, # 30 days
"complexity_multiplier": {"low": 0.6, "medium": 1.0, "high": 1.5, "critical": 2.5}
},
"on_prem_to_cloud": {
"phases": ["discovery", "planning", "pilot", "migration", "validation", "cutover"],
"base_duration": 480, # 20 days
"complexity_multiplier": {"low": 0.8, "medium": 1.2, "high": 2.0, "critical": 3.0}
}
}
}
def _load_risk_templates(self) -> Dict[str, List[RiskItem]]:
"""Load risk templates for different migration types"""
return {
"database": [
RiskItem("technical", "Data corruption during migration", "low", "critical", "high",
"Implement comprehensive backup and validation procedures", "DBA Team"),
RiskItem("technical", "Extended downtime due to migration complexity", "medium", "high", "high",
"Use blue-green deployment and phased migration approach", "DevOps Team"),
RiskItem("business", "Business process disruption", "medium", "high", "high",
"Communicate timeline and provide alternate workflows", "Business Owner"),
RiskItem("operational", "Insufficient rollback testing", "high", "critical", "critical",
"Execute full rollback procedures in staging environment", "QA Team")
],
"service": [
RiskItem("technical", "Service compatibility issues", "medium", "high", "high",
"Implement comprehensive integration testing", "Development Team"),
RiskItem("technical", "Performance degradation", "medium", "medium", "medium",
"Conduct load testing and performance benchmarking", "DevOps Team"),
RiskItem("business", "Feature parity gaps", "high", "high", "high",
"Document feature mapping and acceptance criteria", "Product Owner"),
RiskItem("operational", "Monitoring gap during transition", "medium", "medium", "medium",
"Set up dual monitoring and alerting systems", "SRE Team")
],
"infrastructure": [
RiskItem("technical", "Network connectivity issues", "medium", "critical", "high",
"Implement redundant network paths and monitoring", "Network Team"),
RiskItem("technical", "Security configuration drift", "high", "high", "high",
"Automated security scanning and compliance checks", "Security Team"),
RiskItem("business", "Cost overrun during transition", "high", "medium", "medium",
"Implement cost monitoring and budget alerts", "Finance Team"),
RiskItem("operational", "Team knowledge gaps", "high", "medium", "medium",
"Provide training and create detailed documentation", "Platform Team")
]
}
def _calculate_complexity(self, spec: Dict[str, Any]) -> str:
"""Calculate migration complexity based on specification"""
complexity_score = 0
# Data volume complexity
data_volume = spec.get("constraints", {}).get("data_volume_gb", 0)
if data_volume > 10000:
complexity_score += 3
elif data_volume > 1000:
complexity_score += 2
elif data_volume > 100:
complexity_score += 1
# System dependencies
dependencies = len(spec.get("constraints", {}).get("dependencies", []))
if dependencies > 10:
complexity_score += 3
elif dependencies > 5:
complexity_score += 2
elif dependencies > 2:
complexity_score += 1
# Downtime constraints
max_downtime = spec.get("constraints", {}).get("max_downtime_minutes", 480)
if max_downtime < 60:
complexity_score += 3
elif max_downtime < 240:
complexity_score += 2
elif max_downtime < 480:
complexity_score += 1
# Special requirements
special_reqs = spec.get("constraints", {}).get("special_requirements", [])
complexity_score += len(special_reqs)
if complexity_score >= 8:
return "critical"
elif complexity_score >= 5:
return "high"
elif complexity_score >= 3:
return "medium"
else:
return "low"
def _estimate_duration(self, migration_type: str, migration_pattern: str, complexity: str) -> int:
"""Estimate migration duration based on type, pattern, and complexity"""
pattern_info = self.migration_patterns.get(migration_type, {}).get(migration_pattern, {})
base_duration = pattern_info.get("base_duration", 48)
multiplier = pattern_info.get("complexity_multiplier", {}).get(complexity, 1.5)
return int(base_duration * multiplier)
def _generate_phases(self, spec: Dict[str, Any]) -> List[MigrationPhase]:
"""Generate migration phases based on specification"""
migration_type = spec.get("type")
migration_pattern = spec.get("pattern", "")
complexity = self._calculate_complexity(spec)
pattern_info = self.migration_patterns.get(migration_type, {})
if migration_pattern in pattern_info:
phase_names = pattern_info[migration_pattern]["phases"]
else:
# Default phases based on migration type
phase_names = {
"database": ["preparation", "migration", "validation", "cutover"],
"service": ["preparation", "deployment", "testing", "cutover"],
"infrastructure": ["assessment", "preparation", "migration", "validation"]
}.get(migration_type, ["preparation", "execution", "validation", "cleanup"])
phases = []
total_duration = self._estimate_duration(migration_type, migration_pattern, complexity)
phase_duration = total_duration // len(phase_names)
for i, phase_name in enumerate(phase_names):
phase = self._create_phase(phase_name, phase_duration, complexity, i, phase_names)
phases.append(phase)
return phases
def _create_phase(self, phase_name: str, duration: int, complexity: str,
phase_index: int, all_phases: List[str]) -> MigrationPhase:
"""Create a detailed migration phase"""
phase_templates = {
"preparation": {
"description": "Prepare systems and teams for migration",
"tasks": [
"Backup source system",
"Set up monitoring and alerting",
"Prepare rollback procedures",
"Communicate migration timeline",
"Validate prerequisites"
],
"validation_criteria": [
"All backups completed successfully",
"Monitoring systems operational",
"Team members briefed and ready",
"Rollback procedures tested"
],
"risk_level": "medium"
},
"assessment": {
"description": "Assess current state and migration requirements",
"tasks": [
"Inventory existing systems and dependencies",
"Analyze data volumes and complexity",
"Identify integration points",
"Document current architecture",
"Create migration mapping"
],
"validation_criteria": [
"Complete system inventory documented",
"Dependencies mapped and validated",
"Migration scope clearly defined",
"Resource requirements identified"
],
"risk_level": "low"
},
"migration": {
"description": "Execute core migration processes",
"tasks": [
"Begin data/service migration",
"Monitor migration progress",
"Validate data consistency",
"Handle migration errors",
"Update configuration"
],
"validation_criteria": [
"Migration progress within expected parameters",
"Data consistency checks passing",
"Error rates within acceptable limits",
"Performance metrics stable"
],
"risk_level": "high"
},
"validation": {
"description": "Validate migration success and system health",
"tasks": [
"Execute comprehensive testing",
"Validate business processes",
"Check system performance",
"Verify data integrity",
"Confirm security controls"
],
"validation_criteria": [
"All critical tests passing",
"Performance within acceptable range",
"Security controls functioning",
"Business processes operational"
],
"risk_level": "medium"
},
"cutover": {
"description": "Switch production traffic to new system",
"tasks": [
"Update DNS/load balancer configuration",
"Redirect production traffic",
"Monitor system performance",
"Validate end-user experience",
"Confirm business operations"
],
"validation_criteria": [
"Traffic successfully redirected",
"System performance stable",
"User experience satisfactory",
"Business operations normal"
],
"risk_level": "critical"
}
}
template = phase_templates.get(phase_name, {
"description": f"Execute {phase_name} phase",
"tasks": [f"Complete {phase_name} activities"],
"validation_criteria": [f"{phase_name.title()} phase completed successfully"],
"risk_level": "medium"
})
dependencies = []
if phase_index > 0:
dependencies.append(all_phases[phase_index - 1])
rollback_triggers = [
"Critical system failure",
"Data corruption detected",
"Performance degradation > 50%",
"Business process failure"
]
resources_required = [
"Technical team availability",
"System access and permissions",
"Monitoring and alerting systems",
"Communication channels"
]
return MigrationPhase(
name=phase_name,
description=template["description"],
duration_hours=duration,
dependencies=dependencies,
validation_criteria=template["validation_criteria"],
rollback_triggers=rollback_triggers,
tasks=template["tasks"],
risk_level=template["risk_level"],
resources_required=resources_required
)
def _assess_risks(self, spec: Dict[str, Any]) -> List[RiskItem]:
"""Generate risk assessment for migration"""
migration_type = spec.get("type")
base_risks = self.risk_templates.get(migration_type, [])
# Add specification-specific risks
additional_risks = []
constraints = spec.get("constraints", {})
if constraints.get("max_downtime_minutes", 480) < 60:
additional_risks.append(
RiskItem("business", "Zero-downtime requirement increases complexity", "high", "medium", "high",
"Implement blue-green deployment or rolling update strategy", "DevOps Team")
)
if constraints.get("data_volume_gb", 0) > 5000:
additional_risks.append(
RiskItem("technical", "Large data volumes may cause extended migration time", "high", "medium", "medium",
"Implement parallel processing and progress monitoring", "Data Team")
)
compliance_reqs = constraints.get("compliance_requirements", [])
if compliance_reqs:
additional_risks.append(
RiskItem("compliance", "Regulatory compliance requirements", "medium", "high", "high",
"Ensure all compliance checks are integrated into migration process", "Compliance Team")
)
return base_risks + additional_risks
def _generate_rollback_plan(self, phases: List[MigrationPhase]) -> Dict[str, Any]:
"""Generate comprehensive rollback plan"""
rollback_phases = []
for phase in reversed(phases):
rollback_phase = {
"phase": phase.name,
"rollback_actions": [
f"Revert {phase.name} changes",
f"Restore pre-{phase.name} state",
f"Validate {phase.name} rollback success"
],
"validation_criteria": [
f"System restored to pre-{phase.name} state",
f"All {phase.name} changes successfully reverted",
"System functionality confirmed"
],
"estimated_time_minutes": phase.duration_hours * 15 # 25% of original phase time
}
rollback_phases.append(rollback_phase)
return {
"rollback_phases": rollback_phases,
"rollback_triggers": [
"Critical system failure",
"Data corruption detected",
"Migration timeline exceeded by > 50%",
"Business-critical functionality unavailable",
"Security breach detected",
"Stakeholder decision to abort"
],
"rollback_decision_matrix": {
"low_severity": "Continue with monitoring",
"medium_severity": "Assess and decide within 15 minutes",
"high_severity": "Immediate rollback initiation",
"critical_severity": "Emergency rollback - all hands"
},
"rollback_contacts": [
"Migration Lead",
"Technical Lead",
"Business Owner",
"On-call Engineer"
]
}
def generate_plan(self, spec: Dict[str, Any]) -> MigrationPlan:
"""Generate complete migration plan from specification"""
        migration_id = hashlib.md5(json.dumps(spec, sort_keys=True).encode()).hexdigest()[:12]
        complexity = self._calculate_complexity(spec)
        phases = self._generate_phases(spec)
        risks = self._assess_risks(spec)
        total_duration = sum(phase.duration_hours for phase in phases)
        rollback_plan = self._generate_rollback_plan(phases)
        success_criteria = [
            "All data successfully migrated with 100% integrity",
            "System performance meets or exceeds baseline",
            "All business processes functioning normally",
            "No critical security vulnerabilities introduced",
            "Stakeholder acceptance criteria met",
            "Documentation and runbooks updated"
        ]
        stakeholders = [
            "Business Owner",
            "Technical Lead",
            "DevOps Team",
            "QA Team",
            "Security Team",
            "End Users"
        ]
        return MigrationPlan(
            migration_id=migration_id,
            source_system=spec.get("source", "Unknown"),
            target_system=spec.get("target", "Unknown"),
            migration_type=spec.get("type", "Unknown"),
            complexity=complexity,
            estimated_duration_hours=total_duration,
            phases=phases,
            risks=risks,
            success_criteria=success_criteria,
            rollback_plan=rollback_plan,
            stakeholders=stakeholders,
            created_at=datetime.datetime.now().isoformat()
        )

    def generate_human_readable_plan(self, plan: MigrationPlan) -> str:
        """Generate human-readable migration plan"""
        output = []
        output.append("=" * 80)
        output.append(f"MIGRATION PLAN: {plan.migration_id}")
        output.append("=" * 80)
        output.append(f"Source System: {plan.source_system}")
        output.append(f"Target System: {plan.target_system}")
        output.append(f"Migration Type: {plan.migration_type.upper()}")
        output.append(f"Complexity Level: {plan.complexity.upper()}")
        output.append(f"Estimated Duration: {plan.estimated_duration_hours} hours ({plan.estimated_duration_hours/24:.1f} days)")
        output.append(f"Created: {plan.created_at}")
        output.append("")

        # Phases
        output.append("MIGRATION PHASES")
        output.append("-" * 40)
        for i, phase in enumerate(plan.phases, 1):
            output.append(f"{i}. {phase.name.upper()} ({phase.duration_hours}h)")
            output.append(f"   Description: {phase.description}")
            output.append(f"   Risk Level: {phase.risk_level.upper()}")
            if phase.dependencies:
                output.append(f"   Dependencies: {', '.join(phase.dependencies)}")
            output.append("   Tasks:")
            for task in phase.tasks:
                output.append(f"   • {task}")
            output.append("   Success Criteria:")
            for criteria in phase.validation_criteria:
                output.append(f"   • {criteria}")
            output.append("")

        # Risk Assessment
        output.append("RISK ASSESSMENT")
        output.append("-" * 40)
        risk_by_severity = {}
        for risk in plan.risks:
            if risk.severity not in risk_by_severity:
                risk_by_severity[risk.severity] = []
            risk_by_severity[risk.severity].append(risk)
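        # Report risks from most to least severe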
for severity in ["critical", "high", "medium", "low"]:
if severity in risk_by_severity:
output.append(f"{severity.upper()} SEVERITY RISKS:")
for risk in risk_by_severity[severity]:
output.append(f"{risk.description}")
output.append(f" Category: {risk.category}")
output.append(f" Probability: {risk.probability} | Impact: {risk.impact}")
output.append(f" Mitigation: {risk.mitigation}")
output.append(f" Owner: {risk.owner}")
output.append("")
# Rollback Plan
output.append("ROLLBACK STRATEGY")
output.append("-" * 40)
output.append("Rollback Triggers:")
for trigger in plan.rollback_plan["rollback_triggers"]:
output.append(f"{trigger}")
output.append("")
output.append("Rollback Phases:")
for rb_phase in plan.rollback_plan["rollback_phases"]:
output.append(f" {rb_phase['phase'].upper()}:")
for action in rb_phase["rollback_actions"]:
output.append(f" - {action}")
output.append(f" Estimated Time: {rb_phase['estimated_time_minutes']} minutes")
output.append("")
# Success Criteria
output.append("SUCCESS CRITERIA")
output.append("-" * 40)
for criteria in plan.success_criteria:
output.append(f"{criteria}")
output.append("")
# Stakeholders
output.append("STAKEHOLDERS")
output.append("-" * 40)
for stakeholder in plan.stakeholders:
output.append(f"{stakeholder}")
output.append("")
return "\n".join(output)


def main():
    """Main function with command line interface"""
    parser = argparse.ArgumentParser(description="Generate comprehensive migration plans")
    parser.add_argument("--input", "-i", required=True, help="Input migration specification file (JSON)")
    parser.add_argument("--output", "-o", help="Output file for migration plan (JSON)")
    parser.add_argument("--format", "-f", choices=["json", "text", "both"], default="both",
                        help="Output format")
    parser.add_argument("--validate", action="store_true", help="Validate migration specification only")
    args = parser.parse_args()

    try:
        # Load migration specification
        with open(args.input, 'r') as f:
            spec = json.load(f)
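        # Example spec (illustrative values): {"type": "database", "source": "legacy-mysql",
        #     "target": "cloud-postgres", "constraints": {"max_downtime_minutes": 30}}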
        # Validate required fields
        required_fields = ["type", "source", "target"]
        for field in required_fields:
            if field not in spec:
                print(f"Error: Missing required field '{field}' in specification", file=sys.stderr)
                return 1

        if args.validate:
            print("Migration specification is valid")
            return 0

        # Generate migration plan
        planner = MigrationPlanner()
        plan = planner.generate_plan(spec)

        # Output results
        if args.format in ["json", "both"]:
            plan_dict = asdict(plan)
            if args.output:
                with open(args.output, 'w') as f:
                    json.dump(plan_dict, f, indent=2)
                print(f"Migration plan saved to {args.output}")
            else:
                print(json.dumps(plan_dict, indent=2))

        if args.format in ["text", "both"]:
            human_plan = planner.generate_human_readable_plan(plan)
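            # Derive the text filename from the JSON output path (e.g. plan.json -> plan.txt)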
            text_output = args.output.replace('.json', '.txt') if args.output else None
            if text_output:
                with open(text_output, 'w') as f:
                    f.write(human_plan)
                print(f"Human-readable plan saved to {text_output}")
            else:
                print("\n" + "="*80)
                print("HUMAN-READABLE MIGRATION PLAN")
                print("="*80)
                print(human_plan)

    except FileNotFoundError:
        print(f"Error: Input file '{args.input}' not found", file=sys.stderr)
        return 1
    except json.JSONDecodeError as e:
        print(f"Error: Invalid JSON in input file: {e}", file=sys.stderr)
        return 1
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    return 0


if __name__ == "__main__":
    sys.exit(main())

File diff suppressed because it is too large