fix(skill): restructure tech-stack-evaluator with Progressive Disclosure (#64) (#120)

Restructure skill to follow Progressive Disclosure Architecture:

Structure Changes:
- Move Python scripts to scripts/ directory
- Move sample JSON files to assets/ directory
- Create references/ directory with extracted content
- Remove redundant HOW_TO_USE.md and README.md

New Reference Files:
- references/metrics.md: Detailed scoring algorithms and formulas
- references/examples.md: Concrete input/output examples
- references/workflows.md: Step-by-step evaluation workflows

SKILL.md Improvements:
- Reduced from 430 lines to ~180 lines
- Added table of contents
- Added trigger phrases in description
- Consistent imperative voice
- Points to references for details

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in:
Alireza Rezvani
2026-01-30 06:28:42 +01:00
committed by GitHub
parent 829a197c2b
commit a10a4f2c4b
17 changed files with 1114 additions and 1266 deletions

references/examples.md

@@ -0,0 +1,383 @@
# Technology Evaluation Examples
Concrete examples showing input formats and expected outputs.
---
## Table of Contents
- [Quick Comparison Example](#quick-comparison-example)
- [TCO Analysis Example](#tco-analysis-example)
- [Ecosystem Analysis Example](#ecosystem-analysis-example)
- [Migration Assessment Example](#migration-assessment-example)
- [Multi-Technology Comparison](#multi-technology-comparison)
---
## Quick Comparison Example
### Input (Text Format)
```
Compare React vs Vue for building a SaaS dashboard.
Focus on: developer productivity, ecosystem maturity, performance.
```
### Output
```
TECHNOLOGY COMPARISON: React vs Vue for SaaS Dashboard
=======================================================
RECOMMENDATION: React
Confidence: 78% (Medium-High)
COMPARISON MATRIX
-----------------
| Category | Weight | React | Vue |
|----------------------|--------|-------|------|
| Performance | 15% | 82.0 | 85.0 |
| Scalability | 15% | 88.0 | 80.0 |
| Developer Experience | 20% | 85.0 | 90.0 |
| Ecosystem | 15% | 92.0 | 78.0 |
| Learning Curve | 10% | 70.0 | 85.0 |
| Documentation | 10% | 88.0 | 82.0 |
| Community Support | 10% | 90.0 | 75.0 |
| Enterprise Readiness | 5% | 85.0 | 72.0 |
|----------------------|--------|-------|------|
| WEIGHTED TOTAL | 100% | 85.2 | 81.1 |
KEY DECISION FACTORS
--------------------
1. Ecosystem (15%): React leads with 92.0 - larger npm ecosystem
2. Developer Experience (20%): Vue leads with 90.0 - gentler learning curve
3. Community Support (10%): React leads with 90.0 - more Stack Overflow resources
PROS/CONS SUMMARY
-----------------
React:
✓ Excellent ecosystem (92.0/100)
✓ Strong community support (90.0/100)
✓ Excellent scalability (88.0/100)
✗ Steeper learning curve (70.0/100)
Vue:
✓ Excellent developer experience (90.0/100)
✓ Good performance (85.0/100)
✓ Easier learning curve (85.0/100)
✗ Smaller enterprise presence (72.0/100)
```
---
## TCO Analysis Example
### Input (JSON Format)
```json
{
"technology": "Next.js on Vercel",
"team_size": 8,
"timeline_years": 5,
"initial_costs": {
"licensing": 0,
"training_hours_per_dev": 24,
"developer_hourly_rate": 85,
"migration": 15000,
"setup": 5000
},
"operational_costs": {
"monthly_hosting": 2500,
"annual_support": 0,
"maintenance_hours_per_dev_monthly": 16
},
"scaling_params": {
"initial_users": 5000,
"annual_growth_rate": 0.40,
"initial_servers": 3,
"cost_per_server_monthly": 150
}
}
```
### Output
```
TCO ANALYSIS: Next.js on Vercel (5-Year Projection)
====================================================
EXECUTIVE SUMMARY
-----------------
Total TCO: $1,017,988
Net TCO (after productivity gains): $757,988
Average Yearly Cost: $203,598
INITIAL COSTS (One-Time)
------------------------
| Component | Cost |
|----------------|-----------|
| Licensing | $0 |
| Training | $16,820 |
| Migration | $15,000 |
| Setup | $5,000 |
|----------------|-----------|
| TOTAL INITIAL | $36,820 |
OPERATIONAL COSTS (Per Year)
----------------------------
| Year | Hosting | Maintenance | Total |
|------|----------|-------------|-----------|
| 1 | $30,000 | $130,560 | $160,560 |
| 2 | $42,000 | $130,560 | $172,560 |
| 3 | $58,800 | $130,560 | $189,360 |
| 4 | $82,320 | $130,560 | $212,880 |
| 5 | $115,248 | $130,560 | $245,808 |
SCALING ANALYSIS
----------------
User Projections: 5,000 → 7,000 → 9,800 → 13,720 → 19,208
Cost per User: $32.11 → $24.65 → $19.32 → $15.52 → $12.79
Scaling Efficiency: Excellent - economies of scale achieved
KEY COST DRIVERS
----------------
1. Developer maintenance time ($652,800 over 5 years)
2. Infrastructure/hosting ($328,368 over 5 years)
OPTIMIZATION OPPORTUNITIES
--------------------------
• Consider automation to reduce maintenance hours
• Evaluate reserved capacity pricing for hosting
```
---
## Ecosystem Analysis Example
### Input
```yaml
technology: "Svelte"
github:
stars: 78000
forks: 4100
contributors: 680
commits_last_month: 45
avg_issue_response_hours: 36
issue_resolution_rate: 0.72
releases_per_year: 8
active_maintainers: 5
npm:
weekly_downloads: 420000
version: "4.2.8"
dependencies_count: 0
days_since_last_publish: 21
community:
stackoverflow_questions: 8500
job_postings: 1200
tutorials_count: 350
forum_members: 25000
corporate_backing:
type: "community_led"
funding_millions: 0
```
### Output
```
ECOSYSTEM ANALYSIS: Svelte
==========================
OVERALL HEALTH SCORE: 71.5/100
HEALTH SCORE BREAKDOWN
----------------------
| Category | Score | Assessment |
|--------------------|-------|-------------------|
| GitHub Health | 78.0 | Strong |
| npm Health | 72.0 | Good |
| Community Health | 58.0 | Moderate |
| Corporate Backing | 40.0 | Community-led |
| Maintenance Health | 75.0 | Active |
VIABILITY ASSESSMENT
--------------------
Overall: Good - Solid viability with minor concerns
Risk Level: Low-Medium
Strengths:
• Strong GitHub presence with active community
• Active maintenance with responsive issue resolution
• High npm adoption with stable releases
Risks:
• Weak corporate backing - sustainability concerns
• Smaller community - limited enterprise resources
RECOMMENDATION
--------------
Suitable for adoption - monitor corporate backing situation.
Consider alternative with stronger backing for mission-critical enterprise apps.
KEY METRICS
-----------
GitHub: 78,000 stars, 680 contributors, 45 commits/month
npm: 420,000 weekly downloads, v4.2.8
Community: 8,500 SO questions, 1,200 job postings
```
---
## Migration Assessment Example
### Input
```
Assess migration from Angular.js (1.x) to React.
Application: 50,000 lines of code, 200 components.
Team: 6 developers, 2 have React experience.
Timeline: Need completion within 8 months.
```
### Output
```
MIGRATION ASSESSMENT: Angular.js 1.x → React
=============================================
COMPLEXITY SCORE: 7.8/10 (High)
COMPLEXITY BREAKDOWN
--------------------
| Factor | Score | Impact |
|---------------------|-------|-------------|
| Code Changes | 8/10 | 50K LOC |
| Architecture Impact | 9/10 | Complete |
| Data Migration | 5/10 | State only |
| Downtime Risk | 6/10 | Mitigatable |
EFFORT ESTIMATION
-----------------
| Phase | Hours | Duration |
|--------------------------|--------|----------|
| Planning & Prototyping | 240 | 3 weeks |
| Core Migration | 2,400 | 16 weeks |
| Testing & Validation | 480 | 5 weeks |
| Deployment & Monitoring | 160 | 2 weeks |
|--------------------------|--------|----------|
| TOTAL | 3,280 | 26 weeks |
Timeline Assessment: 26 weeks is within the 8-month target (32 weeks)
Recommendation: Feasible with buffer
RISK ASSESSMENT
---------------
Technical Risks:
• High - Two-way binding to unidirectional data flow
• Medium - Dependency injection to hooks/context
• Medium - Performance regression during transition
Business Risks:
• Medium - Feature freeze during migration
• Low - Customer impact with phased approach
Team Risks:
• High - 4 of 6 developers need React training
• Medium - Learning curve affects velocity initially
MIGRATION STRATEGY
------------------
Recommended Approach: Strangler Fig Pattern
Phase 1 (Weeks 1-8): Strangler setup
• Create React shell application
• Set up micro-frontend routing
• Migrate shared utilities
Phase 2 (Weeks 9-18): Component migration
• Migrate components by feature area
• Maintain Angular.js for untouched areas
• Continuous integration testing
Phase 3 (Weeks 19-24): Completion
• Migrate remaining components
• Remove Angular.js dependencies
• Performance optimization
Phase 4 (Weeks 25-26): Stabilization
• Production deployment
• Monitoring and fixes
```
---
## Multi-Technology Comparison
### Input
```
Compare: React, Vue, Svelte, Angular for enterprise SaaS.
Use case: Large team (20+ developers), complex state management.
Generate comprehensive decision matrix.
```
### Output
```
MULTI-TECHNOLOGY COMPARISON: Enterprise SaaS Platform
======================================================
RECOMMENDATION: React
Confidence: 72% (Medium)
Runner-up: Angular (83.0, effectively tied with Vue at 83.1; strongest enterprise fit)
DECISION MATRIX
---------------
| Category | Wt | React | Vue | Svelte | Angular |
|----------------------|------|-------|------|--------|---------|
| Performance | 15% | 82 | 85 | 95 | 78 |
| Scalability | 15% | 90 | 82 | 75 | 92 |
| Developer Experience | 20% | 85 | 90 | 88 | 75 |
| Ecosystem | 15% | 95 | 80 | 65 | 88 |
| Learning Curve | 10% | 70 | 85 | 80 | 60 |
| Documentation | 10% | 90 | 85 | 75 | 92 |
| Community Support | 10% | 92 | 78 | 55 | 85 |
| Enterprise Readiness | 5% | 88 | 72 | 50 | 95 |
|----------------------|------|-------|------|--------|---------|
| WEIGHTED TOTAL | 100% | 86.3 | 83.1 | 76.2 | 83.0 |
FRAMEWORK PROFILES
------------------
React: Best for large ecosystem, hiring pool
Angular: Best for enterprise structure, TypeScript-first
Vue: Best for developer experience, gradual adoption
Svelte: Best for performance, smaller bundles
RECOMMENDATION RATIONALE
------------------------
For 20+ developer team with complex state management:
1. React (Recommended)
• Largest talent pool for hiring
• Extensive enterprise libraries (Redux, React Query)
• Meta backing ensures long-term support
• Most Stack Overflow resources
2. Angular (Strong Alternative)
• Built-in structure for large teams
• TypeScript-first reduces bugs
• Comprehensive CLI and tooling
• Google enterprise backing
3. Vue (Consider for DX)
• Excellent documentation
• Easier onboarding
• Growing enterprise adoption
• Consider if DX is top priority
4. Svelte (Not Recommended for This Use Case)
• Smaller ecosystem for enterprise
• Limited hiring pool
• State management options less mature
• Better for smaller teams/projects
```

references/metrics.md

@@ -0,0 +1,242 @@
# Technology Evaluation Metrics
Detailed metrics and calculations used in technology stack evaluation.
---
## Table of Contents
- [Scoring and Comparison](#scoring-and-comparison)
- [Financial Calculations](#financial-calculations)
- [Ecosystem Health Metrics](#ecosystem-health-metrics)
- [Security Metrics](#security-metrics)
- [Migration Metrics](#migration-metrics)
- [Performance Benchmarks](#performance-benchmarks)
---
## Scoring and Comparison
### Technology Comparison Matrix
| Metric | Scale | Description |
|--------|-------|-------------|
| Feature Completeness | 0-100 | Coverage of required features |
| Learning Curve | Easy/Medium/Hard | Time to developer proficiency |
| Developer Experience | 0-100 | Tooling, debugging, workflow quality |
| Documentation Quality | 0-10 | Completeness, clarity, examples |
### Weighted Scoring Algorithm
The comparator uses normalized weighted scoring:
```python
# Default category weights (sum to 100%)
weights = {
"performance": 15,
"scalability": 15,
"developer_experience": 20,
"ecosystem": 15,
"learning_curve": 10,
"documentation": 10,
"community_support": 10,
"enterprise_readiness": 5
}
# Final score calculation (scores: dict mapping each category to its 0-100 score)
weighted_score = sum(scores[cat] * w / 100 for cat, w in weights.items())
```
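The same calculation can be run end-to-end. A minimal, self-contained sketch (the default weights are repeated so the snippet runs on its own; the category scores are the illustrative React values from references/examples.md):

```python
weights = {
    "performance": 15, "scalability": 15, "developer_experience": 20,
    "ecosystem": 15, "learning_curve": 10, "documentation": 10,
    "community_support": 10, "enterprise_readiness": 5,
}

# Illustrative 0-100 category scores for a single candidate.
react_scores = {
    "performance": 82, "scalability": 88, "developer_experience": 85,
    "ecosystem": 92, "learning_curve": 70, "documentation": 88,
    "community_support": 90, "enterprise_readiness": 85,
}

def weighted_total(scores: dict, weights: dict) -> float:
    """Weighted average of category scores; weights are percentages."""
    assert sum(weights.values()) == 100, "weights must sum to 100%"
    return sum(scores[cat] * w / 100 for cat, w in weights.items())
```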
### Confidence Scoring
Confidence is calculated based on score gap between top options:
| Score Gap | Confidence Level |
|-----------|------------------|
| < 5 points | Low (40-50%) |
| 5-15 points | Medium (50-70%) |
| > 15 points | High (70-100%) |
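A sketch of this banding in code. The linear interpolation inside each band is an assumption for illustration, not documented comparator behavior:

```python
def confidence_from_gap(gap: float) -> tuple:
    """Map the gap between the top two weighted totals to a confidence band.

    Interpolates linearly within each band (an assumption); the bands
    themselves match the table above.
    """
    if gap < 5:
        level, low, high, width, offset = "Low", 40.0, 50.0, 5.0, 0.0
    elif gap <= 15:
        level, low, high, width, offset = "Medium", 50.0, 70.0, 10.0, 5.0
    else:
        level, low, high, width, offset = "High", 70.0, 100.0, 15.0, 15.0
    pct = low + (high - low) * min(1.0, (gap - offset) / width)
    return level, round(pct, 1)
```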
---
## Financial Calculations
### TCO Components
**Initial Costs (One-Time)**
- Licensing fees
- Training: `team_size * hours_per_dev * hourly_rate + materials`
- Migration costs
- Setup and tooling
**Operational Costs (Annual)**
- Licensing renewals
- Hosting: `base_cost * (1 + growth_rate)^(year - 1)`
- Support contracts
- Maintenance: `team_size * hours_per_dev_monthly * hourly_rate * 12`
**Scaling Costs**
- Infrastructure: `servers * cost_per_server * 12`
- Cost per user: `total_yearly_cost / user_count`
### ROI Calculations
```
productivity_value = additional_features_per_year * avg_feature_value
net_tco = total_cost - (productivity_value * years)
roi_percentage = (benefits - costs) / costs * 100
```
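The hosting-growth and ROI formulas above, as a runnable sketch. The monthly base and growth rate are borrowed from the TCO example input; everything else is illustrative:

```python
def yearly_hosting(base_monthly: float, growth_rate: float, year: int) -> float:
    """Annual hosting cost: monthly base compounded by growth; years start at 1."""
    return base_monthly * 12 * (1 + growth_rate) ** (year - 1)

def roi_percentage(benefits: float, costs: float) -> float:
    """ROI as a percentage of costs."""
    return (benefits - costs) / costs * 100

# $2,500/month hosting growing 40% per year, as in the TCO example.
hosting_by_year = [round(yearly_hosting(2500, 0.40, y)) for y in range(1, 6)]
```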
### Cost Per Metric Reference
| Metric | Description |
|--------|-------------|
| Cost per user | Monthly or yearly per active user |
| Cost per API request | Average cost per 1000 requests |
| Cost per GB | Storage and transfer costs |
| Cost per compute hour | Processing time costs |
---
## Ecosystem Health Metrics
### GitHub Health Score (0-100)
| Metric | Max Points | Thresholds |
|--------|------------|------------|
| Stars | 30 | 50K+: 30, 20K+: 25, 10K+: 20, 5K+: 15, 1K+: 10 |
| Forks | 20 | 10K+: 20, 5K+: 15, 2K+: 12, 1K+: 10 |
| Contributors | 20 | 500+: 20, 200+: 15, 100+: 12, 50+: 10 |
| Commits/month | 30 | 100+: 30, 50+: 25, 25+: 20, 10+: 15 |
### npm Health Score (0-100)
| Metric | Max Points | Thresholds |
|--------|------------|------------|
| Weekly downloads | 40 | 1M+: 40, 500K+: 35, 100K+: 30, 50K+: 25, 10K+: 20 |
| Major version | 20 | v5+: 20, v3+: 15, v1+: 10 |
| Dependencies | 20 | ≤10: 20, ≤25: 15, ≤50: 10 (fewer is better) |
| Days since publish | 20 | ≤30: 20, ≤90: 15, ≤180: 10, ≤365: 5 |
### Community Health Score (0-100)
| Metric | Max Points | Thresholds |
|--------|------------|------------|
| Stack Overflow questions | 25 | 50K+: 25, 20K+: 20, 10K+: 15, 5K+: 10 |
| Job postings | 25 | 5K+: 25, 2K+: 20, 1K+: 15, 500+: 10 |
| Tutorials | 25 | 1K+: 25, 500+: 20, 200+: 15, 100+: 10 |
| Forum/Discord members | 25 | 50K+: 25, 20K+: 20, 10K+: 15, 5K+: 10 |
### Corporate Backing Score
| Backing Type | Score |
|--------------|-------|
| Major tech company (Google, Microsoft, Meta) | 100 |
| Established company (Vercel, HashiCorp) | 80 |
| Funded startup | 60 |
| Community-led (strong community) | 40 |
| Individual maintainers | 20 |
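The threshold tables lend themselves to a small tiered-scoring helper. This sketch encodes the GitHub table only; note the Svelte example report scores 78 where this yields 82, so the actual script presumably folds in further signals (issue responsiveness, resolution rate):

```python
def tier_points(value: float, tiers: list) -> int:
    """Points for the highest threshold met; tiers listed high to low."""
    for threshold, points in tiers:
        if value >= threshold:
            return points
    return 0

def github_health(stars, forks, contributors, commits_per_month) -> int:
    """GitHub health score (0-100) from the threshold table above."""
    return (
        tier_points(stars, [(50_000, 30), (20_000, 25), (10_000, 20), (5_000, 15), (1_000, 10)])
        + tier_points(forks, [(10_000, 20), (5_000, 15), (2_000, 12), (1_000, 10)])
        + tier_points(contributors, [(500, 20), (200, 15), (100, 12), (50, 10)])
        + tier_points(commits_per_month, [(100, 30), (50, 25), (25, 20), (10, 15)])
    )
```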
---
## Security Metrics
### Security Scoring Components
| Metric | Description |
|--------|-------------|
| CVE Count (12 months) | Known vulnerabilities in last year |
| CVE Count (3 years) | Longer-term vulnerability history |
| Severity Distribution | Critical/High/Medium/Low counts |
| Patch Frequency | Average days to patch vulnerabilities |
### Compliance Readiness Levels
| Level | Score Range | Description |
|-------|-------------|-------------|
| Ready | 90-100% | Meets compliance requirements |
| Mostly Ready | 70-89% | Minor gaps to address |
| Partial | 50-69% | Significant work needed |
| Not Ready | < 50% | Major gaps exist |
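A direct encoding of these bands, for illustration:

```python
def compliance_readiness(score_pct: float) -> str:
    """Map a 0-100% compliance score to the readiness levels above."""
    if score_pct >= 90:
        return "Ready"
    if score_pct >= 70:
        return "Mostly Ready"
    if score_pct >= 50:
        return "Partial"
    return "Not Ready"
```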
### Compliance Framework Coverage
**GDPR**
- Data privacy features
- Consent management
- Data portability
- Right to deletion
**SOC2**
- Access controls
- Encryption at rest/transit
- Audit logging
- Change management
**HIPAA**
- PHI handling
- Encryption standards
- Access controls
- Audit trails
---
## Migration Metrics
### Complexity Scoring (1-10 Scale)
| Factor | Weight | Description |
|--------|--------|-------------|
| Code Changes | 30% | Lines of code affected |
| Architecture Impact | 25% | Breaking changes, API compatibility |
| Data Migration | 25% | Schema changes, data transformation |
| Downtime Requirements | 20% | Zero-downtime possible vs planned outage |
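A sketch of the weighted complexity score using only the table's weights (the worked migration example in references/examples.md reports 7.8 for similar per-factor scores, so the real analyzer likely applies further adjustments):

```python
COMPLEXITY_WEIGHTS = {
    "code_changes": 0.30,
    "architecture_impact": 0.25,
    "data_migration": 0.25,
    "downtime_requirements": 0.20,
}

def complexity_score(factors: dict) -> float:
    """Weighted 1-10 migration complexity from per-factor 1-10 scores."""
    assert set(factors) == set(COMPLEXITY_WEIGHTS), "score every factor"
    return sum(factors[name] * w for name, w in COMPLEXITY_WEIGHTS.items())
```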
### Effort Estimation
| Phase | Components |
|-------|------------|
| Development | Hours per component * complexity factor |
| Testing | Unit + integration + E2E hours |
| Training | Team size * learning curve hours |
| Buffer | 20-30% for unknowns |
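The phases combine as a simple buffered sum. A sketch, with the buffer bounds from the table:

```python
def total_effort_hours(development: float, testing: float, training: float,
                       buffer: float = 0.25) -> float:
    """Sum the phase estimates and add a 20-30% buffer for unknowns."""
    assert 0.20 <= buffer <= 0.30, "buffer should be 20-30% per the table"
    return (development + testing + training) * (1 + buffer)
```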
### Risk Assessment Matrix
| Risk Category | Factors Evaluated |
|---------------|-------------------|
| Technical | API incompatibilities, performance regressions |
| Business | Downtime impact, feature parity gaps |
| Team | Learning curve, skill gaps |
---
## Performance Benchmarks
### Throughput/Latency Metrics
| Metric | Description |
|--------|-------------|
| RPS | Requests per second |
| Avg Response Time | Mean response latency (ms) |
| P95 Latency | 95th percentile response time |
| P99 Latency | 99th percentile response time |
| Concurrent Users | Maximum simultaneous connections |
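P95/P99 can be computed with a nearest-rank percentile, a common (assumed) convention:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: pct=95 returns the P95 value."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Multiply before dividing to avoid float round-off at exact ranks.
    rank = max(1, math.ceil(pct * len(ordered) / 100))
    return ordered[rank - 1]
```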
### Resource Usage Metrics
| Metric | Unit |
|--------|------|
| Memory | MB/GB per instance |
| CPU | Utilization percentage |
| Storage | GB required |
| Network | Bandwidth MB/s |
### Scalability Characteristics
| Type | Description |
|------|-------------|
| Horizontal | Add more instances, efficiency factor |
| Vertical | CPU/memory limits per instance |
| Cost per Performance | Dollar per 1000 RPS |
| Scaling Inflection | Point where cost efficiency changes |

references/workflows.md

@@ -0,0 +1,362 @@
# Technology Evaluation Workflows
Step-by-step workflows for common evaluation scenarios.
---
## Table of Contents
- [Framework Comparison Workflow](#framework-comparison-workflow)
- [TCO Analysis Workflow](#tco-analysis-workflow)
- [Migration Assessment Workflow](#migration-assessment-workflow)
- [Security Evaluation Workflow](#security-evaluation-workflow)
- [Cloud Provider Selection Workflow](#cloud-provider-selection-workflow)
---
## Framework Comparison Workflow
Use this workflow when comparing frontend/backend frameworks or libraries.
### Step 1: Define Requirements
1. Identify the use case:
- What type of application? (SaaS, e-commerce, real-time, etc.)
- What scale? (users, requests, data volume)
- What team size and skill level?
2. Set priorities (weights must sum to 100%):
- Performance: ____%
- Scalability: ____%
- Developer Experience: ____%
- Ecosystem: ____%
- Learning Curve: ____%
- Other: ____%
3. List constraints:
- Budget limitations
- Timeline requirements
- Compliance needs
- Existing infrastructure
### Step 2: Run Comparison
```bash
python scripts/stack_comparator.py \
--technologies "React,Vue,Angular" \
--use-case "enterprise-saas" \
--weights "performance:20,ecosystem:25,scalability:20,developer_experience:35"
```
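The `--weights` spec shown above could be parsed and validated like this (hypothetical helper, not part of the shipped scripts):

```python
def parse_weights(spec: str) -> dict:
    """Parse a 'category:weight,...' spec and check it sums to 100."""
    weights = {}
    for part in spec.split(","):
        category, _, value = part.partition(":")
        weights[category.strip()] = float(value)
    total = sum(weights.values())
    if abs(total - 100) > 1e-9:
        raise ValueError(f"weights sum to {total}, expected 100")
    return weights
```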
### Step 3: Analyze Results
1. Review weighted total scores
2. Check confidence level (High/Medium/Low)
3. Examine strengths and weaknesses for each option
4. Review decision factors
### Step 4: Validate Recommendation
1. Match recommendation to your constraints
2. Consider team skills and hiring market
3. Evaluate ecosystem for your specific needs
4. Check corporate backing and long-term viability
### Step 5: Document Decision
Record:
- Final selection with rationale
- Trade-offs accepted
- Risks identified
- Mitigation strategies
---
## TCO Analysis Workflow
Use this workflow for comprehensive cost analysis over multiple years.
### Step 1: Gather Cost Data
**Initial Costs:**
- [ ] Licensing fees (if any)
- [ ] Training hours per developer
- [ ] Developer hourly rate
- [ ] Migration costs
- [ ] Setup and tooling costs
**Operational Costs:**
- [ ] Monthly hosting costs
- [ ] Annual support contracts
- [ ] Maintenance hours per developer per month
**Scaling Parameters:**
- [ ] Initial user count
- [ ] Expected annual growth rate
- [ ] Infrastructure scaling approach
### Step 2: Run TCO Calculator
```bash
python scripts/tco_calculator.py \
--input assets/sample_input_tco.json \
--years 5 \
--output tco_report.json
```
### Step 3: Analyze Cost Breakdown
1. Review initial vs. operational costs ratio
2. Examine year-over-year cost growth
3. Check cost per user trends
4. Identify scaling efficiency
### Step 4: Identify Optimization Opportunities
Review:
- Can hosting costs be reduced with reserved pricing?
- Can automation reduce maintenance hours?
- Are there cheaper alternatives for specific components?
### Step 5: Compare Multiple Options
Run TCO analysis for each technology option:
1. Current state (baseline)
2. Option A
3. Option B
Compare:
- 5-year total cost
- Break-even point
- Risk-adjusted costs
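Break-even between two options falls out of comparing cumulative costs year by year. A hypothetical helper (names and signature are illustrative):

```python
def break_even_year(option_a_yearly, option_b_yearly,
                    a_initial=0.0, b_initial=0.0):
    """First year where option B's cumulative cost drops below option A's.

    Returns None if B never breaks even within the horizon given.
    """
    cum_a, cum_b = a_initial, b_initial
    for year, (a, b) in enumerate(zip(option_a_yearly, option_b_yearly), start=1):
        cum_a += a
        cum_b += b
        if cum_b < cum_a:
            return year
    return None
```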
---
## Migration Assessment Workflow
Use this workflow when planning technology migrations.
### Step 1: Document Current State
1. Count lines of code
2. List all components/modules
3. Identify dependencies
4. Document current architecture
5. Note existing pain points
### Step 2: Define Target State
1. Target technology/framework
2. Target architecture
3. Expected benefits
4. Success criteria
### Step 3: Assess Team Readiness
- How many developers have target technology experience?
- What training is needed?
- What is the team's capacity during migration?
### Step 4: Run Migration Analysis
```bash
python scripts/migration_analyzer.py \
--from "angular-1.x" \
--to "react" \
--codebase-size 50000 \
--components 200 \
--team-size 6
```
### Step 5: Review Risk Assessment
For each risk category:
1. Identify specific risks
2. Assess probability and impact
3. Define mitigation strategies
4. Assign risk owners
### Step 6: Plan Migration Phases
1. **Phase 1: Foundation**
- Setup new infrastructure
- Create migration utilities
- Train team
2. **Phase 2: Incremental Migration**
- Migrate by feature area
- Maintain parallel systems
- Continuous testing
3. **Phase 3: Completion**
- Remove legacy code
- Optimize performance
- Complete documentation
4. **Phase 4: Stabilization**
- Monitor production
- Address issues
- Gather metrics
### Step 7: Define Rollback Plan
Document:
- Trigger conditions for rollback
- Rollback procedure
- Data recovery steps
- Communication plan
---
## Security Evaluation Workflow
Use this workflow for security and compliance assessment.
### Step 1: Identify Requirements
1. List applicable compliance standards:
- [ ] GDPR
- [ ] SOC2
- [ ] HIPAA
- [ ] PCI-DSS
- [ ] Other: _____
2. Define security priorities:
- Data encryption requirements
- Access control needs
- Audit logging requirements
- Incident response expectations
### Step 2: Gather Security Data
For each technology:
- [ ] CVE count (last 12 months)
- [ ] CVE count (last 3 years)
- [ ] Severity distribution
- [ ] Average patch time
- [ ] Security features list
### Step 3: Run Security Assessment
```bash
python scripts/security_assessor.py \
--technology "express-js" \
--compliance "soc2,gdpr" \
--output security_report.json
```
### Step 4: Analyze Results
Review:
1. Overall security score
2. Vulnerability trends
3. Patch responsiveness
4. Compliance readiness per standard
### Step 5: Identify Gaps
For each compliance standard:
1. List missing requirements
2. Estimate remediation effort
3. Identify workarounds if available
4. Calculate compliance cost
### Step 6: Make Risk-Based Decision
Consider:
- Acceptable risk level
- Cost of remediation
- Alternative technologies
- Business impact of compliance gaps
---
## Cloud Provider Selection Workflow
Use this workflow for AWS vs Azure vs GCP decisions.
### Step 1: Define Workload Requirements
1. Workload type:
- [ ] Web application
- [ ] API services
- [ ] Data analytics
- [ ] Machine learning
- [ ] IoT
- [ ] Other: _____
2. Resource requirements:
- Compute: ____ instances, ____ cores, ____ GB RAM
- Storage: ____ TB, type (block/object/file)
- Database: ____ type, ____ size
- Network: ____ GB/month transfer
3. Special requirements:
- [ ] GPU/TPU for ML
- [ ] Edge computing
- [ ] Multi-region
- [ ] Specific compliance certifications
### Step 2: Evaluate Feature Availability
For each provider, verify:
- Required services exist
- Service maturity level
- Regional availability
- SLA guarantees
### Step 3: Run Cost Comparison
```bash
python scripts/tco_calculator.py \
--providers "aws,azure,gcp" \
--workload-config workload.json \
--years 3
```
### Step 4: Assess Ecosystem Fit
Consider:
- Team's existing expertise
- Development tooling preferences
- CI/CD integration
- Monitoring and observability tools
### Step 5: Evaluate Vendor Lock-in
For each provider:
1. List proprietary services you'll use
2. Estimate migration cost if switching
3. Identify portable alternatives
4. Calculate lock-in risk score
### Step 6: Make Final Selection
Weight factors:
- Cost: ____%
- Features: ____%
- Team expertise: ____%
- Lock-in risk: ____%
- Support quality: ____%
Select provider with highest weighted score.
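The final weighted selection can be sketched as follows. All weights and per-provider scores here are invented for illustration:

```python
# Hypothetical factor weights (percentages) and per-provider scores (0-100).
weights = {"cost": 30, "features": 25, "team_expertise": 20,
           "lock_in_risk": 15, "support_quality": 10}
providers = {
    "aws":   {"cost": 70, "features": 95, "team_expertise": 80,
              "lock_in_risk": 60, "support_quality": 85},
    "azure": {"cost": 75, "features": 85, "team_expertise": 60,
              "lock_in_risk": 65, "support_quality": 80},
    "gcp":   {"cost": 80, "features": 80, "team_expertise": 50,
              "lock_in_risk": 70, "support_quality": 75},
}

# Pick the provider with the highest weighted score.
best = max(providers,
           key=lambda p: sum(providers[p][f] * w for f, w in weights.items()))
```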
---
## Best Practices
### For All Evaluations
1. **Document assumptions** - Make all assumptions explicit
2. **Validate data** - Verify metrics from multiple sources
3. **Consider context** - Generic scores may not apply to your situation
4. **Include stakeholders** - Get input from team members who will use the technology
5. **Plan for change** - Technology landscapes evolve; plan for flexibility
### Common Pitfalls to Avoid
1. Over-weighting recent popularity vs. long-term stability
2. Ignoring team learning curve in timeline estimates
3. Underestimating migration complexity
4. Assuming vendor claims are accurate
5. Not accounting for hidden costs (training, hiring, technical debt)