# Interview Frameworks
## Loop Design by Level
### Junior/Mid
- Emphasize fundamentals, debugging, and growth potential.
- Keep loops concise, pairing coding rounds with behavioral validation.
### Senior
- Add system design and leadership rounds.
- Evaluate tradeoff quality, mentoring, and cross-team collaboration.
### Staff+
- Focus on architecture direction and organizational impact.
- Assess strategy, influence, and long-term technical judgment.
## Competency Areas
- Technical depth (implementation, design, quality)
- Problem solving (ambiguity handling, prioritization)
- Collaboration (communication, stakeholder alignment)
- Leadership (ownership, mentoring, influence)
## Scoring Rubric Baseline
- `4`: exceeds level expectations with strong evidence
- `3`: meets expectations consistently
- `2`: partial signal with notable gaps
- `1`: does not meet baseline requirements
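The rubric above can back a simple scorecard structure. A minimal sketch, assuming one score per competency area and an unweighted average as the aggregate (the keys and the aggregation choice are illustrative, not part of the rubric itself):

```python
from statistics import mean

# The 1-4 rubric from this section.
RUBRIC = {
    1: "does not meet baseline requirements",
    2: "partial signal with notable gaps",
    3: "meets expectations consistently",
    4: "exceeds level expectations with strong evidence",
}

# Hypothetical scorecard from one interview, keyed by competency area.
scorecard = {
    "technical_depth": 3,
    "problem_solving": 4,
    "collaboration": 3,
    "leadership": 2,
}

# Reject any score that is off the 1-4 scale before aggregating.
assert all(score in RUBRIC for score in scorecard.values())

# Simple unweighted average; real loops may weight rounds by level.
overall = mean(scorecard.values())
print(f"overall: {overall:.2f}")
```

Keeping the rubric as data also makes it easy to render the label next to each score in a debrief document.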
## Calibration Guidelines
- Run recurring interviewer calibration sessions.
- Compare interviewer scoring variance across rounds.
- Track interview signal against new-hire outcomes.
- Use structured debriefs with independent scoring before discussion.
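The variance comparison above can be sketched with stdlib tools alone. A minimal example, where the interviewer names and scores are hypothetical: it flags interviewers whose average drifts from the panel mean (harsh or lenient scoring) or whose spread is unusually wide (inconsistent signal):

```python
from statistics import mean, pstdev

# Hypothetical per-interviewer rubric scores collected across several loops.
scores = {
    "alice": [3, 4, 3, 3, 4],
    "bob":   [2, 2, 3, 2, 2],
    "cara":  [4, 3, 2, 4, 1],
}

# Mean across every score given by any interviewer on the panel.
panel_mean = mean(s for ratings in scores.values() for s in ratings)

for name, ratings in scores.items():
    drift = mean(ratings) - panel_mean   # positive = lenient, negative = harsh
    spread = pstdev(ratings)             # high = inconsistent signal
    print(f"{name}: mean drift {drift:+.2f}, stdev {spread:.2f}")
```

Thresholds for "too much drift" or "too wide a spread" are a calibration-session decision, not something this sketch fixes.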
## Bias-Reduction Baseline
- Standardize question banks per competency area.
- Keep scorecards evidence-based and behavior-specific.
- Use diverse interviewer panels where possible.
- Require written rationale for strong yes/no recommendations.