Phase 4 priority shift: Meg needs AI assistant NOW (she is staff member #1)

CRITICAL INSIGHT:
- Meg = The Emissary = Current staff member #1 (not future)
- Meg doesn't have Claude access
- Every Meg question = Michael interruption
- AI assistant = Meg self-serves 24/7

TIMELINE CHANGE:
- Was: "Deploy when staff wiki exists" (Month 4+)
- Now: Deploy immediately after Phase 3 (Session 6)
- Impact: Meg gets AI assistant 4-6 months earlier

PHASE 4 UPDATED:
- Renamed: "Staff AI Assistant for Meg" (not generic staff)
- Knowledge Base: Emissary-focused (~20-30 docs)
  - Social Media Handbook
  - Consultant Profiles
  - Subscription Tiers
  - Contact Reference
  - Origin Story
- Questions Meg can ask:
  - "What personality traits for Jack in social posts?"
  - "When to post Fire vs Frost content?"
  - "How to describe Awakened tier?"
  - "Who handles billing issues?"
- Training: 15 min ("type question, get answer")

BENEFITS:
- Reduces Michael interruptions immediately
- Builds Meg's tech confidence (success → NextCloud later)
- Proves concept before recruiting more staff
- Recruitment advantage: "We have AI assistant"
- Simple interface (accessibility win)

SESSION 6 UPDATED:
- Added: Deploy Meg's AI assistant (2 hours)
- Added: Train Meg on usage (15 min)
- Total time: 4-5 hours (was 2-3)

Updated by: Chronicler the Ninth
2026-02-15 12:50:02 -06:00
parent c774b9ae3c
commit d58270a8f2

@@ -401,30 +401,80 @@ git clone https://git.firefrostgaming.com/firefrost-gaming/brainstorming.git
---
**Phase 4: Staff AI Assistant (2-3 hours)**
**Phase 4: Staff AI Assistant for Meg (2-3 hours)**
**Deploy Open WebUI with staff wiki docs only**
- Much smaller dataset (~50-100 docs when staff wiki exists)
- Built-in Chroma vector DB sufficient (no need for external)
- Embedded chat widget OR dedicated portal
- Domain: staff-ai.firefrostgaming.com
**PRIORITY SHIFT:** Meg is staff member #1. Deploy immediately after Phase 3 (not "when staff wiki exists")
**Why Meg Needs This Now:**
- Meg = The Emissary = Staff member #1 (not future staff, current staff)
- Meg doesn't have Claude access
- Meg needs to self-serve answers about:
- Social media posting guidelines (Fire vs Frost content)
- Consultant personality traits for posts
- Subscription tier descriptions
- Who to contact for what (Michael? Breezehost? Payment processor?)
- Basic procedures and workflows
- **Every Meg question = Michael interruption**
- AI assistant = Meg self-serves 24/7
**Deploy Open WebUI for "Emissary Knowledge Base"**
- Start small: ~20-30 docs (not 50-100, grow over time)
- Built-in Chroma vector DB sufficient (small dataset)
- Simple web interface (easier on Meg's tech comfort)
- Domain: staff-ai.firefrostgaming.com (or emissary-ai.firefrostgaming.com)
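The deployment above can be sketched as a minimal Docker Compose service. The image name, internal port, and data path follow Open WebUI's published defaults; the host port, volume name, and `WEBUI_AUTH` setting are assumptions to adapt (Chroma runs embedded, so no separate vector-DB service is needed):

```yaml
# docker-compose.yml — minimal Open WebUI for the Emissary Knowledge Base
# Host port and volume name are placeholders; reverse-proxy
# staff-ai.firefrostgaming.com to localhost:3000.
services:
  staff-ai:
    image: ghcr.io/open-webui/open-webui:main
    container_name: staff-ai
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data   # uploaded docs, Chroma vectors, accounts
    environment:
      - WEBUI_AUTH=True                     # require login (Meg's account)
    restart: unless-stopped

volumes:
  open-webui-data:
```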
**Initial Knowledge Base (Emissary Focus):**
1. **Emissary Social Media Handbook** (already exists in `docs/planning/`)
- Fire path vs Frost path content strategy
- Posting schedule, content pillars
- Platform-specific guidelines
2. **Consultant Profiles** (`docs/relationship/consultant-profiles.md`)
- Jack, Oscar, Butter, Jasmine, Noir personalities
- Photo reference for social media posts
- Lore and character traits
3. **Subscription Tiers** (`docs/planning/subscription-tiers.md`)
- Tier names, prices, benefits
- How to describe each tier in marketing
4. **Basic Contact Reference**
- Who handles what (Michael = infrastructure, Breezehost = hosting, etc.)
- Emergency contacts
- Escalation paths
5. **Origin Story** (`docs/relationship/origin-story.md`)
- Brand storytelling reference
- How Michael & Meg met (Donna's Restaurant)
**Configuration:**
1. Create "Staff Wiki" knowledge base in Open WebUI
2. Upload staff-facing docs only (operations manual stays private in AnythingLLM)
3. Configure access (staff accounts, not public)
4. Test 24/7 staff question answering:
- "How do I restart a game server?"
- "What's the whitelist process?"
- "Who do I contact for billing issues?"
5. Document usage in staff wiki
6. Train Meg on basic usage
1. Create "Emissary Knowledge Base" in Open WebUI
2. Upload Meg-relevant docs only (operations manual stays private in AnythingLLM)
3. Set up Meg's account (simple username/password, no complicated auth)
4. Test with real Meg questions:
- "What personality traits should I emphasize for Jack in social posts?"
- "When should I post Fire path content vs Frost path content?"
- "How do I describe the Awakened tier to potential subscribers?"
- "Who do I contact if someone has a billing issue?"
- "What's the origin story of Firefrost Gaming?"
5. Train Meg on usage (15 min: "Type question, hit enter, get answer")
6. Monitor first week, add docs based on Meg's actual questions
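Steps 1-2 above can also be scripted against Open WebUI's REST API. A sketch, assuming the file-upload and knowledge-attach endpoints documented for recent Open WebUI releases (verify the paths against your installed version); the base URL, API token, and knowledge-base id are placeholders:

```python
"""Bulk-upload Meg-relevant docs into an Open WebUI knowledge base.

Endpoint paths are assumed from Open WebUI's REST API docs; verify
against your version. BASE_URL, TOKEN, and KB_ID are placeholders.
"""
from pathlib import Path

BASE_URL = "https://staff-ai.firefrostgaming.com"  # placeholder
TOKEN = "sk-..."                                   # admin API key (placeholder)
KB_ID = "emissary-kb-id"                           # knowledge base id (placeholder)


def endpoints(base_url: str, kb_id: str) -> dict:
    """Pure helper: the two assumed API URLs used by upload_doc()."""
    return {
        "upload": f"{base_url}/api/v1/files/",
        "attach": f"{base_url}/api/v1/knowledge/{kb_id}/file/add",
    }


def upload_doc(path: Path) -> None:
    """Upload one markdown doc, then attach it to the knowledge base."""
    import requests  # third-party; pip install requests

    urls = endpoints(BASE_URL, KB_ID)
    headers = {"Authorization": f"Bearer {TOKEN}"}
    with path.open("rb") as fh:
        file_id = requests.post(
            urls["upload"], headers=headers, files={"file": fh}
        ).json()["id"]
    requests.post(urls["attach"], headers=headers, json={"file_id": file_id})


# Example (fill in real values first):
#   for doc in Path("docs/planning").glob("*.md"):
#       upload_doc(doc)
```

This keeps the Open WebUI UI as the day-to-day interface for Meg while letting Michael re-sync docs from the repo in one pass.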
**Benefits:**
- Reduces Michael/Meg interruptions (staff self-serve)
- 24/7 availability (AI doesn't sleep)
- Onboarding tool for future recruitment
- Consistent answers (no "telephone game")
- Reduces Michael interruptions immediately (Meg self-serves)
- 24/7 availability (Meg can ask at 2 AM if she's working)
- ✅ Builds Meg's confidence with tech (success with AI → confidence for NextCloud later)
- ✅ Proves concept before recruiting more staff
- ✅ **Recruitment advantage:** "We have AI assistant" = professional operation
- ✅ Consistent answers (AI doesn't forget, doesn't give conflicting info)
- ✅ Foundation grows: Add docs as staff grows, knowledge base scales naturally
**Accessibility Win:**
- Simple interface (type question, get answer)
- No complex menus or navigation
- Mobile-friendly (Meg can use on phone)
- No Git, no terminal, no technical barriers
**Timeline:**
- **Was:** "When staff wiki deployed" (Month 4+)
- **Now:** Session 6, immediately after AnythingLLM ingestion complete
- **Impact:** Meg gets AI assistant 4-6 months earlier
---
@@ -1176,12 +1226,14 @@ TIER 6: FUTURE (Month 4+ or 2027)
2. Start model downloads (overnight)
3. **Checkpoint:** Downloads running, come back tomorrow
### Session 6 (2-3 hours): AI Stack Complete
1. Verify models loaded
2. Gitea integration (1 hour)
3. Staff AI assistant setup (2 hours)
4. Test DERP functionality
5. **Checkpoint:** Full AI stack operational
### Session 6 (4-5 hours): AI Stack Complete + Meg's Assistant
1. Verify AnythingLLM workspaces fully ingested
2. Test DERP functionality (reconstruct session from repo)
3. Deploy Open WebUI for Meg (2 hours)
4. Create "Emissary Knowledge Base" (Social Media Handbook, Consultant Profiles, etc.)
5. Train Meg on usage (15 min: show her how to ask questions)
6. Test with real Meg questions
7. **Checkpoint:** Full AI stack operational, Meg can self-serve 24/7
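Step 6 ("Test with real Meg questions") can be smoke-tested against Open WebUI's OpenAI-compatible chat endpoint (`/api/chat/completions`). A sketch using only the standard library; the base URL, bearer token, and model name are placeholders:

```python
import json
from urllib.request import Request


def build_question(base_url: str, model: str, question: str) -> Request:
    """Pure helper: an OpenAI-style chat request for Open WebUI.

    Token and model name are placeholders; send the returned Request
    with urllib.request.urlopen() against a live instance.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }
    return Request(
        f"{base_url}/api/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer sk-...",  # placeholder token
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example (against a live instance):
#   req = build_question("https://staff-ai.firefrostgaming.com",
#                        "emissary-kb", "How do I describe the Awakened tier?")
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Running each of the real Meg questions through this once, before the 15-minute training, confirms the knowledge base actually answers them.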
### Session 7 (1-2 hours): Monitoring
1. Deploy Netdata (1-2 hours)