chore: Task cleanup - archive 3, delete 11 obsolete folders

Archive threshold: ≥50KB OR ≥4 files

Archived to _archive/:
- firefrost-codex-migration-to-open-webui (127K, 9 files)
- whitelist-manager (65K, 5 files)
- self-hosted-ai-stack-on-tx1 (35K, 4 files)

Deleted (obsolete/superseded):
- builder-rank-holly-setup
- consultant-photo-processing
- ghost-theme-migration (empty)
- gitea-plane-integration (Plane abandoned)
- gitea-upgrade (Kanban approach abandoned)
- plane-deployment (superseded by decommission)
- pterodactyl-blueprint-asset-build (fold into #26)
- pterodactyl-modpack-version-display (fold into #26)
- scope-document-corrections (too vague)
- scoped-gitea-token (honor system working)
- whitelist-manager-v1-12-compatibility (rolled into Trinity Console)

Also added: Gemini task management consolidation consultation

Chronicler #69
This commit is contained in:
Claude
2026-04-08 14:17:26 +00:00
parent be0a59a38a
commit dca114eee9
34 changed files with 136 additions and 2678 deletions


@@ -0,0 +1,136 @@
# Gemini Consultation: Task Management System Consolidation
**Date:** April 8, 2026
**Consulted By:** Chronicler #69 (The Surveyor) + Michael
**Topic:** Unifying 4 fragmented task tracking systems into one mobile-friendly solution
---
## CONTEXT
We're Firefrost Gaming, a subscription-based Minecraft server community. We use Claude AI (Chroniclers) to help build infrastructure. Our documentation lives in a Gitea repository (`firefrost-operations-manual`).
**The Trinity (users):**
- Michael (technical lead, Type 1 diabetic, post-stroke — needs mobile-friendly tools due to medical alert dog requiring frequent desk breaks)
- Meg (community manager, often mobile/traveling)
- Holly (co-founder, Discord/server management, located in Newfoundland)
---
## THE PROBLEM
We have **4 separate task tracking systems** that evolved organically:
### System 1: `docs/tasks/` (Original, Richest)
- **72 subdirectories**, each with a README.md containing full task specs
- Example: `docs/tasks/arbiter-2-1-cancellation-flow/README.md` (53KB of detail)
- Status is embedded in the README header, not YAML frontmatter
- **Pros:** Most complete specs, supports attachments/diagrams
- **Cons:** No quick overview, no mobile-friendly index
### System 2: `BACKLOG.md` (Quick Reference)
- **~32 task entries** in markdown tables
- Organized by priority (P1/P2/P3/P4)
- **Pros:** Easy to scan, prioritized
- **Cons:** Manual sync required, no individual task links, can get stale
### System 3: `tasks/` root folder (Mobile Manager)
- **9 simple markdown files** with YAML frontmatter
- Built for a React-based mobile task manager
- **Pros:** Machine-readable frontmatter
- **Cons:** Only 9 of 72 tasks migrated, duplicate of `docs/tasks/`
### System 4: `docs/tasks-index/` (Decap CMS)
- **4 card files** with YAML frontmatter
- Built for Decap CMS web interface
- **Pros:** Editable via web UI
- **Cons:** Only 4 tasks, Decap is not mobile-friendly
---
## REQUIREMENTS
### Must Have:
1. **Mobile-friendly task viewing** — Michael needs to check tasks from phone/tablet during medical breaks
2. **Single source of truth** — One system, not four
3. **Quick overview** — See all open tasks at a glance with priority/status
4. **Link to details** — Tap to open full task spec (Gitea web view is fine)
5. **Low maintenance** — Should not require manual syncing between systems
### Nice to Have:
1. **Task creation from mobile** — Add new tasks without desktop
2. **Status updates from mobile** — Mark tasks complete
3. **Filtering** — By priority, owner, status
4. **Works for all Trinity members** — Not just Michael
### Constraints:
- **No SSH from Claude's sandbox** — Claude can only access via Gitea API and web
- **Static hosting preferred** — Site deploys to Cloudflare Pages via 11ty
- **Gitea is the repo** — Not GitHub (Gitea API is similar but not identical)
- **Budget-conscious** — Prefer free/self-hosted over SaaS subscriptions
---
## WHAT WE'VE TRIED
### Gitea Issues + Kanban Projects
**Result:** "Failed miserably" (Michael's words). The UI was clunky, syncing was problematic.
### Decap CMS
**Result:** Works on desktop, but fundamentally not mobile-friendly. The interface requires too many taps/clicks on mobile.
### Custom React Mobile Manager (Chronicler #67)
**Result:** Works, but only reads from 9 task files. Doesn't see the other 63 tasks in `docs/tasks/`.
---
## QUESTIONS FOR GEMINI
1. **Architecture:** What's the best way to consolidate these 4 systems? Should we:
- Enhance `docs/tasks/` READMEs with YAML frontmatter and build an index?
- Generate a static JSON index at build time from `docs/tasks/`?
- Use a different tool entirely?
2. **Mobile Interface:** Is there a better mobile-friendly task viewer than custom React? Options we're aware of:
- PWA with service worker
- Simple static HTML generated at build time
- Third-party tool that can read from Git
3. **Decap CMS:** Is there any way to make Decap mobile-friendly, or should we abandon it for task management? (We still use it for other content.)
4. **Sync Strategy:** If we keep `docs/tasks/` as source of truth, what's the best way to generate a mobile-friendly index? 11ty build step? GitHub Action equivalent for Gitea?
5. **Alternative Tools:** Are there any Git-based task management tools we haven't considered that would work better for our use case?
---
## CURRENT TECHNICAL STACK
- **Repository:** Gitea (self-hosted) at git.firefrostgaming.com
- **Website:** 11ty static site on Cloudflare Pages
- **CMS:** Decap CMS (for non-task content)
- **API Access:** Gitea REST API with token authentication
- **Mobile Manager:** React SPA at `/admin/mobile.html`
---
## DESIRED END STATE
1. **One folder** (`docs/tasks/`) contains all task specs
2. **One index** (generated or manual) provides quick mobile overview
3. **Status changes** in the README or frontmatter automatically reflect in the mobile view
4. **No manual sync** between multiple systems
5. **Trinity members** can view and ideally update tasks from any device
---
## RESPONSE FORMAT REQUESTED
Please provide:
1. **Recommended architecture** with rationale
2. **Implementation steps** in priority order
3. **Estimated effort** for each step
4. **Alternative approaches** if the primary recommendation has issues
5. **Tools/libraries** you'd recommend (if any)
Thank you, Gemini! 🔥❄️


@@ -1,32 +0,0 @@
# Builder Rank & Holly's Tool Setup
**Status:** ✅ ARCHIVED — SUPERSEDED
**Owner:** Michael "Frostystyle" Krause
**Priority:** N/A
**Created:** 2026-03-10
**Archived:** 2026-04-04 by Chronicler #59
---
## Archive Reason
**Holly is now part of The Trinity** — she's a co-founder and partner, not a staff member requiring a builder rank.
As of April 2026, The Trinity consists of:
- **Michael "Frostystyle"** (The Wizard) — Owner/Technical Lead
- **Meg "Gingerfury"** (The Emissary) — Community Manager
- **Holly "unicorn20089"** (The Catalyst) — Lead Builder / Creative Authority
Holly has full administrative access as a Trinity member. The original task to create a separate "builder" staff rank is no longer applicable.
---
## Original Task (Historical)
Define and deploy the Builder staff rank on all 12 servers, assign it to Holly (unicorn20089), and ensure she has the full builder toolkit available.
This was written when Holly was being hired as staff, before she became a partner.
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 🔥❄️


@@ -1,154 +0,0 @@
# Builder Toolkit — Holly (unicorn20089)
**Task:** Builder Rank & Holly Setup
**Document Type:** builder-toolkit
**Status:** ACTIVE
**Last Updated:** 2026-03-10
---
## Core Build Tools
### WorldEdit / WorldEdit Forge / Axiom
The essential builder tool. Check what's available per modpack:
| Modpack Type | Recommended Tool |
|--------------|-----------------|
| Fabric modpacks | WorldEdit for Fabric |
| Forge modpacks | WorldEdit for Forge |
| Modern Forge (1.20+) | Axiom (more powerful, preferred if available) |
| Vanilla server | WorldEdit |
**Key permissions to grant:**
```
worldedit.* (full WorldEdit access)
```
or scoped:
```
worldedit.selection.*
worldedit.region.*
worldedit.clipboard.*
worldedit.generation.*
worldedit.history.*
worldedit.schematic.*
```
---
### Replay Mod / Replay Reforged
For recording builds as timelapses and reviewing build sessions.
- **Replay Mod** — Fabric/Forge, client-side (Holly installs on her own client)
- **Replay Reforged** — Forge port, same concept
**Note:** Replay Mod is client-side only — no server-side installation needed. Holly just needs it on her own Minecraft client. No permissions required on the server.
---
### VoxelSniper / FastAsyncVoxelSniper
For terrain sculpting and large organic builds. Useful for spawn areas and landscape work.
**Key permissions:**
```
voxelsniper.sniper
voxelsniper.brush.*
```
---
### Litematica (Client-Side)
For loading schematic templates while building. Client-side only — no server install needed.
---
### Carpet Mod (Fabric servers only)
Useful for technical builds and redstone. Check if applicable per server.
---
## Builder-Specific Permissions (LuckPerms)
```yaml
# Builder group permissions
group.builder:
permissions:
# Gamemode
- minecraft.command.gamemode # /gamemode switch
- essentials.gamemode # If EssentialsX installed
# Teleport (for build coordination)
- essentials.tp # Teleport to players
- essentials.tphere # Bring players to you
- ftbessentials.home.limit.50 # Generous home limit for build markers
# WorldEdit
- worldedit.* # Full WorldEdit access
# VoxelSniper (if installed)
- voxelsniper.sniper
- voxelsniper.brush.*
# Chunk management (builders need uncapped chunks for build areas)
- ftbchunks.max_claimed.500 # High limit for build zones
- ftbchunks.max_force_loaded.50 # Force load build chunks
# Time/weather (useful while building)
- essentials.time # /time set
- essentials.weather # /weather clear
# Staff bypass
- ftbchunks.admin # Bypass chunk protection in build areas
```
**Chat prefix:**
```yaml
meta:
prefix: "&#FFD600[🔨 Builder] " # Amber Gold — distinct from subscriber tiers
```
---
## Per-Server Tool Availability Notes
Some modpacks may conflict with or already include build tools. During deployment, verify per server:
| Server | Modpack | WorldEdit Available | Notes |
|--------|---------|--------------------|----|
| Stoneblock 4 | Forge | Check | May have Create integration |
| Reclamation | Forge | Check | |
| Society: Sunlit Valley | Forge | Check | |
| Vanilla 1.21.11 | Vanilla | Yes | Standard WorldEdit |
| All The Mons | Forge/Fabric | Check | Cobblemon |
| FoundryVTT | — | N/A | Out of scope for Builder role — access granted via Pokerole project (separate) |
| The Ember Project | Forge | Check | |
| Minecolonies: Create and Conquer | Forge | Check | |
| All The Mods 10 | Forge | Check | ATM usually includes WE |
| EMC Subterra Tech | Forge | Check | |
| Homestead | Forge | Check | |
| Hytale | Placeholder | N/A | Not live yet |
**Note:** FoundryVTT is a tabletop RPG platform, not Minecraft — skip for builder tools.
---
## What Holly Installs on Her Client
These are client-side mods Holly manages herself — no server action needed:
- **Replay Mod or Replay Reforged** — timelapse recording
- **Litematica** — schematic loading
- **MiniHUD** (optional) — build overlay info
- **Axiom** (if server has it) — advanced build client
---
## Future Builder Tools to Consider
- **CoreProtect** — block logging and rollback (useful if a build goes wrong)
- **BuildPaste** — schematic sharing
- **Chunky** — pre-generate chunks before building (performance)


@@ -1,30 +0,0 @@
# Consultant Photo Processing
**Status:** Active - Ongoing
**Priority:** Tier 3 - Relationship
**Time:** Ongoing
**Last Updated:** 2026-02-16
## Overview
Process, catalog, and organize photos of The Five Consultants (Jack, Oscar, Butter, Jasmine, Noir). Maintain photo archive with proper metadata.
## Process
1. Transfer photos from phone/camera
2. Rename per naming convention: YYYY-MM-DD_subject_description_NN.jpg
3. Add to photos/images/YYYY/
4. Update photos/catalog.md
5. Commit to Git
## Naming Convention
Format: `YYYY-MM-DD_subject_description_NN.jpg`
Example: `2025-12-15_jack_alert-posture_01.jpg`
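A hypothetical helper (not part of the current process) that builds a filename matching this convention could look like:

```javascript
// Hypothetical helper: build a filename matching the convention above.
// Pads date parts and the sequence number; slugifies the free text.
function photoName(date, subject, description, seq) {
  const pad = (n, w) => String(n).padStart(w, '0');
  const ymd = `${date.getFullYear()}-${pad(date.getMonth() + 1, 2)}-${pad(date.getDate(), 2)}`;
  const slug = (s) => s.toLowerCase().trim().replace(/\s+/g, '-');
  return `${ymd}_${slug(subject)}_${slug(description)}_${pad(seq, 2)}.jpg`;
}
```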
## Success Criteria
- ✅ All photos properly named
- ✅ Catalog up to date
- ✅ Git committed regularly
- ✅ Archive maintained
**See:** photos/README.md, docs/relationship/consultant-photo-archive.md
**Fire + Frost + Foundation** 💙🔥❄️


@@ -1,125 +0,0 @@
# 🚨 Next Session Priority — Loop Fix & n8n Stabilization
**Status:** CRITICAL — Do this BEFORE reactivating Plane→Gitea workflow
**Created:** March 18, 2026
**Created By:** Chronicler #32
**Prerequisite:** Read workflow-v3.md for full architecture context
---
## Current State (End of Session)
- ✅ n8n is UP and healthy at https://n8n.firefrostgaming.com
- ✅ Volume correctly mounted at /root/.n8n (permanent, survives restarts)
- ✅ All workflows recovered from database
- ✅ Gitea→Plane outbound workflow ACTIVE and working
- ⚠️ Plane→Gitea return trip workflow DEACTIVATED (caused infinite loop crash)
- ⚠️ Loop fix NOT yet implemented
---
## What Caused the Crash
The Plane→Gitea workflow posted a comment to Gitea using the API token
(mkrause612). Gitea fired a webhook back to n8n. The Gitea→Plane workflow
picked it up and created a Plane update. Plane fired a webhook back to n8n.
Repeat forever until n8n ran out of workers and crashed.
---
## The Fix (One Node per Workflow, ~10 Minutes)
Add a bot filter as the SECOND node in BOTH workflows.
### In Gitea→Plane workflow, after the webhook node add:
```javascript
// Bot Filter — prevents loop
// If the action was triggered by our sync bot, stop here
const sender = $json.body.sender?.login || '';
const BOT_ACCOUNTS = ['mkrause612']; // Add firefrost-sync when created
if (BOT_ACCOUNTS.includes(sender)) {
return []; // Drop silently — this is our own bot commenting
}
return $input.all();
```
### In Plane→Gitea workflow, after the webhook node add:
```javascript
// Bot Filter — prevents loop
// Only process events that came from real humans, not our sync
const actor_email = $json.body.data?.actor_detail?.email || '';
const BOT_EMAILS = ['claude@firefrostgaming.com', 'noreply@firefrostgaming.com'];
if (BOT_EMAILS.includes(actor_email)) {
return []; // Drop silently — this is our own bot
}
return $input.all();
```
---
## Recommended: Create a Dedicated Sync Bot Account
Rather than filtering by mkrause612 (your personal account), create a
dedicated Gitea user for sync operations:
1. Create Gitea user: `firefrost-sync`
2. Generate API token for that user
3. Use that token in all n8n workflows instead of the ops manual token
4. Filter by `firefrost-sync` in the bot filter nodes
This cleanly separates your personal actions from bot actions.
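Step 2 can be done via Gitea's REST API (`POST /api/v1/users/{username}/tokens`, which requires basic auth as that user). The sketch below only builds the request; the scope name `write:issue` is an assumption — check the scope list for your Gitea version:

```javascript
// Sketch: build the request for generating an API token for the new
// firefrost-sync user. Scope names are assumptions — verify per version.
function buildTokenRequest(baseUrl, username, password, tokenName) {
  return {
    url: `${baseUrl}/api/v1/users/${username}/tokens`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // Token creation itself requires basic auth, not token auth.
        Authorization: 'Basic ' + Buffer.from(`${username}:${password}`).toString('base64'),
      },
      body: JSON.stringify({ name: tokenName, scopes: ['write:issue'] }),
    },
  };
}

// Usage (Node 18+):
//   const { url, options } = buildTokenRequest('https://git.firefrostgaming.com',
//     'firefrost-sync', '<password>', 'n8n-sync');
//   const res = await fetch(url, options);
```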
---
## n8n Volume — IMPORTANT
The working database is now at /root/.n8n (NOT /opt/firefrost-codex/volumes/n8n)
The compose file has been updated to reflect this.
Do NOT move the database again — leave it at /root/.n8n
If n8n ever shows setup screen again:
1. Check docker inspect for mount path
2. Check database.sqlite file size (should be ~160MB+)
3. Check settings table for userManagement.isInstanceOwnerSetUp = true
4. Clear license settings if fingerprint mismatch occurs
---
## Gemini's Unified Workflow — DO NOT IMPORT
Gemini provided a "Master Unified JSON" — reviewed and rejected for:
- Wrong Plane API endpoint (/work-items/ instead of /issues/)
- Wrong Gitea URL (firefrostgaming.com instead of git.firefrostgaming.com)
- Missing bot comment back to Gitea with Plane link
- Still missing proper loop prevention
Our existing workflows are correct — just need the bot filter added.
---
## What Michael Wants to Add Next Session
(Michael to fill in before session closes)
---
## Session Start Checklist for Next Chronicler
1. Read this file FIRST
2. Confirm n8n is healthy: https://n8n.firefrostgaming.com
3. Confirm Plane→Gitea workflow is INACTIVE
4. Add bot filter to BOTH workflows
5. Create firefrost-sync Gitea bot account
6. Test with a real issue before declaring victory
7. Then tackle Michael's additions
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️


@@ -1,118 +0,0 @@
# Task #48 — Gitea/Plane Integration via n8n
**Status:** READY — pending n8n rebuild (Task #34)
**Priority:** Tier 2 — Major Infrastructure
**Time:** 2-3 hours
**Created:** March 15, 2026
**Created By:** Chronicler #31
**Owner:** Michael "Frostystyle" Krause
**Depends:** Task #34 (n8n rebuild from scratch)
---
## The Goal
Hands-off, automated two-way sync between Gitea (source of truth for docs/code)
and Plane (task tracking for Meg, Holly, and all staff). Neither Meg nor Holly
should need to manually cross-reference between systems.
---
## Why n8n (Not Native Integration)
Plane has no native Gitea integration. GitHub/GitLab integrations exist but
are likely Pro features and don't apply to our self-hosted Gitea.
n8n is already in the Firefrost stack on TX1. It's purpose-built for exactly
this kind of cross-platform automation. Once n8n is rebuilt (Task #34), this
integration is straightforward.
---
## Workflow Design
### Gitea → Plane (commit updates task)
1. Developer pushes commit with Plane issue ID in message (e.g. `INFRA-42`)
2. Gitea fires webhook to n8n
3. n8n parses commit message for issue ID
4. n8n calls Plane API to:
- Add comment: "Commit: [message] by [author]"
- Optionally update issue status (e.g. "In Progress" → "Done" on keyword)
**Trigger keywords in commit messages:**
- `closes INFRA-42` → mark issue Done
- `refs INFRA-42` → add comment only
- `fixes INFRA-42` → mark issue Done
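The parsing in step 3 could be sketched as the following n8n Code-node snippet. The `PROJECT-123` issue-key pattern is an assumption about Plane's ID format:

```javascript
// Sketch: extract Plane issue references and the intended action from a
// commit message, per the trigger keywords above.
function parseCommitRefs(message) {
  const refs = [];
  const re = /\b(closes|fixes|refs)\s+([A-Z]+-\d+)/gi;
  let m;
  while ((m = re.exec(message)) !== null) {
    const keyword = m[1].toLowerCase();
    refs.push({
      issue: m[2].toUpperCase(),
      done: keyword === 'closes' || keyword === 'fixes', // mark Done vs comment only
    });
  }
  return refs;
}
```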
### Plane → Gitea (task creation creates issue)
1. New Plane issue created in Infrastructure project
2. Plane fires webhook to n8n
3. n8n creates matching Gitea issue in ops manual repo
4. Links back to Plane issue URL in Gitea issue body
---
## Technical Requirements
### Plane API
- Generate API token: Workspace Settings → Access Tokens
- Base URL: `https://tasks.firefrostgaming.com/api/v1/`
- Docs: https://developers.plane.so/api-reference/introduction
### Gitea Webhook
- Token: `3c40388246ae816fe21cdca26fce4e1c66989dd1` (in Vaultwarden)
- Webhook URL: `http://[n8n-url]/webhook/gitea-plane`
- Events: Push, Issues
### Gitea Allowed Webhooks
Gitea restricts webhook destinations. Need to add n8n URL to allowed list
in Gitea's app.ini:
```
[webhook]
ALLOWED_HOST_LIST = n8n.firefrostgaming.com
```
### n8n Workflows to Build
1. **Gitea Push → Plane** — parse commit, update issue
2. **Plane Issue → Gitea** — create tracking issue
---
## Implementation Order
1. Complete Task #34 (n8n rebuild) first
2. Generate Plane API token
3. Update Gitea app.ini with n8n in allowed webhook hosts
4. Build n8n workflow: Gitea Push → Plane issue update
5. Build n8n workflow: Plane issue → Gitea issue
6. Test with a real commit referencing a Plane issue
7. Document commit message convention for team
---
## Commit Message Convention (for team)
Once live, all commits should reference Plane issue IDs:
```
feat: add LuckPerms builder rank
Implements Builder rank with WorldEdit permissions.
closes BUILDS-12
refs INFRA-7
```
Michael to document this in a team guide once integration is live.
---
## Related Tasks
- Task #34 — n8n Rebuild (prerequisite)
- Task #47 — Plane Deployment (complete)
- Task #11 — Mailcow (complete)
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️


@@ -1,107 +0,0 @@
# Gitea → Plane Workflow (v3 — Production)
**Status:** ✅ LIVE
**Date:** March 16, 2026
**Deployed By:** Chronicler #32
**Webhook URL:** `https://panel.firefrostgaming.com/webhook/firefrost-final`
**n8n Location:** TX1 Dallas (38.68.14.26), container `firefrost-codex-n8n-1`
---
## Architecture
```
Gitea Issue (opened)
→ Gitea Webhook (POST)
→ Nginx SSL termination (panel.firefrostgaming.com)
→ n8n container (port 5678)
→ Verify Signature (header presence check)
→ Filter (action === "opened")
→ Route to Project (label → Plane project ID)
→ Create Plane Issue (Plane API)
→ Comment on Gitea Issue (Gitea API)
```
---
## Key Technical Notes
### HMAC Signature Issue (Why We Changed)
n8n's Webhook node automatically parses and re-serializes incoming JSON.
This means the raw bytes Gitea signed are NOT the same bytes n8n passes
to the Code node — even a single whitespace change breaks HMAC verification.
Enabling "Raw Body" converts the payload to a binary buffer, breaking
downstream Filter nodes that expect JSON objects.
**Resolution:** Switched to Header Presence Validation — confirm the
`x-gitea-signature` header exists (proving the request came from our
Gitea instance) rather than verifying the HMAC value itself.
This is acceptable for our self-hosted trusted environment.
### Verify Signature Node (Current)
```javascript
const payload = $input.first().json;
const sig = payload.headers['x-gitea-signature'] || payload.headers['X-Gitea-Signature'];
if (!sig) {
throw new Error('Security: Rejected - No Gitea Signature Header Found');
}
return $input.all();
```
### Trust Proxy Fix
Added `N8N_TRUST_PROXY=true` to docker-compose.yml to resolve
`ERR_ERL_UNEXPECTED_X_FORWARDED_FOR` errors caused by nginx proxy headers.
### n8n Compose Location
`/opt/firefrost-codex/docker-compose.yml`
n8n was previously orphaned from the compose file (running as a standalone
container). Re-added to compose on March 16, 2026 with trust proxy fix.
---
## Project Routing (Label → Plane Project)
| Gitea Label | Plane Project | Project ID |
|---|---|---|
| `infrastructure` | Infrastructure | `9945e7f8-3454-4b81-9fd8-3dc6536b0985` |
| `community` | Community | `34822d8e-934c-47a8-ad41-5f5aa4d982da` |
| `content` | Content | `8ab6e12c-7040-4b6b-9936-5b11295eb73d` |
| `builds` | Builds | `6795cd9b-332d-4f48-bc6b-23c489960659` |
| `operations` | Operations | `34920375-2942-4ee7-b61a-7fe0707e25fa` |
Default (no routing label): **Operations**
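The Route to Project node's logic amounts to a first-match lookup over the labels, falling through to Operations. A minimal sketch, assuming Gitea's webhook payload delivers labels as objects with a `name` field:

```javascript
// Label → Plane project routing (IDs from the table above).
const PROJECT_IDS = {
  infrastructure: '9945e7f8-3454-4b81-9fd8-3dc6536b0985',
  community: '34822d8e-934c-47a8-ad41-5f5aa4d982da',
  content: '8ab6e12c-7040-4b6b-9936-5b11295eb73d',
  builds: '6795cd9b-332d-4f48-bc6b-23c489960659',
  operations: '34920375-2942-4ee7-b61a-7fe0707e25fa',
};

// First matching routing label wins; non-routing labels (priority,
// status, owner) fall through to the Operations default.
function routeToProject(labels) {
  for (const label of labels) {
    const id = PROJECT_IDS[label.name?.toLowerCase()];
    if (id) return id;
  }
  return PROJECT_IDS.operations;
}
```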
---
## Gitea Webhook Config (Operations Manual Repo)
- **URL:** `https://panel.firefrostgaming.com/webhook/firefrost-final`
- **Content Type:** `application/json`
- **Events:** Issues only
- **Active:** Yes
---
## Plane Labels Created (All 5 Projects)
**Routing:** `infrastructure` `community` `content` `builds` `operations`
**Priority:** `urgent` `high` `low`
**Status:** `quick-win` `blocked` `in-progress`
**Owners:** `frostystyle` `gingerfury` `unicorn20089`
---
## Return Trip (Plane → Gitea)
**Status:** NOT YET BUILT
When a Plane issue is marked complete, notes should flow back to the
Gitea issue as a comment and close it. This is Task #48 Phase 2.
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️


@@ -1,279 +0,0 @@
# Gitea Upgrade: 1.21.5 → 1.23.x (Projects API)
**Date:** March 21, 2026
**Reason:** Enable Projects REST API for automatic issue-to-board syncing
**Current Version:** 1.21.5
**Target Version:** 1.23.7 (latest stable as of March 2026)
**Estimated Time:** 15-30 minutes
**Downtime:** ~5 minutes
---
## Why We're Upgrading
**Problem:** Gitea 1.21.5 has NO Projects REST API: the endpoints simply don't exist in the router
**Solution:** Upgrade to 1.23.x which includes:
- ✅ Full Projects REST API
- ✅ Ability to programmatically add issues to project columns
- ✅ Automated workflow: script creates issue → auto-adds to "Backlog" column
**Gemini confirmed:** Projects API introduced in 1.22.x (basic), fully functional in 1.23.x
---
## Pre-Upgrade Checklist
**CRITICAL - Do these FIRST:**
### 1. Check Latest Gitea Version
```bash
curl -s https://api.github.com/repos/go-gitea/gitea/releases/latest | grep "tag_name"
```
This shows the actual latest version. Adjust target version if needed.
### 2. Backup Everything
```bash
# Stop Gitea
sudo systemctl stop gitea
# Backup database
sudo cp /var/lib/gitea/data/gitea.db /var/lib/gitea/data/gitea.db.backup-2026-03-21
# Backup configuration
sudo cp /etc/gitea/app.ini /etc/gitea/app.ini.backup-2026-03-21
# Backup current binary
sudo cp /usr/local/bin/gitea /usr/local/bin/gitea-1.21.5.backup
# Start Gitea again
sudo systemctl start gitea
```
**Verify backups exist:**
```bash
ls -lh /var/lib/gitea/data/gitea.db.backup-2026-03-21
ls -lh /etc/gitea/app.ini.backup-2026-03-21
ls -lh /usr/local/bin/gitea-1.21.5.backup
```
---
## Upgrade Procedure
### Step 1: Download New Gitea Binary
```bash
# Check architecture
uname -m
```
(Should show x86_64)
```bash
# Download Gitea 1.23.7 (adjust version if latest is different)
cd /tmp
wget https://dl.gitea.com/gitea/1.23.7/gitea-1.23.7-linux-amd64
```
```bash
# Verify download
ls -lh /tmp/gitea-1.23.7-linux-amd64
```
### Step 2: Stop Gitea Service
```bash
sudo systemctl stop gitea
```
```bash
# Verify it's stopped
sudo systemctl status gitea
```
### Step 3: Replace Binary
```bash
# Make new binary executable
chmod +x /tmp/gitea-1.23.7-linux-amd64
```
```bash
# Replace old binary with new one
sudo mv /tmp/gitea-1.23.7-linux-amd64 /usr/local/bin/gitea
```
```bash
# Set correct ownership
sudo chown root:root /usr/local/bin/gitea
```
```bash
# Set correct permissions
sudo chmod 755 /usr/local/bin/gitea
```
### Step 4: Run Database Migration
```bash
# Run as gitea user to apply schema updates
sudo -u gitea /usr/local/bin/gitea migrate -c /etc/gitea/app.ini
```
**IMPORTANT:** This step migrates the SQLite database schema to support new features (including Projects API). Watch for any errors.
### Step 5: Start Gitea
```bash
sudo systemctl start gitea
```
```bash
# Check service status
sudo systemctl status gitea
```
```bash
# Watch logs for errors
sudo journalctl -u gitea -f --lines=50
```
(Press Ctrl+C to stop watching logs once you see "Server is running")
### Step 6: Verify Upgrade
```bash
# Check version via API
curl -s https://git.firefrostgaming.com/api/v1/version
```
Should show version 1.23.7 (or whatever you installed)
**Test in browser:**
1. Open https://git.firefrostgaming.com
2. Login as mkrause612
3. Navigate to Firefrost Operations project board
4. Verify board still works and all issues are present
---
## Post-Upgrade Verification
### Check Projects API Is Now Available
```bash
# This should NOW work (was 404 before)
curl -s -H "Authorization: token e0e330cba1749b01ab505093a160e4423ebbbe36" \
"https://git.firefrostgaming.com/api/v1/orgs/firefrost-gaming/projects"
```
**Expected:** JSON response with project list (not 404)
### Test Adding Issue to Project Column
**Step 1: Get Project ID**
```bash
curl -s -H "Authorization: token e0e330cba1749b01ab505093a160e4423ebbbe36" \
"https://git.firefrostgaming.com/api/v1/orgs/firefrost-gaming/projects" | python3 -c "import sys, json; projects = json.load(sys.stdin); [print(f'Project: {p[\"title\"]} (ID: {p[\"id\"]})') for p in projects]"
```
**Step 2: Get Column IDs**
```bash
# Replace PROJECT_ID with actual ID from step 1
curl -s -H "Authorization: token e0e330cba1749b01ab505093a160e4423ebbbe36" \
"https://git.firefrostgaming.com/api/v1/projects/PROJECT_ID/columns" | python3 -c "import sys, json; cols = json.load(sys.stdin); [print(f'Column: {c[\"title\"]} (ID: {c[\"id\"]})') for c in cols]"
```
**Step 3: Test Adding Issue to Backlog**
```bash
# Replace COLUMN_ID with Backlog column ID, ISSUE_ID with a test issue number
curl -X POST -H "Authorization: token e0e330cba1749b01ab505093a160e4423ebbbe36" \
-H "Content-Type: application/json" \
-d '{"issues": [ISSUE_ID]}' \
"https://git.firefrostgaming.com/api/v1/projects/columns/COLUMN_ID/issues"
```
Check the project board - the issue should now appear in the Backlog column!
---
## Rollback Procedure (If Something Goes Wrong)
**If upgrade fails, here's how to rollback:**
```bash
# Stop Gitea
sudo systemctl stop gitea
```
```bash
# Restore old binary
sudo cp /usr/local/bin/gitea-1.21.5.backup /usr/local/bin/gitea
```
```bash
# Restore database
sudo cp /var/lib/gitea/data/gitea.db.backup-2026-03-21 /var/lib/gitea/data/gitea.db
```
```bash
# Restore config (if needed)
sudo cp /etc/gitea/app.ini.backup-2026-03-21 /etc/gitea/app.ini
```
```bash
# Start Gitea
sudo systemctl start gitea
```
```bash
# Verify version is back to 1.21.5
curl -s https://git.firefrostgaming.com/api/v1/version
```
---
## Success Criteria
- ✅ Gitea reports version 1.23.7 (or target version)
- ✅ Web UI loads and login works
- ✅ "Firefrost Operations" project board displays correctly
- ✅ All issues and columns are present
- ✅ Projects API endpoints return JSON (not 404)
- ✅ Can retrieve project/column IDs via API
- ✅ Can add test issue to project column via API
---
## Next Steps After Successful Upgrade
1. **Update sync script** to use Projects API
2. **Test automated workflow:** create issue → auto-add to Backlog
3. **Update documentation** with new version number
4. **Remove manual label workflow** (no longer needed)
5. **Commit upgrade notes** to operations manual
---
## Common Issues & Solutions
**Issue:** "gitea migrate" fails with schema error
**Solution:** Check /var/lib/gitea/log/gitea.log for details, may need to restore backup and investigate
**Issue:** Nginx returns 502 Bad Gateway after upgrade
**Solution:** Check `sudo systemctl status gitea` - service may have failed to start, check logs with `journalctl -u gitea`
**Issue:** Projects API still returns 404 after upgrade
**Solution:** Verify version with `curl https://git.firefrostgaming.com/api/v1/version` - may have wrong binary
**Issue:** Database corruption after migration
**Solution:** Restore backup database, investigate migration logs before retrying
---
## Documentation Updates Required
After successful upgrade:
- [ ] Update `docs/deployment/gitea.md` with new version number
- [ ] Update `docs/core/infrastructure-manifest.md`
- [ ] Document Projects API endpoints in reference docs
- [ ] Update task sync script with Projects API integration
- [ ] Create "How to use Projects API" guide
---
**Created:** March 21, 2026 (Session 37 - The Chronicler)
**Status:** READY TO EXECUTE
**Risk Level:** LOW (SQLite backup + binary rollback = easy recovery)


@@ -1,314 +0,0 @@
# Task #47 — Plane Project Management Deployment
**Status:** READY
**Priority:** Tier 2 — Major Infrastructure
**Time:** 2-3 hours
**Created:** March 15, 2026
**Created By:** The Navigator (Chronicler #30)
**Updated:** March 15, 2026 — Chronicler #31
**Owner:** Michael "Frostystyle" Krause
**Target URL:** tasks.firefrostgaming.com
**Server:** TX1 Dallas (38.68.14.26)
## Hardware Decision (March 15, 2026)
A full fleet audit was conducted. The VPS machines are all RAM-constrained relative to Plane's 4GB minimum:
- Command Center: 3.8GB total
- Panel VPS: 1.9GB total
- Ghost VPS: overloaded with Ghost + 3 Wiki.js instances
- Billing VPS: reserved for Mailcow
TX1 and NC1 are dedicated servers with 251GB RAM each. TX1 selected:
- TX1: 226GB free RAM, 771GB free disk — Plane is a rounding error
- NC1: disk at 61% used — TX1 has more headroom
**Philosophy note:** TX1 is nominally "game servers only" but the resource headroom
makes this a pragmatic exception. Plane won't impact game server performance.
---
## Why Plane
Firefrost needs a task management system that:
- Works for non-technical staff (Meg, Holly, moderators, social media helpers)
- Scales from 3 to 15+ people without migrating
- Stays self-hosted (data ownership, fits Firefrost philosophy)
- Has a mobile-friendly interface
- Supports assignment, priorities, milestones, and comments
Plane is the open-source, self-hosted answer to Linear: Docker-based, actively developed, with a genuinely good UI, and free forever when self-hosted.
**Decision:** Self-hosted Plane on TX1 Dallas, accessible at tasks.firefrostgaming.com.
---
## System Requirements
Per Plane documentation:
- **Minimum:** 2 CPU cores, 4GB RAM, 20GB storage
- **Recommended for production:** 4+ CPU cores, 16GB RAM
Per the hardware decision above, Plane deploys to TX1 Dallas (38.68.14.26), which has ample headroom (226GB free RAM, 771GB free disk). Plane fits there comfortably.
Verify before deploying:
```bash
free -h
df -h /
docker --version
docker compose version
```
---
## Phase 1: DNS Setup (5 min — Cloudflare)
Add A record in Cloudflare:
```
tasks.firefrostgaming.com A 38.68.14.26 (cf-proxied: false)
```
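Before moving on, it's worth confirming the record has propagated, since certbot in Phase 3 will fail against stale DNS. A small sketch, assuming `dig` is installed:

```bash
# dns_check.sh - confirm the A record resolves to TX1 before continuing
EXPECTED_IP="38.68.14.26"   # TX1 Dallas, per the hardware decision above
dns_check() {
  resolved="$(dig +short "$1" A | head -n 1)"
  if [ "$resolved" = "$EXPECTED_IP" ]; then
    echo "OK: $1 -> $resolved"
  else
    echo "MISMATCH: $1 -> ${resolved:-<none>}"
    return 1
  fi
}
# Usage: dns_check tasks.firefrostgaming.com
```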
---
## Phase 2: Installation (30 min)
### SSH to TX1
```bash
ssh root@38.68.14.26
```
### Create directory
```bash
mkdir -p /opt/plane && cd /opt/plane
```
### Download Plane installer
```bash
curl -fsSL https://raw.githubusercontent.com/makeplane/plane/master/deploy/selfhost/install.sh -o install.sh
chmod +x install.sh
```
### Run installer
```bash
./install.sh
```
When prompted:
- **Action:** 1 (Install)
- **Domain:** `tasks.firefrostgaming.com`
- **Installation type:** Express (use defaults)
### Start Plane
```bash
./install.sh
```
Select **2 (Start)**
### Verify containers running
```bash
docker compose ps
```
Key containers that should be Up:
- plane-web
- plane-api
- plane-worker
- plane-postgres
- plane-redis
- plane-minio (file storage)
- plane-proxy (nginx)
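The checklist above can be scripted so a cron job or a handoff note can verify the stack at a glance. A minimal sketch (the container names follow the list above and may differ slightly between Plane releases):

```bash
# check_plane_containers.sh - confirm every expected Plane service appears
# in `docker compose ps` output, which is read from stdin
REQUIRED="plane-web plane-api plane-worker plane-postgres plane-redis plane-minio plane-proxy"
check_plane_containers() {
  ps_output="$(cat)"          # pipe `docker compose ps` in on stdin
  missing=""
  for name in $REQUIRED; do
    printf '%s\n' "$ps_output" | grep -q "$name" || missing="$missing $name"
  done
  if [ -n "$missing" ]; then
    echo "MISSING:$missing"
    return 1
  fi
  echo "all services up"
}
# Usage: docker compose ps | check_plane_containers
```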
---
## Phase 3: Nginx + SSL (20 min)
### Create Nginx config
The host Nginx needs ports 80/443, so Plane's bundled proxy must listen elsewhere; if the installer bound it to port 80, set `NGINX_PORT=8080` in `plane.env` and restart the stack (this is what the `proxy_pass` below assumes).
```bash
nano /etc/nginx/sites-available/plane
```
```nginx
server {
listen 80;
server_name tasks.firefrostgaming.com;
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
```
### Enable site
```bash
ln -s /etc/nginx/sites-available/plane /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```
### SSL certificate
```bash
certbot --nginx -d tasks.firefrostgaming.com
```
---
## Phase 4: Initial Setup (20 min)
### Access Plane
Go to: **https://tasks.firefrostgaming.com**
### Create admin account
- Email: frostystyle@firefrostgaming.com (or michael@ once Mailcow is live)
- Set strong password — store in Vaultwarden
### Create the Firefrost workspace
- Workspace name: **Firefrost Gaming**
- URL: `firefrost` (will be tasks.firefrostgaming.com/firefrost)
### Create projects
Start with these projects:
| Project | Description | Who uses it |
|---|---|---|
| Infrastructure | Server, deployment, technical tasks | Michael + future devs |
| Community | Discord, social media, moderation tasks | Meg, moderators |
| Content | Ghost posts, server spotlights, announcements | Meg, social media helpers |
| Builds | Holly's builder tasks, world projects | Holly, builders |
| Operations | General ops, cross-team tasks | All staff |
### Create staff accounts
Invite via email once Mailcow is live. For now, create accounts manually:
- gingerfury@firefrostgaming.com — Meg — Member role
- unicorn20089@firefrostgaming.com — Holly — Member role
### Set up labels (global)
Create these labels for consistent tagging:
**Priority:**
- 🔴 Critical
- 🟠 High
- 🟡 Medium
- 🟢 Low
**Type:**
- 🔧 Infrastructure
- 🎨 Content
- 🛡️ Moderation
- 🏗️ Build
- 🐛 Bug
- ✨ Feature
---
## Phase 5: Migrate Current Tasks (1 hour)
Migrate the top priority tasks from tasks.md into Plane. Don't migrate everything at once — start with active tasks only.
**Priority order for migration:**
1. Task #47 — This task (mark complete once deployed)
2. Task #11 — Mailcow (April 1 target — assign to Michael)
3. Task #40 — Holly's Builder rank (assign to Michael + Holly)
4. Task #45 — Server Sunset Evaluation (assign to Michael)
5. Task #28 — Discord reorganization (assign to Meg — she's handling it via Holly)
6. Task #46 — Ghost music player (assign to Michael)
For each task in Plane:
- Title matches tasks.md task name
- Description links to `docs/tasks/[task-name]/README.md` in Gitea
- Assignee set
- Priority set
- Due date if applicable (e.g. Mailcow = April 1, 2026)
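If the migration turns out to be tedious by hand, it can be scripted against Plane's REST API. A hedged sketch: the endpoint path and `X-API-Key` header follow Plane's API docs but should be verified against this self-hosted version, and the workspace slug `firefrost` comes from Phase 4:

```bash
# plane_issue.sh - build the JSON body for Plane's create-issue endpoint
# and POST it (endpoint shape is an assumption; check your /api/v1 docs)
PLANE_URL="https://tasks.firefrostgaming.com"
WORKSPACE="firefrost"

build_issue_json() {
  # $1 = title, $2 = Gitea spec link, $3 = priority (urgent/high/medium/low)
  printf '{"name":"%s","priority":"%s","description_html":"<p>Spec: <a href=\\"%s\\">README</a></p>"}' \
    "$1" "$3" "$2"
}

create_issue() {
  # $1 = project id, $2 = title, $3 = spec link, $4 = priority
  curl -s -X POST \
    -H "X-API-Key: ${PLANE_API_KEY}" \
    -H "Content-Type: application/json" \
    -d "$(build_issue_json "$2" "$3" "$4")" \
    "${PLANE_URL}/api/v1/workspaces/${WORKSPACE}/projects/$1/issues/"
}
```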
---
## Phase 6: Backup Configuration (15 min)
### Manual backup command
```bash
cd /opt/plane && ./install.sh
```
Select **7 (Backup Data)**
### Automate daily backups
```bash
crontab -e
```
Add:
```bash
0 3 * * * cd /opt/plane && ./install.sh backup >> /var/log/plane-backup.log 2>&1
```
Backups run at 3am daily. Confirm that your Plane version's `install.sh` accepts `backup` as a non-interactive argument; if it only offers the interactive menu, wrap the selection in a script. Review logs weekly.
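Daily archives will accumulate; a pruning sketch that keeps the newest 14 (the backup directory path is an assumption, so confirm where your install writes archives, and note it assumes archive names without spaces):

```bash
# prune_plane_backups.sh - keep the newest 14 backup archives, delete the rest
BACKUP_DIR="/opt/plane/backup"   # assumed location; verify on your install
KEEP=14
prune_backups() {
  # ls -1t sorts newest first; everything past entry $KEEP gets removed
  ls -1t "$BACKUP_DIR" | tail -n +"$((KEEP + 1))" | while read -r old; do
    rm -rf "${BACKUP_DIR:?}/${old}"
  done
}
```

This can run from the same cron line after the backup step.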
---
## Phase 7: Uptime Kuma Monitor (5 min)
Add Plane to Uptime Kuma monitoring:
- URL: https://tasks.firefrostgaming.com
- Monitor type: HTTP(s)
- Name: Plane (tasks.firefrostgaming.com)
- Interval: 5 minutes
---
## Post-Deployment
- [ ] tasks.firefrostgaming.com accessible via HTTPS
- [ ] Admin account created, password in Vaultwarden
- [ ] Firefrost workspace created
- [ ] 5 projects created (Infrastructure, Community, Content, Builds, Operations)
- [ ] Meg and Holly accounts created
- [ ] Labels configured
- [ ] Top priority tasks migrated from tasks.md
- [ ] Daily backup cron running
- [ ] Uptime Kuma monitor added
- [ ] DNS record confirmed
- [ ] tasks.md updated with note: "Active tasks tracked in Plane at tasks.firefrostgaming.com"
---
## Staff Onboarding
Once deployed, each staff member needs a quick orientation:
**For Meg and Holly:**
- Log in at tasks.firefrostgaming.com
- Check "My Issues" to see assigned tasks
- Click a task to see details and add comments
- Mark done by changing status to Done
- That's it — they don't need to know anything else to start
**Mobile:** Plane has a mobile web interface that works well. Native apps are in development.
---
## Future Integrations
When ready:
- **Gitea webhook** — link commits to Plane issues via issue ID in commit messages
- **Mailcow** — email notifications for task assignments (configure after Mailcow live)
- **Discord bot** — post task updates to #staff-announcements (future)
---
## Related Documentation
- Plane self-hosting docs: https://developers.plane.so/self-hosting/methods/docker-compose
- Current tasks: `docs/core/tasks.md`
- Staff accounts: `docs/reference/staff-accounts.md` (to be created)
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️
# Task: Pterodactyl Blueprint Asset Build Issue
**Status:** ⚠️ KNOWN ISSUE — monitoring for Blueprint fix
**Discovered:** March 13, 2026 (Chronicler #29)
**Priority:** LOW — panel is fully functional
## Summary
After updating Pterodactyl Panel v1.12.0 → v1.12.1, Blueprint extensions
(modpackinstaller, subdomains) were reinstalled successfully but webpack
asset rebuild fails with 16 errors.
## Root Cause
Blueprint beta-2026-01 has a css-loader version conflict with Pterodactyl
v1.12.1 node dependencies. Extensions built for Blueprint beta-2025-04.
Key errors:
- options.modules.exportLocalsConvention validation failure in css-loader
- xterm-addon-unicode11 module not found
## Current State
- Panel fully functional (serving cached pre-update assets)
- All servers manageable via panel
- Wings unaffected on TX1 and NC1
- Blueprint extensions registered: modpackinstaller, subdomains
- yarn build:production fails with 16 webpack errors
- PteroStats wiped by update — needs reinstall from BuiltByBit
## Resolution Path
1. Wait for Blueprint beta-2026-02+ with css-loader fix
2. OR manually patch node_modules css-loader exportLocalsConvention
3. After build works, reinstall PteroStats (paid BuiltByBit resource)
## Do NOT
- Do not run yarn build:production until Blueprint is updated
- Do not run blueprint -install on new extensions until fixed
- Panel works fine as-is
# Pterodactyl Modpack Version Display
**Status:** Ready
**Priority:** Tier 3 - Quality of Life
**Time:** 1-2 hours
**Last Updated:** 2026-02-18
**Created By:** The Chronicler
---
## Overview
Add a custom "Modpack Version" field to Pterodactyl panel that displays the current modpack version for each Minecraft server. Makes version tracking visible at a glance without digging through server files.
**Problem:** Currently no easy way to see which modpack version is running on each server
**Solution:** Custom egg variable that displays in Startup tab
---
## Benefits
- **Version visibility** - See modpack version at a glance
- **Change tracking** - Know when updates were applied
- **Troubleshooting** - Quickly identify version mismatches
- **Documentation** - Server state clearly visible in panel
- **Staff clarity** - Everyone knows what's running
---
## Scope
**Affected Servers:**
- All The Mons (Cobblemon)
- Stoneblock 4
- Society: Sunlit Valley
- Reclamation
- The Ember Project
- Minecolonies: Create and Conquer
- All The Mods 10
- Homestead
- EMC Subterra Tech
**Not applicable:**
- Vanilla 1.21.11 (no modpack)
- Hytale (different game)
- FoundryVTT (not Minecraft)
---
## Implementation Methods
### Method 1: Egg Variable (Recommended)
**Pros:**
- Clean UI integration
- Per-server editable
- No code changes needed
- Visible in Startup tab
**Cons:**
- Manual entry (not auto-detected)
- Must update when modpack updates
**Complexity:** Low
---
### Method 2: Custom Script Parser
**Pros:**
- Auto-detects from modpack manifest
- No manual updates needed
- Always accurate
**Cons:**
- Requires custom Pterodactyl extension
- More complex to maintain
- Breaks on Pterodactyl updates
**Complexity:** High
---
## Recommended Approach: Egg Variable
**Create custom variable in Minecraft egg:**
### Variable Details
**Name:** Modpack Version
**Description:** Current version of the modpack running on this server
**Environment Variable:** `MODPACK_VERSION`
**Default Value:** `Not Set`
**User Viewable:** Yes
**User Editable:** Yes (allows updates)
**Rules:** None (free text)
---
## Implementation Steps
### Phase 1: Add Variable to Egg (15 minutes)
**Location:** Pterodactyl Admin Panel → Nests → Minecraft → [Egg]
1. Log into Pterodactyl admin panel
2. Navigate to **Nests** → **Minecraft**
3. Click on the egg used for modded servers (likely "Forge" or "NeoForge")
4. Go to **Variables** tab
5. Click **New Variable**
**Configuration:**
```
Name: Modpack Version
Description: Current version of the modpack (e.g., "1.2.3" or "2026-02-18")
Environment Variable: MODPACK_VERSION
Default Value: Not Set
Display Order: 10 (or any position you prefer)
Permissions:
☑ User Viewable
☑ User Editable
☐ Hidden from users
```
6. Click **Save**
7. The variable now appears in Startup tab for all servers using this egg
---
### Phase 2: Populate Current Versions (30 minutes)
**For each modpack server:**
1. Research current modpack version
- Check modpack manifest file
- Check CurseForge/Modrinth page
- Check server files for version.json
2. Go to server in Pterodactyl panel
3. Click **Startup** tab
4. Find "Modpack Version" field
5. Enter current version (e.g., "1.5.2" or "2026-02-18")
6. Click **Save**
---
### Phase 3: Document Process (15 minutes)
**Create update procedure:**
When updating a modpack:
1. Update modpack files
2. Test server restart
3. Update "Modpack Version" field in Pterodactyl
4. Document in changelog
**Location:** `docs/procedures/modpack-update-procedure.md`
---
## Alternative: Display in Server Name
**Quick hack (no code needed):**
Include version in server display name:
**Before:** "All The Mons"
**After:** "All The Mons [v1.5.2]"
**Pros:**
- Instantly visible in server list
- No panel modifications needed
- Works immediately
**Cons:**
- Clutters server name
- Must manually update name field
- Looks less professional
---
## Version Format Standards
**Recommended formats:**
### Semantic Versioning
- Format: `MAJOR.MINOR.PATCH`
- Example: `1.5.2`, `2.0.1`
- Best for: Official modpack releases
### Date-Based
- Format: `YYYY-MM-DD`
- Example: `2026-02-18`
- Best for: Custom/internal modpacks
### Hybrid
- Format: `VERSION (DATE)`
- Example: `1.5.2 (2026-02-18)`
- Best for: Tracking both
**Pick ONE format and use consistently across all servers.**
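A tiny guard function makes the chosen format enforceable in scripts rather than by convention; a sketch covering both candidate formats:

```bash
# validate_version.sh - enforce whichever format you standardize on
is_semver() {
  # MAJOR.MINOR.PATCH, e.g. 1.5.2
  printf '%s\n' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}
is_datever() {
  # YYYY-MM-DD, e.g. 2026-02-18
  printf '%s\n' "$1" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}$'
}
```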
---
## Automation Opportunities (Future)
### Auto-Detection Script
**Concept:** Script that reads modpack version from manifest and updates Pterodactyl via API
```bash
#!/bin/bash
# Auto-update modpack version in Pterodactyl (sketch; verify against your
# panel's application API documentation before relying on it)
SERVER_UUID="668a5220-7e72-4379-9165-bdbb84bc9806"  # volume paths use the UUID
MANIFEST="/var/lib/pterodactyl/volumes/${SERVER_UUID}/manifest.json"

# Extract version from the CurseForge-style manifest
VERSION=$(jq -r '.version' "$MANIFEST")

# Update via the Pterodactyl application API. Caveats:
# - the documented method for this endpoint is PATCH, not PUT
# - it takes the server's internal application ID, not the volume UUID
# - it expects the full startup object (startup, egg, image, environment),
#   so fetch the current values and merge before sending one variable
SERVER_ID="1"  # internal application ID (placeholder)
curl -X PATCH "https://panel.firefrostgaming.com/api/application/servers/${SERVER_ID}/startup" \
  -H "Authorization: Bearer ${PTERO_API_KEY}" \
  -H "Content-Type: application/json" \
  -d "{\"environment\": {\"MODPACK_VERSION\": \"${VERSION}\"}}"
```
**Status:** Future enhancement (not needed for MVP)
---
## Success Criteria
- ✅ Egg variable created in Pterodactyl
- ✅ Variable visible in Startup tab for all modded servers
- ✅ Current versions documented for all 9 modpack servers
- ✅ Update procedure documented
- ✅ Staff trained on updating field when modpacks update
---
## Rollback
**If implementation causes issues:**
1. Delete custom variable from egg
2. Field disappears from all servers
3. No data loss (versions can be recorded elsewhere)
**Risk:** None (purely cosmetic addition)
---
## Related Tasks
- **Task #23:** Game Server Startup Script Audit (same servers affected)
- **Future:** Automated version detection system
- **Future:** Changelog integration (link version to release notes)
---
## Notes
- This is a **tracking feature**, not a functional requirement
- Servers will run fine without it
- Primary benefit is **operational clarity**
- Consider extending to other metadata fields:
- Minecraft version
- Java version
- Last updated date
- Responsible admin
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️
**Version tracking = Professional operations**
# Scope Document Corrections
**Status:** Ready
**Priority:** Tier 3 - Documentation
**Time:** 30 minutes
**Last Updated:** 2026-02-16
## Overview
Review and correct scope documents for accuracy. Update project boundaries, timelines, deliverables.
## Documents to Review
- Project scope definitions
- Timeline estimates
- Deliverable lists
- Dependency mapping
## Success Criteria
- ✅ Scope docs accurate
- ✅ Timelines realistic
- ✅ Deliverables clear
**Fire + Frost + Foundation** 💙🔥❄️
# Project Scope Corrections - February 17, 2026
**Purpose:** Document updates needed for project-scope.md
**Current Scope Version:** 2.3 (Last updated Feb 12, 2026)
**Corrections Date:** February 17, 2026
**Status:** PENDING APPLICATION
---
## Critical Updates Needed
### 1. Current Date and Status
**Incorrect (in scope doc):**
> **Created:** February 9, 2026
> **Last Updated:** February 12, 2026 (9:00 AM CST)
> **Next Review:** March 1, 2026
**Correction:**
- Last Updated: February 17, 2026
- Status: Multiple major projects completed since Feb 12
- Next Review: March 1, 2026 (still valid)
---
### 2. Management Services Deployed
**Add to deployed services:**
**✅ Service 9: Whitelist Manager** (whitelist.firefrostgaming.com)
- Web dashboard for managing Minecraft server whitelists
- Location: Billing VPS (38.68.14.188)
- Status: DEPLOYMENT PACKAGE COMPLETE, awaiting SSH access
- Impact: 96.7% time reduction in whitelist management (15 min → 30 sec)
- Deployed: Pending SSH access (ready Feb 17, 2026)
**Update Vaultwarden status:**
- Currently listed as "⏳ Service 9"
- Should remain "⏳ Service 10" (pending)
- Needs: SSH key + organization setup for shared credentials with Meg
---
### 3. Infrastructure Documentation Status
**Add to completed infrastructure work:**
**✅ The Frostwall Protocol** - Complete planning and documentation
- Deployment plan complete (500+ lines, 7 phases)
- IP hierarchy fully documented (3-tier architecture)
- Troubleshooting guide created
- Status: READY FOR IMPLEMENTATION (3-4 hours when SSH available)
- Completed: February 17, 2026
- Unblocks: Mailcow deployment, AI stack deployment, all Tier 2+ infrastructure
---
### 4. Documentation Additions
**Add to documentation list:**
**Staff & Operations:**
- docs/tasks/staff-recruitment-launch/ - Complete recruitment workflow
- Prerequisites guide (incentive instances, application process)
- Application tracker template with scoring system
- Onboarding checklist (day-by-day workflow)
- Status: Documentation complete, awaiting decisions and SSH
**Infrastructure:**
- docs/tasks/frostwall-protocol/deployment-plan.md - GRE tunnel implementation
- docs/tasks/frostwall-protocol/ip-hierarchy.md - 3-tier IP architecture reference
- docs/tasks/frostwall-protocol/troubleshooting.md - Problem resolution guide
- docs/tasks/whitelist-manager/ - Complete deployment package
**Reference:**
- docs/reference/PROJECT-INSTRUCTIONS.md - Claude.ai project settings
- docs/reference/terminology-guide.md - Frostwall vs Firefrost clarification
---
### 5. Timeline Updates
**Week 2 (Feb 12-15) - UPDATE STATUS:**
Originally planned:
- [ ] Vaultwarden deployment
- [ ] Mailcow email server (pending Breezehost VPS)
- [ ] Migrate off Plesk for email
- [ ] Standardize photo naming convention
- [ ] Process remaining 30-40 consultant photos
- [ ] Clean up Command Center root directory
**Actual status as of Feb 17:**
- [ ] Vaultwarden - Still pending SSH access
- [ ] Mailcow - Blocked by Frostwall Protocol (now unblocked, ready to deploy)
- [ ] Email migration - Pending Mailcow deployment
- [ ] Photo processing - Not started
- [ ] Command Center cleanup - Pending SSH access
- ✅ Whitelist Manager - Deployment package complete (major unplanned win)
- ✅ Staff Recruitment - Complete documentation (major unplanned win)
- ✅ Frostwall Protocol - Complete planning (critical path unblocked)
**Week 3 (Feb 16-17) - ACTUAL WORK COMPLETED:**
- ✅ Whitelist Manager deployment package created
- ✅ Staff Recruitment documentation complete
- ✅ Frostwall Protocol fully planned and documented
- ✅ Terminology guide created (Frostwall vs Firefrost)
- ✅ Project Instructions for Claude.ai created
- ⏳ All deployment work blocked by SSH access
---
### 6. Technical Debt & Priorities
**Update Immediate Priorities:**
Original list (Feb 12):
1. Vaultwarden deployment
2. Mailcow email server VPS
3. Migrate email off Plesk
4. Process remaining consultant photos
5. Clean up Command Center root directory
**Revised priorities (Feb 17):**
1. **Deploy Whitelist Manager** (when SSH available) - 30-45 min
2. **Deploy Frostwall Protocol** (when SSH available) - 3-4 hours
- CRITICAL: Unblocks Mailcow and AI stack
3. **Vaultwarden setup** (when SSH available) - 30 min
4. **Command Center cleanup** (when SSH available) - 15 min
5. **Staff Recruitment decisions** - Michael to decide on prerequisites
6. Mailcow deployment (after Frostwall) - 2-3 hours
7. Process consultant photos - Lower priority
---
### 7. Task Progress Statistics
**Add current task status:**
**Tier 0 - Immediate Wins:**
- ✅ NC1 Cleanup - COMPLETE (Feb 16)
- ✅ Whitelist Manager - Deployment package complete (Feb 17, pending SSH)
- ⏳ Command Center Cleanup - Pending SSH
- ✅ Staff Recruitment - Documentation complete (Feb 17, awaiting decisions)
**Tier 1 - Security Foundation:**
- ⏳ Vaultwarden - Pending SSH
- ✅ Frostwall Protocol - Planning complete (Feb 17, ready for 3-4hr implementation)
- ⏳ Command Center Security Hardening - Pending SSH
- ⏳ Scoped Gitea Token - Depends on Vaultwarden
**Work completed without SSH access:**
- 4 major tasks advanced
- 18 files created
- ~4,500 lines of documentation
- 9 Git commits
- All following FFG-STD-001 and FFG-STD-002
---
### 8. Success Metrics - February Progress
**Technical Metrics (to add):**
- **Documentation Completion:** 100% for 4 major tasks
- **Deployment Readiness:** 3 tasks deployment-ready (Whitelist Manager, Frostwall, Staff Recruitment)
- **Time Efficiency Gain (Whitelist Manager):** 96.7% reduction (15 min → 30 sec)
- **Critical Path Unblocked:** Frostwall documentation complete, unblocks Tier 2 infrastructure
---
### 9. Subscription Model - No Changes
Current subscription model is accurate. No corrections needed.
---
### 10. DDoS Protection System Update
**Current status in scope:**
> **Status:** Planning Phase
> **Priority:** Deploy after management services complete, before soft launch
**Correction:**
- Status: PLANNING COMPLETE (Feb 17, 2026)
- The Frostwall Protocol fully documented
- Ready for implementation when SSH available
- Implementation time: 3-4 hours
- Includes: GRE tunnels, Iron Wall firewall, self-healing monitoring
**Replace "Options Under Consideration" with:**
- **Selected Option:** The Frostwall Protocol (GRE tunnel hub-and-spoke)
- Command Center as hub/scrubbing center
- GRE tunnels to TX1 (Dallas) and NC1 (Charlotte)
- Three-tier IP hierarchy (Scrubbing → Backend → Binding)
- Self-healing tunnel monitoring
- Complete documentation in `docs/tasks/frostwall-protocol/`
---
### 11. Branding & Visual Identity - No Changes
Current branding information is accurate. Fire/Frost color palette correct.
---
### 12. Critical Constraints - Add SSH Access
**Add to Critical Constraints:**
**SSH Access Limitation (February 2026):**
- Current network restrictions prevent SSH access to servers
- All deployment work blocked until SSH available
- Workaround: Complete comprehensive documentation and deployment packages
- Impact: 3+ tasks deployment-ready but cannot execute
- Mitigation: Work continues on planning, documentation, and design tasks
---
## Summary of Required Changes
**Sections requiring updates:**
1. ✅ Current date and status
2. ✅ Management services list (add Whitelist Manager)
3. ✅ Infrastructure documentation status (add Frostwall Protocol)
4. ✅ Documentation list (add new docs)
5. ✅ Timeline Week 2 and 3 (actual vs planned)
6. ✅ Technical debt priorities (reorder based on current state)
7. ✅ Task progress statistics (add Tier 0/1 status)
8. ✅ Success metrics (add Feb 17 progress)
9. ✅ DDoS protection system (update to "Planning Complete")
10. ✅ Critical constraints (add SSH access limitation)
**Sections that are still accurate:**
- Executive Summary (needs version bump only)
- Core Philosophy (unchanged)
- Server Inventory (unchanged)
- Game Servers (unchanged)
- Three-Tier Documentation Architecture (unchanged)
- Subscription Model (unchanged)
- Authentication Strategy (unchanged)
- Branding & Visual Identity (unchanged)
- Relationship with Breezehost (unchanged)
---
## Recommended Next Steps
1. **Update project-scope.md** with corrections above
2. **Bump version:** 2.3 → 2.4
3. **Add revision entry:**
```
| 2.4 | 2026-02-17 | Major progress update. Whitelist Manager deployment package complete. Frostwall Protocol fully planned and documented. Staff Recruitment documentation complete. 3 tasks deployment-ready. SSH access limitation added to constraints. Timeline updated with actual Feb 16-17 work. Priorities reordered. |
```
4. **Commit with:** `docs: Update project scope to v2.4 - February 17 progress`
---
## Files Needing Cross-Reference Updates
When scope is updated, also update:
- `docs/core/tasks.md` - Mark completed tasks
- `docs/core/infrastructure-manifest.md` - Add Whitelist Manager when deployed
- `SESSION-HANDOFF-PROTOCOL.md` - Update current status
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️
---
**Document Status:** CORRECTION GUIDE
**Apply To:** project-scope.md v2.3
**Result:** project-scope.md v2.4
**Priority:** Medium (scope doc should reflect reality)
# Scoped Gitea Token for Pokerole Project
**Status:** Ready
**Priority:** Tier 1 - Security
**Time:** 15 minutes
**Depends:** Vaultwarden operational
**Last Updated:** 2026-02-16
## Overview
Create scoped Gitea API token limited to Pokerole repos only. Replace master token with defense-in-depth boundary enforcement.
## Problem
Pokerole project currently uses master Gitea token with "honor system" scoping. Iron Wall principle: enforce technically, not socially.
## Actions
1. Create new Gitea token scoped to 4 Pokerole repos only
2. Store in Vaultwarden
3. Update `pokerole-project/misc-docs/SESSION-START-PROMPT.md`
4. Test Claudius access (Pokerole repos ONLY)
5. Remove master token reference
## Success Criteria
- ✅ Scoped token created and stored
- ✅ Claudius isolated from Firefrost infrastructure
- ✅ Defense in depth enforced
**Fire + Frost + Foundation** 💙🔥❄️
# Scoped Gitea Token for Pokerole Project - Deployment Guide
**Status:** Ready
**Priority:** Tier 1 - Security Foundation
**Time Estimate:** 15 minutes
**Depends:** Vaultwarden operational
**Last Updated:** 2026-02-17
---
## Overview
Create a scoped Gitea API token limited exclusively to the Pokerole project repositories. This replaces the current master token with proper Iron Wall boundary enforcement - technical restrictions instead of "honor system" trust.
**Iron Wall Principle:** Enforce boundaries technically, not socially.
---
## The Problem
**Current State:**
- Pokerole project (Claudius) uses master Gitea token
- Token has full access to ALL repositories including Firefrost infrastructure
- Relies on "honor system" for Claudius not to access Firefrost repos
- Single compromise exposes entire infrastructure
**Risk:**
- Accidental access to Firefrost operations manual
- Potential cross-contamination of projects
- No defense-in-depth isolation
- Violates least-privilege principle
**The Goal:**
- Claudius gets ONLY what's needed: 4 Pokerole repos
- Zero access to Firefrost infrastructure
- Technical enforcement via Gitea's scoped token permissions
- Proper isolation between projects
---
## Pokerole Repositories
**Repos that need access (4 total):**
1. `pokerole-project` (main project repo)
2. `pokerole-data` (data files and assets)
3. `pokerole-tools` (tooling and scripts)
4. `pokerole-docs` (documentation)
**Everything else (including `firefrost-operations-manual`) should be inaccessible.**
---
## Prerequisites
- [ ] Vaultwarden operational and accessible
- [ ] Gitea admin access (https://git.firefrostgaming.com)
- [ ] Knowledge of which Pokerole repos exist
- [ ] Access to update Pokerole session start prompt
---
## Implementation Steps
### Step 1: Log into Gitea
```
URL: https://git.firefrostgaming.com
Username: [your admin username]
Password: [from Vaultwarden or password manager]
```
### Step 2: Navigate to Access Tokens
1. Click your profile icon (top right)
2. Click **"Settings"**
3. Click **"Applications"** tab
4. Scroll to **"Manage Access Tokens"** section
### Step 3: Create New Scoped Token
**Token Settings:**
**Token Name:** `Pokerole Project - Claudius (Scoped)`
**Select Scopes:**
**Repository Scopes (ONLY these):**
- [x] Repository read access (`read:repository` in Gitea 1.19+)
- [x] Repository write access (`write:repository`) for commits and PRs
**Organization Scopes:**
- [ ] None (uncheck all)
**User Scopes:**
- [ ] None (uncheck all)
**Miscellaneous:**
- [ ] None (uncheck all)
**Important:** Do NOT select:
- Admin scopes
- Organization management
- User management
- Notification scopes
- Package scopes
### Step 4: Specify Repository Access
**If Gitea supports per-repo scoping:**
Select ONLY these repositories:
- [x] pokerole-project
- [x] pokerole-data
- [x] pokerole-tools
- [x] pokerole-docs
**If Gitea doesn't support per-repo scoping:**
Note: You may need to create an organization called "Pokerole" and move these repos into it, then scope the token to that organization only.
Alternative: Use Gitea's repository access controls to ensure the token owner only has access to Pokerole repos.
### Step 5: Generate and Copy Token
1. Click **"Generate Token"**
2. **CRITICAL:** Copy the token immediately - it only shows ONCE
3. Token will look like: `f8c6e2b4a1d9e7f3c5a8b2d6e9f1c4a7b3d5e8f2`
**Do NOT close the page until token is safely stored!**
### Step 6: Store Token in Vaultwarden
1. Log into Vaultwarden: `https://[vaultwarden-url]`
2. Create new item:
- **Type:** Secure Note or Login
- **Name:** `Gitea - Pokerole Scoped Token (Claudius)`
- **Folder:** API Keys or Credentials
- **Notes/Password:** Paste the token
- **Custom Fields:**
- `Scope`: Pokerole repos only (read/write)
- `Created`: 2026-02-17
- `Purpose`: Claudius access to Pokerole project
- `Repositories`: pokerole-project, pokerole-data, pokerole-tools, pokerole-docs
3. **Add to Organization** (if sharing with Meg):
- Click "Share" or "Collections"
- Add to shared Firefrost organization
- Grant read access to Meg
4. Save the item
### Step 7: Test Token Access
**Test 1: Verify access to Pokerole repos**
```bash
# Try to list Pokerole project repo
curl -H "Authorization: token NEW_SCOPED_TOKEN" \
https://git.firefrostgaming.com/api/v1/repos/[username]/pokerole-project
# Should return repo details (success)
```
**Test 2: Verify NO access to Firefrost repos**
```bash
# Try to access Firefrost operations manual
curl -H "Authorization: token NEW_SCOPED_TOKEN" \
https://git.firefrostgaming.com/api/v1/repos/firefrost-gaming/firefrost-operations-manual
# Should return 404 or 403 error (success - no access)
```
**Test 3: Verify write access to Pokerole**
```bash
# Try to create a test issue in Pokerole project
curl -X POST \
-H "Authorization: token NEW_SCOPED_TOKEN" \
-H "Content-Type: application/json" \
-d '{"title":"Test scoped token","body":"Testing write access"}' \
https://git.firefrostgaming.com/api/v1/repos/[username]/pokerole-project/issues
# Should create issue successfully
# Then delete the test issue via Gitea web interface
```
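The three probes can be reduced to one pass/fail helper by capturing status codes with `curl -w '%{http_code}'`; a sketch with no network access of its own:

```bash
# token_scope_check.sh - interpret the HTTP status codes from the probes above
check_scope() {
  pokerole="$1"; firefrost="$2"
  # Scoping is correct when the Pokerole repo is readable (200) and the
  # Firefrost repo is invisible (404) or forbidden (403)
  if [ "$pokerole" = "200" ] && { [ "$firefrost" = "404" ] || [ "$firefrost" = "403" ]; }; then
    echo "SCOPED OK"
  else
    echo "SCOPING FAILED (pokerole=$pokerole, firefrost=$firefrost)"
    return 1
  fi
}
# Usage:
#   p=$(curl -s -o /dev/null -w '%{http_code}' -H "Authorization: token $TOKEN" \
#     https://git.firefrostgaming.com/api/v1/repos/[username]/pokerole-project)
#   f=$(curl -s -o /dev/null -w '%{http_code}' -H "Authorization: token $TOKEN" \
#     https://git.firefrostgaming.com/api/v1/repos/firefrost-gaming/firefrost-operations-manual)
#   check_scope "$p" "$f"
```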
### Step 8: Update Pokerole Session Start Prompt
1. Navigate to Pokerole project repository
2. Open: `misc-docs/SESSION-START-PROMPT.md` (or wherever session instructions are)
3. Find the section with the Gitea token
4. Replace the old master token with the new scoped token
5. Update any documentation that references the token
6. Add a note:
```markdown
**Gitea API Token (Scoped)**
This token provides access ONLY to Pokerole repositories:
- pokerole-project
- pokerole-data
- pokerole-tools
- pokerole-docs
This token does NOT have access to Firefrost Gaming infrastructure.
```
7. Commit the changes
### Step 9: Verify Claudius Can Access Pokerole
**Next time Claudius (Pokerole Claude instance) starts:**
1. Claudius should use the new scoped token
2. Verify Claudius can:
- Clone Pokerole repos
- Read files
- Make commits
- Push changes
3. Verify Claudius CANNOT:
- Access Firefrost repos
- List other organizations
- Access admin functions
### Step 10: Revoke Old Master Token (Optional but Recommended)
**If you're confident the scoped token works:**
1. Return to Gitea → Settings → Applications
2. Find the old master token that Pokerole was using
3. Click **"Delete"** or **"Revoke"**
4. Confirm deletion
**⚠️ WARNING:** Only do this AFTER confirming scoped token works for Pokerole!
---
## Verification Checklist
After implementation, verify:
- [ ] Scoped token created in Gitea
- [ ] Token scoped to repository read/write ONLY
- [ ] Token stored in Vaultwarden
- [ ] Token tested: CAN access Pokerole repos
- [ ] Token tested: CANNOT access Firefrost repos
- [ ] Pokerole session prompt updated with new token
- [ ] Claudius tested with new token (successful)
- [ ] Old master token revoked (optional)
- [ ] Documentation updated
---
## Security Benefits
**Before (Master Token):**
- ❌ Full access to all repos
- ❌ Can access Firefrost infrastructure
- ❌ Can modify any organization
- ❌ Single point of compromise
- ❌ Relies on "honor system"
**After (Scoped Token):**
- ✅ Access limited to 4 Pokerole repos only
- ✅ Cannot see or access Firefrost infrastructure
- ✅ Cannot modify organizations or settings
- ✅ Compromise isolated to Pokerole only
- ✅ Technical enforcement (Iron Wall principle)
---
## Troubleshooting
### Token doesn't work for Pokerole repos
**Check:**
- Token has `repo:read` and `repo:write` scopes
- Token is correctly scoped to the right repos/organization
- Token was copied correctly (no extra spaces)
- Token hasn't been revoked
**Solution:**
```bash
# Test token directly
curl -H "Authorization: token YOUR_TOKEN" \
https://git.firefrostgaming.com/api/v1/user
# Should return user info if token is valid
```
### Token can still access Firefrost repos
**This means scoping failed. Check:**
- Did you select specific repos or just general scopes?
- Is the token user an admin? (Admin tokens may bypass scoping)
- Did you test with the correct token?
**Solution:**
- Revoke the token
- Create a new one with stricter scoping
- Consider creating a separate "pokerole-bot" user account with limited permissions
### Gitea doesn't support per-repo scoping
**Alternative approaches:**
**Option 1: Organization-based scoping**
1. Create "Pokerole" organization in Gitea
2. Move 4 Pokerole repos into that organization
3. Create token scoped to that organization only
**Option 2: Separate user account**
1. Create new Gitea user: `pokerole-bot`
2. Grant `pokerole-bot` access ONLY to Pokerole repos
3. Generate token for `pokerole-bot` user
4. Use that token in Pokerole project
### Claudius loses access after token change
**Check:**
- Is the new token in the session start prompt?
- Did Claudius try to use the old token?
- Is there a caching issue?
**Solution:**
- Verify token in session start docs is the new scoped token
- Clear any cached credentials
- Test token manually before giving to Claudius
---
## Maintenance
**Token Rotation:**
- Recommended: Rotate scoped tokens every 6-12 months
- Process: Create new scoped token, update docs, revoke old token
**Audit Access:**
- Periodically check Gitea access logs
- Verify Claudius is only accessing Pokerole repos
- Look for any unexpected access patterns
**Review Scopes:**
- If Pokerole's needs change (new repos, different permissions), update the token scopes
- Re-test after any scope changes
---
## Rollback Plan
If scoped token causes issues:
1. Keep the old master token reference available (don't delete immediately)
2. Test scoped token thoroughly before revoking old token
3. If problems arise:
   - Update session prompt back to old token
   - Fix scoped token issues
   - Re-test before switching again
---
## Future Enhancements
**Potential improvements:**
- Separate Gitea user accounts per project
- Token expiration and auto-rotation
- Audit logging of token usage
- Fine-grained permissions per repository
- Webhook-based access monitoring
---
## Related Tasks
- **Vaultwarden Setup** (prerequisite) - Must be operational
- **Command Center Security Hardening** - Part of overall security posture
- **Department Structure & Access Control** - Similar scoping for Wiki.js
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️
---
**Document Status:** COMPLETE
**Ready for Implementation:** When Vaultwarden is operational
**Estimated Time:** 15 minutes
**Risk Level:** LOW (non-destructive, easy rollback)
# Task #86: Whitelist Manager - Panel v1.12.1 API Compatibility Fix
**Status:** IDENTIFIED - Ready to fix
**Owner:** Michael "Frostystyle" Krause
**Priority:** Tier 3 - Enhancement (workaround exists)
**Created:** March 30, 2026
**Time Estimate:** 1-2 hours
---
## ⚠️ CRITICAL REMINDER
**ALWAYS check Whitelist Manager after Panel or Wings updates!**
Pterodactyl API format can change between versions. When you update Panel or Wings, immediately verify Whitelist Manager is still functioning correctly.
**Quick check after updates:**
1. Visit whitelist.firefrostgaming.com
2. Verify server statuses show correctly (not all "UNKNOWN")
3. Test add/remove player on one server
4. Check Recent Activity log
If broken, refer to this task for fix procedure.
---
## Problem Statement
**What's Broken:**
- Whitelist Manager dashboard at whitelist.firefrostgaming.com
- All servers showing "UNKNOWN" status
- Status badges not displaying (WHITELISTED/PUBLIC/OFF)
- Server grouping incorrect (wrong counts, unknown servers)
**Root Cause:**
- Whitelist Manager built against Pterodactyl Panel v1.11.x API (February 2026)
- Panel upgraded to v1.12.1 on March 13, 2026
- API response format changed between versions
- Python code still parsing old v1.11.x format
- Status detection code failing silently
**Impact:**
- ⚠️ Status detection broken (cosmetic)
- ✅ Core functions likely still work (add/remove player)
- ✅ Workaround available (use Panel console directly)
---
## Screenshots
**Current broken state (March 30, 2026):**
- All servers: "UNKNOWN" status
- TX1 shows "5 servers" but lists more
- NC1 shows "2 servers" but has more
- "Unknown (4 servers)" group appeared
- Recent Activity still logging (service running)
---
## Technical Details
### What Changed in Panel v1.12.1
**API response structure changed.**
**v1.11.x format (what Whitelist Manager expects):**
```json
{
"attributes": {
"name": "Stoneblock 4 - TX",
"feature_limits": {
"whitelist": true
},
"environment": {
"WHITELIST_ENABLED": "true"
}
}
}
```
**v1.12.1 format (what Panel now returns - SUSPECTED):**
```json
{
"attributes": {
"name": "Stoneblock 4 - TX",
"limits": {
"whitelist_enabled": true
},
"container": {
"environment": {
"WHITELIST_ENABLED": "true"
}
}
}
}
```
**Key changes:**
- `feature_limits` → `limits`
- `whitelist` → `whitelist_enabled`
- `environment` moved under `container`
- Possible: field names changed (camelCase vs snake_case)
### Where the Code Is Broken
**Location:** Billing VPS (38.68.14.188)
**Service:** whitelist-manager (systemd)
**Code location:** `/opt/whitelist-manager/` or `/var/www/whitelist-manager/` (verify)
**Files likely affected:**
1. Main Flask app (app.py, whitelist.py, or main.py)
2. Pterodactyl API integration module
3. Server status detection function
4. Server grouping logic
**Specific functions to check:**
- `get_server_status(server)` - status detection
- `parse_server_response(data)` - API parsing
- `group_servers_by_node(servers)` - grouping logic
---
## Fix Procedure
### Step 1: Access Billing VPS
```bash
ssh architect@38.68.14.188
```
### Step 2: Locate Whitelist Manager Code
```bash
# Common locations
ls -la /opt/whitelist-manager/
ls -la /var/www/whitelist-manager/
ls -la /home/architect/whitelist-manager/
# Or find it
sudo find / -name "whitelist*" -type d 2>/dev/null | grep -v node_modules
```
### Step 3: Check Service Logs
```bash
# View recent errors
sudo journalctl -u whitelist-manager -n 100 --no-pager
# Filter for API errors
sudo journalctl -u whitelist-manager | grep -i "error\|fail\|api" | tail -50
# Look for KeyError or AttributeError (common in dict parsing)
sudo journalctl -u whitelist-manager | grep -i "KeyError\|AttributeError" | tail -20
```
### Step 4: Test Pterodactyl API Manually
```bash
# Get Panel API key from environment or config
sudo cat /opt/whitelist-manager/.env | grep PTERODACTYL_API_KEY
# Or
sudo systemctl show whitelist-manager | grep API_KEY
# Test API endpoint
curl -H "Authorization: Bearer YOUR_API_KEY_HERE" \
https://panel.firefrostgaming.com/api/application/servers \
| python3 -m json.tool > /tmp/panel-api-response.json
# Review the response structure
less /tmp/panel-api-response.json
```
**Look for:**
- Where server status/whitelist info is located
- Field names (whitelist vs whitelist_enabled)
- Structure changes (nested objects)
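Once the response is saved, the two layouts can be told apart mechanically. This sketch checks only the suspected formats shown earlier in this document, not a confirmed Panel schema:

```python
import json

def detect_api_format(server: dict) -> str:
    """Classify one server object from the Panel API by which layout it matches."""
    attrs = server.get("attributes", {})
    if "whitelist_enabled" in attrs.get("limits", {}):
        return "v1.12.x (suspected)"
    if "whitelist" in attrs.get("feature_limits", {}):
        return "v1.11.x"
    return "unknown"

# Run it over the saved response:
# servers = json.load(open("/tmp/panel-api-response.json"))["data"]
# print({detect_api_format(s) for s in servers})
```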
### Step 5: Update Python Code
**Navigate to code directory:**
```bash
cd /opt/whitelist-manager # or wherever it's located
source venv/bin/activate # activate virtual environment
```
**Find status detection code:**
```bash
grep -r "feature_limits" .
grep -r "get_server_status" .
grep -r "WHITELISTED" .
```
**Example fix (BEFORE):**
```python
def get_server_status(server):
"""Detect if server has whitelist enabled"""
try:
if server['attributes']['feature_limits']['whitelist']:
return "WHITELISTED"
return "PUBLIC"
except KeyError:
return "UNKNOWN"
```
**Example fix (AFTER):**
```python
def get_server_status(server):
"""Detect if server has whitelist enabled - Panel v1.12.1 compatible"""
try:
attrs = server.get('attributes', {})
# Try v1.12.x format first
limits = attrs.get('limits', {})
if limits.get('whitelist_enabled', False):
return "WHITELISTED"
# Fallback to v1.11.x format for compatibility
feature_limits = attrs.get('feature_limits', {})
if feature_limits.get('whitelist', False):
return "WHITELISTED"
return "PUBLIC"
except Exception as e:
print(f"Error detecting server status: {e}")
return "UNKNOWN"
```
**Key changes:**
- Use `.get()` instead of direct dict access (safer)
- Try new format first, fallback to old
- Better error handling
- Log errors for debugging
### Step 6: Update Server Grouping (If Broken)
**Find grouping code:**
```bash
grep -r "group.*server" .
grep -r "TX1\|NC1" .
```
**Check for:**
- Node detection logic (how it determines TX1 vs NC1)
- Possibly using server IP or name patterns
- May need to update if Panel changed how node info is returned
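If the grouping is node-based, a dual-format lookup along the same lines as the status fix can work. The node-ID mapping and nested fallback path below are assumptions to verify against the actual v1.12.1 response before use:

```python
from collections import defaultdict

NODE_NAMES = {1: "TX1", 2: "NC1"}  # hypothetical node IDs - confirm against your Panel

def group_servers_by_node(servers: list) -> dict:
    """Group server names by node, trying a top-level `node` id first, then a nested one."""
    groups = defaultdict(list)
    for server in servers:
        attrs = server.get("attributes", {})
        node_id = attrs.get("node")
        if node_id is None:
            # Fall back to a possibly nested location (assumed, unverified)
            node_id = attrs.get("relationships", {}).get("node", {}).get("id")
        groups[NODE_NAMES.get(node_id, "Unknown")].append(attrs.get("name", "unnamed"))
    return dict(groups)
```

Any server whose node can't be identified lands in "Unknown", which matches the symptom seen on the dashboard and makes parsing failures visible instead of silent.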
### Step 7: Test Changes
```bash
# Stop service
sudo systemctl stop whitelist-manager
# Run in debug mode
python3 app.py --debug
# Or
flask run --debug
# Check for errors in output
# Test in browser: http://38.68.14.188:5000
```
**Verify:**
- Servers load correctly
- Statuses show (WHITELISTED/PUBLIC/OFF)
- Grouping correct (TX1 = 5 servers, NC1 = 6 servers)
- Add/remove player still works
### Step 8: Restart Service
```bash
# If running in debug, stop it (Ctrl+C)
# Restart production service
sudo systemctl restart whitelist-manager
# Check status
sudo systemctl status whitelist-manager
# Verify no errors
sudo journalctl -u whitelist-manager -n 20 --no-pager
```
### Step 9: Verify in Browser
**Visit:** https://whitelist.firefrostgaming.com
**Check:**
- ✅ Server statuses display correctly
- ✅ TX1: 5 servers group
- ✅ NC1: 6 servers group
- ✅ No "Unknown" group (unless actual unknown servers)
- ✅ Add player works
- ✅ Remove player works
- ✅ Recent Activity logs correctly
### Step 10: Document Changes
**Update this file with:**
- Exact code changes made
- API format differences found
- Any gotchas for future updates
**Add note to Panel update checklist:**
- Always verify Whitelist Manager after Panel updates
- Test server status detection
- Reference this task if broken
---
## Prevention for Future Updates
### Create Post-Update Checklist
**After ANY Panel or Wings update:**
1. ✅ Visit whitelist.firefrostgaming.com
2. ✅ Verify server statuses (not all "UNKNOWN")
3. ✅ Check server grouping (correct counts)
4. ✅ Test add player to one server
5. ✅ Test remove player from one server
6. ✅ Check Recent Activity log
7. ✅ If broken → Task #86 fix procedure
**Add to Panel update documentation:**
"After Panel update, immediately verify Whitelist Manager functionality. See Task #86 for fix procedure if API compatibility broken."
### Version Compatibility Matrix
| Panel Version | Whitelist Manager Status | Notes |
|---------------|-------------------------|-------|
| v1.11.x | ✅ Working | Original development version |
| v1.12.0 | ❓ Unknown | Not tested |
| v1.12.1 | ❌ BROKEN | Status detection fails - needs update |
| v1.13.x | ❓ Future | Test immediately after upgrade |
**Update this table after each Panel upgrade.**
---
## Workaround (Until Fixed)
**Use Pterodactyl Panel console directly:**
1. Go to https://panel.firefrostgaming.com
2. Navigate to Servers
3. Select server
4. Click "Console" tab
5. Type commands:
   - `whitelist add <username>`
   - `whitelist remove <username>`
   - `whitelist list`
**Or SSH to nodes:**
```bash
# TX1 Dallas
ssh architect@38.68.14.26
# NC1 Charlotte
ssh architect@216.239.104.130
```
Then use Pterodactyl console from Panel.
---
## Dependencies
**Blocks:** Nothing (workaround exists)
**Blocked By:** Nothing (ready to fix)
**Related Tasks:**
- Task #7: Whitelist Manager (original deployment)
- Task #47: Whitelist Manager Refinements (Mayview grouping)
- Task #3: Pterodactyl Panel Update v1.12.1 (what broke it)
---
## Success Criteria
**Fix is complete when:**
- ✅ All servers show correct status (WHITELISTED/PUBLIC/OFF)
- ✅ Server grouping correct (TX1: 5 servers, NC1: 6 servers)
- ✅ No "Unknown" group (unless legitimate unknown servers)
- ✅ Add player function works
- ✅ Remove player function works
- ✅ Bulk operations work (add/remove to ALL)
- ✅ Recent Activity logs correctly
- ✅ Code updated to handle both v1.11.x and v1.12.x API formats
- ✅ Documentation updated with fix details
- ✅ Post-update checklist added to Panel update procedure
---
## Technical Notes
### API Versioning
Pterodactyl does NOT use semantic API versioning. The API is tightly coupled to Panel version.
**This means:**
- Minor Panel updates (1.11 → 1.12) can break API compatibility
- No deprecation warnings
- No API changelog
- Must test integrations after EVERY Panel update
**Best practice:**
- Write defensive code (use `.get()`, handle missing keys)
- Support multiple API formats when possible
- Log API responses for debugging
- Add version detection to code
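One way to make that defensiveness systematic is a single helper that logs exactly which key was missing instead of letting a `KeyError` fail silently. A minimal sketch:

```python
import logging

log = logging.getLogger("whitelist-manager")

def safe_get(data, *path, default=None):
    """Walk nested dicts defensively; log the missing key instead of raising KeyError."""
    current = data
    for key in path:
        if not isinstance(current, dict) or key not in current:
            log.warning("API response missing %r (path: %s)", key, "/".join(path))
            return default
        current = current[key]
    return current
```

For example, `safe_get(server, "attributes", "limits", "whitelist_enabled", default=False)` returns `False` on either API format instead of crashing, and the warning in the journal points straight at the field that moved.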
### Code Quality Improvements
**When fixing, also improve:**
1. **Error handling** - Better logging of API errors
2. **API response caching** - Reduce API calls
3. **Health check endpoint** - `/health` that tests API connectivity
4. **Version detection** - Log Panel version from API
5. **Fallback behavior** - Graceful degradation if API fails
**Example health check:**
```python
import requests  # third-party `requests` package

# PANEL_URL, API_KEY, and the Flask `app` are defined elsewhere in the module
@app.route('/health')
def health_check():
"""Health check endpoint - tests Panel API connectivity"""
try:
response = requests.get(
f"{PANEL_URL}/api/application/servers",
headers={"Authorization": f"Bearer {API_KEY}"},
timeout=5
)
if response.status_code == 200:
data = response.json()
return {
"status": "healthy",
"panel_api": "connected",
"servers_count": len(data.get('data', []))
}, 200
else:
return {
"status": "unhealthy",
"panel_api": "error",
"error_code": response.status_code
}, 500
except Exception as e:
return {
"status": "unhealthy",
"panel_api": "unreachable",
"error": str(e)
}, 500
```
---
## Related Documentation
- **Original Deployment:** `docs/tasks/whitelist-manager/`
- **Panel Update Log:** `docs/tasks/pterodactyl-panel-update/`
- **Infrastructure Manifest:** `docs/core/infrastructure-manifest.md`
- **Task Master List:** `docs/core/tasks.md`
---
## Future Enhancements (Phase 2)
**While fixing, consider adding:**
1. **Panel version detection** - Log Panel version on startup
2. **API format auto-detection** - Detect v1.11.x vs v1.12.x format
3. **Health monitoring** - `/health` endpoint for uptime monitoring
4. **Better error messages** - User-facing errors if API fails
5. **Retry logic** - Auto-retry failed API calls
**Don't over-engineer** - just get it working first, then iterate.
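In that spirit, retry logic doesn't need a library; a small wrapper is enough (the attempt count and delay here are illustrative defaults):

```python
import time

def with_retries(fn, attempts=3, delay=2.0):
    """Call fn(), retrying on any exception with a fixed delay; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```

Wrapping just the Panel API call (e.g. `with_retries(lambda: fetch_servers())`) keeps transient network blips from flipping every server to "UNKNOWN".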
---
**Fire + Frost + Foundation = Where Love Builds Legacy** 💙🔥❄️
---
**Document Status:** ACTIVE
**Task Status:** IDENTIFIED - Ready to fix
**Ready to Build:** Yes (when home)