docs: Add Gemini round 2 response — three architectural pivots, Redis dependency
— Michael (The Wizard) + Claude (Chronicler #91)
*Fire + Arcane + Frost = Forever*
---
## Gemini's Response (April 15, 2026)
**Summary:** All refinements validated. Four clean architectural answers — three significant pivots from our original assumptions.
### Validation
All 10 refined wild ideas approved with full architectural stamp. Guardrails on RAM scaling and hibernation backup integration specifically called out as correct.
### Architectural Answers
**Q1: Holly-Bot NBT Pre-Boot — PIVOT**
Do NOT write directly to `.mca` region files from Node.js. Libraries like `prismarine-nbt` exist, but manipulating region files outside the game engine is corruption-prone on modern MC (1.20+).
**The Solution:** Holly-Bot becomes a real lightweight Forge/Fabric mod (compiled once). Arbiter drops a `provision.json` in the server root with the target coords and schematic name. On the first `ServerStartedEvent`, the mod:

1. reads `provision.json`,
2. queries `level.dat` for the original spawn,
3. pastes the schematic at the fixed coords,
4. places 4 command blocks with the correct `/tp` NBT baked in,
5. sets the new worldspawn, and
6. renames itself to `.jar.disabled` so it never runs again.
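
Gemini's closing question about the payload is still open, but as a strawman, `provision.json` might look something like this. Every field name below is an assumption, not a settled spec:

```json
{
  "schematicName": "holly_spawn_hub",
  "targetCoords": { "x": 0, "y": 96, "z": 0 },
  "newWorldspawn": { "x": 0, "y": 97, "z": 0 },
  "commandBlocks": [
    { "facing": "north", "tpTarget": { "x": 120, "y": 64, "z": -80 } }
  ],
  "runOnce": true
}
```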
**Q2: Spark for RAM — PIVOT**
Spark doesn't expose a continuous polling API — it's a profiler, not a metrics daemon.
**The Solution:** Use the Pterodactyl Client API (`/api/client/servers/{server}/resources`) directly. Poll every 5 minutes and keep the last 6 readings (30 minutes of data). If the average of those 6 exceeds 95%, trigger a scale-up. This smooths out startup spikes and GC sweeps.
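
The 30-minute window logic can be sketched in Node.js. The `RamWindow` helper is an assumption for illustration; only the poll interval, window size, and 95% rule come from the plan above, and fetching the actual `resources` endpoint is stubbed out:

```javascript
// Hypothetical sketch of the rolling-average check, not Arbiter's real code.
// Assume each poll of the Pterodactyl resources endpoint yields used/limit
// as a 0..1 utilisation number.

const WINDOW = 6;        // 6 samples at 5-minute intervals = 30 min of history
const THRESHOLD = 0.95;  // scale up when the 30-min average exceeds 95%

class RamWindow {
  constructor() {
    this.samples = [];
  }

  // Record one utilisation reading; returns true when scale-up should fire.
  push(utilisation) {
    this.samples.push(utilisation);
    if (this.samples.length > WINDOW) this.samples.shift();
    if (this.samples.length < WINDOW) return false; // not enough history yet
    const avg = this.samples.reduce((a, b) => a + b, 0) / WINDOW;
    return avg > THRESHOLD;
  }
}

// A lone startup spike is absorbed by the average; sustained pressure fires.
const win = new RamWindow();
console.log(win.push(0.99)); // spike right after boot: window not full yet
```

The averaging is the point: one 99% reading right after boot never fires on its own, but six consecutive readings above 95% do.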
**Q3: Hibernation Volume Zip — PIVOT**
Pterodactyl's File Manager compression API will time out on 20GB volumes.
**The Solution:** SSH into Trinity Core with the `ssh2` npm package. Stream the `tar` output directly to NextCloud via `rclone` or `curl`, so the archive is never written to the node's disk. Restore is the reverse: stream from NextCloud directly into `tar -xz` on the target node.
**Q4: LuckPerms Meta Sync — NO RCON NEEDED**
The LuckPerms MySQL backend doesn't auto-notify servers of DB changes; it needs a cache-invalidation signal.
**The Solution:** Enable the LuckPerms Messaging Service, backed by a lightweight Redis container alongside `arbiter_db`. Arbiter writes the Stripe upgrade to MySQL → pings the LuckPerms Redis channel → all 18 servers clear their caches and apply the new meta values within milliseconds. No restarts, no RCON spam.
### Gemini's Question
> "Do you want to define the exact JSON payload structure that Arbiter will use to pass configuration data to the Holly-Bot mod during provisioning?"
---
## Conclusion
Three significant architectural pivots:
1. Holly-Bot is a real compiled mod, not Node.js region file manipulation
2. RAM monitoring via Pterodactyl API, not Spark
3. Hibernation via SSH streaming tar, not Pterodactyl file manager
Redis is a new infrastructure dependency for LuckPerms messaging service.
**Next Steps:**
1. Answer Gemini's question about provision.json payload structure
2. Holly review with feature descriptions
3. Holly names her bot
4. Final spec → Code