Ollama 0.20.5 (updated from 0.16.2, fixed Docker networking)
Model: gemma4:26b-a4b-it-q8_0 (28GB, q8_0 quantization)
Speed: 14.4 tokens/sec on CPU-only
RAM: 93GB/251GB used, 157GB available for game servers
Remaining: add Ollama to Dify as a model provider (web UI step)
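A rough sketch of that remaining step, assuming default Ollama port 11434 and a Dockerized Dify; the hostnames and menu path are assumptions, not confirmed here:

```shell
# Sanity check before touching the Dify UI: confirm Ollama is up and the
# model is registered locally (assumes default port 11434).
curl -s http://localhost:11434/api/tags | grep gemma4

# In the Dify web UI: Settings -> Model Provider -> Ollama, then set the
# base URL. If Dify runs in Docker on the same host, localhost inside the
# container is not this machine; host.docker.internal usually is:
#   http://host.docker.internal:11434
# Model name to enter: gemma4:26b-a4b-it-q8_0
```

On plain Linux Docker, `host.docker.internal` may need `--add-host=host.docker.internal:host-gateway` (or the host's bridge IP) to resolve.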
Chronicler #78 | firefrost-operations-manual