firefrost-operations-manual/docs/tasks/frostwall-protocol/deployment-plan.md
Claude 2bd96ee8c7 docs: Complete Frostwall Protocol deployment documentation
Created comprehensive documentation for Frostwall Protocol rebuild:

deployment-plan.md (500+ lines):
- Complete 7-phase implementation guide
- GRE tunnel configuration for Command Center ↔ TX1/NC1
- Iron Wall UFW firewall rules
- NAT/port forwarding setup
- Self-healing tunnel monitoring with auto-recovery
- DNS configuration
- Testing and verification procedures
- Rollback plan
- Performance considerations

ip-hierarchy.md (400+ lines):
- Three-tier IP architecture explained
- Complete service mapping table (all 11 game servers)
- GRE tunnel IP addressing
- Traffic flow diagrams
- DNS configuration reference
- Security summary
- Quick command reference

troubleshooting.md (450+ lines):
- Quick diagnostics checklist
- Common problems with step-by-step solutions:
  - Tunnel won't come up
  - Can't ping tunnel IP
  - Port forwarding not working
  - Tunnel breaks after reboot
  - Self-healing monitor issues
  - High latency/packet loss
  - UFW blocking traffic
- Emergency recovery procedures
- Common error messages decoded
- Health check commands

This documentation enables rebuilding the Frostwall Protocol from scratch
with proper IP hierarchy, DDoS protection, and self-healing capabilities.

Unblocks: Mailcow deployment, AI stack, all Tier 2+ infrastructure

Task: Frostwall Protocol (Tier 1, Critical)
FFG-STD-002 compliant
2026-02-17 15:01:35 +00:00


The Frostwall Protocol - Deployment Plan

Version: 2.0 (Rebuild)
Status: Planning Complete, Ready for Implementation
Priority: CRITICAL - Tier 1 Security Foundation
Last Updated: 2026-02-17
Time Estimate: 3-4 hours active work


Executive Summary

The Frostwall Protocol is a custom DDoS protection and network security architecture that routes all game traffic through a central scrubbing hub (Command Center in Dallas) to remote nodes (TX1 Dallas, NC1 Charlotte) via GRE tunnels. Note that GRE encapsulates traffic for routing and IP cloaking but does not itself encrypt it. This "cloaks" the real server IPs, protects email reputation, and provides enterprise-grade DDoS mitigation without expensive third-party services.

Key Innovation: Three-tier IP hierarchy separating scrubbing, backend, and binding layers.


Architecture Overview

The Hub-and-Spoke Model

                    Internet (Attackers, Players)
                              |
                              v
                    ┌─────────────────────┐
                    │  Command Center     │
                    │  (Dallas Hub)       │
                    │  63.143.34.217      │
                    │  ┌───────────────┐  │
                    │  │ DDoS Scrubbing│  │
                    │  │ & Filtering   │  │
                    │  └───────────────┘  │
                    └─────────┬───────────┘
                              |
              ┌───────────────┴───────────────┐
              |                               |
              v                               v
    ┌─────────────────┐            ┌─────────────────┐
    │  TX1 (Dallas)   │            │  NC1 (Charlotte)│
    │  GRE Tunnel     │            │  GRE Tunnel     │
    │  38.68.14.x     │            │  216.239.104.x  │
     │  ┌────────────┐ │            │  ┌────────────┐ │
     │  │Game Servers│ │            │  │Game Servers│ │
     │  └────────────┘ │            │  └────────────┘ │
    └─────────────────┘            └─────────────────┘

Three-Tier IP Hierarchy

Layer 1: Scrubbing Center IP (What the world sees)

  • External-facing IPs on Command Center
  • Announced to DNS, advertised to players
  • Absorbs DDoS attacks
  • Example: Player connects to play.firefrostgaming.com → resolves to scrubbing IP

Layer 2: Backend Alias IP (Hidden server address)

  • Real IP address of TX1/NC1 nodes
  • Never publicly disclosed
  • Only accessible via GRE tunnel from Command Center
  • Protected by Iron Wall firewall rules

Layer 3: Binding Truth IP (Internal service binding)

  • Localhost or internal IP where services actually bind
  • Used by Pterodactyl, game servers, services
  • Never exposed outside the server itself

Network Topology

Command Center (Hub)

  • Public IP: 63.143.34.217
  • Role: Scrubbing center, GRE tunnel hub
  • Services: Gitea, Uptime Kuma, Code-Server, Automation
  • Network: /29 block (74.63.218.200/29) for clean routing

TX1 Dallas (Spoke)

  • Public IP Block: 38.68.14.24/29 (6 usable IPs)
  • GRE Tunnel IP: 10.0.1.2 (assigned in Phase 1)
  • Role: Game servers (Fire + Frost modpacks)
  • Services: 5 Minecraft servers + FoundryVTT

NC1 Charlotte (Spoke)

  • Public IP: 216.239.104.130 (shared by all servers)
  • GRE Tunnel IP: 10.0.2.2 (assigned in Phase 1)
  • Role: Game servers (Fire + Frost modpacks)
  • Services: 5 Minecraft servers + Hytale

Prerequisites

Command Center Requirements

  • Root SSH access
  • GRE kernel module loaded (modprobe ip_gre)
  • UFW installed and configured
  • Static IP confirmed: 63.143.34.217
  • Management IP whitelisted (Michael's static IP)

TX1 Requirements

  • Root SSH access
  • GRE kernel module support
  • UFW installed
  • IP block details: 38.68.14.24/29
  • Pterodactyl Wings running

NC1 Requirements

  • Root SSH access
  • GRE kernel module support
  • UFW installed
  • IP confirmed: 216.239.104.130
  • Pterodactyl Wings running

Information Gathering

  • Michael's management IP (static home IP)
  • Desired tunnel IP ranges (e.g., 10.0.1.0/30 for TX1, 10.0.2.0/30 for NC1)
  • Current UFW rules backup
  • Service port list (which ports need forwarding)
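
The requirements above can be partially automated. A minimal check sketch (the tool list is an assumption based on the requirements; run on each of the three servers):

```shell
#!/bin/bash
# Quick prerequisite check for a Frostwall node.
# Verifies required tools are installed and whether the GRE module is loaded.

for tool in ip ufw ping iptables; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "OK: $tool"
    else
        echo "MISSING: $tool"
    fi
done

# GRE module check (informational; loading it requires root)
if lsmod 2>/dev/null | grep -q gre; then
    echo "OK: gre module loaded"
else
    echo "NOTE: gre module not loaded yet (run: modprobe ip_gre)"
fi
```
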

Phase 1: GRE Tunnel Configuration

1.1 Command Center → TX1 Tunnel

On Command Center (Hub):

# Create GRE tunnel interface to TX1
ip tunnel add gre-tx1 mode gre remote 38.68.14.26 local 63.143.34.217 ttl 255
ip addr add 10.0.1.1/30 dev gre-tx1
ip link set gre-tx1 up

On TX1 (Spoke):

# Create GRE tunnel interface to Command Center
ip tunnel add gre-hub mode gre remote 63.143.34.217 local 38.68.14.26 ttl 255
ip addr add 10.0.1.2/30 dev gre-hub
ip link set gre-hub up

Verify Connectivity:

# From Command Center
ping -c 4 10.0.1.2

# From TX1
ping -c 4 10.0.1.1

1.2 Command Center → NC1 Tunnel

On Command Center (Hub):

# Create GRE tunnel interface to NC1
ip tunnel add gre-nc1 mode gre remote 216.239.104.130 local 63.143.34.217 ttl 255
ip addr add 10.0.2.1/30 dev gre-nc1
ip link set gre-nc1 up

On NC1 (Spoke):

# Create GRE tunnel interface to Command Center
ip tunnel add gre-hub mode gre remote 63.143.34.217 local 216.239.104.130 ttl 255
ip addr add 10.0.2.2/30 dev gre-hub
ip link set gre-hub up

Verify Connectivity:

# From Command Center
ping -c 4 10.0.2.2

# From NC1
ping -c 4 10.0.2.1
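
One detail the tunnel commands above omit: GRE adds 24 bytes of overhead per packet, so the tunnel MTU should be lowered on both ends of both tunnels to avoid fragmentation. A sketch of the math and the resulting commands, assuming a standard 1500-byte physical MTU:

```shell
#!/bin/bash
# GRE MTU math: 20-byte outer IP header + 4-byte GRE header = 24 bytes overhead.
PHYS_MTU=1500
GRE_OVERHEAD=24
TUNNEL_MTU=$((PHYS_MTU - GRE_OVERHEAD))   # 1476

# Max ICMP payload for a do-not-fragment test: MTU - 20 (IP) - 8 (ICMP) = 28
PING_PAYLOAD=$((TUNNEL_MTU - 28))         # 1448

echo "Set on both ends:  ip link set gre-tx1 mtu $TUNNEL_MTU"
echo "Verify (no frag):  ping -M do -s $PING_PAYLOAD -c 4 10.0.1.2"
```
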

1.3 Persist Tunnel Configuration

Create persistence script on each server:

On Command Center: /etc/network/if-up.d/frostwall-tunnels

#!/bin/bash
# Frostwall Protocol - GRE Tunnel Persistence
# Command Center (Hub)

# TX1 Tunnel
ip tunnel show gre-tx1 &>/dev/null || {
    ip tunnel add gre-tx1 mode gre remote 38.68.14.26 local 63.143.34.217 ttl 255
    ip addr add 10.0.1.1/30 dev gre-tx1
    ip link set gre-tx1 up
}

# NC1 Tunnel
ip tunnel show gre-nc1 &>/dev/null || {
    ip tunnel add gre-nc1 mode gre remote 216.239.104.130 local 63.143.34.217 ttl 255
    ip addr add 10.0.2.1/30 dev gre-nc1
    ip link set gre-nc1 up
}
Make the script executable:

chmod +x /etc/network/if-up.d/frostwall-tunnels

On TX1: /etc/network/if-up.d/frostwall-tunnel

#!/bin/bash
# Frostwall Protocol - GRE Tunnel Persistence
# TX1 Dallas (Spoke)

ip tunnel show gre-hub &>/dev/null || {
    ip tunnel add gre-hub mode gre remote 63.143.34.217 local 38.68.14.26 ttl 255
    ip addr add 10.0.1.2/30 dev gre-hub
    ip link set gre-hub up
}
Make the script executable:

chmod +x /etc/network/if-up.d/frostwall-tunnel

On NC1: /etc/network/if-up.d/frostwall-tunnel

#!/bin/bash
# Frostwall Protocol - GRE Tunnel Persistence
# NC1 Charlotte (Spoke)

ip tunnel show gre-hub &>/dev/null || {
    ip tunnel add gre-hub mode gre remote 63.143.34.217 local 216.239.104.130 ttl 255
    ip addr add 10.0.2.2/30 dev gre-hub
    ip link set gre-hub up
}
Make the script executable:

chmod +x /etc/network/if-up.d/frostwall-tunnel
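
Caveat: if-up.d hooks only fire on systems using ifupdown; on hosts managed by netplan or systemd-networkd they never run. If that applies, a oneshot systemd unit is one alternative (a sketch; the unit name and install path are assumptions):

```shell
#!/bin/bash
# Print a systemd oneshot unit that runs the persistence script at boot.
# To install: write this to /etc/systemd/system/frostwall-tunnels.service,
# then: systemctl daemon-reload && systemctl enable frostwall-tunnels
UNIT_FILE=$(cat <<'EOF'
[Unit]
Description=Frostwall Protocol GRE tunnels
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/etc/network/if-up.d/frostwall-tunnels
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
)
echo "$UNIT_FILE"
```
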

Phase 2: Iron Wall Firewall Rules

2.1 Command Center UFW Rules

Default Policy: DROP EVERYTHING except GRE and Management
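
The prerequisites call for a UFW rules backup, and the rollback plan assumes one exists. A minimal sketch of taking that snapshot before the reset below (run as root; the backup location is an assumption):

```shell
#!/bin/bash
# Snapshot current UFW config and rule listing before `ufw --force reset`.
BACKUP_ROOT="${BACKUP_ROOT:-$HOME}"
STAMP=$(date +%Y%m%d-%H%M%S)
DEST="$BACKUP_ROOT/ufw-backup-$STAMP"
mkdir -p "$DEST"

# Raw config files (if present) and the human-readable rule list
[ -d /etc/ufw ] && cp -a /etc/ufw/. "$DEST/" 2>/dev/null || echo "note: /etc/ufw not copied"
ufw status numbered > "$DEST/status.txt" 2>/dev/null || echo "note: ufw not available"

echo "UFW backup written to $DEST"
```
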

# Reset UFW to default deny
ufw --force reset
ufw default deny incoming
ufw default allow outgoing

# Allow SSH from management IP ONLY (replace MICHAEL_MANAGEMENT_IP with the actual static IP)
ufw allow from MICHAEL_MANAGEMENT_IP to any port 22 proto tcp

# Allow GRE protocol (IP protocol 47) from the spoke nodes
ufw allow from 38.68.14.26 to any proto gre
ufw allow from 216.239.104.130 to any proto gre

# Allow traffic arriving on the tunnel interfaces
ufw allow in on gre-tx1
ufw allow in on gre-nc1

# Allow web services (Gitea, Uptime Kuma, etc.)
ufw allow 80/tcp
ufw allow 443/tcp

# Enable UFW
ufw --force enable

2.2 TX1 UFW Rules

Drop all external traffic, allow only GRE tunnel + Management

# Reset UFW
ufw --force reset
ufw default deny incoming
ufw default allow outgoing

# Allow SSH from management IP ONLY
ufw allow from MICHAEL_MANAGEMENT_IP to any port 22 proto tcp

# Allow GRE from Command Center
ufw allow from 63.143.34.217 to any proto gre

# Allow traffic from GRE tunnel
ufw allow in on gre-hub

# Allow Pterodactyl Wings SFTP (only from tunnel)
ufw allow from 10.0.1.1 to any port 2022 proto tcp

# DO NOT allow game ports on physical interface
# All game traffic must come through tunnel

# Enable UFW
ufw --force enable

2.3 NC1 UFW Rules

Same as TX1, adjusted for NC1 tunnel IP

# Reset UFW
ufw --force reset
ufw default deny incoming
ufw default allow outgoing

# Allow SSH from management IP ONLY
ufw allow from MICHAEL_MANAGEMENT_IP to any port 22 proto tcp

# Allow GRE from Command Center
ufw allow from 63.143.34.217 to any proto gre

# Allow traffic from GRE tunnel
ufw allow in on gre-hub

# Allow Pterodactyl Wings SFTP (only from tunnel)
ufw allow from 10.0.2.1 to any port 2022 proto tcp

# Enable UFW
ufw --force enable

Phase 3: NAT/Port Forwarding (Command Center)

3.1 Enable IP Forwarding

# Enable IPv4 forwarding (a drop-in avoids appending duplicate lines to /etc/sysctl.conf)
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-frostwall.conf
sysctl --system

3.2 Configure NAT for Game Traffic

Example: Forward the Minecraft port from Command Center to a TX1 server. Note that MASQUERADE rewrites the packet source, so game servers will see the hub's tunnel IP (10.0.1.1) rather than real player IPs; any IP-based bans or logging must then happen on the Command Center.

# Enable NAT masquerading
iptables -t nat -A POSTROUTING -o gre-tx1 -j MASQUERADE

# Forward port 25565 to Reclamation server on TX1
iptables -t nat -A PREROUTING -p tcp --dport 25565 -j DNAT --to-destination 10.0.1.2:25565
iptables -A FORWARD -p tcp -d 10.0.1.2 --dport 25565 -j ACCEPT

# Forward port 25566 to Stoneblock 4 on TX1
iptables -t nat -A PREROUTING -p tcp --dport 25566 -j DNAT --to-destination 10.0.1.2:25566
iptables -A FORWARD -p tcp -d 10.0.1.2 --dport 25566 -j ACCEPT

Repeat for all game servers on TX1 and NC1 (NC1 forwards target 10.0.2.2 and require a matching MASQUERADE rule on gre-nc1)
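
Rather than hand-writing two iptables lines per server, the per-service rules can be generated from a port map and reviewed before applying. A sketch (the port list is illustrative; substitute the real service ports):

```shell
#!/bin/bash
# Generate DNAT + FORWARD rules for each game port.
# Format: "external_port:tunnel_destination" (illustrative entries).
FORWARDS=(
    "25565:10.0.1.2"   # Reclamation (TX1)
    "25566:10.0.1.2"   # Stoneblock 4 (TX1)
    "25570:10.0.2.2"   # example NC1 server (hypothetical port)
)

for entry in "${FORWARDS[@]}"; do
    port="${entry%%:*}"
    dest="${entry##*:}"
    # Print the rules for review; pipe to `bash` (as root) to apply.
    echo "iptables -t nat -A PREROUTING -p tcp --dport $port -j DNAT --to-destination $dest:$port"
    echo "iptables -A FORWARD -p tcp -d $dest --dport $port -j ACCEPT"
done
```
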

3.3 Persist IPTables Rules

# Install iptables-persistent
apt install iptables-persistent -y

# Save current rules
iptables-save > /etc/iptables/rules.v4

Phase 4: Self-Healing Tunnel Monitoring

4.1 Create Tunnel Health Check Script

On Command Center: /usr/local/bin/frostwall-monitor.sh

#!/bin/bash
# Frostwall Protocol - Self-Healing Tunnel Monitor
# Monitors GRE tunnels and auto-restarts if down

# Cron runs with a minimal PATH that may not include /usr/sbin, where `ip` lives
PATH=/usr/sbin:/sbin:/usr/bin:/bin

TUNNEL_TX1="gre-tx1"
TUNNEL_NC1="gre-nc1"
REMOTE_TX1="10.0.1.2"
REMOTE_NC1="10.0.2.2"
LOG="/var/log/frostwall-monitor.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG"
}

check_tunnel() {
    local tunnel=$1
    local remote=$2
    
    if ! ping -c 2 -W 3 "$remote" &>/dev/null; then
        log "ERROR: Tunnel $tunnel to $remote is DOWN - restarting"
        
        # Restart tunnel (recreate via the persistence script if the interface is missing)
        if ip link show "$tunnel" &>/dev/null; then
            ip link set "$tunnel" down
            sleep 2
            ip link set "$tunnel" up
        else
            /etc/network/if-up.d/frostwall-tunnels
        fi
        
        sleep 5
        
        if ping -c 2 -W 3 "$remote" &>/dev/null; then
            log "SUCCESS: Tunnel $tunnel to $remote restored"
        else
            log "CRITICAL: Tunnel $tunnel to $remote failed to restore - manual intervention required"
        fi
    fi
}

# Check TX1 tunnel
check_tunnel "$TUNNEL_TX1" "$REMOTE_TX1"

# Check NC1 tunnel
check_tunnel "$TUNNEL_NC1" "$REMOTE_NC1"
Make the script executable:

chmod +x /usr/local/bin/frostwall-monitor.sh

4.2 Schedule with Cron

# Run every 5 minutes
crontab -e

Add:

*/5 * * * * /usr/local/bin/frostwall-monitor.sh
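
The monitor appends to /var/log/frostwall-monitor.log indefinitely; a small logrotate policy keeps it bounded. A sketch (the rotation schedule and counts are assumptions):

```shell
#!/bin/bash
# Print a logrotate policy for the monitor log.
# To install: write the output to /etc/logrotate.d/frostwall-monitor
POLICY=$(cat <<'EOF'
/var/log/frostwall-monitor.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
EOF
)
echo "$POLICY"
```
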

Phase 5: DNS and Service Binding

5.1 Update DNS Records

All game server subdomains should point to Command Center IP (63.143.34.217)

Example DNS records:

play.firefrostgaming.com          A    63.143.34.217
reclamation.firefrostgaming.com   A    63.143.34.217
stoneblock.firefrostgaming.com    A    63.143.34.217
vanilla.firefrostgaming.com       A    63.143.34.217

5.2 Verify Service Binding on Nodes

On TX1 and NC1, ensure Minecraft servers bind to all interfaces (0.0.0.0)

In server.properties:

server-ip=

(Leave blank or set to 0.0.0.0)

DO NOT bind to specific IPs - let the GRE tunnel routing handle it


Phase 6: Testing and Verification

6.1 Tunnel Connectivity Tests

# From Command Center
ping -c 10 10.0.1.2  # TX1
ping -c 10 10.0.2.2  # NC1

# From TX1
ping -c 10 10.0.1.1  # Command Center

# From NC1
ping -c 10 10.0.2.1  # Command Center

6.2 Port Forwarding Tests

# From external machine (NOT on Firefrost network)
telnet 63.143.34.217 25565  # Should connect to Reclamation on TX1

Test each game server port
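
If telnet isn't installed on the test machine, bash's built-in /dev/tcp works for a quick TCP probe. A sketch that can walk the full port list (the ports shown are illustrative):

```shell
#!/bin/bash
# Probe a TCP port with a short timeout; prints OPEN or CLOSED.
probe() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "OPEN   $host:$port"
    else
        echo "CLOSED $host:$port"
    fi
}

# Example usage against the scrubbing IP (substitute the real game ports):
#   for port in 25565 25566; do probe 63.143.34.217 "$port"; done
probe 127.0.0.1 22   # local sanity check of the probe itself
```
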

6.3 Firewall Validation

# From an EXTERNAL machine: try to connect directly to a node's real IP (should FAIL)
# (testing from the node itself would bypass the firewall path being validated)
telnet 38.68.14.26 25565  # Should time out

# Via Command Center: Should work
telnet 63.143.34.217 25565  # Should connect

6.4 Self-Healing Test

# On Command Center, manually bring down a tunnel
ip link set gre-tx1 down

# Wait 5 minutes for cron to run
# Check logs
tail -f /var/log/frostwall-monitor.log

# Tunnel should auto-restore
ping 10.0.1.2  # Should be back up

Phase 7: Documentation and IP Hierarchy

7.1 Document IP Mappings

Create reference table in repository:

Service         Scrubbing IP           Backend IP     Tunnel IP         Binding IP
Reclamation     63.143.34.217:25565    38.68.14.27    10.0.1.2:25565    0.0.0.0:25565
Stoneblock 4    63.143.34.217:25566    38.68.14.26    10.0.1.2:25566    0.0.0.0:25566
...             ...                    ...            ...               ...

7.2 Update Infrastructure Manifest

Add Frostwall status to docs/core/infrastructure-manifest.md


Rollback Plan

If Frostwall implementation fails:

# On Command Center
ip link set gre-tx1 down
ip link set gre-nc1 down
ip tunnel del gre-tx1
ip tunnel del gre-nc1

# On TX1
ip link set gre-hub down
ip tunnel del gre-hub

# On NC1
ip link set gre-hub down
ip tunnel del gre-hub

# Restore previous UFW rules from backup
ufw --force reset
# Re-apply previous rules

Success Criteria Checklist

  • GRE tunnels operational (Command Center ↔ TX1, NC1)
  • Tunnels survive reboot (persistence scripts working)
  • All game traffic routes through tunnels
  • Direct connections to TX1/NC1 game ports blocked by firewall
  • Connections via Command Center IP successful
  • Real server IPs hidden from internet
  • Self-healing monitor running and tested
  • Email IP separate from game traffic (Mailcow can deploy on clean IP)
  • Management IP whitelisted and SSH working
  • Complete IP hierarchy documented
  • No performance degradation (latency test)

Performance Considerations

Expected Latency Impact:

  • Command Center → TX1: < 1ms (same datacenter)
  • Command Center → NC1: ~30-40ms (Dallas to Charlotte)
  • Total player impact: Negligible for TX1, acceptable for NC1

Bandwidth:

  • GRE adds ~24 bytes per packet (minimal overhead)
  • No significant throughput impact

CPU Load:

  • Encryption/decryption minimal on modern CPUs
  • Monitor with top during peak load

Troubleshooting

Tunnel Won't Come Up

# Check GRE module
lsmod | grep gre
modprobe ip_gre

# Check tunnel status
ip tunnel show

# Check routing
ip route show

Can't Ping Tunnel IP

# Verify UFW allows GRE
ufw status verbose | grep gre

# Check if tunnel interface is up
ip link show gre-tx1

Port Forwarding Not Working

# Check NAT rules
iptables -t nat -L -n -v

# Verify IP forwarding enabled
cat /proc/sys/net/ipv4/ip_forward  # Should be 1

Self-Healing Not Running

# Check cron
crontab -l

# Check logs
tail -100 /var/log/frostwall-monitor.log

# Manually run script
/usr/local/bin/frostwall-monitor.sh

Post-Deployment

Immediate Tasks

  1. Test all 11 game servers through Frostwall
  2. Monitor tunnel stability for 24 hours
  3. Update DNS records
  4. Document final IP mappings
  5. Mark Frostwall as OPERATIONAL in tasks.md

Future Enhancements

  • IPv6 tunnel support
  • Multiple scrubbing centers (redundancy)
  • Bandwidth monitoring per tunnel
  • DDoS attack logging and analytics
  • Automated alerting on tunnel failure

Fire + Frost + Foundation = Where Love Builds Legacy 💙🔥❄️


Document Status: COMPLETE
Ready for Implementation: YES
Prerequisites: SSH access to all three servers, management IP confirmed
Estimated Time: 3-4 hours with testing