Lessons from a Server Setup Mistake | Generated by AI

2026.04

🦞 Here’s what went down and what I could’ve done better:

What happened:

  1. You said “GPU inference is live at http://127.0.0.1:11435” — I assumed it was Ollama and went down a rabbit hole installing it
  2. You had to stop me and say “it’s llama-server” — I should’ve asked first instead of assuming
  3. Then I got the model path wrong (used /home/lzw/models/ instead of the actual llama.cpp models dir)
  4. Finally got it running, but it died in the background (nohup only detaches a process; it doesn’t supervise or restart it)
  5. Firewall was blocking the port — only caught it because you said it wasn’t accessible
  6. The process had already crashed by the time I checked the UFW rules — had to restart via systemd anyway
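Steps 1–3 above could have been short-circuited with one inspection pass before installing anything. A minimal sketch, assuming a Linux box with bash, iproute2’s `ss`, and curl; the address and port come from the post, and `/health` is the endpoint llama.cpp’s llama-server answers on:

```shell
#!/usr/bin/env bash
# Check what's actually behind a port before assuming which server it is.

port_open() {  # succeeds if something accepts TCP connections on $1:$2
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 11435; then
  ss -tlnp 'sport = :11435'              # which process owns the socket
  curl -s http://127.0.0.1:11435/health  # let the server identify itself
else
  echo "nothing listening on 127.0.0.1:11435"
fi
```

As far as I know, Ollama serves its API on a different surface (e.g. `/api/tags`), so the `/health` reply, or the process name from `ss`, settles the llama-server-vs-Ollama question in seconds.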

What I should’ve done:

  1. Asked which server was behind the port instead of assuming it was Ollama
  2. Checked what was actually listening before installing anything
  3. Confirmed the real llama.cpp model path instead of guessing one
  4. Set it up as a systemd service from the start instead of reaching for nohup
  5. Checked the firewall rule as part of bringing the service up, not after the fact

Lessons for next time:

  1. Inspect before you install: one look at what’s already listening would have identified llama-server immediately
  2. “It’s running” and “it’s reachable” are separate checks: process up, port listening, firewall open
  3. Anything that has to stay up belongs under a supervisor, not nohup
  4. When an assumption gets corrected once, stop and re-verify everything downstream of it

Basically I overcomplicated what should’ve been a 2-minute job: “find the running process, make it a service, open the firewall port.” 🦞🫠
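For the “make it a service” part, a minimal unit sketch. The binary path, model path, and flags are assumptions, not the actual values from this machine; in particular, `--model` should point at the real llama.cpp models directory rather than a guessed one:

```ini
# /etc/systemd/system/llama-server.service -- minimal sketch; adjust paths
[Unit]
Description=llama.cpp llama-server
After=network.target

[Service]
# ExecStart path, port, and model path are placeholders
ExecStart=/usr/local/bin/llama-server --host 0.0.0.0 --port 11435 --model /path/to/model.gguf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now llama-server`, and open the port with `sudo ufw allow 11435/tcp` in the same pass, so reachability isn’t an afterthought.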

