| Field | Value |
|---|---|
| VMID | 121 |
| Type | Proxmox VM – Ubuntu Server 24.04 |
| IP | 192.168.5.177 |
| Primary App | Open WebUI (Docker) |
| RAM | 8GB |
| Disk | 60GB |
| Docker | 28.2.2 / Compose v5.1.0 |
| Purpose | Self-hosted personal AI assistant – chat, image gen, automation, memory |
| App | Access | Status | Notes |
|---|---|---|---|
| Open WebUI | 🔗 192.168.5.177:3000 · 🔗 wiki | ✅ Live | Chat frontend – model switching, RAG, voice |
| n8n Automation | 🔗 wiki | 🟡 Pending | Workflows, reminders, Mattermost, Gmail |
| Memory / RAG | 🔗 wiki | 🟡 Planned | mem0 / Chroma persistent memory |
Nyx is the frontend node of a three-machine personal AI stack. Each machine has a defined role:
| Host | IP | Role | Key Services |
|---|---|---|---|
| Nyx | 192.168.5.177 | Frontend + automation | Open WebUI, n8n |
| Luna | 192.168.5.160 | LLM backend + image gen | Ollama (Mistral, llama3.2), ComfyUI |
| Judy | 192.168.5.81 | GPU inference | Ollama (qwen2.5-coder:14b), RTX 3070 Ti |
Open WebUI is the primary chat interface for the Nyx assistant stack. It connects to Ollama endpoints on Luna (general models) and Judy (coding models), providing model switching, RAG document ingestion, and voice input – all in a single browser-based UI.
🟣 Compose file lives on Nyx in the Open WebUI service directory. Current config:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://192.168.5.160:11434
      - OPENAI_API_BASE_URL=http://192.168.5.81:11434/v1
      - OPENAI_API_KEY=ollama
      - DEFAULT_MODELS=llama3.2
      - WEBUI_AUTH=true
    volumes:
      - ./data/open-webui:/app/backend/data
```
🟣 Primary Ollama backend: Luna (192.168.5.160:11434) – general models
🟣 Secondary Ollama backend: Judy (192.168.5.81:11434) – coding models via OpenAI-compatible API
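For reference, the request shape Open WebUI sends to Judy's OpenAI-compatible endpoint can be sketched in Python. `build_chat_request` is a hypothetical helper and the prompt text is illustrative; the payload follows the OpenAI chat-completions format that Ollama's `/v1/chat/completions` route accepts:

```python
import json

# Judy's Ollama exposes an OpenAI-compatible API at /v1 (see OPENAI_API_BASE_URL above).
JUDY_BASE = "http://192.168.5.81:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Hypothetical helper: build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("qwen2.5-coder:14b", "Review this function for bugs.")
print(json.dumps(payload, indent=2))
# To send it: POST {JUDY_BASE}/chat/completions with header "Authorization: Bearer ollama"
```

The `OPENAI_API_KEY=ollama` value from the compose file is a placeholder token; Ollama ignores its contents but some clients require a non-empty key.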
🟣 Check container status:

```bash
docker ps | grep open-webui
```
🟣 Start / stop / restart:

```bash
docker compose up -d
docker compose down
docker compose restart
```
🟣 View live logs:

```bash
docker logs -f open-webui
```
🟣 Update to latest image:

```bash
docker compose pull
docker compose up -d
```
🟣 Luna (192.168.5.160:11434) – CPU inference, general purpose:
| Model | Size | Use |
|---|---|---|
| Mistral | 7.2B | General assistant, default model |
| llama3.2 | 3.2B | Lightweight / fast responses |
🟣 Judy (192.168.5.81:11434) – GPU accelerated (RTX 3070 Ti), coding:
| Model | Size | Use |
|---|---|---|
| qwen2.5-coder:14b | 14B | Code generation, review, architecture |
🟣 To check what models are loaded on either backend:

```bash
# On Luna
ssh jen@192.168.5.160 "ollama ps && ollama list"
# On Judy
ssh jen@192.168.5.81 "ollama ps && ollama list"
```
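The `ollama list` output is a plain-text table; a small parser makes it scriptable from n8n or any Python tool. The sample output below is illustrative, and since real column spacing can vary, the sketch splits on runs of two or more spaces:

```python
import re

# Illustrative sample of `ollama list` output; real IDs and dates will differ.
SAMPLE = """NAME               ID            SIZE    MODIFIED
mistral:latest     abcdef123456  4.1 GB  2 days ago
llama3.2:latest    123456abcdef  2.0 GB  5 days ago
"""

def parse_ollama_list(text: str) -> list[tuple[str, str]]:
    """Return (model name, size) pairs from `ollama list` output."""
    rows = text.strip().splitlines()[1:]  # skip the header row
    out = []
    for row in rows:
        cols = re.split(r"\s{2,}", row)   # columns are separated by 2+ spaces
        out.append((cols[0], cols[2]))    # NAME and SIZE columns
    return out

print(parse_ollama_list(SAMPLE))
# → [('mistral:latest', '4.1 GB'), ('llama3.2:latest', '2.0 GB')]
```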
🟣 Nyx has a configured system prompt and personality loaded in Open WebUI. Key traits: witty, direct, technically sharp, contextually aware of Jen's homelab and projects.
🟣 To update the persona: Open WebUI → Settings → System Prompt.
🟣 Jen's profile context is pre-loaded to give Nyx awareness of ongoing projects, preferences, and homelab topology.
🟣 ComfyUI on Luna (192.168.5.160:8188) is planned to be wired into Open WebUI for inline image generation from the chat interface.
🟣 Status: 🟡 Planned – ComfyUI is live on Luna but not yet connected to Open WebUI.
🟣 To connect: Open WebUI → Settings → Images → set ComfyUI endpoint to http://192.168.5.160:8188.
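Under the hood, ComfyUI accepts a workflow graph via `POST /prompt`; Open WebUI handles this wiring once the endpoint is set. As a sketch, the request envelope looks roughly like the following, where `WORKFLOW` is a placeholder for a node graph exported from ComfyUI ("Save (API Format)"):

```python
import json
import uuid

COMFYUI_URL = "http://192.168.5.160:8188"
WORKFLOW = {}  # placeholder: a real node graph exported from ComfyUI in API format

# ComfyUI's HTTP API takes the graph plus a client_id used to route progress
# events back over its websocket.
envelope = {"prompt": WORKFLOW, "client_id": str(uuid.uuid4())}
print(json.dumps(envelope))
# To submit manually: POST {COMFYUI_URL}/prompt with this JSON body
```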
n8n is the automation layer for Nyx – handling scheduled workflows, reminders, external API integrations, and Mattermost notifications.
🟣 Status: 🟡 Pending deployment on Nyx.
🟣 Planned port: 5678
🟣 Planned integrations: reminders, Mattermost notifications, Gmail.
🟣 Note: A separate n8n instance already runs on VM 111 (192.168.5.118) for homelab automation. The Nyx n8n instance will be dedicated to personal assistant workflows.
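A plausible compose service for the Nyx n8n instance, mirroring the Open WebUI layout above; the image tag, timezone, and data path are assumptions to adjust at deploy time:

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"   # planned port from above
    environment:
      - GENERIC_TIMEZONE=America/New_York   # assumption: set to local timezone
    volumes:
      - ./data/n8n:/home/node/.n8n          # assumption: mirrors the Open WebUI data layout
```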
Persistent cross-session memory for the Nyx assistant stack.
🟣 Status: 🟡 Planned.
🟣 Candidates: mem0 (managed memory layer) or Chroma (local vector store).
🟣 Goal: Allow Nyx to remember context, preferences, and ongoing projects across sessions without relying on manual re-prompting.
🟣 Integration point: Open WebUI RAG pipeline or direct API call from n8n workflows.
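What the memory layer would provide can be sketched with a toy in-memory vector store: embed snippets, then retrieve the closest ones for a new query. The three-dimensional "embeddings" and stored facts here are placeholder values; a real deployment would use Chroma or mem0 with a proper embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy memory: (text, embedding) pairs with hand-picked placeholder vectors.
memory = [
    ("Jen prefers dark mode in all UIs", [0.9, 0.1, 0.0]),
    ("ComfyUI runs on Luna port 8188",   [0.1, 0.9, 0.2]),
]

def recall(query_vec, k=1):
    """Return the k stored snippets most similar to the query vector."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(recall([0.2, 0.8, 0.1]))
# → ['ComfyUI runs on Luna port 8188']
```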
| Task | Status |
|---|---|
| Docker + Open WebUI deployed on Nyx | ✅ Done |
| Ollama on Luna with Mistral + llama3.2 | ✅ Done |
| Nyx persona configured | ✅ Done |
| Dual LLM endpoints (Luna + Judy) connected | ✅ Done |
| n8n automation layer | 🟡 Next |
| ComfyUI wired into Open WebUI | 🟡 Next |
| Persistent memory (mem0 / Chroma) | 🟡 Planned |
| Nginx reverse proxy + nyx.kalymoon.com | 🟡 Planned |
| Voice mode (Whisper + Piper TTS) | 🟡 Planned |
| Mattermost notifications | 🟡 Planned |
🟣 Suggested monitors to add to the dashboard:
| Monitor | Type | Target |
|---|---|---|
| Nyx → Open WebUI | HTTP(s) | http://192.168.5.177:3000 |
| Nyx → Ollama (Luna) | TCP Port | 192.168.5.160:11434 |
| Nyx → Ollama (Judy) | TCP Port | 192.168.5.81:11434 |
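The TCP monitors above boil down to a socket connect; a quick manual equivalent, runnable from any machine on the LAN:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Same targets as the TCP monitors above.
for host, port in [("192.168.5.160", 11434), ("192.168.5.81", 11434)]:
    print(host, port, "up" if port_open(host, port, timeout=0.5) else "down")
```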
🟣 Nyx is not yet added to Zabbix. To install the agent (Ubuntu 24.04 Noble):

```bash
sudo rm -f /tmp/zabbix-release.deb
sudo wget https://repo.zabbix.com/zabbix/6.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_latest_6.0+ubuntu24.04_all.deb -O /tmp/zabbix-release.deb
sudo dpkg -i /tmp/zabbix-release.deb
sudo apt-get update && sudo apt-get install -y zabbix-agent
sudo sed -i 's/^Server=.*/Server=192.168.5.153/' /etc/zabbix/zabbix_agentd.conf
sudo sed -i 's/^ServerActive=.*/ServerActive=192.168.5.153/' /etc/zabbix/zabbix_agentd.conf
sudo sed -i 's/^Hostname=.*/Hostname=nyx/' /etc/zabbix/zabbix_agentd.conf
sudo systemctl enable zabbix-agent && sudo systemctl restart zabbix-agent
```
🟣 Then add the host in Zabbix: zabbix.kalymoon.com → host nyx, interface 192.168.5.177:10050.
Tags:
homelab · vm · ai · openwebui · ollama · n8n · docker · assistant · nyx