Kubernetes GitOps Ingress NGINX with Let’s Encrypt Certificate

This guide explains how to deploy Ingress-NGINX with dynamically (hostname-based) assigned Let’s Encrypt certificates using Flux GitOps. The steps are based on a working example and provide instructions for configuration, deployment, and testing.

Prerequisites

- Flux Installed: Ensure Flux is installed and running in your Kubernetes cluster.
- Let’s Encrypt Certificate: Provisioned for your FQDN. Follow the instructions in the Let’s Encrypt guide.
- Git Repository: A Git repository structured for Flux GitOps, e.g.:

```
.
├── apps/
│   └── ingress-nginx/
│       └── base/
├── clusters/
│   └── production/
│       ├── flux-system/
│       │   └── sources/
│       └── apps/
├── infrastructure/
└── networking/
    ├── metallb/
    └── ingress-nginx/
```

- Kubernetes Cluster: A Kubernetes cluster with MetalLB-compatible networking.

1. Deploying Ingress-NGINX via Flux

Step 1: Create the Ingress-NGINX Namespace

Create a namespace for Ingress-NGINX in your Git repository: ...
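The excerpt cuts off before the manifest itself. Purely as a hedged illustration (the file paths and resource layout here are assumptions, not taken from the post), a Flux-managed namespace for Ingress-NGINX could be defined like this:

```yaml
# apps/ingress-nginx/base/namespace.yaml -- illustrative path, assumed for this sketch
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
# apps/ingress-nginx/base/kustomization.yaml -- assumed; gathers the base resources for Flux
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
```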

August 14, 2025 · 6 min · 1163 words · Dmitry Konovalov

Kubernetes GitOps MetalLB Sample Test Application

This guide explains how to deploy a sample application using Flux GitOps. It demonstrates creating a simple NGINX application and testing it with MetalLB.

Prerequisites

- MetalLB Installed: Ensure MetalLB is installed and configured in your Kubernetes cluster.
- Flux Installed: Ensure Flux is installed and running in your Kubernetes cluster.
- Git Repository: A Git repository structured for Flux GitOps, e.g.:

```
.
├── apps/
│   └── nginx-test/
│       └── base/
├── clusters/
│   └── production/
│       ├── apps/
<...>
```

1. Deploy a Sample Application

Step 1: Create the Application Manifest

File: apps/nginx-test/base/nginx-test.yaml: ...
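The manifest itself is not shown in the excerpt. As a hedged sketch only (the names, labels, and use of a LoadBalancer Service are assumptions based on the stated goal of testing MetalLB), apps/nginx-test/base/nginx-test.yaml might contain something like:

```yaml
# Illustrative sketch of apps/nginx-test/base/nginx-test.yaml -- not the post's actual file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
# A LoadBalancer Service lets MetalLB hand out an external IP to test against.
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: LoadBalancer
  selector:
    app: nginx-test
  ports:
    - port: 80
      targetPort: 80
```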

August 14, 2025 · 2 min · 337 words · Dmitry Konovalov

Kubernetes Lab Sample Application

VM side

On the master node, create two files:

sample-deployment.yaml

```yaml
# file: sample-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
```

sample-service-nodeport.yaml

```yaml
# file: sample-service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-service-nodeport
spec:
  type: NodePort
  selector:
    app: sample-app
  ports:
    - protocol: TCP
      port: 80         # ClusterIP port
      targetPort: 80   # Container port
      nodePort: 30080  # Node port (any free port in the default 30000-32767 range)
```

Deploy application ...
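The deployment commands are cut off in the excerpt; a minimal sketch of applying and checking these two manifests (standard kubectl usage, assumed rather than quoted from the post) would be:

```bash
# Apply both manifests from the master node
kubectl apply -f sample-deployment.yaml
kubectl apply -f sample-service-nodeport.yaml

# Check that the pods are Running and the NodePort service exists
kubectl get pods -l app=sample-app
kubectl get svc sample-service-nodeport

# Any node's IP on port 30080 should now return the NGINX welcome page
curl http://<node-ip>:30080
```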

August 14, 2025 · 2 min · 214 words · Dmitry Konovalov

Llama.cpp + OpenWebUI Setup on Proxmox

This document details the complete setup of a llama.cpp server with an OpenWebUI interface running in a Proxmox container with HTTPS access.

Container Specifications

- Container ID: #106
- Name: llama-ai
- RAM: 8GB
- CPU: 4 cores
- Storage: 32GB (local-lvm)
- MAC Address: BC:24:11:15:F2:3A
- IP Address: 172.16.32.135
- OS: Debian 12 (Bookworm)

Services Overview

llama.cpp Server
- Port: 8080
- Model: Qwen2.5-1.5B-Instruct (Q4_0 quantization)
- Context Window: 16,384 tokens (~12,000-13,000 words)
- Service: llama-cpp.service
- Status: Auto-start enabled

OpenWebUI
- Port: 3000
- Interface: Web-based chat interface
- Service: open-webui.service
- Status: Auto-start enabled
- PyTorch: CPU version installed

NGINX Reverse Proxy
- HTTP Port: 80 (redirects to HTTPS)
- HTTPS Port: 443
- Domain: https://llama-ai.<yourdomain.com>
- SSL: Let’s Encrypt with Cloudflare DNS challenge
- Auto-renewal: Enabled

Installation Steps

1. Create Proxmox Container

```bash
pct create 106 local:vztmpl/debian-12-standard_12.12-1_amd64.tar.zst \
  --hostname llama-ai \
  --memory 8192 \
  --cores 4 \
  --rootfs local-lvm:32 \
  --net0 name=eth0,bridge=vmbr0,hwaddr=BC:24:11:15:F2:3A,ip=dhcp \
  --unprivileged 1 \
  --onboot 1
```

2. Install Dependencies

```bash
apt update && apt upgrade -y
apt install -y build-essential cmake git curl wget python3 python3-pip pkg-config libcurl4-openssl-dev
```

3. Build llama.cpp

```bash
cd /opt
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
mkdir build && cd build
cmake .. && make -j$(nproc)
```

4. Download Models

```bash
mkdir -p /opt/llama.cpp/models
cd /opt/llama.cpp/models

# Qwen2.5 0.5B (fastest, 409MB)
wget -O qwen2.5-0.5b-instruct-q4_0.gguf \
  https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct-GGUF/resolve/main/qwen2.5-0.5b-instruct-q4_0.gguf

# Qwen2.5 1.5B (most capable, 1017MB)
wget -O qwen2.5-1.5b-instruct-q4_0.gguf \
  https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GGUF/resolve/main/qwen2.5-1.5b-instruct-q4_0.gguf

# Llama 3.2 1B (balanced, 738MB)
wget -O llama3.2-1b-instruct-q4_0.gguf \
  https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_0.gguf
```

5. Create llama.cpp Service

```ini
# /etc/systemd/system/llama-cpp.service
[Unit]
Description=Llama.cpp Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/llama.cpp
ExecStart=/opt/llama.cpp/build/bin/llama-server \
  --model /opt/llama.cpp/models/qwen2.5-1.5b-instruct-q4_0.gguf \
  --host 0.0.0.0 \
  --port 8080 \
  --ctx-size 16384
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

6. Install OpenWebUI

```bash
python3 -m venv /opt/openwebui-venv
source /opt/openwebui-venv/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install open-webui
```

7. Create OpenWebUI Service

```ini
# /etc/systemd/system/open-webui.service
[Unit]
Description=OpenWebUI
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/openwebui-venv
Environment=OPENAI_API_BASE_URL=http://127.0.0.1:8080/v1
Environment=OPENAI_API_KEY=sk-dummy
Environment=WEBUI_AUTH=false
ExecStart=/opt/openwebui-venv/bin/open-webui serve --port 3000 --host 0.0.0.0
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```
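Not part of the original post, but once llama-cpp.service is running (step 9 below enables it), a quick sanity check of the backend against llama-server's OpenAI-compatible API can be useful before putting NGINX in front of it; the prompt and "model" label here are arbitrary:

```bash
# Ask the local llama-server for a short completion; the "model" field is informational
# because llama-server serves whichever model it was started with.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-1.5b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
        "max_tokens": 32
      }'
```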
8. Setup HTTPS with Let’s Encrypt

Install NGINX and Certbot

```bash
apt install -y nginx python3-certbot-nginx python3-certbot-dns-cloudflare
```

Configure Cloudflare Credentials

```bash
mkdir -p /etc/letsencrypt/credentials
chmod 700 /etc/letsencrypt/credentials
echo "dns_cloudflare_api_token = YOUR_CLOUDFLARE_TOKEN" > /etc/letsencrypt/credentials/cloudflare.ini
chmod 600 /etc/letsencrypt/credentials/cloudflare.ini
```

Obtain SSL Certificate

```bash
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/credentials/cloudflare.ini \
  -d llama-ai.<yourdomain.com> \
  --non-interactive \
  --agree-tos \
  --email <your-email>
```

Configure NGINX

```nginx
# /etc/nginx/sites-available/llama-ai.<yourdomain.com>
server {
    listen 80;
    server_name llama-ai.<yourdomain.com>;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name llama-ai.<yourdomain.com>;

    ssl_certificate /etc/letsencrypt/live/llama-ai.<yourdomain.com>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/llama-ai.<yourdomain.com>/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

9. Enable Services

```bash
systemctl daemon-reload
systemctl enable --now llama-cpp.service
systemctl enable --now open-webui.service
systemctl enable --now nginx
ln -s /etc/nginx/sites-available/llama-ai.<yourdomain.com> /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```

Switching Models

llama.cpp runs as a systemd service with a single model loaded at start (set via --model in llama-cpp.service). To change models, update the service, reload, and restart (a hedged sketch follows below). ...
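The switching procedure itself is truncated in the excerpt. As an illustrative sketch only of the described approach, one way to point the unit at another downloaded model is to edit the --model path, reload systemd, and restart; the sed pattern and target filename below are assumptions:

```bash
# Point llama-cpp.service at a different downloaded model, e.g. Llama 3.2 1B.
sed -i 's|--model /opt/llama.cpp/models/.*\.gguf|--model /opt/llama.cpp/models/llama3.2-1b-instruct-q4_0.gguf|' \
  /etc/systemd/system/llama-cpp.service

# Confirm the unit now references the new model file.
grep -- '--model' /etc/systemd/system/llama-cpp.service

# Pick up the unit change and restart the server with the new model loaded.
systemctl daemon-reload
systemctl restart llama-cpp.service
```

Editing the unit directly (or via systemctl edit) keeps the single-model-per-service design the post describes; running several models at once would require separate services on separate ports.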

October 16, 2025 · 4 min · 836 words · Dmitry Konovalov