Talos Kubernetes Initial Configuration

Client machine: Talos nodes have no shell at all, so you will need a separate box to run configuration commands from. In this case I’m using an Ubuntu 22.04 LTS console to run the commands, configuring Talos 1.9.1.

# Install talosctl, kubectl, and Helm
curl -sL https://talos.dev/install | sh
snap install kubectl --classic
snap install helm --classic
helm repo update

Note the control-plane (master) node IP and save it to a variable, along with some other things ...
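The teaser cuts off before the configuration itself; as a rough sketch of the steps that typically follow (the control-plane IP, cluster name, and file locations below are placeholders, not values from the post):

```sh
# Placeholder control-plane IP -- substitute your own
export CP_IP=192.168.1.100

# Generate cluster secrets and machine configs (writes controlplane.yaml,
# worker.yaml and talosconfig to the current directory)
talosctl gen config my-cluster https://$CP_IP:6443

# Apply the control-plane config to the node (still in maintenance mode, hence --insecure)
talosctl apply-config --insecure --nodes $CP_IP --file controlplane.yaml

# Bootstrap etcd and fetch a kubeconfig once the node comes up
talosctl --talosconfig talosconfig --endpoints $CP_IP --nodes $CP_IP bootstrap
talosctl --talosconfig talosconfig --endpoints $CP_IP --nodes $CP_IP kubeconfig .
```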

August 14, 2025 · 3 min · 443 words · Dmitry Konovalov

Talos Kubernetes Nodes Deployment

For simplicity, this guide uses a flat cluster configuration where each node acts as both a master (control-plane) and a worker.

Proxmox Hardware Configuration

When setting up Proxmox for Talos Kubernetes nodes, follow these instructions to configure the virtual machine:

Memory: allocate at least 3 GB of RAM per node to ensure smooth operation of the Kubernetes components.
Processors: set the CPU configuration to 2 cores (1 socket, 2 cores). ...
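As a sketch only, the same VM settings could be scripted with the Proxmox qm CLI; the VM ID, name, bridge, storage pool, disk size, and ISO path below are assumptions, not values from the guide:

```sh
# Hypothetical qm invocation matching the specs above (3 GB RAM, 1 socket / 2 cores)
qm create 200 \
  --name talos-n1 \
  --memory 3072 \
  --sockets 1 --cores 2 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --cdrom local:iso/talos-metal-amd64.iso \
  --boot order='scsi0;ide2'
# --cpu host is an assumption: the default kvm64 type may lack CPU features Talos expects
```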

August 14, 2025 · 1 min · 197 words · Dmitry Konovalov

Talos Kubernetes Scaling Out

Adding more nodes to an existing Talos Kubernetes cluster is straightforward. This guide includes optional automation steps that streamline the process, such as editing node configuration files, but these can also be performed manually.

Assumptions

Before proceeding, ensure the following conditions are met:

- You have deployed a single-node Talos-based Kubernetes cluster following the instructions in this guide.
- The additional node will also have the control-plane role. (If this is not the case, additional node configuration editing will be required.)
- Talos is already installed on the new node, and it is currently in “Maintenance” mode with an IP address assigned.
- (Optional) You are using GitOps for storing the Kubernetes configuration.

Steps to Add a Node

1. (Optional) Install yq

yq is a YAML processor required for modifying the node configuration. Install the correct version using the commands below; do not use apt install yq: ...
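The excerpt ends before the install commands themselves; a minimal sketch, assuming the intent is the Go-based mikefarah/yq from GitHub releases rather than the older Python wrapper shipped by apt (the pinned version is a placeholder):

```sh
# Download a static yq binary from GitHub releases (version is a placeholder)
YQ_VERSION=v4.44.3
sudo wget "https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64" \
  -O /usr/local/bin/yq
sudo chmod +x /usr/local/bin/yq
yq --version
```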

August 14, 2025 · 3 min · 531 words · Dmitry Konovalov

Talos Kubernetes Upgrading

Notes

To use an image with QEMU support, use the tag ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515, for example: factory.talos.dev/installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.9.3

See the Talos Upgrading Guide for more details. If you are using a Talos image for booting, don’t forget to update it as well.

Upgrade Process

Use the talosctl upgrade command to upgrade the nodes:

talosctl upgrade --nodes $N2_IP \
  --image factory.talos.dev/installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.9.3

Expected Output

You’ll see live remote diagnostics on your screen during the upgrade process:

# watching nodes: [xxx.xxx.xxx.182]
# * xxx.xxx.xxx.182: waiting for actor ID
# watching nodes: [xxx.xxx.xxx.182]
# * xxx.xxx.xxx.182: task: stopAllPods action: START
# watching nodes: [xxx.xxx.xxx.182]
# * xxx.xxx.xxx.182: unavailable, retrying...
# watching nodes: [xxx.xxx.xxx.182]
# * xxx.xxx.xxx.182: stage: RUNNING ready: false unmetCond: [name:"nodeReady" reason:"node \"n1\" status is not available yet"]
# watching nodes: [xxx.xxx.xxx.182]
# * xxx.xxx.xxx.182: post check passed

Post-upgrade

Update talosctl and kubectl ...
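The post-upgrade step is truncated in the excerpt; assuming the client tools were installed with the same curl/snap method as in the initial configuration post, refreshing them might look like this:

```sh
# Re-run the installer to pick up the matching talosctl release
curl -sL https://talos.dev/install | sh

# kubectl (and helm) were installed as snaps, so refresh those too
sudo snap refresh kubectl
sudo snap refresh helm

# Verify the versions against the upgraded node
talosctl version --nodes $N2_IP
kubectl version
```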

August 14, 2025 · 1 min · 145 words · Dmitry Konovalov

Syncthing Kubernetes Pod with FluxCD and NFS Mounts

Overview

Created a Syncthing pod in a Kubernetes cluster managed by FluxCD, with dual NFS mounts, an SSL certificate via cert-manager, and consolidated LoadBalancer services.

Architecture

- Namespace: syncthing
- Deployment: single replica with the Recreate strategy
- Storage: two NFS persistent volumes
- SSL: automatic Let’s Encrypt certificate via cert-manager
- Load balancing: combined TCP/UDP service on a single external IP

Storage Configuration

NFS Mounts

# Data mount (Dropbox sync)
xxx.xxx.xxx.xxx:/mnt/media/dropbox → /var/syncthing/dropbox
# Config mount (Syncthing configuration)
xxx.xxx.xxx.xxx:/mnt/media/home/nfs/syncthing → /var/syncthing/config

Persistent Volumes

- syncthing-dropbox-pv: 1Ti capacity for sync data
- syncthing-config-pv: 1Gi capacity for configuration

Both use the NFS storage class with ReadWriteMany access mode. ...
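A minimal sketch of how the config volume could be declared (the NFS server address, storage class name, and PVC name are assumptions; with FluxCD the manifest would normally live in the Git repository rather than be applied by hand):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: syncthing-config-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs            # assumed storage class name
  nfs:
    server: 192.168.1.10           # assumed NFS server address
    path: /mnt/media/home/nfs/syncthing
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: syncthing-config-pvc      # assumed claim name
  namespace: syncthing
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
  volumeName: syncthing-config-pv
EOF
```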

August 11, 2025 · 5 min · 868 words · Dmitry Konovalov

Proxmox GPU Passthrough, Q35 Machine Type Network Issues, and Plex Deployment

Overview

This comprehensive guide covers GPU passthrough setup in Proxmox, the network interface issues caused by switching to the Q35 machine type, and the complete deployment of Plex Media Server with Intel QSV hardware transcoding on a Talos Kubernetes cluster.

Part 1: GPU Passthrough Setup

Problem

Need to grant a Proxmox VM direct access to a GPU for hardware acceleration or AI workloads.

Solution Steps

1. Enable IOMMU in the host BIOS/UEFI
   - Intel: enable VT-d
   - AMD: enable AMD-Vi
2. Configure host kernel parameters ...
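As an illustration of the kernel-parameter step only, a sketch for an Intel host that boots via GRUB (AMD hosts would use amd_iommu instead, systemd-boot installs edit /etc/kernel/cmdline, and the exact flags used in the guide may differ):

```sh
# Append IOMMU flags to the kernel command line (assumes the stock
# GRUB_CMDLINE_LINUX_DEFAULT="quiet" line on a Proxmox host)
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' /etc/default/grub
update-grub

# Load the VFIO modules required for passthrough on the next boot
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

update-initramfs -u -k all
reboot
```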

January 8, 2025 · 5 min · 943 words · Dmitry Konovalov