PowerShell to Go Migration: Containerized Web Scraper Project
Complete modernization of a legacy PowerShell scraper into a containerized Go application with Kubernetes deployment, achieving a 90% container size reduction and a 10x performance improvement
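Most of the size reduction comes from the container strategy: a PowerShell image drags in a full OS and runtime, while a Go binary can be built statically and shipped on a minimal base. Below is a hedged sketch of a typical multi-stage build; the image tags and binary name are illustrative, not taken from the actual project.

```bash
# Illustrative multi-stage build: compile in a full Go image, ship only the
# static binary on a distroless base. Names and tags are placeholders.
cat > Dockerfile <<'EOF'
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /scraper .

FROM gcr.io/distroless/static-debian12
COPY --from=build /scraper /scraper
ENTRYPOINT ["/scraper"]
EOF

docker build -t scraper:latest .
docker images scraper          # compare against the old PowerShell-based image
```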
1. System Requirements

Before starting, ensure your hardware meets the requirements for Proxmox. This documentation outlines the minimum and recommended specifications for CPU, memory, storage, and network. See https://pve.proxmox.com/wiki/System_Requirements

2. Prepare the Installation Media

Learn how to create a bootable USB or DVD for Proxmox installation at https://pve.proxmox.com/wiki/Prepare_Installation_Media. This guide covers the tools and steps needed for preparing your installation media (an example command follows this list).

3. Installation

Follow the step-by-step instructions for installing Proxmox. This includes partitioning your drives, configuring the network, and completing the initial setup.
...
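For step 2, a hedged example of writing the installer image to a USB stick on Linux; the ISO filename and /dev/sdX are placeholders for your actual download and device:

```bash
# Identify the USB device first, then write the ISO to it (destructive!).
# Filename and device are placeholders, not from the original guide.
lsblk                                               # find the USB stick, e.g. /dev/sdX
sudo dd if=proxmox-ve_8.4-1.iso of=/dev/sdX \
    bs=1M conv=fdatasync status=progress            # write the installer image
```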
Client machine

Talos nodes have no shell at all, so you need a separate box to run configuration commands. In this case I'm using Ubuntu 22.04 LTS as the console to run commands, configuring Talos 1.9.1.

```bash
# install TalosCTL, KubeCTL, Helm
curl -sL https://talos.dev/install | sh
snap install kubectl --classic
snap install helm --classic
helm repo update
```

Note the controlplane (master) node IP and save it to a variable, along with some other stuff.
...
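The excerpt is truncated above; as a hedged sketch of the likely next steps, saving the controlplane IP and generating machine configs would look something like this (the IP, cluster name, and file names are placeholders):

```bash
# Placeholder values for illustration only.
export CONTROL_PLANE_IP=192.168.1.10

# Generate controlplane.yaml, worker.yaml, and talosconfig for the cluster.
talosctl gen config my-cluster https://${CONTROL_PLANE_IP}:6443

# Push the controlplane config to the (still unconfigured) node.
talosctl apply-config --insecure \
    --nodes ${CONTROL_PLANE_IP} \
    --file controlplane.yaml
```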
After migrating the wiki from on-prem to Atlassian Cloud, all the old links broke. Here is a workaround.

1) Overview

A Cloudflare Worker that:
- Extracts pageId from …/pages/viewpage.action?pageId=…
- Decodes tiny links /x/<code> → pageId
- Optionally parses /display/<SPACEKEY>/<TITLE>
- Looks up SPACEKEY + Title in KV (PAGES) and 301-redirects to Atlassian Cloud search: https://<new-wiki>.atlassian.net/wiki/search?text=<SPACEKEY> <Title>
- Enforces an ASN allowlist (e.g., AS12345) on the production host to prevent title enumeration
- Uses Workers KV with one record per Confluence page: key = pid:<CONTENTID> → value = {"s":"<SPACEKEY>","t":"<Title>"}
- Scopes routes only to legacy Confluence paths

2) Prerequisites

- Cloudflare <your domain> zone access; DNS record for <old-wiki.yourdomain.com> is Proxied (orange cloud)
- Windows with PowerShell 5.1+ or 7+
- CSV export with columns: CONTENTID,SPACEKEY,TITLE
- Cloudflare API token with Workers KV Storage: Read & Edit

3) Export mapping from MySQL → CSV

```sql
-- Use the folder returned by SHOW VARIABLES LIKE 'secure_file_priv';
SELECT 'CONTENTID','SPACEKEY','TITLE'
UNION ALL
SELECT c.CONTENTID, s.SPACEKEY, c.TITLE
FROM CONTENT c
JOIN SPACES s ON s.SPACEID = c.SPACEID
WHERE c.CONTENTTYPE = 'PAGE'
  AND c.PREVVER IS NULL
INTO OUTFILE '/var/lib/mysql-files/confluence_pages.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\n';
```

4) Cloudflare setup

4.1 Create an API token (UI)

Dashboard → My Profile → API Tokens → Create Token → Custom. Permissions: Workers KV Storage: Edit and Read. Copy the token to $tok.
...
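For reference, a single KV record matching the key scheme above can also be written directly through the Cloudflare API; a hedged sketch, where the account ID, namespace ID, and the example page values are placeholders:

```bash
# Write one KV record: key pid:<CONTENTID> -> {"s":"<SPACEKEY>","t":"<Title>"}.
# ACCOUNT_ID, NAMESPACE_ID, and the sample key/values are placeholders; tok is
# the API token created in step 4.1.
curl -X PUT \
  "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/storage/kv/namespaces/${NAMESPACE_ID}/values/pid:12345" \
  -H "Authorization: Bearer ${tok}" \
  --data '{"s":"DEV","t":"Example Page"}'
```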
Overview

Created a Syncthing pod in a Kubernetes cluster managed by FluxCD, with dual NFS mounts, an SSL certificate via cert-manager, and consolidated LoadBalancer services.

Architecture

- Namespace: syncthing
- Deployment: Single replica with Recreate strategy
- Storage: Two NFS persistent volumes
- SSL: Automatic Let’s Encrypt certificate via cert-manager
- Load Balancing: Combined TCP/UDP service on a single external IP

Storage Configuration

NFS Mounts

```
# Data mount (Dropbox sync)
xxx.xxx.xxx.xxx:/mnt/media/dropbox → /var/syncthing/dropbox

# Config mount (Syncthing configuration)
xxx.xxx.xxx.xxx:/mnt/media/home/nfs/syncthing → /var/syncthing/config
```

Persistent Volumes

- syncthing-dropbox-pv: 1Ti capacity for sync data
- syncthing-config-pv: 1Gi capacity for configuration

Both use the NFS storage class with ReadWriteMany access mode.
...
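A minimal sketch of how one of the two PersistentVolumes could be declared (the masked NFS server IP is kept as-is; in the real setup this would live in a FluxCD-managed manifest rather than being applied by hand):

```bash
# Sketch of the config PV described above; the server IP stays masked.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: syncthing-config-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /mnt/media/home/nfs/syncthing
EOF
```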
This guide covers setting up Ollama (Open Large Language Model) in a Proxmox LXC container with GPU passthrough and creating a simple web interface for easy interaction.

Overview

We’ll deploy Ollama in a resource-constrained LXC environment with:
- Intel UHD Graphics GPU acceleration
- llama3.2:1b model (~1.3GB)
- Lightweight Python web interface
- Auto-starting services

Prerequisites

- Proxmox VE host
- Intel integrated graphics (UHD Graphics)
- At least 4GB RAM allocated to the LXC
- 40GB+ storage for the container

Step 1: Container Setup

Check Available GPU Resources

First, verify GPU availability on the Proxmox host:
...
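The commands themselves are truncated in this excerpt; a hedged guess at the usual first check on the Proxmox host:

```bash
# Confirm the Intel iGPU and its render nodes exist on the host.
lspci | grep -i vga        # should list the Intel UHD Graphics device
ls -l /dev/dri             # card0 / renderD128 are what get passed to the LXC
```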
Problem

Proxmox shows this warning message:

WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.

This warning indicates that LVM thin pools are not configured to automatically extend when they approach capacity, which could lead to VMs running out of disk space unexpectedly.

Solution

Configure LVM to automatically extend thin pools before they become full.
...
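The concrete values are cut off in this excerpt; as a hedged example, the usual fix is to edit /etc/lvm/lvm.conf (80/20 are commonly used values, not necessarily the ones from the original post):

```bash
# In /etc/lvm/lvm.conf, inside the activation { } section, set:
#     thin_pool_autoextend_threshold = 80   # extend when a pool hits 80% full
#     thin_pool_autoextend_percent   = 20   # grow it by 20% each time
# Then verify what LVM actually sees:
lvmconfig activation/thin_pool_autoextend_threshold
lvmconfig activation/thin_pool_autoextend_percent
```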
Overview

This guide covers the complete setup of a Plex Media Server running in an LXC container on Proxmox VE, including NFS storage integration and Intel GPU passthrough for hardware transcoding.

Environment Details

- Host: Proxmox VE 8.4.8 (kernel 6.8.12-13-pve)
- Hardware: Intel N97 processor with integrated UHD Graphics
- Storage: NFS shares from NAS (nas.my.domain.com)
- Container: Ubuntu 22.04 LTS template

Prerequisites

1. Enable IOMMU on Proxmox Host

Ensure IOMMU is enabled in the kernel command line:
...
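The exact flags are truncated here; for an Intel host, a common (hedged) GRUB setup looks like this:

```bash
# In /etc/default/grub (illustrative values for an Intel host):
#     GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub                 # regenerate the GRUB config
reboot                      # required for the new kernel parameters
cat /proc/cmdline           # after reboot, confirm the flags are active
```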
Overview

This comprehensive guide covers GPU passthrough setup in Proxmox, the network interface issues caused by switching to the Q35 machine type, and the complete deployment of Plex Media Server with Intel QSV hardware transcoding on a Talos Kubernetes cluster.

Part 1: GPU Passthrough Setup

Problem

Need to grant a Proxmox VM direct access to a GPU for hardware acceleration or AI workloads.

Solution Steps

1. Enable IOMMU in host BIOS/UEFI
   - Intel: Enable VT-d
   - AMD: Enable AMD-Vi
2. Configure host kernel parameters
...
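Once IOMMU is active, attaching the GPU to the VM typically comes down to finding its PCI address and adding a hostpci entry. A hedged sketch with placeholder IDs (VM 100, and the common 00:02.0 address of an Intel iGPU):

```bash
# Find the GPU's PCI address (Intel iGPUs usually sit at 00:02.0).
lspci -nn | grep -i vga

# Switch the VM to Q35 and pass the device through as PCIe.
# VM ID and PCI address are placeholders.
qm set 100 --machine q35 --hostpci0 0000:00:02.0,pcie=1
```

Note that the switch to Q35 is what triggers the network interface renaming issues this guide goes on to address.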
In this initiative I led the deployment of Azure Arc to manage a hybrid estate of Microsoft SQL Server instances spanning hundreds of on-premises and cloud-hosted servers. By unifying SQL infrastructure visibility and automating policy enforcement, the project delivered a 20% reduction in licensing costs and significantly streamlined asset compliance tracking. The work combined deep Azure administration expertise with enterprise IT automation practices and leveraged AI-generated deployment scaffolds to accelerate implementation while maintaining production-grade stability.

Lessons Learned

Permissions

The required permissions must be set up thoroughly; they are documented at https://learn.microsoft.com/azure/azure-arc/servers/prerequisites
...
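For context, onboarding a single server with the Connected Machine agent looks roughly like this; a hedged sketch where the resource group, tenant, subscription, and region are placeholders, and the permissions from the link above must already be in place:

```bash
# Connect one machine to Azure Arc; all IDs below are placeholders.
azcmagent connect \
    --resource-group "rg-arc-servers" \
    --tenant-id "<tenant-id>" \
    --subscription-id "<subscription-id>" \
    --location "eastus"
```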