1. Drain the Node

Draining a node cordons it (marks it unschedulable) and safely evicts its workloads before you clean up its configuration:

kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

Notes

  • --ignore-daemonsets: Proceeds even if DaemonSet-managed pods are present. These pods are not evicted, because the DaemonSet controller would immediately recreate them.
  • --delete-emptydir-data: Proceeds even if pods use emptyDir volumes. Their data is deleted when the pods are evicted.

⚠️ Warning: Adding --force also evicts pods that are not managed by a controller, deleting them permanently. Use it with caution, especially on a control plane node.
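
To confirm the drain worked, check that the node is marked unschedulable and that only DaemonSet pods remain on it (here <node-name> is a placeholder, as above):

kubectl get node <node-name>
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>

The first command should show SchedulingDisabled in the STATUS column; the second should list only DaemonSet-managed pods.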


2. Remove the Node from the Cluster

To remove the node from the cluster, run:

kubectl delete node <node-name>

This deletes the node object from the Kubernetes API; the machine itself, its kubelet, and its local state are untouched (those are handled in the next steps).
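
To verify the deletion, you can query the node again; kubectl should report that it no longer exists, with an error along the lines of Error from server (NotFound):

kubectl get node <node-name>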


3. Reset the Node Configuration (on the Node Itself)

Log in to the node (via SSH or terminal) and reset the Kubernetes configuration:

sudo kubeadm reset

This command removes:

  • Cluster configuration (/etc/kubernetes directory)
  • Certificates, kubelet configuration, and the cluster state

It does not uninstall the kubelet or the container runtime. It also leaves behind CNI configuration (/etc/cni/net.d), iptables/IPVS rules, and kubeconfig files such as ~/.kube/config; clean those up manually (see the next step).
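
kubeadm reset asks for confirmation before proceeding. If you are scripting the cleanup, the --force flag skips the prompt:

sudo kubeadm reset --force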


4. Remove Kubernetes Directories

Remove residual Kubernetes files and directories (/var/lib/dockershim exists only on older nodes that used the Docker shim, which was removed in Kubernetes 1.24):

sudo rm -rf /etc/kubernetes /var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes
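
As noted above, kubeadm reset leaves CNI configuration and packet-filtering rules in place. The commands below reflect the manual cleanup that kubeadm's own reset output suggests; the ipvsadm line applies only if kube-proxy ran in IPVS mode:

sudo rm -rf /etc/cni/net.d    # CNI configuration written by the network plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo ipvsadm --clear          # only if IPVS was in use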

Verification

After cleaning up, verify that the node no longer appears in the cluster:

kubectl get nodes

The removed node should no longer be listed. If you plan to rejoin the node later, confirm that it reappears in this output and reaches Ready status after joining.
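
If you do rejoin the node, you can generate a fresh join command on a control plane node and run its output on the cleaned node:

kubeadm token create --print-join-command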