VM side

On the master node, create two files:

  • sample-deployment.yaml
# file: sample-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
  • sample-service-nodeport.yaml
# file: sample-service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-service-nodeport
spec:
  type: NodePort
  selector:
    app: sample-app
  ports:
    - protocol: TCP
      port: 80       # ClusterIP port
      targetPort: 80 # Container port
      nodePort: 30080  # Node port (valid range 30000-32767)
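
Optionally, both manifests can be validated client-side before anything is created on the cluster. This only checks syntax and schema, not cluster state:

```shell
# Validate both manifests without creating any resources
kubectl apply --dry-run=client -f sample-deployment.yaml -f sample-service-nodeport.yaml
```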

Deploy the application:

kubectl apply -f sample-deployment.yaml
# You should see 2 pods (replicas: 2) running:
kubectl get pods -l app=sample-app
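
If the pods are not in the Running state yet, the usual checks look like this (a sketch; the deployment name comes from the manifest above):

```shell
# Wait until the rollout completes (exits non-zero on failure or timeout)
kubectl rollout status deployment/sample-app

# Inspect events if a pod is stuck in Pending or CrashLoopBackOff
kubectl describe pods -l app=sample-app
```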

Then create the NodePort service:

kubectl apply -f sample-service-nodeport.yaml
# You should see the service exposing node port 30080:
kubectl get services sample-service-nodeport
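
Before moving on to the load balancer, it is worth confirming that the NodePort answers directly. A sketch, where `<worker-node-ip>` is a placeholder for the IP of any worker node:

```shell
# From the master node (or anywhere that can reach the workers):
curl -I http://<worker-node-ip>:30080
# An HTTP 200 response with "Server: nginx" confirms the service path works
```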

OCI side

Navigate to Networking → Load Balancers and create a new "Load Balancer" (not a "Network Load Balancer") using your VCN and subnet. On the next step, pick the instances you'd like to balance (the worker nodes). Use an HTTP health check with port 30080 for the probes. On the next step, choose HTTP as the type of traffic your listener handles. After the load balancer is provisioned, make sure you have a security rule allowing incoming traffic on port 80 from your public IP, then open the load balancer's public IP over http:// in your browser. If everything worked, you should see:

(Screenshot: image-20250103-000111.png, the default nginx welcome page)
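
The same check can also be run from the command line once the security rule is in place; `<lb-public-ip>` is a placeholder for the public IP shown on the load balancer's detail page:

```shell
curl -I http://<lb-public-ip>/
# Expect an HTTP 200 from nginx once at least one backend passes its health check
```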