Cobra Forum

Plesk Panel => Web Application => Topic started by: Suhitha on Sep 10, 2025, 12:23 AM

Title: How to Delete Node Pools from a Vultr Kubernetes Engine Cluster
Post by: Suhitha on Sep 10, 2025, 12:23 AM


Managing resources efficiently in a Vultr Kubernetes Engine (VKE) cluster includes the ability to remove individual worker nodes or delete entire Node Pools when scaling down infrastructure. Removing a single instance from a Node Pool ensures better cost management and workload reallocation, while deleting an entire Node Pool eliminates unnecessary resources and optimizes cluster performance.

Follow this guide to delete a Node or an entire Node Pool from a Vultr Kubernetes Engine cluster on your Vultr account using the Vultr Customer Portal, API, CLI, or Terraform.

Vultr Customer Portal

1. Navigate to Products and click Kubernetes.

2. Click your target VKE cluster to open its management page.

3. Click Nodes.

4. Locate your target Node Pool and click the plus icon to expand it and view the attached instances.

5. Click the delete icon next to the target Node to remove it from the Node Pool.

6. To delete a Node Pool, click the delete icon at the Node Pool level.

7. Check the Yes, destroy this node pool box in the confirmation prompt, and click Destroy Node Pool to permanently delete the target Node Pool.


Vultr API

1. Send a GET request to the List all Kubernetes Clusters endpoint and note the target VKE cluster's ID.

[color=blue]console[/color]


$ curl "https://api.vultr.com/v2/kubernetes/clusters" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"

2. Send a GET request to the List NodePools endpoint to view all Node Pools and note the target Node Pool's ID.

[color=blue]console[/color]


$ curl "https://api.vultr.com/v2/kubernetes/clusters/{cluster-id}/node-pools" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"
3. Send a GET request to the Get NodePool endpoint and note the target Node's ID.

[color=blue]console[/color]


$ curl "https://api.vultr.com/v2/kubernetes/clusters/{cluster-id}/node-pools/{nodepool-id}" \
    -X GET \
    -H "Authorization: Bearer ${VULTR_API_KEY}"

4. Send a DELETE request to the Delete NodePool Instance endpoint to delete the target Node from the Node Pool.

[color=blue]console[/color]

$ curl "https://api.vultr.com/v2/kubernetes/clusters/{cluster-id}/node-pools/{nodepool-id}/nodes/{node-id}" \
    -X DELETE \
    -H "Authorization: Bearer ${VULTR_API_KEY}"

5. Send a DELETE request to the Delete NodePool endpoint to delete the target Node Pool.

[color=blue]console[/color]


$ curl "https://api.vultr.com/v2/kubernetes/clusters/{cluster-id}/node-pools/{nodepool-id}" \
    -X DELETE \
    -H "Authorization: Bearer ${VULTR_API_KEY}"
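The API steps above can be sketched as a small shell helper that builds the endpoint URLs in one place, so the noted IDs are substituted only once. The helper names (`nodepool_url`, `node_url`) are illustrative and not part of the Vultr API:

```shell
#!/bin/sh
# Illustrative helpers (assumed names, not part of the Vultr API): build the
# node-pool endpoint URLs used in the steps above from the noted IDs.

API_BASE="https://api.vultr.com/v2/kubernetes/clusters"

# URL of a node pool: GET lists its details and nodes, DELETE destroys the pool.
nodepool_url() {
    printf '%s/%s/node-pools/%s' "$API_BASE" "$1" "$2"
}

# URL of a single node inside a pool: DELETE removes that one instance.
node_url() {
    printf '%s/nodes/%s' "$(nodepool_url "$1" "$2")" "$3"
}

# Usage (requires VULTR_API_KEY in the environment):
# curl "$(node_url "$CLUSTER_ID" "$NODEPOOL_ID" "$NODE_ID")" \
#     -X DELETE -H "Authorization: Bearer ${VULTR_API_KEY}"
```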


Vultr CLI

1. List the available VKE clusters in your Vultr account and note the target VKE cluster's ID.

[color=blue]console[/color]

$ vultr-cli kubernetes list --summarize

2. List all available Node Pools in the VKE cluster and note the target Node Pool's ID.

[color=blue]console[/color]


$ vultr-cli kubernetes node-pool list <cluster-id> --summarize

3. List the attached instances of the target Node Pool and note the target Node's ID.

[color=blue]console[/color]

$ vultr-cli kubernetes node-pool get <cluster-id> <nodepool-id>

4. Delete the target Node from the Node Pool.

[color=blue]console[/color]


$ vultr-cli kubernetes node-pool node delete <cluster-id> <nodepool-id> <node-id>

5. Delete the target Node Pool from the VKE cluster.

[color=blue]console[/color]

$ vultr-cli kubernetes node-pool delete <cluster-id> <nodepool-id>
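Because both CLI delete commands are irreversible, it can help to wrap them in a small script that refuses to run with a missing ID, so an empty shell variable cannot delete the wrong target. The function names below are illustrative and not part of vultr-cli:

```shell
#!/bin/sh
# Illustrative guard script (assumed function names, not part of vultr-cli):
# validate the IDs before passing them to the destructive delete commands.

require_id() {
    # $1 = argument name for the error message, $2 = value to check.
    if [ -z "$2" ]; then
        echo "error: missing $1" >&2
        return 1
    fi
}

delete_node() {
    require_id "cluster-id" "$1" && require_id "nodepool-id" "$2" \
        && require_id "node-id" "$3" || return 1
    vultr-cli kubernetes node-pool node delete "$1" "$2" "$3"
}

delete_pool() {
    require_id "cluster-id" "$1" && require_id "nodepool-id" "$2" || return 1
    vultr-cli kubernetes node-pool delete "$1" "$2"
}

# Usage:
# delete_node "$CLUSTER_ID" "$NODEPOOL_ID" "$NODE_ID"
# delete_pool "$CLUSTER_ID" "$NODEPOOL_ID"
```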


Terraform

1. Open your Terraform configuration for the existing VKE cluster.

2. Remove the node pool you want to delete from the configuration, or reduce node_quantity to remove nodes. Note that in the Vultr Terraform provider, the node_pools block inside the vultr_kubernetes resource defines the cluster's default pool, while additional pools are managed as separate vultr_kubernetes_node_pools resources.

[color=blue]terraform[/color]


resource "vultr_kubernetes" "vke" {
    # ...existing fields (label, region, version)

    # Default node pool: reduce node_quantity here to remove nodes from it.
    node_pools {
        node_quantity = 3
        label         = "pool-a"
        plan          = "vc2-2c-4gb"
    }
}

# The separate vultr_kubernetes_node_pools resource for "pool-b" was removed
# from the configuration, so Terraform destroys that pool on the next apply.

3. Apply the configuration, confirm the plan, and observe a summary such as:

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The counts depend on the edit: reducing node_quantity modifies the cluster in place, while removing a pool's own resource block is reported as destroyed.
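If the pool you are removing is not the cluster's default pool, it is typically declared as its own vultr_kubernetes_node_pools resource. A sketch of such a block, reusing the pool-b label from the example above; deleting this entire block and applying destroys the pool:

```terraform
# Additional pool managed as a separate resource in the Vultr provider.
# Removing this whole block from the configuration and running
# `terraform apply` destroys the pool, reported as "1 destroyed".
resource "vultr_kubernetes_node_pools" "pool_b" {
    cluster_id    = vultr_kubernetes.vke.id
    node_quantity = 2
    label         = "pool-b"
    plan          = "vc2-2c-4gb"
}
```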