As a DevOps engineer, managing Kubernetes clusters effectively means keeping cloud resource usage in check. One area that deserves particular attention is the accumulation of unused Kubernetes Persistent Volumes (PVs). This commonly happens when undeploying StatefulSets, testing new applications, or working in non-production clusters.
Cleaning up these unused PVs by hand is tedious and error-prone: you have to account for each Persistent Volume's reclaim policy (such as Retain), and with many PVs it can take a significant amount of time. It is also monotonous, repetitive work.
To simplify and speed up the process, our CLI tool, acloud-toolkit, provides a convenient storage prune subcommand. This functionality allows you to quickly remove any unused persistent volumes. By using acloud-toolkit, you can automate the cleanup, ensuring efficient use of cloud resources and reducing the unnecessary waste associated with unused PVs.
Consider a real-world scenario: we have a Lab cluster dedicated to one of our team members, who has been actively working on Helm charts and deployments for Dependency Track. Over time, we noticed that the number of persistent volumes in the cluster was steadily increasing. While the storage they occupy may not be significant, it is wasteful to retain unused resources unnecessarily.
Here are the persistent volumes within the cluster:
❯ kubectl get persistentvolume
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                  STORAGECLASS      REASON   AGE
pg-backup-pv                               5Gi        RWO            Retain           Available                                                          gp2                        61d
postgresdb-persistent-volume               8Gi        RWX            Retain           Released    dependency-track/db-persistent-volume-claim           manual                     20h
pvc-084c6751-ef42-4f7b-8f94-e30dc7d5497c   8Gi        RWO            Retain           Bound       dependency-track/dependency-track-apiserver           gp2                        19h
pvc-085c94b0-939a-4150-b79f-c5857ba72f06   10Gi       RWO            Retain           Released    minio/minio                                            gp2                        62d
pvc-0f19c704-ba50-4332-8ebf-eff54a7f56f4   8Gi        RWO            Retain           Bound       default/data-test-postgresql-0                         gp2                        216d
pvc-1784aa38-1137-4986-a76a-33cfd1048729   8Gi        RWO            Retain           Bound       dependency-track/data-dependency-track-postgresql-0   gp2                        20h
pvc-2529c053-181b-4264-91e8-f50b179a6610   1Gi        RWO            Retain           Bound       minio/cluster-example-with-backup-2                    standard                   33d
pvc-36a7efd4-e8c0-4d79-9c09-c81677748b1d   8Gi        RWO            Retain           Released    dependency-track/dependency-track-apiserver           gp2                        212d
pvc-45bc639d-f6b8-4921-b7aa-87d6f1a8bb21   1Gi        RWO            Retain           Released    minio/cluster-example-with-backup-1                    my-storageclass            61d
pvc-473997f2-90d9-45e4-a289-bf9c987e3301   1Gi        RWO            Retain           Bound       minio/cluster-example-with-backup-1                    standard                   33d
pvc-533652d0-42a6-42f1-aab6-262fc8d1106c   1Gi        RWO            Retain           Released    minio/cluster-example-with-backup-3                    standard                   61d
pvc-5a407924-5268-40cf-9893-4f1b6e7b8449   1Gi        RWO            Retain           Bound       minio/cluster-example-with-backup-3                    standard                   33d
pvc-6456dc25-41b9-4f9b-888b-6eabe082bcc5   8Gi        RWO            Retain           Released    dependency-track/dependency-track-apiserver           gp2                        210d
pvc-6a2b5a29-a44d-4ca7-b89e-38c79ed0f38f   1Gi        RWO            Retain           Released    minio/cluster-example-with-backup-1                    my-storageclass            61d
pvc-8259c5df-ee09-4b2e-adba-59d9fdd48c93   8Gi        RWO            Retain           Released    dependency-track/dependency-track-apiserver           gp2                        215d
pvc-9182dd06-251b-4fee-b9e9-47225dae935a   8Gi        RWO            Retain           Released    dependency-track/dependency-track-apiserver           gp2                        20h
pvc-abed312e-f913-492f-a1d8-806e77e99914   8Gi        RWO            Retain           Released    dependency-track/dependency-track-apiserver           gp2                        20h
pvc-b04a4ccd-4295-4582-8436-111c52d9b0c6   1Gi        RWO            Retain           Released    minio/cluster-example-with-backup-1                    standard                   61d
pvc-bb8bc5ab-0499-40b3-a9cb-ac9ea649df52   10Gi       RWO            Retain           Bound       dependency-track/pg-data-dependency-track-db-0        gp2                        18h
pvc-bcfc6bce-d100-47f3-8637-e0116e651982   8Gi        RWO            Retain           Released    dependency-track/dependency-track-apiserver           gp2                        211d
pvc-cf0dd4f9-e8a6-4410-9763-fe7a62f5b1ac   8Gi        RWO            Retain           Released    dependency-track/data-dependency-track-postgresql-0   gp2                        215d
pvc-ef3b95f9-8731-410f-b7fc-0823339e7f80   1Gi        RWO            Retain           Released    minio/cluster-example-with-backup-2                    standard                   61d
pvc-f532bd09-87bc-4e71-bf9b-3cfc19524664   8Gi        RWO            Retain           Bound       default/data-dependency-track-postgresql-0            gp2                        219d
pvc-f8656c7f-0a71-4612-a4ac-29eac4fc48f9   8Gi        RWO            Retain           Bound       default/data-postgresql-0                              gp2                        219d
As you can see, a significant number of persistent volumes have a STATUS of Released. This indicates that the corresponding persistent volume claims have been deleted, while the actual volumes still exist in the cloud. Adding to the complexity, the RECLAIM POLICY is set to "Retain". Therefore, if you were to simply delete these persistent volumes using the kubectl delete command, the backing disks would remain orphaned in the cloud infrastructure. Consequently, you would still incur costs for these volumes, despite them no longer being referenced or used within your cluster.
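As a point of reference, changing the reclaim policy of a single volume by hand uses a standard kubectl patch; here it is applied to one of the Released volumes from the listing above:

❯ kubectl patch pv pvc-085c94b0-939a-4150-b79f-c5857ba72f06 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

Doing this for every Released volume, one by one, is exactly the chore the tool takes off your hands.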
Once installed, we can use the storage prune subcommand to gather an overview of every volume that is unused and can be removed. By default, the command runs in dry-run mode, to avoid accidentally removing persistent volumes that you did not intend to delete.
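Assuming acloud-toolkit is on your PATH and your kubeconfig points at the target cluster, a dry run is just the bare subcommand (the exact report format may vary between versions):

❯ acloud-toolkit storage prune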
This lets us double-check the volumes it is going to delete. We also see the total amount of storage that will be freed by performing this action; in this case, 79Gi.
When we have confirmed we are happy with the volumes it will delete, we can run the same command with --dry-run=false.
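Concretely, that is:

❯ acloud-toolkit storage prune --dry-run=false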
The executed command accomplishes the following actions:
Modifies the reclaim policy of the persistent volumes from "Retain" to "Delete" specifically for volumes marked as Released. It's important to note that this command does not affect any persistent volumes with Available or Bound statuses.
Removes the persistent volumes from the Kubernetes cluster. The associated CSI (Container Storage Interface) provider, such as AWS EBS, Ceph, or others, then handles the deletion of the volumes within the underlying storage backend. A manual equivalent of both steps is sketched below.
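This is roughly what you would otherwise script by hand with plain kubectl (a minimal sketch, assuming cluster access; the tool adds the dry-run safety net and the storage totals on top):

# Sketch: what `storage prune` automates, in plain kubectl.
# Select every PV in the Released phase, flip its reclaim policy to
# Delete, then remove the PV object; the CSI driver deletes the
# backing volume in the storage backend.
for pv in $(kubectl get pv -o jsonpath='{.items[?(@.status.phase=="Released")].metadata.name}'); do
  kubectl patch pv "$pv" -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
  kubectl delete pv "$pv"
done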
Following these steps allows for efficient cleanup of unused persistent volumes across your Kubernetes clusters, helping you manage resources effectively and optimize storage utilization within your environment.