Fix PVC binding to wrong PV.

kubernetes persistent-storage

How to fix a Persistent Volume Claim that is bound to the wrong Persistent Volume

When an application ends up running with a mounted PVC that points to the wrong PV, follow these steps.

Situation

PVC

$ kubectl get pvc | grep shared
jira-shared                         Bound    pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4    290Gi      RWX            nfs            23m

PVs

$ kubectl get pv | grep shared
pvc-09636c84-ad97-4921-9af5-8997bc96323b   290Gi      RWX            Retain           Released      jira/jira-shared                                           nfs                     154d
pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4   290Gi      RWX            Retain           Bound         jira/jira-shared                                           nfs                     10h

The PVC is bound to PV pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4 but needs to be bound to pvc-09636c84-ad97-4921-9af5-8997bc96323b.
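To double-check which PV the claim currently points to, you can read the volumeName directly from the PVC spec; in this situation it returns pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4:

$ kubectl get pvc jira-shared -o jsonpath='{.spec.volumeName}'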

Create a new PVC YAML file

Create a new YAML file for the PVC, with a reference to the correct PV in the volumeName field (the last line of the example below).

You can use the existing PVC as a reference; in this case: kubectl get pvc jira-shared -o yaml
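A quick way to bootstrap the file is to dump the existing claim to disk (new_pvc.yaml is just an example name). Strip server-managed fields such as status, metadata.uid, metadata.resourceVersion, metadata.creationTimestamp, and the pv.kubernetes.io/* annotations before reusing it:

$ kubectl get pvc jira-shared -o yaml > new_pvc.yaml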

For example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    meta.helm.sh/release-name: jira
    meta.helm.sh/release-namespace: jira
  labels:
    app.kubernetes.io/managed-by: Helm
  name: jira-shared
  namespace: jira
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 290Gi
  storageClassName: nfs
  volumeMode: Filesystem
  volumeName: pvc-09636c84-ad97-4921-9af5-8997bc96323b

Remove the claimRef block from the correct PV

$ kubectl edit pv pvc-09636c84-ad97-4921-9af5-8997bc96323b

The claimRef block in the PV looks something like the example below.

Remove this whole claimRef block and save the file.

claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: jira-shared
  namespace: jira
  resourceVersion: "93464003"
  uid: 3a172e36-8652-43da-9244-eb8c2ee5aaa5
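If you prefer a non-interactive alternative to kubectl edit, the same block can be removed with a JSON patch (a sketch; substitute your own PV name):

$ kubectl patch pv pvc-09636c84-ad97-4921-9af5-8997bc96323b --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'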

The new STATUS of the PV should now be “Available” instead of “Released”.

$ kubectl get pv | grep shared
pvc-09636c84-ad97-4921-9af5-8997bc96323b   290Gi      RWX            Retain           Available     jira/jira-shared                                           nfs                     154d
pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4   290Gi      RWX            Retain           Bound         jira/jira-shared                                           nfs                     10h
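If you want to script this check, kubectl v1.23 and newer can block until the phase flips (a sketch; adjust the timeout to taste):

$ kubectl wait --for=jsonpath='{.status.phase}'=Available pv/pvc-09636c84-ad97-4921-9af5-8997bc96323b --timeout=60s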

Remove the PVC

The PVC that is pointing to the wrong PV needs to be removed. If this PVC is mounted into a pod, that pod first needs to be stopped to free the PVC.

$ kubectl delete pvc jira-shared

If the PVC stays in ‘Terminating’ status, it is probably still mounted. Stop the pod where it is mounted.
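To find out which pod is still holding the claim, recent kubectl versions list it in the Used By field of the describe output:

$ kubectl describe pvc jira-shared | grep 'Used By'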

Once the PVC is deleted, move on to the next step:

Create the new PVC

Deploy the correct PVC you created in the first step. Make sure you are in the right namespace when deploying.

$ kubectl apply -f <create_new_pvc.yaml>
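To rule out namespace mistakes, you can pin the namespace in the current context first; the manifest above already sets namespace: jira, so this is only a safeguard:

$ kubectl config set-context --current --namespace=jira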

Check:

$ kubectl get pvc | grep shared
jira-shared                         Bound    pvc-09636c84-ad97-4921-9af5-8997bc96323b   290Gi      RWX            nfs            42m

$ kubectl get pv | grep shared
pvc-09636c84-ad97-4921-9af5-8997bc96323b   290Gi      RWX            Retain           Bound      jira/jira-shared                                           nfs                     154d
pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4   290Gi      RWX            Retain           Released   jira/jira-shared                                           nfs                     10h
  • The newly created PVC should be bound to the right PV: pvc-09636c84-ad97-4921-9af5-8997bc96323b
  • The PV pvc-09636c84-ad97-4921-9af5-8997bc96323b is Bound
  • The PV pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4 is Released but still exists because of the Retain policy, so no data is lost until the PV is manually removed (a removal command is sketched after this list)
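Once the data has been verified (next step), the leftover Released PV can be removed. Note that with the Retain policy, deleting the PV object does not delete the data on the NFS backend; that has to be cleaned up separately on the storage side:

$ kubectl delete pv pvc-98bd91e6-948d-4518-88d1-b9fed2779cc4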

Start application and check data

Start the application and verify that the right data exists in the right place.
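A quick smoke test from outside the application, with a hypothetical pod name and mount path as placeholders:

$ kubectl -n jira exec <jira-pod> -- ls -la <shared-mount-path>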