In this post we will be exploring how we can copy a persistent volume across to a different Kubernetes cluster running within the same cloud provider and region using Kubernetes native concepts.
This blog post will assume you are familiar with the following concepts:
- Kubernetes Persistent Volumes & Persistent Volume Claims
- Kubernetes CSI VolumeSnapshots
We have several runbooks available that we will use to help copy the data across to a different Kubernetes cluster:
Note: We will be using `acloud` and Avisi Cloud Kubernetes in our examples; however, the steps should translate well to other Kubernetes distributions.
Our clusters
In our case, we have two Kubernetes clusters called `cluster-01` and `cluster-02`. Both are running within the same Cloud Account and Cloud Region.

Our goal is to copy a persistent volume called `data-database-01` from `cluster-01` to `cluster-02`.
Creating the Snapshot
In order to do this, we will first create a `VolumeSnapshot` for the `data-database-01` persistent volume claim, using `acloud-toolkit snapshot create`:
Connect to `cluster-01`:
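A minimal sketch of switching contexts with plain `kubectl` (the context name `cluster-01` is an assumption; your kubeconfig may use a different name):

```shell
# List the available contexts, then switch to the one for cluster-01
kubectl config get-contexts
kubectl config use-context cluster-01
```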
View PersistentVolumeClaims:
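For example, to confirm that the claim we want to copy exists (the namespace flag is an assumption; adjust it to wherever your workload runs):

```shell
# data-database-01 should appear in this list
kubectl get pvc -n default
```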
Create the volumeSnapshot:
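For reference, the Kubernetes-native equivalent of what the toolkit automates is roughly the following manifest (the snapshot and snapshot class names are assumptions for illustration):

```shell
# Create a CSI VolumeSnapshot for the data-database-01 PVC
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-database-01-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: data-database-01
EOF
```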
The `snapshot create` command will wait until the `VolumeSnapshot` is ready to use. Once it completes, use `snapshot ls` to find the `SNAPSHOT HANDLE`:
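If you prefer plain `kubectl`, the handle is also exposed on the bound `VolumeSnapshotContent` resource:

```shell
# Print each VolumeSnapshotContent and its cloud-provider snapshot handle
kubectl get volumesnapshotcontent \
  -o custom-columns=NAME:.metadata.name,HANDLE:.status.snapshotHandle
```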
In this case, the snapshot handle is `snap-009753350a1dac31c`. We will be using this to import into `cluster-02`.
Importing into cluster-02
Connect to `cluster-02`:
Once we are connected, we can import the snapshot using `acloud snapshot import`:
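A sketch of the import step, using the handle we found earlier (the argument shape is an assumption; check the command's `--help` output for the exact syntax in your toolkit version):

```shell
# Import the existing AWS snapshot into cluster-02 by its snapshot handle
acloud snapshot import snap-009753350a1dac31c
```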
Once it is imported, we can restore it into a persistent volume claim:
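The restore relies on the standard CSI `dataSource` mechanism; a minimal sketch follows, assuming the imported snapshot is named `data-database-01-snapshot` and using a hypothetical storage class and size:

```shell
# Create a new PVC whose contents are restored from the imported snapshot
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-database-01
spec:
  storageClassName: ebs-csi
  dataSource:
    name: data-database-01-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```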
And now you have a ready-to-use persistent volume claim called `data-database-01` in `cluster-02` that is a copy of the PVC in `cluster-01`!
The imported `VolumeSnapshot` will have its `deletionPolicy` set to `Retain`. This means that if we delete the `VolumeSnapshot` in `cluster-02`, we will only delete the references to it. This can be done after restoring it to the `PersistentVolumeClaim`, since we no longer need it within this cluster.
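You can verify the policy on the content object before deleting anything:

```shell
# Show the deletionPolicy of each VolumeSnapshotContent in the cluster;
# Retain means deleting the VolumeSnapshot leaves the cloud snapshot intact
kubectl get volumesnapshotcontent \
  -o custom-columns=NAME:.metadata.name,POLICY:.spec.deletionPolicy
```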
Cleaning up
During this process, we have created `VolumeSnapshot` and `VolumeSnapshotContent` resources in both clusters, as well as a snapshot in AWS. We can clean those up using `kubectl delete`:
In `cluster-01`, run the following:
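For example, assuming the snapshot was named `data-database-01-snapshot` (an assumed name carried over from the create step):

```shell
# Deleting the VolumeSnapshot in the source cluster also removes the bound
# VolumeSnapshotContent and, with a Delete deletionPolicy, the AWS snapshot
kubectl delete volumesnapshot data-database-01-snapshot
```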
This will remove the Kubernetes resources for the snapshot, as well as clean up the snapshot in AWS.