Resizing StatefulSet Persistent Volumes (Increasing Disk Size) in Kubernetes

If you’re running a StatefulSet workload in Kubernetes, such as a database (Postgres, MongoDB, Elasticsearch, etc.), at some point the disks will fill up and you will have to increase the size of the StatefulSet’s persistent volumes. Let’s go through the steps to do this.

First of all, if you are provisioning persistent volumes for your STS through volumeClaimTemplates, you might expect that patching the storage capacity there would simply resize the volumes and disks, but that is not the case. Changing the value of an STS’s spec.volumeClaimTemplates[].spec.resources.requests.storage gives the following error:

statefulsets.apps "<sts-name>" was not valid:

* spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden

So what do we do then?

Step 1: Re-size Each PVC Manually

For each PVC in use by the STS’s Pods, you will have to increase the storage capacity manually, which triggers volume expansion. Note that this only works if the PVC’s StorageClass has allowVolumeExpansion: true. Just run kubectl edit pvc <pvc-name> and change the value of spec.resources.requests.storage. You can also use kubectl patch pvc ...:

$ k patch pvc <pvc-name> -p '{ "spec": { "resources": { "requests": { "storage": "100Gi" }}}}'

This action will expand your volumes, and the new sizes will be visible inside your Pods as well (run df -h to check). However, the STS’s manifest will still show the old storage value, since that is unchanged. In the next steps, we will fix that.
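If the STS runs multiple replicas, patching each PVC by hand gets tedious. PVCs created from volumeClaimTemplates follow the naming convention <template-name>-<sts-name>-<ordinal>, so a small loop covers them all. A sketch, where data, my-sts, the replica count, and 100Gi are all placeholders for your setup:

```shell
# Patch the PVC of every replica. The claim template name ("data"),
# the STS name ("my-sts"), the ordinals, and the target size are
# placeholders; adjust them to match your cluster.
for i in 0 1 2; do
  kubectl patch pvc "data-my-sts-$i" \
    -p '{ "spec": { "resources": { "requests": { "storage": "100Gi" }}}}'
done
```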

Note: To learn more about online volume expansion in K8S, read this article.
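If the patch is rejected or the PVC never expands, check that the StorageClass allows expansion in the first place. A quick way to inspect it, and enable it if needed (replace <sc-name> with your StorageClass):

```shell
# Print the StorageClass's allowVolumeExpansion flag; it must be "true".
kubectl get storageclass <sc-name> -o jsonpath='{.allowVolumeExpansion}'

# If it is empty or "false", enable it:
kubectl patch storageclass <sc-name> -p '{"allowVolumeExpansion": true}'
```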

Step 2: Backup and Delete the StatefulSet

Backup the STS first:

$ k get sts <sts-name> -o yaml > sts-backup.yaml

Note: You don’t need to backup the STS if you already have the YAML manifest for it available.

Let’s now delete the STS without deleting the Pods themselves. We will then re-create the STS (next step) with the updated storage size from the backup file we just created.

$ k delete sts <sts-name> --cascade=orphan

Step 3: Re-create the StatefulSet

Even if you took a backup in the previous step, we cannot re-create the STS from it as-is. We will have to remove certain fields from the manifest that are set by the API server and are not required when creating a new STS, specifically the following:

  • metadata.uid, metadata.resourceVersion, metadata.creationTimestamp and metadata.generation
  • status
  • status inside each volumeClaimTemplates

In the future, you might come across more fields that have to be removed. So it’s best to just go through the entire manifest and remove any field that you wouldn’t otherwise include when creating a new STS.
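If you have yq (v4) installed, this cleanup can be scripted rather than done by hand. A sketch, assuming the sts-backup.yaml file from Step 2:

```shell
# Strip the server-managed fields from the backup in place.
yq -i '
  del(.metadata.uid,
      .metadata.resourceVersion,
      .metadata.creationTimestamp,
      .metadata.generation,
      .status,
      .spec.volumeClaimTemplates[].status)
' sts-backup.yaml
```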

Once the fields are removed, feel free to change the spec.volumeClaimTemplates[].spec.resources.requests.storage field for each volume claim template and save the file.

We can now re-create our STS with the updated storage size:

$ k apply -f sts-backup.yaml

Hope that helps!
