How to Backup Your ZPA Configuration via ZPA Terraform Provider to AWS S3 Backend (Part 2)

How to Enable Terraform Remote State with AWS S3 for ZPA.

In part two of this series, I will discuss how to enable the AWS S3 remote backend with Terraform.
To enable remote state storage with S3, the first step is to create an S3 bucket. In the examples below, I create each resource configuration in its own .tf file for better organization, but for testing purposes you may put all the resources in a single .tf file.
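For reference, a layout like the following works well (the filenames are just my own convention; any .tf names will do, since Terraform loads every .tf file in the working directory):

```
.
├── provider.tf   # AWS provider (Part 1)
├── s3.tf         # S3 bucket for remote state (Part 2)
├── dynamodb.tf   # DynamoDB table for state locking (Part 3)
└── backend.tf    # Terraform backend configuration (Part 4)
```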

Part 1
The first part of the configuration is to specify the AWS provider. For that, I created a dedicated file and specified the required parameters.

provider "aws" {
  region = "us-east-2"
}

Part 2
Next, I will create a separate file and specify the parameters necessary to create my S3 bucket.

resource "aws_s3_bucket" "terraform_state" {
  bucket = "zpa-terraform-up-and-running-state"

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

The above code sets a few arguments, which are important to walk through.

1. bucket: This is the name of the S3 bucket. Note that S3 bucket names must be globally unique amongst all AWS customers. Therefore, you will have to change the bucket parameter from zpa-terraform-up-and-running-state (which I already created) to your own name. Make sure to remember this name and take note of what AWS region you’re using, as you’ll need both pieces of information again a little later on.

2. versioning: This block enables versioning on the S3 bucket, so that every update to a file in the bucket actually creates a new version of that file. This allows you to see older versions of the file and revert to those older versions at any time.

3. server_side_encryption_configuration: This block turns server-side encryption on by default for all data written to this S3 bucket. This ensures that your ZPA state files, and any secrets they may contain, are always encrypted on disk when stored in S3.
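A note for readers on newer releases of the AWS provider: from version 4 of the provider onward, the inline versioning and server_side_encryption_configuration blocks are deprecated in favor of standalone resources. A sketch of the equivalent configuration:

```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "zpa-terraform-up-and-running-state"
}

# Versioning as a standalone resource (AWS provider v4+)
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Default server-side encryption as a standalone resource (AWS provider v4+)
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```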

Part 3

Next, you need to create a DynamoDB table to use for locking. DynamoDB is Amazon’s distributed key-value store. It supports strongly-consistent reads and conditional writes, which are all the ingredients you need for a distributed lock system. Moreover, it’s completely managed, so you don’t have any infrastructure to run yourself, and it’s inexpensive, with most Terraform usage easily fitting into the free tier.
To use DynamoDB for locking with Terraform, you must create a DynamoDB table that has a primary key called LockID (with this exact spelling and capitalization!). You can create such a table using the aws_dynamodb_table resource.

resource "aws_dynamodb_table" "zpa_terraform_locks" {
  name         = "zpa-terraform-up-and-running-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Once Parts 2 and 3 are complete, we can run terraform init to download the provider code and then run terraform apply to deploy. Once everything is deployed, you will have an S3 bucket and a DynamoDB table, but your Terraform state will still be stored locally.

Part 4

To configure Terraform to store the state in your S3 bucket (with encryption and locking), you need to add a backend configuration to your Terraform code. This is the configuration for Terraform itself, so it lives within a terraform block, and has the following syntax:

terraform {
  backend "<BACKEND_NAME>" {
    [CONFIG...]
  }
}

Where BACKEND_NAME is the name of the backend you want to use (e.g., “s3”) and CONFIG consists of one or more arguments that are specific to that backend (e.g., the name of the S3 bucket to use). Here’s what the backend configuration looks like for an S3 backend:

terraform {
  backend "s3" {
    bucket         = "zpa-terraform-up-and-running-state"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "zpa-terraform-up-and-running-locks"
    encrypt        = true
  }
}

Let’s go through these settings one at a time:
1. bucket: The name of the S3 bucket to use. Make sure to replace this with the name of the S3 bucket you created earlier.

2. key: The file path within the S3 bucket where the Terraform state file should be written. You’ll see a little later on why the example code above sets this to global/s3/terraform.tfstate.

3. region: The AWS region where the S3 bucket lives. Make sure to replace this with the region of the S3 bucket you created earlier.

4. dynamodb_table: The DynamoDB table to use for locking. Make sure to replace this with the name of the DynamoDB table you created earlier.

5. encrypt: Setting this to true ensures your Terraform state will be encrypted on disk when stored in S3. We already enabled default encryption in the S3 bucket itself, so this is here as a second layer to ensure that the data is always encrypted.
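One caveat worth knowing: the backend block cannot reference Terraform variables, so these values must be hard-coded. If you want to reuse them across configurations, you can instead supply them at init time with a partial backend configuration file (the filename backend.hcl here is just an example of mine):

```hcl
# backend.hcl — supply with: terraform init -backend-config=backend.hcl
bucket         = "zpa-terraform-up-and-running-state"
region         = "us-east-2"
dynamodb_table = "zpa-terraform-up-and-running-locks"
encrypt        = true
```

With this approach, the backend "s3" block in your Terraform code only needs to set the key argument; everything else comes from the file passed at init time.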

To tell Terraform to store your state file in this S3 bucket, you’re going to use the terraform init command again. This little command can not only download provider code, but also configure your Terraform backend. Moreover, the init command is idempotent, so it’s safe to run it over and over again.

Terraform will automatically detect that you already have a state file locally and prompt you to copy it to the new S3 backend. Type in “yes” to confirm the migration.

After running this command, your Terraform state will be stored in the S3 bucket. You can verify this by heading over to the S3 console in your browser and clicking your bucket.

With this backend enabled, Terraform will automatically pull the latest state from this S3 bucket before running a command, and automatically push the latest ZPA state to the S3 bucket.

Now, head over to the S3 console again, refresh the page, and click the gray “Show” button next to “Versions.” You should now see several versions of your terraform.tfstate file in the S3 bucket.


This means that Terraform is automatically pushing and pulling state data to and from S3, and S3 is storing every revision of the state file, which can be useful for debugging and rolling back to older versions if something goes wrong.

Part 5
Protecting the ZPA State File from Concurrent Changes


Finally, we want to prevent potential corruption of the state file due to concurrent changes performed by multiple administrators. That’s where we can see the power of DynamoDB in action.
In the screenshot below, another DevOps engineer tried to apply changes to ZPA while someone else was already performing the same action. Because the state file is locked in DynamoDB, Terraform refused to proceed and alerted the second administrator with an error.

As you can see, the error shows a few pieces of information, such as:
• LockID: This is the ID in the DynamoDB table
• Path: This is the S3 bucket path we have set
• Operation: The type of operation being performed. In this case it is Apply.
• Who: Who currently has the state file locked.

Can you unlock the state? Yes, by running the command terraform force-unlock followed by the Lock ID. Once you set this up, it is done and you no longer have to worry about it; getting everything up and running from an AWS perspective should take no more than about 30 minutes.

And that’s all there is to it.

We now have a complete, safe way to back up and store our ZPA configuration using Terraform, and to protect that same configuration from being overwritten or corrupted by concurrent changes.
I hope that is helpful.