Today I Learned

8 posts about #terraform

Upgrading to Nomad 1.1.0 and above when using CSI

Nomad v1.1.0 introduced a few backward incompatibilities - one of them related to CSI.

When upgrading, you need to change your nomad_volume resources from:

resource "nomad_volume" "grafana" {
  # ...
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"

to:

resource "nomad_volume" "grafana" {
  # ...
  capability {
    access_mode     = "single-node-writer"
    attachment_mode = "file-system"
  }

That might require the Nomad provider to be upgraded (see the sketch after the job snippet below). Then simply add attachment_mode and access_mode to your Nomad jobs:

  # ...
  group "monitoring" {
    volume "${volume_id}" {
      type            = "csi"
      read_only       = false
      source          = "${volume_id}"
      attachment_mode = "file-system"
      access_mode     = "single-node-writer"
    }

  # ...
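Upgrading the provider mentioned above usually means bumping its version constraint in required_providers. A minimal sketch, assuming the hashicorp/nomad provider; the exact minimum version that supports the capability block is an assumption, so check the provider changelog:

terraform {
  required_providers {
    nomad = {
      source = "hashicorp/nomad"
      # Assumed constraint - verify the first release that supports
      # the capability block against the provider changelog.
      version = ">= 1.4.15"
    }
  }
}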

Terraform Cloud settings for multiconfiguration repo

If you have a repository with more than one Terraform configuration, you need to set Terraform Working Directory in the workspace's general settings to the subdirectory containing the configuration you want to run.

Terraform remote operations are executed from the working directory; if it's empty, it defaults to the root directory of the repository. Before starting a Terraform run, Terraform Cloud will change to this directory, so all the relative module paths that you had previously defined will keep working correctly.
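For example, a repository laid out like this (hypothetical names) would get one workspace per configuration, with Terraform Working Directory set to staging or production respectively:

infrastructure-repo/
├── modules/
│   └── codepipeline/
├── staging/        # workspace "app-staging", working directory: staging
│   └── main.tf
└── production/     # workspace "app-production", working directory: production
    └── main.tf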

for_each over tuples in Terraform

Terraform does not allow iterating over a tuple with for_each, but you can build a map from it on the fly.

locals {
  cron_jobs = [
    {
      name    = "every-15-minutes"
      command = "rake send:sms:notification"
      cron    = "*/15 * * * * *"
    },
    {
      name    = "every-30-minutes"
      command = "rake send:email:notification"
      cron    = "*/30 * * * * *"
    }
  ]
}

resource "nomad_job" "cron_jobs" {
  for_each = { for cj in local.cron_jobs : cj.name => cj }

  jobspec = templatefile("cron_job.tpl", {
    name    = each.key
    command = each.value.command
    cron    = each.value.cron
  })
}
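The cron_job.tpl template itself is not shown above; a minimal sketch of what it could look like, assuming a Docker-driven Nomad batch job with placeholder datacenter and image names:

# cron_job.tpl - hypothetical template rendered by templatefile() above
job "${name}" {
  datacenters = ["dc1"] # assumed datacenter
  type        = "batch"

  periodic {
    cron             = "${cron}"
    prohibit_overlap = true
  }

  group "cron" {
    task "run" {
      driver = "docker"

      config {
        image   = "app:latest" # assumed image
        command = "/bin/sh"
        args    = ["-c", "${command}"]
      }
    }
  }
}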

Use templatefile function instead of template_file

Since Terraform v0.12 it's recommended to use the templatefile(path, vars) function instead of the template_file data source.

The template provider has its own copy of the template engine, separate from Terraform itself. That engine depends on the version of Terraform the provider was compiled with, not the one running on your system.

Using the templatefile(path, vars) function will give you more consistent results, as it depends only on the version of Terraform you are running.

Example usage:

templatefile("${path.module}/backends.tmpl", { port = 8080, ip_addrs = ["10.0.0.1", "10.0.0.2"] })

Sources:

https://www.terraform.io/docs/configuration/functions/templatefile.html
https://www.terraform.io/docs/providers/template/d/file.html

Generating AWS policies in Terraform

Imagine that you have the following policy defined:

resource "aws_iam_user_policy" "circleci" {
  # ...
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject"
        ]
        Resource = [
          "${var.ireland_bucket_arn}/*",
          "${var.mumbai_bucket_arn}/*",
        ]
      }
    ]
  })
}

With recent Terraform (>= v0.12.x) you can rewrite this as a for expression and map the values accordingly:

        Resource = [
          for arn in var.deployment_bucket_arns :
          "${arn}/*"
        ]

Now you only need to provide one variable to your module that contains the list of ARNs.
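Inside the module, that variable could be declared as a list of strings, and the caller just passes the ARNs (module path and bucket resources below are assumptions):

variable "deployment_bucket_arns" {
  type        = list(string)
  description = "Bucket ARNs the policy should grant s3:PutObject on"
}

module "circleci" {
  source = "../modules/circleci" # hypothetical module path

  deployment_bucket_arns = [
    aws_s3_bucket.ireland.arn, # hypothetical bucket resources
    aws_s3_bucket.mumbai.arn,
  ]
}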

Dynamic blocks with for_each in Terraform

We have a module with AWS CodePipeline:

module "codepipeline" {
  source = "../modules/codepipeline"
...

This module configures deployments of multiple apps in a single AWS region. We use it for both the staging and production environments.

For AWS CodeBuild we need to provide a bunch of ENVs for the Docker build, a.k.a. build args.

resource "aws_codebuild_project" "portal" {
  # ...
  environment {
    environment_variable {
      name  = "MIX_ENV"
      value = var.environment
    }
    environment_variable {
      name  = "REPOSITORY_URI"
      value = var.repo_uri
    }
    # ...
  }

Since Terraform 0.12.x we can refactor that into a dynamic block:

environment {
  dynamic "environment_variable" {
    for_each = var.build_args

    content {
      name  = environment_variable.value.name
      value = environment_variable.value.value
    }
  }
}
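Inside the module, the build_args variable that drives this for_each could be declared like this (a sketch; the type just mirrors the objects we pass in below):

variable "build_args" {
  type = list(object({
    name  = string
    value = string
  }))
  default = []
}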

Now we can provide the args to the module as a variable:

module "codepipeline" {
  source = "../modules/codepipeline"

  build_args = [
    {
      name  = "MIX_ENV"
      value = "prod"
    },
    {
      name  = "REPOSITORY_URI"
      value = module.ecr.repo_uri
    }
  ]
...

This gives us more flexibility and a safe separation between production and staging, because we can introduce changes to each environment independently.

More info here.