Today I Learned

24 posts by Bartłomiej Wójtowicz

Handling missing data points in AWS metrics

Sometimes you might want to treat missing data as good (not breaching the threshold), for example to avoid an alarm lingering in the INSUFFICIENT_DATA state. AWS lets you specify one of several options:

  • notBreaching – Missing data points are treated as “good” and within the threshold
  • breaching – Missing data points are treated as “bad” and breaching the threshold
  • ignore – The current alarm state is maintained
  • missing – If all data points in the alarm evaluation range are missing, the alarm transitions to INSUFFICIENT_DATA.

For more details, see the AWS Docs.

Terraform config:

resource "aws_cloudwatch_metric_alarm" "some_resource_usage" {
  # ...
  treat_missing_data = "notBreaching"
}

Creating DNS record for current origin in AWS Route53

Raw DNS zone:

@ 3600   IN   MX  1 aspmx.l.google.com.
@ 3600   IN   MX  10  aspmx2.googlemail.com.
@ 3600   IN   MX  10  aspmx3.googlemail.com.
@ 3600   IN   MX  5 alt1.aspmx.l.google.com.
@ 3600   IN   MX  5 alt2.aspmx.l.google.com.

From rfc1035:

A free standing @ is used to denote the current origin.

When using Terraform, just leave the name blank:

resource "aws_route53_record" "google_mail_mx" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "" # current origin (@)
  type    = "MX"
  ttl     = 3600

  records = [
    "1 aspmx.l.google.com.",
    "5 alt1.aspmx.l.google.com.",
    "5 alt2.aspmx.l.google.com.",
    "10 aspmx2.googlemail.com.",
    "10 aspmx3.googlemail.com."
  ]
}

AWS Route 53, prefer ALIAS over CNAME if you can

Let’s assume you have a zone defined:

resource "aws_route53_zone" "primary" {
  name = "example.com"
}

Now you want to add an api subdomain and point it to your Application Load Balancer. Normally you would think to add a CNAME pointing to your ALB:

resource "aws_route53_record" "api" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "api"
  type    = "CNAME"
  ttl     = "5"
  records = ["your-alb-dns-name"]
}

but you will be charged extra for those CNAME queries (see Route 53 Pricing):

You incur charges for every DNS query answered by the Amazon Route 53 service, except for queries to Alias A records that are mapped to Elastic Load Balancing instances, CloudFront distributions, AWS Elastic Beanstalk environments, API Gateways, VPC endpoints, or Amazon S3 website buckets, which are provided at no additional charge.

To help you decide, read the documentation: Choosing alias and non-alias records.

Route 53 charges for CNAME queries, whereas it doesn’t charge for alias queries to AWS resources.

So it’s better to use aliases to AWS resources if you can:

resource "aws_route53_record" "api" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "api"
  type    = "A"

  alias {
    name                   = "your-api-alb-dns-name"
    zone_id                = "your-api-alb-zone-id"
    evaluate_target_health = true
  }
}
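If the ALB is managed in the same Terraform configuration, you don’t need to hard-code those two values – the load balancer resource exports them. A minimal sketch, assuming a resource named aws_lb.api exists (the name is illustrative):

```hcl
# assuming something like: resource "aws_lb" "api" { ... }
alias {
  name                   = aws_lb.api.dns_name # the ALB's DNS name
  zone_id                = aws_lb.api.zone_id  # the ALB's hosted zone id
  evaluate_target_health = true
}
```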

Generating AWS policies in Terraform

Imagine that you have the following policy defined:

resource "aws_iam_user_policy" "circleci" {
  # ...
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject"
        ]
        Resource = [
          "${var.ireland_bucket_arn}/*",
          "${var.mumbai_bucket_arn}/*",
        ]
      }
    ]
  })
}

With the latest Terraform (>= 0.12) you can rewrite this as a for expression that maps each ARN accordingly:

        Resource = [
          for arn in var.deployment_bucket_arns :
          "${arn}/*"
        ]

Now you only need to pass a single variable to your module that contains the list of ARNs.
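On the module side this only needs a single list variable – a minimal sketch, with the variable name taken from the for expression above:

```hcl
variable "deployment_bucket_arns" {
  description = "ARNs of the deployment buckets"
  type        = list(string)
}
```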

Dynamic blocks with for_each in Terraform

We have a module with AWS CodePipeline:

module "codepipeline" {
  source = "../modules/codepipeline"
  # ...
}

This module configures deployments of multiple apps in a single AWS Region. We use it for both the staging and production environments.

For AWS CodeBuild we need to provide a bunch of environment variables for the Docker build, a.k.a. build args.

resource "aws_codebuild_project" "portal" {
  # ...
  environment {
    environment_variable {
      name  = "MIX_ENV"
      value = var.environment
    }
    environment_variable {
      name  = "REPOSITORY_URI"
      value = var.repo_uri
    }
    # ...
  }
}

Since Terraform 0.12 we can refactor that into a dynamic block:

environment {
  dynamic "environment_variable" {
    for_each = var.build_args

    content {
      name  = environment_variable.value.name
      value = environment_variable.value.value
    }
  }
}
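The module then has to declare a matching variable – a minimal sketch, assuming each build arg is an object with name and value keys:

```hcl
variable "build_args" {
  description = "Build args exposed to CodeBuild as environment variables"
  type = list(object({
    name  = string
    value = string
  }))
  default = []
}
```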

so we can provide the args to the module as a variable:

module "codepipeline" {
  source = "../modules/codepipeline"

  build_args = [
    {
      name  = "MIX_ENV"
      value = "prod"
    },
    {
      name  = "REPOSITORY_URI"
      value = module.ecr.repo_uri
    }
  ]
  # ...
}

This gives us more flexibility and a safe separation between production and staging, because we can introduce changes independently.

More info here.

Inserting data in migrations

If inserting data in a migration fails because your Repo is not yet aware of the table you just created, use flush(). Ecto queues migration commands and runs them at the end of the migration; flush/0 executes everything queued so far, so the table exists before the insert:

# ...
  def up do
    create table("players") do
      add :name, :varchar, null: false
      add :color, :varchar, null: false
      add :avatar, :varchar, null: false
    end

    create index("players", [:name], unique: true)

    flush() # 👷

    Repo.insert!(%Player{...})
  end
# ...

Replicating Rails `content_for()` in Phoenix

Typically in a Rails layout you would do something similar to this:

<%= yield :nav_actions %>

then in the view (e.g. “resource_abc/show”):

<% content_for :nav_actions do %>
  <!-- whatever -->
<% end %>

To replicate this behavior in Phoenix, use render_existing/3.

In layout:

<%= render_existing @view_module, "nav_actions." <> @view_template, assigns %>

Then in your template folder (“resource_abc/”) you need to define an extra file, nav_actions.show.html.eex (for the show action) – the content of nav_actions.show will be rendered only if the file exists.

If you want to render nav_actions for all of the resource’s actions, just skip @view_template:

<%= render_existing @view_module, "nav_actions",  assigns %>

In this case file should be named nav_actions.html.eex.

Terraform AWS - moving state to another module

If your infrastructure grows and you find that certain resources should be moved to their own module because they need to be shared with others (or you made a mistake by putting them in the wrong module in the first place), you can move the state with the CLI rather than recreating the resources from scratch.

Let’s say you have:

module "s3" {
  source = "./modules/s3"
  ...
}

and inside it you defined a user with an access policy:

resource "aws_iam_user" "portal" {...}

resource "aws_iam_user_policy" "portal" {...}

Use (note that the destination has to be a full resource address, not just the module name):

terraform state mv module.s3.aws_iam_user.portal module.iam.aws_iam_user.portal
terraform state mv module.s3.aws_iam_user_policy.portal module.iam.aws_iam_user_policy.portal

After that you can move your resource definitions from the s3 module to the iam module. At the end, run terraform plan – Terraform shouldn’t detect any changes.

Documentation here.

Rake / rails console does not work in docker?

When using the default Ruby image in your Dockerfile (FROM ruby:2.5.1), you may encounter problems with missing gems in your container when running a rake task or rails console:

Could not find rake-x.y.z in any of the sources. Run bundle install to install missing gems.

That’s because you probably used:

RUN bundle install --deployment

The --deployment flag installs gems into ./vendor/bundle instead of the system gem path, so if that directory doesn’t make it into the running container (for example, because a volume mount shadows it), Bundler can’t find the gems.

You can fix it with:

RUN bundle install --without development test

Downgrading Heroku PG database

I accidentally provisioned a paid plan for a PG database. To downgrade such a database you need to:

# safety measures
heroku maintenance:on

# create a new database; a new url will be given, for me it was: HEROKU_POSTGRESQL_ORANGE_URL
heroku addons:create heroku-postgresql:hobby-dev

# copy current db to ORANGE db
heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_ORANGE_URL

# make new database active
heroku pg:promote HEROKU_POSTGRESQL_ORANGE_URL

# ... and we are back
heroku maintenance:off

After the promotion the old database was renamed to PINK; I then removed it from the Heroku panel.

There is also another downgrade strategy, but it does not work for the free plan: docs

Simplifying Circle CI setup for Crystal

Rather than setting up a test environment for Crystal yourself, you can use the official Docker image:

version: 2
jobs:
  build:
    docker:
      - image: crystallang/crystal:0.25.0
    steps:
      - checkout

      - restore_cache:
          keys:
            - shards-{{ checksum "shard.lock" }}
      - run:
          name: Install shards
          command: |
            shards check || shards
      - save_cache:
          key: shards-{{ checksum "shard.lock" }}
          paths:
            - lib/
            - .shards/

      - run:
          name: Run specs
          command: crystal spec

Using mocks library in Crystal

Mocks cannot be defined inside a describe/it body, so this won’t work:

describe Bot::Slack do
  describe "#post_response" do
    it "sends formatted message" do
      Mocks.create_mock JsonClient do
        mock self.post(url, body)
      end
      # testing here

and you will get: can't declare class dynamically.

Correct way:

Mocks.create_mock JsonClient do
  mock self.post(url, body)
end

describe Bot::Slack do
  describe "#post_response" do
    it "sends formatted message" do
      # testing here

Custom validation error messages when using forms

Normally for a Rails model (e.g. Brand) you would do:

en:
  activerecord:
    errors:
      models:
        brand:
          attributes:
            subdomain:
              invalid: "should contain only 'a-z', '0-9' or '-' but not as starting or ending character"

When using validations in forms you need a small tweak:

  • activerecord => activemodel
  • brand => brand_form (assuming your form is BrandForm)

Final translation:

en:
  activemodel:
    errors:
      models:
        brand_form:
          attributes:
            subdomain:
              invalid: "should contain only 'a-z', '0-9' or '-' but not as starting or ending character"

Note: invalid is the key for the validation type; the full list is here.