Today I Learned

Inserting data in migrations

If you hit a problem inserting data in a migration because your Repo is not yet aware of the newly created table, you need to use flush()

# ...
  def up do
    create table("players") do
      add :name, :varchar, null: false
      add :color, :varchar, null: false
      add :avatar, :varchar, null: false
    end

    create index("players", [:name], unique: true)

    flush() # 👷 makes the new table visible to the Repo

    # ... insert your data here
  end
# ...

Trigger debugger with key combination

Whenever you need to freeze your website and inspect its state, you can trigger the debugger at any given point.

Let’s say you want to check which classes are applied while an element is hovered - move your mouse over it and trigger the debugger. The page will freeze and you can browse the DOM tree freely.

According to Google Chrome’s DevTools documentation:

Pause script execution (if currently running) or resume (if currently paused)

You need to have your DevTools open.

macOS: F8 or Command+\

Windows/Linux: F8 or Control+\

Using multiple commands on heroku CLI

Need to use multiple heroku CLI commands one after another and don’t want to type -a foo-bar-baz every time?

Type export HEROKU_APP=foo-bar-baz in your terminal and then just happily run heroku commands against the foo-bar-baz application.

heroku pg:backups:capture -a foo-bar-baz

# is equivalent to:

export HEROKU_APP=foo-bar-baz
heroku pg:backups:capture

P.S. Afterwards you might want to run unset HEROKU_APP to remove the variable.
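A minimal sketch of a full session (the app name foo-bar-baz is just an example):

```shell
export HEROKU_APP=foo-bar-baz

# every heroku command in this shell now targets foo-bar-baz, e.g.:
# heroku pg:backups:capture
# heroku releases

unset HEROKU_APP  # back to passing -a explicitly
```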

Replicating Rails `content_for()` in Phoenix

Typically in a Rails layout you would do something similar to this:

<%= yield :nav_actions %>

then in a view (e.g. “resource_abc/show”):

<% content_for :nav_actions do %>
  <!-- whatever -->
<% end %>

To replicate this behavior in Phoenix, use render_existing/3.

In layout:

<%= render_existing @view_module, "nav_actions." <> @view_template, assigns %>

Then in your template folder (“resource_abc/”) you need to define an extra file - for the show action it will be nav_actions.show.html.eex - and its content will be rendered only if the file exists.

If you want to render nav_actions for all resource actions just skip @view_template:

<%= render_existing @view_module, "nav_actions",  assigns %>

In this case file should be named nav_actions.html.eex.

Babel loader transpilation for Jest

Let’s imagine the following situation: we need to create a new npm package with create-react-app in order to bundle some files and have them available in a project (it’s possible to use create-component-lib for this task). This sometimes requires us to update our babel config.

In my latest task my babel config contained presets that were invalid for this situation:

module.exports = {
  "presets": ["@babel/preset-env", ["react-app", { "absoluteRuntime": false }]]
}

In the above example, the build process creates transpiled code which references the babel runtime to minimize package size:

var _interopRequireWildcard = require("@babel/runtime/helpers/interopRequireWildcard");

var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");

Object.defineProperty(exports, "__esModule", {
  value: true
});
exports.default = void 0;

var _defineProperty2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/defineProperty"));

var _objectSpread4 = _interopRequireDefault(require("@babel/runtime/helpers/esm/objectSpread"));

var _classCallCheck2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/classCallCheck"));

var _createClass2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/createClass"));

var _possibleConstructorReturn2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/possibleConstructorReturn"));

var _getPrototypeOf3 = _interopRequireDefault(require("@babel/runtime/helpers/esm/getPrototypeOf"));

var _inherits2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/inherits"));

Now when we run a test which uses a transpiled component, we may receive an error like this:

Jest encountered an unexpected token

    By default, if Jest sees a Babel config, it will use that to transform your files, ignoring "node_modules".

    Here's what you can do:
     • To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
     • If you need a custom transformation specify a "transform" option in your config.
     • If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.

    You'll find more details and examples of these config options in the docs:


    ({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,global,jest){export default function _defineProperty(obj, key, value) {

    SyntaxError: Unexpected token export

      10 | exports.default = void 0;
      11 |
    > 12 | var _defineProperty2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/defineProperty"));
         |                                               ^

      at ScriptTransformer._transformAndBuildScript (node_modules/jest-runtime/build/script_transformer.js:403:17)
      at Object.require (dist/FormGenerator.js:12:47)

This happens because, by default, Jest ignores transformations for everything in the node_modules directory.

Now we have two possibilities to resolve this problem. In the first scenario we can add transformIgnorePatterns to our jest config to transpile the babel runtime module:

transformIgnorePatterns: [
  "node_modules/(?!@babel/runtime)" // example pattern: transpile @babel/runtime, keep ignoring the rest
]
In the above example the transpiled code will still be smaller at application runtime, but it no longer crashes our tests.

The second option is using another babel preset for proper transpilation:

module.exports = {
  plugins: [/* project-specific plugins */],
  presets: ["@babel/preset-env"] // example preset; the point is to stop emitting the esm runtime helpers
}

Now after the build we get transpiled code working in the test environment; however, our code will be bloated with the helpers required to mimic transpiled features (for example class inheritance):

Object.defineProperty(exports, "__esModule", {
  value: true
});
exports.default = void 0;

var _react = _interopRequireWildcard(require("react"));

var _propTypes = _interopRequireDefault(require("prop-types"));

var _formik = require("formik");

var _yup = require("yup");

var _classnames = _interopRequireDefault(require("classnames"));

var _moment = _interopRequireDefault(require("moment"));

var _BooleanField = _interopRequireDefault(require("./FieldTypes/BooleanField"));

var _EnumField = _interopRequireDefault(require("./FieldTypes/EnumField"));

var _MoneyField = _interopRequireDefault(require("./FieldTypes/MoneyField"));

var _TextField = _interopRequireDefault(require("./FieldTypes/TextField"));

var _QuarterDateField = _interopRequireDefault(require("./FieldTypes/QuarterDateField"));

Use `pluck` to fetch paginated results from S3 client

Some AWS client calls return responses with a limited amount of data (typically 1,000 items per response).

Example response may look as follows:

aws_client.list_objects_v2(bucket: bucket)

=> #<struct Aws::S3::Types::ListObjectsV2Output
 contents=
  [#<struct Aws::S3::Types::Object
     last_modified=2019-03-13 14:25:04 UTC,
     ...>,
   ...],
 ...>
Because the list_objects_v2 method takes continuation_token as an argument, one solution to fetch all the records is to loop through the responses, passing next_continuation_token until it comes back empty.
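The manual loop described above could be sketched like this (FakeS3Client is a made-up stand-in for the real aws_client so the example is self-contained; the calls have the same shape):

```ruby
# Stand-in client that serves pages the way list_objects_v2 does,
# using the page index as a fake continuation token.
FakeS3Client = Struct.new(:pages) do
  def list_objects_v2(bucket:, continuation_token: nil)
    index = continuation_token || 0
    {
      contents: pages[index],
      next_continuation_token: index + 1 < pages.size ? index + 1 : nil
    }
  end
end

def fetch_all_objects(client, bucket)
  objects = []
  token = nil
  loop do
    response = client.list_objects_v2(bucket: bucket, continuation_token: token)
    objects.concat(response[:contents])
    token = response[:next_continuation_token]
    break if token.nil?
  end
  objects
end

client = FakeS3Client.new([%w[a b], %w[c]])
fetch_all_objects(client, "my-bucket") # => ["a", "b", "c"]
```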

Instead, you can use the built-in enumerator in the response object, which will return results from all the pages (next pages will be fetched automatically by SDK):

aws_client.list_objects_v2(bucket: bucket).map { |page| page[:contents] }

=> [[#<struct Aws::S3::Types::Object
      last_modified=2019-03-13 14:25:04 UTC,
      ...>,
    #<struct Aws::S3::Types::Object
      last_modified=2019-03-13 13:43:30 UTC,
      ...>,
    ... ]]

However, there is an even simpler solution to achieve the same result. You can use the pluck method as follows:

aws_client.list_objects_v2(bucket: bucket).pluck(:contents)

How to fix Elasticsearch 'FORBIDDEN/12/index read-only'

By default, Elasticsearch installed with homebrew on Mac OS goes into read-only mode when you have less than 5% of free disk space. If you see errors similar to this:

  [403] {"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}

Or you see logs similar to this in /usr/local/var/log/elasticsearch.log:

flood stage disk watermark [95%] exceeded on [nCxquc7PTxKvs6hLkfonvg][nCxquc7][/usr/local/var/lib/elasticsearch/nodes/0] free: 15.3gb[4.1%], all indices on this node will be marked read-only

Then you can fix it by running the following commands:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

Custom nginx proxy host name

server {
    listen 80 default_server;
    server_name ~^(?<developer>.+)\.dev\.selleo\.com$;
    client_max_body_size 5M;
    root   /usr/share/nginx/html;

    resolver 8.8.8.8; # needed to resolve a variable-based upstream at request time

    location / {
      set $backend https://$developer.example-upstream.com; # example upstream built from the captured subdomain
      proxy_pass $backend;
      proxy_set_header  X-Real-IP       $remote_addr;
      proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The resolver directive does the job here - without it nginx cannot resolve a proxy_pass host that is built from variables at request time!

Have you Cmd-C'd your overcommit and lost your changes?

While overcommit was running I interrupted the process and received the following message:

Interrupt signal received. Stopping hooks...

⚠  Hook run interrupted by user


Unable to restore working tree after pre-commit hooks run:
STDERR:unable to refresh index

To my horror, all my changes were lost! Fortunately, they were kept in the stash, so a simple git stash pop helped :) More info here

FactoryBot: Constructing objects using Dry::Types

If you face an error similar to the one below

Dry::Struct::Error: [] :some_attribute_name is missing in Hash input

when building objects of a class using Dry::Types with FactoryBot, be advised that

Although factory_bot is written to work with ActiveRecord out of the box, it can also work with any Ruby class. For maximum compatibility with ActiveRecord, the default initializer builds all instances by calling new on your build class without any arguments. It then calls attribute writer methods to assign all the attribute values. While that works fine for ActiveRecord, it actually doesn’t work for almost any other Ruby class.

The fix is to add the following line to your factory definition

initialize_with { new(attributes) }
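To see why the default strategy breaks, here is a minimal plain-Ruby illustration (Point is a hypothetical class; a Dry::Struct behaves like this keyword-required initializer):

```ruby
class Point
  attr_reader :x
  def initialize(x:)   # requires attributes at construction time, like a Dry::Struct
    @x = x
  end
end

# FactoryBot's default initializer does the equivalent of calling
# `Point.new` with no arguments - which blows up here:
begin
  Point.new
rescue ArgumentError => e
  e.message # "missing keyword" (exact wording varies by Ruby version)
end

# initialize_with { new(attributes) } does the equivalent of:
point = Point.new(x: 1)
point.x # => 1
```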

Another way to use absolute paths in js 'require'

Consider the following file structure:

▾ helpers/
    timeHelper.js
▾ services/
    handleSlackCommand.js

in the package.json add:

  "dependencies": {
    "dotenv": "^6.2.0",
    "express": "^4.16.4",
    "helpers": "file:helpers", <-- this line
    ...
  }

Now in services/handleSlackCommand.js I can use

const { whatever } = require('helpers/timeHelper');

instead of

const { whatever } = require('../helpers/timeHelper');

Reducing main bundle size with React lazy

On one of our projects we were able to reduce main bundle size from ~1080kB to ~820kB only by lazy loading one library (recharts).

In the router file:

import React, {Component, lazy, Suspense} from 'react'
// other imports

const StatisticsComponent = lazy(() => import('./path/to/component'))

export class Root extends Component {
  render() {
    return (
      <Router history={history}>
        {/* other routes */}
        <Suspense fallback={<div>Loading...</div>}>
          <Route exact component={StatisticsComponent} path='/statistics' />
        </Suspense>
      </Router>
    )
  }
}

How to make path accessible in Rails Engine

Routes in an engine are isolated from the main application by default, so that you can have the same routes in both without clashing. Because of that, using default route helpers like about_path from the main app will result in an error when you call them from inside the engine, and vice versa.

To fix that issue assuming you have an engine mounted:

Blorgh::Engine.routes.draw do
  resources :articles
end
To enforce using engine routes you need to change about_path to blorgh.about_path, and to enforce using main application routes you need to change it to main_app.about_path.

ActiveSupport::IncludeWithRange gotcha

Let’s see how Ruby implements === for ranges.

As the documentation says, “Returns true if obj is an element of the range, false otherwise”. Let’s try it out.

2.5.1 :001 > (1..10) === 5
 => true

Looks fine… how about if we compare it to another range?

 2.5.1 :001 > (1..10) === (5..15)
 => false

Seems to work properly again. How about when one range is a sub-range of the other?

2.5.1 :004 > (1..10) === (5..6)
 => false

As expected. Those ranges are not equal after all. Or at least (5..6) is not an element that (1..10) holds.

What is surprising is what happens if we run the same thing in a rails console (5.2.0 at the time of writing). Suddenly:

[1] pry(main)> (1..10) === (5..6)
=> true

WAT? It now checks if the range is included in the original range! Rails does not override === itself, though. After looking at what Rails adds to Range…

[2] pry(main)> (1..10).class.ancestors
=> [ActiveSupport::EachTimeWithZone,
 ActiveSupport::IncludeWithRange,
 ...]

…we have identified this suspicious module ActiveSupport::IncludeWithRange. Its documentation explains everything.

# Extends the default Range#include? to support range comparisons.
#  (1..5).include?(1..5) # => true
#  (1..5).include?(2..3) # => true
#  (1..5).include?(2..6) # => false

Now guess what Ruby’s Range#=== uses behind the scenes:

static VALUE
range_eqq(VALUE range, VALUE val)
{
    return rb_funcall(range, rb_intern("include?"), 1, val);
}

Yes… include?. The consequences are… there are consequences ;) The most annoying one is related to rspec.

expect(1..10).to match(5..6) # => true
expect([1..10]).to include(5..6) # => true
expect([1..10]).to match_array([5..6]) # => true

It is not possible to easily compare arrays of ranges matching on exact begin and end values while disregarding order. Also, the match behaviour is really misleading in my opinion. The only matcher we can use safely here is eq, as expect(1..10).to eq(5..6) will fail properly.
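Under the behavior described above, a workaround sketch: plain == compares begin and end, and sorting lets you compare arrays of ranges exactly while disregarding order (normalize is a hypothetical helper, not an extra matcher):

```ruby
# == compares begin/end, so it is safe for exact range comparison:
(1..10) == (1..10)  # => true
(1..10) == (5..6)   # => false

# exact, order-insensitive comparison of arrays of ranges:
normalize = ->(ranges) { ranges.sort_by { |r| [r.begin, r.end] } }
normalize.call([1..10, 5..6]) == normalize.call([5..6, 1..10])  # => true
normalize.call([1..10]) == normalize.call([5..6])               # => false
```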

How to change stubbed return value with another stub?

Simple - just re-define the spy as a result of another stub:

valid_token = instance_double(ValidToken)
allow(ValidToken).to receive(:new) { valid_token }
allow(valid_token).to receive(:to_s) { '123' }
allow(valid_token).to receive(:clear!) do
  allow(valid_token).to receive(:to_s) { '456' }
end

valid_token.to_s # 123
valid_token.clear!
valid_token.to_s # 456
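The same trick can be illustrated in plain Ruby without RSpec, using define_singleton_method to redefine one "stub" from inside another (a self-contained sketch, not RSpec internals):

```ruby
valid_token = Object.new
valid_token.define_singleton_method(:to_s) { '123' }
valid_token.define_singleton_method(:clear!) do
  # redefine the stubbed method from inside another stub
  define_singleton_method(:to_s) { '456' }
end

valid_token.to_s  # => "123"
valid_token.clear!
valid_token.to_s  # => "456"
```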

Terraform AWS - moving state to another module

If your infrastructure grows and you find that certain resources should be moved to their own module because they need to be shared with others (or you made a mistake by putting them in the wrong module in the first place), you can move the state using the CLI rather than recreating the resources from scratch.

Let’s say you have:

module "s3" {
  source = "./modules/s3"
}

and inside you defined a user with an access policy:

resource "aws_iam_user" "portal" {...}

resource "aws_iam_user_policy" "portal" {...}
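The target module also has to exist in your configuration before the state is moved into it (a sketch; the source path is an assumption):

```hcl
module "iam" {
  source = "./modules/iam"
}
```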

To move them to the iam module, run:

terraform state mv module.s3.aws_iam_user.portal  module.iam
terraform state mv module.s3.aws_iam_user_policy.portal  module.iam

After that you can move your resource definitions from the s3 to the iam module. At the end, run terraform plan - terraform shouldn’t detect any changes.

Documentation here.

Integer limit is adjustable in ActiveRecord migrations

create_table 'example' do |t|
  t.integer :int                 # int (4 bytes, max 2,147,483,647)
  t.integer :int1, :limit => 1   # tinyint (1 byte, -128 to 127)
  t.integer :int2, :limit => 2   # smallint (2 bytes, max 32,767)
  t.integer :int3, :limit => 3   # mediumint (3 bytes, max 8,388,607)
  t.integer :int4, :limit => 4   # int (4 bytes)
  t.integer :int5, :limit => 5   # bigint (8 bytes, max 9,223,372,036,854,775,807)
  t.integer :int8, :limit => 8   # bigint (8 bytes)
  t.integer :int11, :limit => 11 # int (4 bytes)
end

ReactDatePicker Day off in Summer time issue

If you’re using react-datepicker and the last time you tested your date-picker was in winter time or earlier, please check if it still works properly.


<DatePicker
  onChange={val => this.setValue(input, val)}
  selected={input.value ? moment(input.value) : null}
/>

Seems pretty basic, right?

Date displayed: (screenshot: react-date-picker displayed value)

Real value: (screenshot: real value of the date-picker)




To fix it, you need to add proper time-zone handling (e.g. a pinned UTC offset) to the props in your react-date-picker.

You can read more about this issue here.

How to add autoprefixer in webpack

Firstly, we need to add autoprefixer to our project using yarn/npm.

So yarn add autoprefixer.

After a successful installation, we need to declare which browsers our autoprefixer should target.

To declare that, we need to add a few lines to our package.json file:

"browserslist": [
  "> 1%",
  "last 2 versions"
]

Here we can set other targets as well (see the browserslist documentation for the available queries).

After that, we need to configure the webpack config file (i.e. webpack.config.js).

Firstly, we require autoprefixer and assign it to a variable (somewhere at the beginning of the file):

const autoprefixer = require('autoprefixer');


We need to add the postcss-loader between css-loader and sass-loader:

use: [
  'css-loader',
  {
    loader: 'postcss-loader',
    options: {
      plugins: () => [autoprefixer()]
    }
  },
  'sass-loader'
]

If we have more loaders it could look like this:

  module: {
    rules: [
      {
        test: /\.(sass|scss)$/,
        loader: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: [
            'css-loader',
            {
              loader: 'postcss-loader',
              options: {
                plugins: () => [autoprefixer()]
              }
            },
            'sass-loader'
          ]
        })
      },
      {
        test: /\.css$/,
        loader: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: ['css-loader'],
        })
      },
      {
        test: /\.js/,
        use: ['babel-loader?cacheDirectory'],
        exclude: /node_modules/,
      }
    ]
  }

Now we just need to restart the dev server and we can enjoy a working autoprefixer :)

Remove Docker containers/cache

docker system prune -a -f

WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Deleted Containers:

Total reclaimed space: 14.83GB

SASS / BEM - Not TIL but still some interesting magic

Case 1 - we don’t want to write the parent classname again from deep nesting

.some-class
  $this: &

  &.--sub
    margin-top: 2.4rem
    #{$this}__title // so it is .some-class__title
      font-size: 1.7rem

which compiles to:

.some-class.--sub
  margin-top: 2.4rem
.some-class.--sub .some-class__title
  font-size: 1.7rem

Case 2 - we want to have an a tag (or any other tag) before the parent class from deep nesting

.btn
  margin-top: 2.4rem
  @at-root a#{&} // so it is a.btn
    font-size: 1.7rem

which compiles to:

.btn
  margin-top: 2.4rem
a.btn
  font-size: 1.7rem

Unicode special characters on iOS Mobile Safari


I’ve created a custom checkbox using the unicode checkmark: ✔️

The checkbox looks like this: (screenshot) It looks the same on every browser/device except iOS mobile Safari, where it looks as follows: (screenshot) The problem is that both screenshots present the unchecked state, but on iOS Safari it looks more like it’s checked.

It turned out that mobile Safari is the only one which translates ✔️ into an emoji, whose colors cannot be changed in any way.


To prevent Safari from translating special symbols into emoji, add Variation Selector-16:

For HTML like this:

✔&#xfe0e;
For CSS content like this:

content: '✔\fe0e'  

Where fe0e is the above-mentioned Variation Selector-16. The variation codes can be found here.

How to get XPath of Capybara's query

Have you ever found yourself in a situation, where you were trying to do something like e.g. click_link 'Approve' and Capybara was not able to find that element on the page despite the fact that it’s quite clearly visible, and you were asking yourself “what the heck is it searching for then?”. Or maybe your find(sth) is failing and you think it’s a bug in the Capybara 😱
Worry no more! Now you can easily check the generated XPath used by Capybara*. In most cases, find(*args, **options) translates to:

Capybara::Queries::SelectorQuery.new(*args, session_options: current_scope.session_options, **options).xpath

E.g. to see the XPath for click_on 'Approve':

Capybara::Queries::SelectorQuery.new(:link_or_button, 'Approve', session_options: current_scope.session_options).xpath

And the XPath for find('tbody > tr > td:nth-child(2)'):

Capybara::Queries::SelectorQuery.new('tbody > tr > td:nth-child(2)', session_options: current_scope.session_options).xpath

Then you can copy that XPath to the Chrome’s console and test it with $x('xpath').

* The presented solution doesn’t work with some types of more complicated queries, e.g. find('a', text: 'APPROVED') actually uses a CSS selector and then filters the results using Capybara::Result. You can check the type of selector used with .selector.format on your selector query.

Remove sensitive data from git repository

When you forgot to use secrets from the very beginning and some of them landed in your repository (e.g. a login, password, or secret key), you can remove them in a simple way using filter-branch. For example, to remove a password you can run:

git filter-branch --tree-filter "find . -type f -exec sed -i -e 's/password/**REMOVED**/g' {} \;"

It will replace password with **REMOVED** in the whole repo, across all commits.
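You can see it work end-to-end in a throwaway repository (all names and contents below are made up for the demo; hunter2 stands in for your secret; GNU sed assumed):

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # skip git's 10-second deprecation pause
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo "password=hunter2" > config.txt
git add . && git commit -qm "commit with a leaked secret"

git filter-branch -f --tree-filter \
  "find . -type f -exec sed -i -e 's/hunter2/**REMOVED**/g' {} \;"

git show HEAD:config.txt   # => password=**REMOVED**
```

Note that after a rewrite like this you still need to force-push the rewritten history, and rotate the secret itself, since it has already been exposed.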

Rake / rails console does not work in docker?

When using the default ruby image in your Dockerfile (FROM ruby:2.5.1), you may encounter problems with missing gems in your container when running a rake task or rails console:

Could not find rake-x.y.z in any of the sources. Run bundle install to install missing gems.

That’s probably because you used:

RUN bundle install --deployment

You can fix it with:

RUN bundle install --without development test

Making enumerator from method that yields values

Original challenge related to AWS SQS QueuePoller

The challenge was to test a static method that yields multiple values but should stop, when some condition is met.

Let’s imagine such a method as follows:

class Poller
  def self.poll(condition = -> {})
    counter = 0

    while true do
      yield counter += 1
      break if condition.call
    end
  end
end

The problem is with testing such a method. We not only need to test what it yields; we also need to test and control when it stops. To control when it stops we need access to the actual block, but to test what it yields we either need the yield_successive_args matcher or we need to fetch consecutive results.

It is possible by aggregating each yielded value and then asserting them altogether, but the resultant code is not nice. The solution would be to make an Enumerator from the poll method and use next to get consecutive results. It is also easy as described in this blog post. The problem is, that we do not want to write code that is only required by our tests.

So the idea is to dynamically add enumerator-creation (when no block is provided) to the class, only when testing.

Poller.define_singleton_method(:poll_with_enum) do |*args, &block|
  return enum_for(:poll, *args) unless block.present?
  poll_without_enum(*args, &block)
end

# alias_method_chain is deprecated
# Poller.singleton_class.alias_method_chain(:poll, :enum)
Poller.singleton_class.alias_method :poll_without_enum, :poll
Poller.singleton_class.alias_method :poll, :poll_with_enum

If we turn this into a helper…

def with_enumerated(subject, method_name)
  subject.define_singleton_method("#{method_name}_with_enum") do |*args, &block|
    return enum_for(method_name, *args) unless block.present?
    public_send("#{method_name}_without_enum", *args, &block)
  end

  subject.singleton_class.alias_method "#{method_name}_without_enum", method_name
  subject.singleton_class.alias_method method_name, "#{method_name}_with_enum"

  yield

  subject.singleton_class.alias_method method_name, "#{method_name}_without_enum"
  subject.singleton_class.remove_method "#{method_name}_with_enum"
end

…then we could leverage it in our tests!

with_enumerated(Poller, :poll) do
  $stop = false
  poller = Poller.poll(condition = -> { $stop == true })

  first_value = poller.next
  expect(first_value).to eq 1

  $stop = true
  second_value = poller.next
  expect(second_value).to eq 2

  expect { poller.next }.to raise_exception(StopIteration)
end

Stubbing responses from AWS services

We started integrating with Amazon SQS recently and needed to write some unit tests for it. Unfortunately, stubbing the AWS client library the regular way turned out to be pretty cumbersome and challenging. Fortunately, the AWS SDK for Ruby provides tools that make it quite comfortable.

# Simple stubbing...
sqs_response_mock = Aws::SQS::Types::ReceiveMessageResult.new(messages: [])
sqs_response_mock.messages << Aws::SQS::Types::Message.new(body: 'abc')
Aws.config[:sqs] = {
    stub_responses: {
        receive_message: sqs_response_mock
    }
}

# ...allows properly polling the queue
poller = Aws::SQS::QueuePoller.new('https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue') # example queue URL
poller.poll do |msg|
  puts msg.body
end

# => abc

Documentation can be found here