Today I Learned

Use formik with context/hooks instead of render-props

Note that this feature is available in formik >= 2.0

Usually, when you create forms with Formik, you access the Formik props using the render-prop pattern:

export function MyForm() {
  return (
    <Formik
      initialValues={{ /* something here */ }}
      onSubmit={(values, actions) => {/* handler here */}}
    >
      {(formikProps) => (
        <form onSubmit={formikProps.handleSubmit}>
          <MyInputComponent value={formikProps.values.name} />
        </form>
      )}
    </Formik>
  )
}

Instead, you can use Formik as a context provider (which it actually is under the hood) and consume the provided form state with the useFormikContext() hook or the FormikConsumer component.

Example below:

export function MyFormContainer() {
  // Under the hood, Formik uses a Context Provider
  return (
    <Formik
      initialValues={{ /* something here */ }}
      onSubmit={(values, actions) => { /* handler here */ }}
    >
      {/* You do not have to use a function here! */}
      {/* However you can, if you would like to :) */}
      <MyForm />
    </Formik>
  )
}
  )
}

export function MyForm() {
  const { handleSubmit, values } = useFormikContext(); // formikProps

  return (
    <form onSubmit={handleSubmit}>
      <MyInputComponent value={values.name} />
    </form>
  );
}

This allows us to use Formik props deep down in nested components without drilling formikProps through every level.

validate! can raise various classes of exceptions

Classes inheriting from ActiveRecord::Base and classes including ActiveModel::Validations raise different exceptions on a failed validate!

Consider the following classes:

class Foo
  include ActiveModel::Validations

  validates :foo, presence: true

  def foo
    nil
  end
end

class Organisation < ActiveRecord::Base
  validates :name, presence: true
end

Take a look at the classes of the raised exceptions:

[2] pry(#<Api::V1::HooksController>)> Foo.new.validate!
ActiveModel::ValidationError: Validation failed: Foo can't be blank
from /Users/bartoszmaka/.asdf/installs/ruby/2.6.2/lib/ruby/gems/2.6.0/gems/activemodel-6.0.0.rc1/lib/active_model/validations.rb:412:in `raise_validation_error'
[3] pry(#<Api::V1::HooksController>)> Organisation.new.validate!
ActiveRecord::RecordInvalid: Validation failed: Name can't be blank
from /Users/bartoszmaka/.asdf/installs/ruby/2.6.2/lib/ruby/gems/2.6.0/gems/activerecord-6.0.0.rc1/lib/active_record/validations.rb:80:in `raise_validation_error'

The closest common ancestor of these classes is StandardError:

[4] pry(#<Api::V1::HooksController>)> ActiveModel::ValidationError.ancestors
=> [ActiveModel::ValidationError,
 StandardError,
 Exception,
 ActiveSupport::Dependencies::Blamable,
 ActiveSupport::ToJsonWithActiveSupportEncoder,
 Object,
 ActiveSupport::Tryable,
 ActiveSupport::Dependencies::Loadable,
 JSON::Ext::Generator::GeneratorMethods::Object,
 PP::ObjectMixin,
 Kernel,
 BasicObject]

[5] pry(#<Api::V1::HooksController>)> ActiveRecord::RecordInvalid.ancestors
=> [ActiveRecord::RecordInvalid,
 ActiveRecord::ActiveRecordError,
 StandardError,
 Exception,
 ActiveSupport::Dependencies::Blamable,
 ActiveSupport::ToJsonWithActiveSupportEncoder,
 Object,
 ActiveSupport::Tryable,
 ActiveSupport::Dependencies::Loadable,
 JSON::Ext::Generator::GeneratorMethods::Object,
 PP::ObjectMixin,
 Kernel,
 BasicObject]

If you’re using form objects, chances are you’re including ActiveModel::Validations. Make sure that you rescue_from the proper exceptions.
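A plain-Ruby sketch of why this matters (FakeValidationError and FakeRecordInvalid are hypothetical stand-ins; in a controller you would rescue_from the real classes in the same way):

```ruby
# Stand-ins for ActiveModel::ValidationError and ActiveRecord::RecordInvalid --
# like the real classes, both descend from StandardError but share no
# closer common ancestor:
class FakeValidationError < StandardError; end
class FakeRecordInvalid < StandardError; end

# Rescue both classes explicitly rather than relying on a single one:
def validate_and_report(error_class)
  raise error_class, "Validation failed"
rescue FakeValidationError, FakeRecordInvalid => e
  "rescued #{e.class}"
end
```

With both classes listed in the rescue, validate_and_report(FakeRecordInvalid) reports the failure instead of letting the exception bubble up.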

Used versions: ruby 2.6.2p47, Rails 6.0.0.rc1

Decrease bundle size by importing Lodash correctly

If you use just one or a few Lodash functions in a file, try importing them directly from their respective files, one by one.

import _ from 'lodash'; // bad - 70.5K (gzipped: 24.4K)
import { has } from 'lodash'; // bad - 70.5K (gzipped: 24.4K)

import has from 'lodash/has'; // good - 9K (gzipped: 2.7K)

Alternatively, one can use babel-plugin-lodash or lodash-es. These packages come with their own sets of restrictions, though.

It is worth noting that mixing these two import styles makes the net gain negative: the bundle ends up containing the entire lodash plus the individually imported utilities. So, for this to be effective, the ‘good’ pattern needs to be enforced everywhere across the codebase.

Further reading

Revert^2

In the given scenario:

  1. A pull request was merged
  2. The connected branch#1 was removed
  3. The merged PR was reverted
  4. You no longer have branch#1, but you can see the revert commit (commit#1)

All you have to do is track down commit#1 and revert the revert:

$ git revert commit#1

Form with multiple submit buttons

When you need a Rails form that submits to different URLs, you will probably try to handle it with JS. You don’t need to: just use the formaction attribute on a submit button. formaction overrides the URL from the form’s action and uses its own instead.

<%= form_for @article, url: {action: "create"} do |f| %>
  <%= f.text_field :title %>
  <%= f.text_area :body, size: "60x12" %>
  <%= f.submit "Create" %> <%# this will use the URL from the form's action %>
  <%= f.submit "Create Draft Article", formaction: '/draft_articles' %> <%# this will use the URL from formaction %>
<% end %>

Clean up local list of your git branches

Have you ever had to search through a long list of local branches, which are mostly obsolete for you?

To keep your local git repository slim and avoid a long list of branches while doing git branch, there is an easy strategy that can save you a bit of time.

First, always remember to push everything to the remote before you leave your desk, then just run in the console:

git branch | grep -v "master" | xargs git branch -D

It will remove all your local branches except master (run it while master is checked out, since the current branch cannot be deleted). If you need to restore any particular branch, you can fetch and check it out - it will be waiting for you on the remote!

Inserting data in migrations

When you encounter a problem with inserting data in migrations because your Repo is not yet aware of the newly created table, you need to call flush() first:

# ...
  def up do
    create table("players") do
      add :name, :varchar, null: false
      add :color, :varchar, null: false
      add :avatar, :varchar, null: false
    end

    create index("players", [:name], unique: true)

    flush() # 👷

    Repo.insert!(%Player{...})
  end
# ...

Trigger debugger with key combination

Whenever you need to freeze your website and check its state, you can trigger the debugger at any given point.

Let’s say you want to check which classes are applied when an element is hovered - move your mouse over it and trigger the debugger. The page will freeze and you can browse the DOM tree freely.

According to Google Chrome’s DevTools documentation:

Pause script execution (if currently running) or resume (if currently paused)

You need to have DevTools open.

macOS: F8 or Command+\

Windows/Linux: F8 or Control+\

Using multiple commands on heroku CLI

Need to use multiple heroku CLI commands one after another and don’t want to type -a foo-bar-baz every time?

Type export HEROKU_APP=foo-bar-baz in the terminal and then just happily run heroku commands against the foo-bar-baz application.

heroku pg:backups:capture -a foo-bar-baz

# is equivalent to:

export HEROKU_APP=foo-bar-baz
heroku pg:backups:capture

PS: After that you might want to unset HEROKU_APP to remove the variable.

Replicating Rails `content_for()` in Phoenix

Typically, in a Rails layout, you would do something similar to this:

<%= yield :nav_actions %>

then in a view (e.g. "resource_abc/show"):

<% content_for :nav_actions do %>
  <!-- whatever -->
<% end %>

To replicate this behavior in Phoenix, use render_existing/3.

In layout:

<%= render_existing @view_module, "nav_actions." <> @view_template, assigns %>

Then in your template folder ("resource_abc/") you need to define an extra file nav_actions.show.html.eex (for the show action) - nav_actions.show will be rendered only if that file exists.

If you want to render nav_actions for all resource actions, just skip @view_template:

<%= render_existing @view_module, "nav_actions", assigns %>

In this case, the file should be named nav_actions.html.eex.

Babel loader transpilation for Jest

Let’s imagine the following situation: we need to create a new npm package with create-react-app, in order to bundle some files into a package and have them available in a project (it’s possible to use create-component-lib for this task). This sometimes requires us to update our Babel config.

In my latest task, my Babel config contained presets that were invalid for this situation:

module.exports = {
  presets: ["@babel/preset-env", ["react-app", { absoluteRuntime: false }]]
}

In the above example, the build process produces transpiled code that references the Babel runtime to minimize the package size:

var _interopRequireWildcard = require("@babel/runtime/helpers/interopRequireWildcard");

var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");

Object.defineProperty(exports, "__esModule", {
  value: true
});
exports.default = void 0;

var _defineProperty2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/defineProperty"));

var _objectSpread4 = _interopRequireDefault(require("@babel/runtime/helpers/esm/objectSpread"));

var _classCallCheck2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/classCallCheck"));

var _createClass2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/createClass"));

var _possibleConstructorReturn2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/possibleConstructorReturn"));

var _getPrototypeOf3 = _interopRequireDefault(require("@babel/runtime/helpers/esm/getPrototypeOf"));

var _inherits2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/inherits"));

Now, when we start a test which uses a transpiled component, it’s possible we receive an error like this:

Jest encountered an unexpected token

    By default, if Jest sees a Babel config, it will use that to transform your files, ignoring "node_modules".

    Here's what you can do:
     • To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
     • If you need a custom transformation to specify a "transform" option in your config.
     • If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.

    You'll find more details and examples of these config options in the docs:
    https://jestjs.io/docs/en/configuration.html

    Details:

    /Users/mobile32/projects/formik-generator/node_modules/@babel/runtime/helpers/esm/defineProperty.js:1
    ({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,global,jest){export default function _defineProperty(obj, key, value) {
                                                                                             ^^^^^^

    SyntaxError: Unexpected token export

      10 | exports.default = void 0;
      11 |
    > 12 | var _defineProperty2 = _interopRequireDefault(require("@babel/runtime/helpers/esm/defineProperty"));
         |                                               ^

      at ScriptTransformer._transformAndBuildScript (node_modules/jest-runtime/build/script_transformer.js:403:17)
      at Object.require (dist/FormGenerator.js:12:47)

This happens because, by default, Jest ignores transformations for everything in the node_modules directory.

Now we have two ways to resolve this problem. In the first scenario, we can add transformIgnorePatterns to our Jest config so that the Babel runtime module gets transpiled too:

transformIgnorePatterns: [
  '/node_modules/(?!@babel\/runtime)'
],

In the above example, the transpiled code will still be smaller at application runtime, but it no longer crashes our tests.

The second option is to use different Babel presets for proper transpilation:

module.exports = {
  plugins: [
    '@babel/plugin-proposal-class-properties',
  ],
  presets: [
    '@babel/preset-env',
    '@babel/preset-react',
  ],
}

Now, after the build, we get transpiled code working in the test environment; however, the code will be bloated with helpers required to mimic transpiled features (for example, class inheritance):

Object.defineProperty(exports, "__esModule", {
  value: true
});
exports.default = void 0;

var _react = _interopRequireWildcard(require("react"));

var _propTypes = _interopRequireDefault(require("prop-types"));

var _formik = require("formik");

var _yup = require("yup");

var _classnames = _interopRequireDefault(require("classnames"));

var _moment = _interopRequireDefault(require("moment"));

var _BooleanField = _interopRequireDefault(require("./FieldTypes/BooleanField"));

var _EnumField = _interopRequireDefault(require("./FieldTypes/EnumField"));

var _MoneyField = _interopRequireDefault(require("./FieldTypes/MoneyField"));

var _TextField = _interopRequireDefault(require("./FieldTypes/TextField"));

var _QuarterDateField = _interopRequireDefault(require("./FieldTypes/QuarterDateField"));

Use `pluck` to fetch paginated results from S3 client

Some AWS client calls return responses with a limited amount of data (typically 1,000 items per response).

Example response may look as follows:

aws_client.list_objects_v2(bucket: bucket)

=> #<struct Aws::S3::Types::ListObjectsV2Output
 is_truncated=true,
 contents=
 [#<struct Aws::S3::Types::Object
    key="reports/report_2.csv",
    last_modified=2019-03-13 14:25:04 UTC,
    etag="\"5a7c05eb47dcd13a27a26d34eb13b0ec\"",
    size=466,
    storage_class="STANDARD",
    owner=nil>,
    ...
 ]
 name="awesome-bucket",
 prefix="",
 delimiter=nil,
 max_keys=1000,
 common_prefixes=[],
 encoding_type=nil,
 key_count=1000,
 continuation_token=nil,
 next_continuation_token="1wEBwtqJOGmZF5DXgu5UhTMv386wdtND0EQzkkOUEGPPeF8tC58BEbfBvfsVHKGnxNgHxvFARrcWdCPJXXgiMzUtpedrxZP2G9wu/0but8ALLHDGdZVD4OHb41DWQKocGGAOwr0wfOeN4hUoCzimKeA==",
 start_after=nil>

Because the list_objects_v2 method takes continuation_token as an argument, one solution for fetching all the records is to loop through the responses, passing next_continuation_token along until it comes back empty.
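A sketch of that manual loop, with a hypothetical in-memory client standing in for the AWS SDK (in real code each call would be aws_client.list_objects_v2(bucket: bucket, continuation_token: token)):

```ruby
# Fake two-page "API": the first call returns a token, the second returns nil.
FakePage = Struct.new(:contents, :next_continuation_token)

class FakeClient
  PAGES = [
    FakePage.new(["reports/report_1.csv"], "token-1"),
    FakePage.new(["reports/report_2.csv"], nil)
  ].freeze

  def list_objects_v2(continuation_token: nil)
    continuation_token ? PAGES[1] : PAGES[0]
  end
end

# Loop until next_continuation_token comes back empty:
client  = FakeClient.new
objects = []
token   = nil
loop do
  page = client.list_objects_v2(continuation_token: token)
  objects.concat(page.contents)
  token = page.next_continuation_token
  break if token.nil?
end
```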

Instead, you can use the built-in enumerator in the response object, which will return results from all the pages (next pages will be fetched automatically by SDK):

aws_client.list_objects_v2(bucket: bucket).map { |page| page[:contents] }

=> [[#<struct Aws::S3::Types::Object
   key="reports/report_2.csv",
   last_modified=2019-03-13 14:25:04 UTC,
   etag="\"5a7c05eb47dcd13a27a26d34eb13b0ec\"",
   size=466,
   storage_class="STANDARD",
   owner=nil>,
  #<struct Aws::S3::Types::Object
   key="reports/report_1.csv",
   last_modified=2019-03-13 13:43:30 UTC,
   etag="\"dc7215c066f62c7ddedef78e123dbc7c\"",
   size=191722,
   storage_class="STANDARD",
   owner=nil>,
   ... ]

However, there is an even simpler way to achieve the same result. You can use the pluck method (note: on plain enumerables, pluck comes from ActiveSupport) as follows:

aws_client.list_objects_v2(bucket: bucket).pluck(:contents)

How to fix Elasticsearch 'FORBIDDEN/12/index read-only'

By default, Elasticsearch installed with Homebrew on macOS goes into read-only mode when you have less than 5% of free disk space. If you see errors similar to this:

Elasticsearch::Transport::Transport::Errors::Forbidden:
  [403] {"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}

Or in /usr/local/var/log/elasticsearch.log you can see logs similar to:

flood stage disk watermark [95%] exceeded on [nCxquc7PTxKvs6hLkfonvg][nCxquc7][/usr/local/var/lib/elasticsearch/nodes/0] free: 15.3gb[4.1%], all indices on this node will be marked read-only

Then you can fix it by running the following commands:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

Custom nginx proxy host name

server {
    listen 80 default_server;
    server_name ~^(?<developer>.+)\.dev\.selleo\.com$;
    client_max_body_size 5M;
    root   /usr/share/nginx/html;

    location / {
      resolver 8.8.8.8;
      set $backend https://$developer-secret.app.selleo.com:443;
      proxy_pass $backend;
      proxy_set_header  X-Real-IP       $remote_addr;
      proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The resolver directive does the job here!

Have you Cmd-C'd your overcommit and lost your changes?

While overcommit was running, I interrupted the process and received the following message:

Interrupt signal received. Stopping hooks...

⚠  Hook run interrupted by user

then

Unable to restore working tree after pre-commit hooks run:
STDOUT:
STDERR:unable to refresh index

To my horror, all my changes were lost! Fortunately, they were kept in the stash, so a simple git stash pop helped :) More info here

FactoryBot: Constructing objects using Dry::Types

If you face an error similar to the one below

Dry::Struct::Error: [YourClassName.new] :some_attribute_name is missing in Hash input

when building objects of a class that uses Dry::Types with FactoryBot, be advised that

Although factory_bot is written to work with ActiveRecord out of the box, it can also work with any Ruby class. For maximum compatibility with ActiveRecord, the default initializer builds all instances by calling new on your build class without any arguments. It then calls attribute writer methods to assign all the attribute values. While that works fine for ActiveRecord, it actually doesn’t work for almost any other Ruby class.

The fix is to add the following line to your factory definition:

initialize_with { new(attributes) }
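A plain-Ruby sketch of why the default strategy fails (StrictStruct is a hypothetical stand-in for a Dry::Struct class: all attributes must arrive as a hash at construction time, and there are no attribute writers):

```ruby
class StrictStruct
  attr_reader :some_attribute_name

  def initialize(attributes)
    unless attributes.key?(:some_attribute_name)
      raise ArgumentError, ":some_attribute_name is missing in Hash input"
    end
    @some_attribute_name = attributes[:some_attribute_name]
  end
end

# FactoryBot's default initializer is roughly `StrictStruct.new` followed by
# attribute writers, which blows up here. `initialize_with { new(attributes) }`
# makes FactoryBot construct the object like this instead:
instance = StrictStruct.new(some_attribute_name: "value")
```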

Another way to use absolute paths in js 'require'

consider the following file structure:

...
▾ helpers/
    timeHelper.js
    timeHelper.test.js
▾ services/
    handleSlackCommand.js
    handleSlackCommand.test.js
  index.js
  package.json
  ...

in the package.json add:

{
  ...
  "dependencies": {
    "dotenv": "^6.2.0",
    "express": "^4.16.4",
    ...
    "helpers": "file:helpers", <-- this line
  },
}

now in the services/handleSlackCommand.js I can use

const { whatever } = require('helpers/timeHelper');

instead of

const { whatever } = require('../helpers/timeHelper');

Reducing main bundle size with React lazy

On one of our projects we were able to reduce main bundle size from ~1080kB to ~820kB only by lazy loading one library (recharts).

In the router file:

import React, { Component, lazy, Suspense } from 'react'
// other imports

const StatisticsComponent = lazy(() => import('./path/to/component'))

export class Root extends Component {
  render() {
    return (
      <Router history={history}>
        <Switch>
          {/* other routes */}
          <Suspense fallback={<div>Loading...</div>}>
            <Route exact component={StatisticsComponent} path='/statistics' />
          </Suspense>
        </Switch>
      </Router>
    )
  }
}

How to make a path accessible in a Rails Engine

Routes in an engine are isolated from the main application by default, so that you can have the same routes without clashing. Because of that, using default route helpers like about_path from the main app will result in an error when you try to use them from inside the engine, or when you try to access an engine route from the main app.

To fix that issue assuming you have an engine mounted:

Blorgh::Engine.routes.draw do
  resources :articles
end

To enforce using engine routes, you need to change about_path to blorgh.about_path; to enforce using main-application routes, change it to main_app.about_path.
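For example (the route helpers below are illustrative; blorgh is the engine's mount name), in the views it looks like this:

```erb
<%# engine view: link to a main-application route %>
<%= link_to "About us", main_app.about_path %>

<%# main-application view: link to a route defined in the engine %>
<%= link_to "Articles", blorgh.articles_path %>
```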

ActiveSupport::IncludeWithRange gotcha

Let’s see how Ruby implements === for ranges.

As the documentation says, it “Returns true if obj is an element of the range, false otherwise”. Let’s try it out.

2.5.1 :001 > (1..10) === 5
 => true

Looks fine… how about if we compare it to another range?

2.5.1 :001 > (1..10) === (5..15)
 => false

Seems to work properly again. How about if one range is a sub-range of the other one?

2.5.1 :004 > (1..10) === (5..6)
 => false

As expected. Those ranges are not equal after all. Or at least (5..6) is not an element that (1..10) holds.

What is surprising is what happens if we run the same thing in a Rails console (5.2.0 at the time of writing). Suddenly:

[1] pry(main)> (1..10) === (5..6)
=> true

WAT? It now checks if the range is included in the original range! Rails does not override === itself, though. After looking at what Rails adds to Range…

[2] pry(main)> (1..10).class.ancestors
=> [ActiveSupport::EachTimeWithZone,
 ActiveSupport::IncludeTimeWithZone,
 ActiveSupport::IncludeWithRange,
 ActiveSupport::RangeWithFormat,
 Range,
 Enumerable,
 ActiveSupport::ToJsonWithActiveSupportEncoder,
 Object,
 RequireAll,
 PP::ObjectMixin,
 Nori::CoreExt::Object,
 ActiveSupport::Dependencies::Loadable,
 JSON::Ext::Generator::GeneratorMethods::Object,
 ActiveSupport::Tryable,
 Kernel,
 BasicObject]

…we have identified the suspicious module: ActiveSupport::IncludeWithRange. Its documentation explains everything.

# Extends the default Range#include? to support range comparisons.
#  (1..5).include?(1..5) # => true
#  (1..5).include?(2..3) # => true
#  (1..5).include?(2..6) # => false

Now guess what Ruby’s Range#=== uses behind the scenes:

static VALUE
range_eqq(VALUE range, VALUE val)
{
    return rb_funcall(range, rb_intern("include?"), 1, val);
}

Yes… include?. The consequences are… there are consequences ;) The most annoying one is related to RSpec.

expect(1..10).to match(5..6) # => true
expect([1..10]).to include(5..6) # => true
expect([1..10]).to match_array([5..6]) # => true

It is not possible to easily compare arrays of ranges, matching on exact begin and end values while disregarding the actual order. Also, the match behaviour is really misleading in my opinion. The only matcher we can use safely here is eq, as expect(1..10).to eq(5..6) will fail properly.
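That is because eq relies on Range#==, which plain Ruby implements as a structural comparison of begin, end and exclusivity, so it is unaffected by the include?-based === override:

```ruby
# Range#== compares begin, end and exclude_end? -- no membership involved:
subrange_equal = (1..10) == (5..6)    # false
identical      = (1..10) == (1..10)   # true
exclusive      = (1..10) == (1...10)  # false (exclusivity differs)
```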

How to change stubbed return value with another stub?

Simple - just re-define the spy as a result of another stub:

valid_token = instance_double(ValidToken)
allow(ValidToken).to receive(:new) { valid_token }
allow(valid_token).to receive(:to_s) { '123' }
allow(valid_token).to receive(:clear!) do
   allow(valid_token).to receive(:to_s) { '456' }
end
valid_token = ValidToken.new
valid_token.to_s # 123
valid_token.clear!
valid_token.to_s # 456

Terraform AWS - moving state to another module

If your infrastructure grows and you find that certain resources should be moved to their own module because they need to be shared with others (or you made a mistake by putting them in the wrong module in the first place), you can move the state using the CLI rather than recreating the resources from scratch.

let’s say you have:

module "s3" {
  source = "./modules/s3"
  ...
}

and inside you defined a user with an access policy:

resource "aws_iam_user" "portal" {...}

resource "aws_iam_user_policy" "portal" {...}

Use:

terraform state mv module.s3.aws_iam_user.portal module.iam
terraform state mv module.s3.aws_iam_user_policy.portal module.iam

After that, you can move the resource definitions from the s3 module to the iam module. At the end, run terraform plan - terraform shouldn’t detect any changes.

Documentation here.

Integer limit is adjustable in ActiveRecord migrations

The :limit option sets the column size in bytes (the comments below show the resulting MySQL column types):

create_table 'example' do |t|
  t.integer :int                 # int (4 bytes, max 2,147,483,647)
  t.integer :int1, :limit => 1   # tinyint (1 byte, -128 to 127)
  t.integer :int2, :limit => 2   # smallint (2 bytes, max 32,767)
  t.integer :int3, :limit => 3   # mediumint (3 bytes, max 8,388,607)
  t.integer :int4, :limit => 4   # int (4 bytes)
  t.integer :int5, :limit => 5   # bigint (8 bytes, max 9,223,372,036,854,775,807)
  t.integer :int8, :limit => 8   # bigint (8 bytes)
  t.integer :int11, :limit => 11 # int (4 bytes)
end
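Why those maxima: a signed n-byte integer keeps one bit for the sign, leaving 8*n - 1 value bits. A quick sanity check:

```ruby
# Maximum value of a signed integer stored in `bytes` bytes:
signed_max = ->(bytes) { 2**(8 * bytes - 1) - 1 }

signed_max.call(2) # => 32767 (smallint)
signed_max.call(4) # => 2147483647 (int)
signed_max.call(8) # => 9223372036854775807 (bigint)
```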

ReactDatePicker off-by-one-day issue in summer time

If you’re using react-datepicker and the last time you tested your date picker was during winter time or earlier, please check if it still works properly.

Issue:

<DatePicker
  dateFormat="DD/MM/YYYY"
  onChange={val => this.setValue(input, val)}
  selected={input.value ? moment(input.value) : null}
  />

Seems pretty basic, right?

Date displayed:

React-date-picker displayed value

Real value:

Real value of date-picker

Solution:

Add

utcOffset={0}

to props in your react-date-picker.

<DatePicker
  dateFormat="DD/MM/YYYY"
  onChange={val => this.setValue(input, val)}
  selected={input.value ? moment(input.value) : null}
  utcOffset={0}
  />

You can read more about this issue here.

How to add autoprefixer in webpack

First, we need to add the package to our project using yarn/npm:

yarn add autoprefixer

After a successful installation, we need to declare which browsers we want to target with autoprefixer.

To declare that, we need to add a few lines to our package.json file:

"browserslist": [
  "> 1%",
  "last 2 versions"
],

Here we can also choose other queries (https://github.com/browserslist/browserslist#queries).

After that, we need to configure the webpack config file (e.g. webpack.config.js).

First we require autoprefixer and assign it to a variable (somewhere at the beginning of the file):

const autoprefixer = require('autoprefixer');

Important: we need to add the postcss-loader between css-loader and sass-loader.

use: [
  'css-loader',
  {
    loader: 'postcss-loader',
    options: {
      plugins: () => [autoprefixer()]
    }
  },
  'sass-loader'
],

If we have more loaders, it could look like this:

  module: {
    rules: [
      {
        test: /\.(sass|scss)$/,
        loader: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: ['css-loader',
            {
              loader: 'postcss-loader',
              options: {
                plugins: () => [autoprefixer()]
              }
            },
            'sass-loader'],
        }),
      },
      {
        test: /\.css$/,
        loader: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: ['css-loader'],
        }),
      },
      {
        test: /\.js/,
        use: ['babel-loader?cacheDirectory'],
        exclude: /node_modules/,
      },
    ],
  },

Now we need to restart the server, and you can enjoy a working autoprefixer :)

Remove Docker containers/cache

docker system prune -a -f

WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Deleted Containers:
364bf6faea40cbb733f7fe71d4e6634115c091cf60bba0c3432f0675c4c19026
1b6c011fab7492f5f0f5fd81b23dc71c020524e369f3e04d1da7a6fdfbafdba7
8fd3ce7b846e2927e952378ac763aff2d7e96881e8fb3bf352d014764bbd7625
885c3efd8b6096af8c7227174462972a93d46666f5d1a5b01339b77b2a868baf
ea60a832bc8445da22fa6c460cfa9dc816843fa7176e73c96d6de62f3048ed2d
e96ac95f80f53727e51f313946184db7d768c57808b12dd6f4f69edf038f7c74
54967f9c6c2e74314441d07e26ac4dc0b38ba53a2d2b665ab9c9be9f8d369def
3841b5d1586db0a7b500a3fc3cf54f3628f23b5236c161a1fee2a3a80b4f5cb7

Total reclaimed space: 14.83GB

SASS / BEM - not a TIL, but still some interesting magic

Case 1 - we don’t want to write the parent class name again from deep nesting

.some-class
  $this: &

  &.--sub
    margin-top: 2.4rem
    #{$this}__title // so it is .some-class__title
      font-size: 1.7rem

equals:

.some-class.--sub
  margin-top: 2.4rem
.some-class.--sub .some-class__title
  font-size: 1.7rem

Case 2 - we want an a tag (or any other tag) before the parent class from deep nesting

.btn
  margin-top: 2.4rem
  @at-root a#{&} // so it is a.btn
    font-size: 1.7rem

equals:

.btn
  margin-top: 2.4rem
a.btn
  font-size: 1.7rem

Unicode special characters on iOS Mobile Safari

Issue

I’ve created a custom checkbox using the unicode checkmark: ✔️

The checkbox looks like this: Safari checkbox It looks the same on every browser/device except iOS Mobile Safari, where it looks as follows: Safari checkbox The problem is that both screenshots present the unchecked state, but on iOS Safari it looks more like it’s checked.

It turned out that Mobile Safari is the only browser that renders ✔️ as an emoji, whose colors cannot be changed in any way.

Solution

To prevent Safari from translating special symbols into emoji, add Variation Selector-16:

For HTML like this:

<p>✔&#xfe0e;</p>

For CSS content like this:

content: '✔\fe0e'  

Where fe0e is the Variation Selector-16 mentioned above. The variation codes can be found here

How to get XPath of Capybara's query

Have you ever found yourself in a situation where you were trying to do something like click_link 'Approve' and Capybara was not able to find that element on the page, despite the fact that it’s quite clearly visible, and you were asking yourself “what the heck is it searching for, then?” Or maybe your find(sth) is failing and you think it’s a bug in Capybara 😱
Worry no more! Now you can easily check the generated XPath used by Capybara*. In most cases, find(*args, **options) translates to:

Capybara::Queries::SelectorQuery.new(*args, session_options: current_scope.session_options, **options).xpath

E.g. to see XPath for click_on 'Approve':

Capybara::Queries::SelectorQuery.new(:link_or_button, 'Approve', session_options: current_scope.session_options).xpath

And XPath for find('tbody > tr > td:nth-child(2)'):

Capybara::Queries::SelectorQuery.new('tbody > tr > td:nth-child(2)', session_options: current_scope.session_options).xpath

Then you can copy that XPath to the Chrome’s console and test it with $x('xpath').

* The presented solution doesn’t work with some types of more complicated queries, e.g. find('a', text: 'APPROVED') actually uses a CSS selector instead of the XPath, and then filters the results using Capybara::Result. You can check the type of the selector used by calling .selector.format on your selector query.

Remove sensitive data from a git repository

When you forgot to use secrets from the very beginning and some of them (e.g. a login, password or secret key) landed in your repository, you can remove them in a simple way using filter-branch. For example, to remove password you need to use:

git filter-branch --tree-filter "find . -type f -exec sed -i -e 's/password/**REMOVED**/g' {} \;"

It will replace password with **REMOVED** across the whole repository and its commit history.