Today I Learned

24 posts about #javascript

matchMedia - JS media queries

Instead of trying to replicate media query functionality in JavaScript with window.innerWidth, or installing external packages for that, simply use window.matchMedia, which does exactly what you are looking for:

if (window.matchMedia('all and (max-width: 767px)').matches) { 
  console.log('do something');
}

It works exactly like CSS media queries and is supported in all modern browsers (and in IE from version 10).
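
The returned MediaQueryList can also notify you when the result changes, so you can react to viewport changes without a resize listener. A minimal sketch (the breakpoint and handler are just examples):

const mql = window.matchMedia('(max-width: 767px)');

const onChange = ({ matches }) => {
  // matches is true when the media query currently applies
  console.log(matches ? 'mobile layout' : 'desktop layout');
};

onChange(mql); // run once for the initial state
mql.addEventListener('change', onChange); // older browsers use mql.addListener(onChange) instead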

Edge cases when generating JS multi-dimensional arrays

tl;dr

Don’t use Array#fill to generate multi-dimensional arrays.

Use other methods instead, for example:

Array.from(
  { length: 3 },
  () => Array.from({ length: 3 })
)
Array(3).fill().map(
  () => Array(3).fill()
)

Long version

One of many ways to generate an array is to use the JS Array constructor:

> a = Array(5)
[ <5 empty items> ]
> a.length
5

An array can be filled with some value using the Array#fill method:

> a = Array(5).fill('value')
[ 'value', 'value', 'value', 'value', 'value' ]
> a = Array(5).fill(0)
[ 0, 0, 0, 0, 0 ]
> a = Array(5).fill()
[ undefined, undefined, undefined, undefined, undefined ]
> a[1] = 42
42
> a
[ undefined, 42, undefined, undefined, undefined ]

So far it works fine, but be careful when filling an array with a reference type, like an Array or an Object:

> b = Array(3).fill(Array(3).fill())
[ [ undefined, undefined, undefined ],
  [ undefined, undefined, undefined ],
  [ undefined, undefined, undefined ] ]
> b[1][1] = 42
42
> b
[ [ undefined, 42, undefined ],
  [ undefined, 42, undefined ],
  [ undefined, 42, undefined ] ]
> b[1] === b[2]
true

Each row of an array generated this way is actually the same object.
It’s literally what the documentation says:

arr.fill(value[, start[, end]])
value: Value to fill the array with. (Note all elements in the array will be this exact value.)

MDN web docs
But not everybody has to know that, right?
And using Array#fill without that knowledge might lead to a hard-to-track bug in your application.

Solutions

Fill the array with some value, and then map through the values

> a = Array(3).fill().map(() => Array(3).fill())
[ [ undefined, undefined, undefined ],
  [ undefined, undefined, undefined ],
  [ undefined, undefined, undefined ] ]
> a[1][1] = 42
42
> a
[ [ undefined, undefined, undefined ],
  [ undefined, 42, undefined ],
  [ undefined, undefined, undefined ] ]

Or use another method, like Array.from:

> a = Array.from({ length: 3 }, () => Array.from({ length: 3 }))
[ [ undefined, undefined, undefined ],
  [ undefined, undefined, undefined ],
  [ undefined, undefined, undefined ] ]
> a[1][1] = 42
42
> a
[ [ undefined, undefined, undefined ],
  [ undefined, 42, undefined ],
  [ undefined, undefined, undefined ] ]
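
To avoid repeating the pattern, the Array.from approach can be wrapped in a small helper. A sketch (createMatrix is a hypothetical name, not part of any library):

// rows x cols matrix; valueFn is called once per cell, so reference values are not shared
const createMatrix = (rows, cols, valueFn = () => undefined) =>
  Array.from({ length: rows }, () =>
    Array.from({ length: cols }, valueFn)
  );

const grid = createMatrix(2, 3, () => 0);
grid[0][0] = 42;
console.log(grid); // [ [ 42, 0, 0 ], [ 0, 0, 0 ] ]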

Offset pagination with subquery in Sequelize

When we are using offset pagination and try to use LIMIT in our query, we can run into problems due to the way Sequelize builds queries. There are two methods for selecting multiple records, findAll and findAndCountAll, both of which accept a limit property. If our query contains subqueries, Sequelize may apply the limit to a subquery rather than to the main query.

After checking the code in the GitHub repository, we can find an additional property, subQuery, which is not mentioned in the official documentation. If we set it to false in the findAll parameters, the limit and offset are placed at the end of the main query instead of being applied to the subquery, and we receive correctly paginated results.

Questionnaire.findAndCountAll({
  attributes: ["id", "name"],
  order: [["id", "DESC"]],
  include: [
    /* Some associations */
  ],
  offset: 5,
  limit: 5,
  subQuery: false,
});

The downside of disabling the subquery optimisation is potentially worse performance, since the query is built differently. In every single case, it is essential to evaluate usability and performance cost before using this approach.

How to exclude code from production in build time

Sometimes you may want to add code which should only be available in development, but you don’t want to clutter your bundle with dead code. You can do this with webpack version 4. Setting mode to production in your webpack config defines process.env.NODE_ENV as 'production' in the bundle, so the minifier removes any code that can never run in the selected environment.

if (process.env.NODE_ENV === 'development') {
  console.warn('This block will be removed while building your bundle')
}
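
For reference, a minimal webpack config sketch (the file name and the rest of the config are assumptions; only mode matters here):

// webpack.config.js (sketch, webpack 4+)
module.exports = {
  // 'production' defines process.env.NODE_ENV as 'production' in the bundle
  // and enables minification, which strips the unreachable development-only block
  mode: 'production',
  // ...rest of your configuration
};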

Composing id from 2 URL segments in JSON Server router

Expectation

/properties/6366/statistics/reservations_booked_count should be rewritten to /statistics/6366_reservations_booked_count

Attempt A (fail)

server.use(jsonServer.rewriter({
  '/properties/:propertyId/statistics/:statisticId': '/statistics/:propertyId_:statisticId'
}))

It seems that named matches do not work unless they are separated from each other by at least one /.

Attempt B (success)

server.use(jsonServer.rewriter({
  '/properties/:propertyId/statistics/:statisticId': '/statistics/$1_$2'
}))
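
For context, a minimal sketch of where the rewrite rule sits in a json-server setup (the db.json path and the port are assumptions):

const jsonServer = require('json-server')

const server = jsonServer.create()
const router = jsonServer.router('db.json')

// rewrite the two-segment id into a single composed id before routing
server.use(jsonServer.rewriter({
  '/properties/:propertyId/statistics/:statisticId': '/statistics/$1_$2'
}))
server.use(router)
server.listen(3000)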

GitHub Pages hosted create-react-app does not render

Used this GitHub Action to build and deploy a React app to GitHub Pages…

name: CD

on:
  push:
    branches:
    - master

jobs:
  build:
    
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@master
    - uses: actions/setup-node@v1
      with:
        node-version: '12.13'
    - run: npm install
    - run: yarn build
    - name: Deploy
      uses: JamesIves/github-pages-deploy-action@releases/v3
      with:
        ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
        BASE_BRANCH: master
        BRANCH: build
        FOLDER: build

…but the page rendered was blank. It seemed that the paths to assets were missing one segment, so attempts to load the compiled JS resulted in 404s.

The solution was to add the following line to package.json

"homepage" : "https://github-user-or-org-name-here.github.io/project-name-here",

as project-name-here was the missing segment. Redeploy, and voila!

ember-cli-mirage/miragejs passthrough random url

ember-cli-mirage

miragejs

While developing an app with mocks, sometimes we need to pass through a certain URL, so Pretender can forward the request to the real endpoint. What we’d do is simply

server.passthrough("https://domain.com/pass_here")
or
server.passthrough("https://domain.com/**")

to pass all requests to this domain. But what if we deal with AWS Lambdas, which can have generated URLs like https://domain-<random-stuff>.com?

Up until recently, we couldn’t handle that, at least not easily, but luckily an undocumented feature was added which allows passthrough to take a function as an argument. The function has to return true to pass the request through, or false to have it caught by Pretender.

(function argument is allowed in ember-cli-mirage#1.1.4 and miragejs#0.1.31)

e.g. server.passthrough((request) => request.url.includes("domain")) should handle pesky auto-generated URLs.
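
Spelled out as a block (the "domain" substring is just an example to match against):

server.passthrough((request) => {
  // return true to let the request through to the real endpoint,
  // false to have it caught by Pretender/Mirage
  return request.url.includes("domain")
})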

Using Light Sensor in Google Chrome

The Light Sensor needs to be enabled in Chrome: go to chrome://flags/#enable-generic-sensor-extra-classes and enable the flag.

Then you will be able to get sensor readings:

const sensor = new AmbientLightSensor();
sensor.start();
sensor.addEventListener('reading', () => {
  console.log(sensor.illuminance);
});

There are also activate and error events on the sensor object that can be listened to.
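
A sketch of listening to those events on the sensor object from the snippet above (the handlers are just examples):

sensor.addEventListener('activate', () => {
  console.log('Sensor activated and will start reporting readings');
});

sensor.addEventListener('error', (event) => {
  // e.g. NotAllowedError when permission is denied, NotReadableError when no sensor is present
  console.error('Sensor error:', event.error.name);
});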

Full article: https://blog.arnellebalane.com/using-the-ambient-light-sensor-api-to-add-brightness-sensitive-dark-mode-to-my-website-82223e754630

Test multi-promise function without return

The tested function:

_setDevices = () => {
  const { selectCamera } = this.props

  WebRTCDevices.getPermissionForDevices({ video: true })
    .then(() => WebRTCDevices.getDevices(['cameraType']))
    .then(devices => {
      const chosenDeviceId = devices.length ? devices[0].deviceId : null
      selectCamera(chosenDeviceId)
    })
}

WebRTCDevices.getPermissionForDevices and WebRTCDevices.getDevices both return promises.

I want to test that the selectCamera prop is called with the correct argument.

If we mock the return values of those functions with externally resolvable promises, we can test it as follows:

it('calls select camera prop', done => {
  let resolveDevicePermissionPromise
  const permissionPromise = new Promise(resolve => {
    resolveDevicePermissionPromise = resolve
  })
  let resolveGetDevicesPromise
  const getDevicesPromise = new Promise(resolve => {
    resolveGetDevicesPromise = () => {
      resolve([{ deviceId: '1111' }])
    }
  })
  props.selectCamera = jest.fn()
  jest.spyOn(WebRTCDevices, 'getPermissionForDevices').mockReturnValue(permissionPromise)
  jest.spyOn(WebRTCDevices, 'getDevices').mockReturnValue(getDevicesPromise)
  const { component } = renderWrapper()

  component._setDevices()
  resolveDevicePermissionPromise()
  resolveGetDevicesPromise()

  // Move our expect to the end of queue, so "thens" from `component._setDevices()` execute before
  setTimeout(() => {
    getDevicesPromise.then(() => {
      expect(props.selectCamera).toHaveBeenCalledWith('1111')
      jest.resetAllMocks()
      done()
    })
  }, 0)
})

Waiting for content to be expanded in Cypress tests

When testing in Cypress, sometimes the content within an accordion needs to be verified. The challenge is that Cypress sometimes reports that an element is not yet visible, even though it is present on the page. A simple solution could be to wait for the element to be visible:

cy.contains('Text within the accordion')
  .should('be.visible');

One downside of this approach is that if there are more elements in the accordion, the last one should be checked for visibility. Otherwise, Cypress will not wait for the whole accordion to expand and will proceed with the tests, e.g. trying to click an element that is still hidden within the accordion.

An alternative approach to resolving the issue with expanding accordions is to check their height and wait until it reaches a certain value:

cy.getByTestId('the-accordion')
  .invoke('height')
  .should('be.gt', 700);

Decrease bundle size by importing Lodash correctly

If you use just one or a few Lodash functions in a file, try importing them directly from their respective files, one by one.

import _ from 'lodash'; // bad - 70.5K (gzipped: 24.4K)
import { has } from 'lodash'; // bad - 70.5K (gzipped: 24.4K)

import has from 'lodash/has'; // good - 9K (gzipped: 2.7K)

Alternatively, one can use babel-plugin-lodash or lodash-es. These packages come with sets of restrictions though.

It is worth noting that mixing these two import styles will cause the net gain to be negative: the bundle will contain the entire lodash plus the individually imported utilities on top of it. So, for this to be effective, the ‘good’ pattern needs to be enforced everywhere across the codebase.
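
For completeness, a sketch of the lodash-es variant mentioned above (assuming lodash-es is installed and a bundler that performs tree shaking, e.g. webpack in production mode):

// named import from the ES modules build; unused functions can be tree-shaken
import { has } from 'lodash-es';

console.log(has({ a: 1 }, 'a')); // true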

Further reading

Another way to use absolute paths in js 'require'

Consider the following file structure:

...
▾ helpers/
    timeHelper.js
    timeHelper.test.js
▾ services/
    handleSlackCommand.js
    handleSlackCommand.test.js
  index.js
  package.json
  ...

In package.json add:

{
  ...
  "dependencies": {
    "dotenv": "^6.2.0",
    "express": "^4.16.4",
    ...
    "helpers": "file:helpers", <-- this line
  },
}

Now (after running npm install or yarn again, so the local package gets linked into node_modules) in services/handleSlackCommand.js I can use

const { whatever } = require('helpers/timeHelper');

instead of

const { whatever } = require('../helpers/timeHelper');

How to add autoprefixer in webpack

Firstly, we need to add autoprefixer to our project using yarn/npm.

So yarn add autoprefixer (the postcss-loader used below also needs to be installed, e.g. yarn add postcss-loader).

After a successful installation, we need to declare which browsers we want to target with autoprefixer.

To declare that, we need to add a few lines to our package.json file:

"browserslist": [
  "> 1%",
  "last 2 versions"
],

Here we can also set other queries (https://github.com/browserslist/browserslist#queries).

After that, we need to configure the webpack config file (i.e. webpack.config.js).

Firstly, we require autoprefixer and assign it to a variable (somewhere at the beginning of the file):

const autoprefixer = require('autoprefixer');

!important:

| We need to add the postcss-loader between css-loader and sass-loader.

use: [
  'css-loader',
  {
    loader: 'postcss-loader',
    options: {
      plugins: () => [autoprefixer()]
    }
  },
  'sass-loader'
],

If we have more loaders, it could look like this:

  module: {
    rules: [
      {
        test: /\.(sass|scss)$/,
        loader: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: ['css-loader',
            {
              loader: 'postcss-loader',
              options: {
                plugins: () => [autoprefixer()]
              }
            },
            'sass-loader'],
        }),
      },
      {
        test: /\.css$/,
        loader: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: ['css-loader'],
        }),
      },
      {
        test: /\.js/,
        use: ['babel-loader?cacheDirectory'],
        exclude: /node_modules/,
      },
    ],
  },

Now we need to restart the dev server, and we can enjoy a working autoprefixer :)