RubyOnRails Testing Antipatterns — Part 2/2

In part 1 we covered antipatterns related to stubbing and the problem of implementation details that are overcomplicated just for the sake of being easy to test. In this second part we focus on the tests themselves.

Overusing framework / library methods

The consequences of overusing framework or library methods when writing specs are similar to those of stubbing library methods. It might be fine to use ORM features to find a record now and then, but the more complex the methods we use to facilitate writing our specs, the more resistant to change the specs become. Updating a library our project depends on may then break tests without breaking the actual implementation. Complex methods can also hide intentions that would otherwise make tests more comprehensible.
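
As a hypothetical illustration (UsersReport and the factory are made up, and FactoryBot is assumed), compare an expectation computed with the ORM against one stated plainly:

RSpec.describe UsersReport do
  it "lists active users alphabetically" do
    create(:user, name: "Bob", active: true)
    create(:user, name: "Alice", active: true)

    report = UsersReport.new.generate

    # Antipattern: the expected value is computed with the same ORM query
    # the implementation is likely to use, so the test can never disagree
    # with it, and it breaks whenever the query interface changes.
    expect(report.names).to eq(User.where(active: true).order(:name).pluck(:name))

    # Preferred: state the expected result as a plain Ruby literal.
    expect(report.names).to eq(["Alice", "Bob"])
  end
end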

As mentioned before, using simple methods from a framework or other libraries won’t hurt, but usually we should stick to language and standard library features in their simplest form. When it is necessary to use a library of some kind to facilitate our tests, it is a good idea to wrap the usage in a helper method or matcher that makes its use more readable and comprehensible.
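
A minimal sketch of such a wrapper, with hypothetical names (AuditEntry, latest_audit_entry_for). The ORM usage lives in one intention-revealing helper instead of being repeated across specs:

module AuditHelpers
  # Returns the most recent audit entry recorded for the given record.
  def latest_audit_entry_for(record)
    AuditEntry.where(subject: record).order(created_at: :desc).first
  end
end

RSpec.configure do |config|
  config.include AuditHelpers
end

# A spec can now express its intent without exposing the query details:
#   expect(latest_audit_entry_for(user).action).to eq("password_reset")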

Overusing feature tests

Feature tests (system tests / integration tests) are a very powerful tool, used heavily when following a behavior-driven development approach. Still, they can easily be overused, especially when we use them to test every side effect and exception. Not only do feature tests require more code (both setup and assertions), they are also much slower than unit tests. What is more, it can be harder to identify the cause of a bug due to their low granularity.

In order not to overuse feature tests, we need to understand the main reasons for writing them. In BDD, the first reason is to have a starting point for further development. Another, whether we follow BDD or not, is that feature tests act as a high-level safety net, ensuring the happy paths through our application work fine. If we use feature tests solely for those two purposes, and write lower-level (unit) tests for the other cases, we should end up with the right balance between the two kinds.
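
A sketch of that split, with hypothetical names (PlaceOrder, cart_path): one feature test guards the happy path end to end, while failure modes live in fast unit tests:

# Feature test: a single high-level safety net for the happy path.
RSpec.describe "Checkout", type: :system do
  it "lets a customer complete an order" do
    visit cart_path
    click_button "Place order"

    expect(page).to have_content("Thank you for your order")
  end
end

# Unit tests: edge cases and exceptions, fast and granular.
RSpec.describe PlaceOrder do
  it "rejects an empty cart" do
    expect { PlaceOrder.call(cart: Cart.new) }
      .to raise_error(PlaceOrder::EmptyCartError)
  end
end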

Optimizing tests too much

Refactoring a test suite to the point where nothing is duplicated and everything is written in the shortest, most compact way possible might be tempting at first, but can backfire in the long term. Loops, complex helpers, shared examples and shared contexts are just a few ways to achieve a very slim test implementation. One could argue that this makes tests more maintainable, as each update requires fewer and smaller changes, but in my opinion there is more to maintainability than the sheer amount of code. I prefer to look at maintainability as the effort required to make a change: understanding and finding what needs to be changed, ensuring it is safe to apply the change without unexpected side effects, and then introducing the change itself. The “smarter”, more sophisticated and optimized your tests are, the less time the actual change takes, yet the more time may be needed to prepare for it.

The alternative to highly optimized tests is to write readability-oriented ones. More on the topic can be found in my other article, An opinionated guide to readable rspec, but even without taking it to the extreme, adding an extra description or tolerating a bit of duplication can change a lot.
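
As a hypothetical illustration (EmailValidator is made up), compare a compact, loop-driven spec with its readability-oriented counterpart:

RSpec.describe EmailValidator do
  # Optimized: short, but the reader must unroll the loop to see what is
  # covered, and a failure points at the loop rather than a named case.
  [["", false], ["a@b.com", true], ["not-an-email", false]].each do |input, valid|
    it { expect(EmailValidator.valid?(input)).to eq(valid) }
  end

  # Readability-oriented: tolerates duplication, names every case.
  it "rejects a blank address" do
    expect(EmailValidator.valid?("")).to be(false)
  end

  it "accepts a well-formed address" do
    expect(EmailValidator.valid?("a@b.com")).to be(true)
  end
end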

Overtaking tests with implementation

It is a highly tempting practice. Let’s imagine we implement a simple listing following a TDD approach. To make our tests green we fetch records from the DB. But hey! We know we will need them ordered by name, so why not include a short, simple order(name: :asc) while we are here? Tempting as it might be, implementing more than necessary results in code that remains untested. To make matters worse, in most cases such code will not be flagged as untested by a code coverage tool, so we won’t know we need to add the missing tests unless we do it right afterwards.
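
A sketch of that scenario with hypothetical names (ProductsListing, a :product factory). The spec only demands that records are returned, yet the implementation also orders them, and that ordering is effectively untested:

RSpec.describe ProductsListing do
  it "returns all products" do
    create(:product, name: "B")
    create(:product, name: "A")

    expect(ProductsListing.new.call.size).to eq(2)
  end
end

class ProductsListing
  def call
    # Removing `order(name: :asc)` keeps the suite green: it is untested.
    Product.order(name: :asc)
  end
end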

To counter this, we need to make sure that all implementation is driven by tests failing first. Any time we can remove a piece of code without turning the test suite red, it should signal either missing specs or an opportunity to remove that code. As usual, there may be code we deliberately choose not to test, but this should always be a conscious choice, not an accidental one.
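
Continuing the sketch above, the ordering should be driven by a failing expectation first, so that removing order(name: :asc) turns the suite red:

it "returns products ordered by name" do
  create(:product, name: "B")
  create(:product, name: "A")

  # Fails until `order(name: :asc)` is added, then guards it from removal.
  expect(ProductsListing.new.call.map(&:name)).to eq(["A", "B"])
end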

Including irrelevant data

Including irrelevant data in the setup of our tests usually decreases their readability. This happens not only because the setup grows larger than necessary, but also because the reader may be confused into looking for correlations between data and assertions that simply are not there.
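
A hypothetical example (Greeter and the :user factory are made up): in the first version, the reader is left wondering whether the age or the address matter to the assertion, when in fact they do not:

RSpec.describe Greeter do
  # Before: age and address play no part in the assertion, yet the reader
  # is invited to wonder whether they do.
  it "greets the user by name (noisy setup)" do
    user = create(:user, name: "Alice", age: 34, address: "1 Main St")

    expect(Greeter.call(user)).to eq("Hello, Alice!")
  end

  # After: only the data the assertion depends on appears in the setup.
  it "greets the user by name" do
    user = create(:user, name: "Alice")

    expect(Greeter.call(user)).to eq("Hello, Alice!")
  end
end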

Usually it is enough to consciously stop including irrelevant data when preparing the setup for tests, but sometimes this data is necessary because it makes an object or set of objects valid. In that case, a good rule of thumb is to prepare our factories so that they produce valid objects by default (without any attributes provided explicitly). This allows us to hide the irrelevant data inside the factories. To ensure we always end up with valid objects when producing them from factories, we can enable factory linting.
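
A sketch using FactoryBot: the factory supplies everything needed for validity, and FactoryBot.lint verifies that each factory builds a valid object (typically run from a dedicated spec or rake task):

FactoryBot.define do
  factory :user do
    # Defaults make a bare `create(:user)` valid, hiding irrelevant data.
    name { "Jane Doe" }
    sequence(:email) { |n| "user#{n}@example.com" }
  end
end

# Raises FactoryBot::InvalidFactoryError if any factory builds an invalid object.
FactoryBot.lint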

Summary

Many of the problems presented above are very unlikely to happen if one follows a behavior-driven development approach to testing, hence following BDD is highly recommended. Finally, while every rule has its exceptions, keeping these few antipatterns in mind can vastly improve the quality of our test suite in the long term.
