In the previous part of this article we focused on making the specs we write as readable as possible. This, however, comes at the cost of duplication and bloated individual tests. While this is a cost we intend to pay, there are ways to mitigate the impact of readability-oriented specs on the overall size of our suite.
Reducing assertion phase LOC
Using custom matchers
Using custom matchers not only improves test readability, but in many cases also slims down the assertion count. Custom matchers can come from ready-to-use libraries like shoulda-matchers or rspec-json_expectations, but nothing stops us from rolling our own solution.
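As an illustration (not taken from any particular codebase), a minimal hand-rolled matcher can collapse two assertions into one; the matcher name and the `success?`/`errors` interface on the result object are assumptions:

```ruby
# A sketch of a custom matcher; the result object's interface is assumed.
RSpec::Matchers.define :be_a_failure_with_errors do |expected_errors|
  match do |result|
    !result.success? && result.errors == expected_errors
  end
end

# One assertion instead of two:
# expect(result).to be_a_failure_with_errors(email: ["is invalid"])
```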
Using proper matchers
Sometimes it is not even necessary to introduce any new matchers to the project. Using the existing ones effectively can also help reduce the size of the assertion section. For instance, to check whether a given result is a collection in which order is not important, but which includes objects of a given class with given attributes, we might use several separate assertions. It is possible to write just one instead, e.g.
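A hedged sketch of the idea (the User class and the attribute values are assumptions):

```ruby
# Several separate assertions...
expect(result.size).to eq(2)
expect(result).to all(be_a(User))
expect(result.map(&:name)).to contain_exactly("Jane", "John")

# ...replaced by a single composed matcher:
expect(result).to match_array([
  an_object_having_attributes(class: User, name: "Jane"),
  an_object_having_attributes(class: User, name: "John")
])
```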
Using spies and message expectations to our advantage
It is important to realize when to use message expectations (expect(subject).to receive(:method)) and when to use spies (allow(subject).to receive(:method); expect(subject).to have_received(:method)) while writing our specs. Even though the arrange-act-assert way of composing a test is preferred, asserting message expectations using spies will not only make some specs bulkier, but may also make them less readable.
Always use spies if the correlation between the mocked object and the assertions is not clear: for instance, when testing a DestroyUser service that should send a delayed user notification via some separate service after the user is destroyed.
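A hedged sketch of such a spec; the UserNotifier interface, the service's constructor and the factory are assumptions:

```ruby
# Arrange-act-assert with a spy; UserNotifier#notify_later is assumed.
RSpec.describe DestroyUser do
  it "sends a delayed notification after destroying the user" do
    user = create(:user)
    notifier = instance_double(UserNotifier, notify_later: nil)

    DestroyUser.new(notifier: notifier).call(user)

    expect(user).to be_destroyed
    expect(notifier).to have_received(:notify_later).with(user)
  end
end
```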
This way we maintain the arrange-act-assert structure of a test and also explicitly ensure in the assertion section that the expected behaviour was triggered. However, if the correlation between the setup and the subject is already clear, then using regular message expectations seems to be a reasonable exception. As an example, let's consider testing a wrapper for some client library.
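A sketch of such a wrapper spec (WeatherWrapper, WeatherClient and the response shape are assumptions):

```ruby
RSpec.describe WeatherWrapper do
  it "exposes the current temperature for a city" do
    client = instance_double(WeatherClient)
    allow(client).to receive(:fetch).and_return("temp_c" => 21.5)

    result = WeatherWrapper.new(client: client).current_weather("Warsaw")

    expect(result.temperature).to eq(21.5)
    expect(client).to have_received(:fetch).with(city: "Warsaw")
  end
end
```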
The second assertion here is of very little value, as the only thing it checks is whether we called the client with the correct argument list. We can already clearly see that the client was called, since the result attributes contain information from the client's response. Therefore, it feels reasonable to move the argument checking to the mocking (setup) phase and drop the last assertion.
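The same sketch with the argument check folded into a message expectation in the setup:

```ruby
RSpec.describe WeatherWrapper do
  it "exposes the current temperature for a city" do
    client = instance_double(WeatherClient)
    expect(client).to receive(:fetch)
      .with(city: "Warsaw").and_return("temp_c" => 21.5)

    result = WeatherWrapper.new(client: client).current_weather("Warsaw")

    expect(result.temperature).to eq(21.5)
  end
end
```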
Reducing setup phase LOC
Testing fewer edge cases within one example
If we are after slimming down the setup phase of our spec, the simplest way to achieve that is to reduce the number of cases we verify in it and move them to separate tests. This can sometimes make tests clearer and more comprehensible at the cost of a larger suite.
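For illustration, a hedged sketch (the discount service and the order factory are assumptions), first with two edge cases squeezed into a single example:

```ruby
it "applies a discount only to orders above the threshold" do
  small_order = create(:order, total: 50)
  big_order = create(:order, total: 500)

  expect(ApplyDiscount.new.call(small_order)).to eq(50)
  expect(ApplyDiscount.new.call(big_order)).to eq(450)
end
```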
vs
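```ruby
# Each example now needs only the single record it actually verifies.
it "does not apply a discount below the threshold" do
  order = create(:order, total: 50)

  expect(ApplyDiscount.new.call(order)).to eq(50)
end

it "applies a discount above the threshold" do
  order = create(:order, total: 500)

  expect(ApplyDiscount.new.call(order)).to eq(450)
end
```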
Hiding irrelevant setup in factories
In some cases, explicitly creating certain objects in the setup phase of our test is of very little value in terms of showing the correlation between setup and assertions. In such cases it might be a good idea to hide the creation of those objects in factories (using traits or other means).
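A hedged sketch with assumed factories, where the account details are pure noise for a spec about posts:

```ruby
it "returns only published posts" do
  account = create(:account, plan: "premium", confirmed_at: Time.current)
  user = create(:user, account: account)
  post = create(:post, :published, user: user)
  create(:post, :draft, user: user)

  expect(PublishedPosts.new.call(user)).to eq([post])
end
```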
vs
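```ruby
# The account noise is hidden behind an assumed factory trait.
it "returns only published posts" do
  user = create(:user, :with_premium_account)
  post = create(:post, :published, user: user)
  create(:post, :draft, user: user)

  expect(PublishedPosts.new.call(user)).to eq([post])
end
```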
Hiding irrelevant setup in helper methods
It happens that some parts of our spec seem irrelevant to making the test understandable, but are too specific to the given case to extract them to factories. Extracting such code to a helper method living in the test file itself can make each test slimmer and more readable at the same time.
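A sketch with an assumed CSV-based Import class, where the raw file content clutters the example:

```ruby
it "rejects rows with missing emails" do
  csv = <<~CSV
    name,email
    Jane,jane@example.com
    John,
  CSV
  import = Import.new(StringIO.new(csv))

  expect(import.call.rejected_rows).to eq(1)
end
```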
vs
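```ruby
# The CSV plumbing moves to a helper defined next to the examples.
it "rejects rows with missing emails" do
  import = import_with_rows("Jane,jane@example.com", "John,")

  expect(import.call.rejected_rows).to eq(1)
end

def import_with_rows(*rows)
  Import.new(StringIO.new(["name,email", *rows].join("\n")))
end
```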
Introducing abstract setup definitions
If the context we need to describe is very difficult to express simply by creating objects and mocking interfaces, we may resort to an even more sophisticated use of helper methods. Below is an example extracted from a real codebase.
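Since the original snippet is not reproduced here, the following is a hedged reconstruction of the idea only; the domain names, table layout and client interface are all assumptions:

```ruby
require "date"

RSpec.describe NightlyPriceCalculator do
  it "returns the nightly price of the season covering the date" do
    rates_table = <<~TABLE
      season | from       | to         | nightly_price
      low    | 2017-01-01 | 2017-05-31 | 100
      high   | 2017-06-01 | 2017-08-31 | 250
    TABLE

    expect(get_nightly_price_for(rates_table, Date.new(2017, 7, 15))).to eq(250)
  end

  # Parses the human-readable table into the JSON-like payload the
  # external service would return, stubs the client with it and calls
  # the class under test the proper way.
  def get_nightly_price_for(table, date)
    rates = table.lines.drop(1).map do |line|
      season, from, to, price = line.split("|").map(&:strip)
      { season: season, from: Date.parse(from), to: Date.parse(to),
        nightly_price: price.to_i }
    end
    client = instance_double(RatesClient, rates: rates)
    described_class.new(client: client).nightly_price_for(date)
  end
end
```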
Preparing a context as complex as the one represented in the rates table above would require a massive amount of stubbing or (if this data originates from an external service) recording a VCR cassette, which would effectively hide all the important knowledge from the test setup. The get_nightly_price_for method contains the logic that parses the table and transforms it into the JSON that would originally be returned from the external service. It also contains the logic responsible for calling the class being tested in the proper way and returning the result.
Formatting code specifically for testing purposes
It is worth considering having different code formatting rules for tests. We rarely need to paste large hashes of data into the actual codebase, while in specs the story is different. For instance, in the hash below it might be beneficial to keep several keys on one line, respecting only the maximum line length, instead of dedicating a separate line to each individual key.
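An illustrative payload (the keys are assumptions), first with one key per line:

```ruby
payload = {
  first_name: "Jane",
  last_name: "Doe",
  email: "jane@example.com",
  street: "Main St 1",
  city: "Warsaw",
  zip_code: "00-001",
  country: "PL"
}
```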
vs
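```ruby
# Several keys per line, respecting only the maximum line length.
payload = {
  first_name: "Jane", last_name: "Doe", email: "jane@example.com",
  street: "Main St 1", city: "Warsaw", zip_code: "00-001", country: "PL"
}
```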
Excluding irrelevant data
The most self-explanatory rule of all: any data that is not relevant should be kept away from the test setup. It is not that intuitive in some cases, though. For instance, when mocking responses from an API, we need to decide whether we want to use the whole response (which might be large and complex) or just the part of it that is useful in our scenario. When writing tests for a client library, showing the whole response might be beneficial, whereas when writing tests for a wrapper, we might focus only on the fields we actually use.
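A sketch of the distinction, with an assumed response shape and client:

```ruby
# In a client library spec the full payload documents the contract:
full_response = {
  "location" => { "name" => "Warsaw", "country" => "PL" },
  "current" => { "temp_c" => 21.5, "wind_kph" => 10.8, "humidity" => 60 }
}

# In a wrapper spec, only the field we actually read matters:
allow(client).to receive(:fetch).and_return("current" => { "temp_c" => 21.5 })
```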
Keeping integration testing under control
The less we mock, the more we test at once. That is totally fine with the BDD approach and helps reduce the overall size of our test suite. Still, if the thing we test interacts with many objects across the whole system, the corresponding tests might require setting up those additional objects, effectively enlarging the setup section of such tests. There is no golden rule here, but if the setup section feels a bit too large in such a case, mocking some part of the solution might help resolve the problem.
Reducing overall test count and size
Introducing helper methods for retrieving the result of behavior
If we realize that there is a pattern in how we trigger the tested behavior or retrieve its result, it might be worth introducing one or a couple of helper methods facilitating this process and potentially reducing the code responsible for it in each test. For instance, the following code:
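(a hedged sketch; the CreateUser service and its result interface are assumptions)

```ruby
RSpec.describe CreateUser do
  let(:params) { attributes_for(:user) }

  it "persists the user" do
    result = CreateUser.new.call(params)

    expect(result.user).to be_persisted
  end

  it "rejects a taken email" do
    create(:user, email: params[:email])
    result = CreateUser.new.call(params)

    expect(result.errors).to include(:email)
  end
end
```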
could be rewritten to
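```ruby
# The trigger-and-result plumbing is gone from individual examples.
RSpec.describe CreateUser do
  let(:params) { attributes_for(:user) }

  it "persists the user" do
    expect(result.user).to be_persisted
  end

  it "rejects a taken email" do
    create(:user, email: params[:email])

    expect(result.errors).to include(:email)
  end
end
```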
with the help of the following module
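(memoizing the result of the service call is an assumed convention here, not a library API)

```ruby
module ServiceResultHelper
  # Triggers the described service lazily and memoizes its result, so
  # each example can simply reference `result`.
  def result
    @result ||= described_class.new.call(params)
  end
end

RSpec.configure do |config|
  config.include ServiceResultHelper, type: :service
end
```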
Multiple assertions per test
Even though it is definitely not something pure-TDD followers would recommend, multiple assertions per test can effectively reduce the overall size of the test suite. It would be best to have those assertions correlated with the one thing being tested, but that should not be a rule that cannot be broken. For instance, in controller specs, testing that the response is 200 OK, that the changes to the database were applied correctly, and that the correct services were called with the correct params are all assertions that can reasonably be included in the same test, sharing the same setup section.
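A sketch of such a controller spec; the route, the BillingService and the params are assumptions:

```ruby
it "creates the subscription" do
  billing_service = instance_double(BillingService, charge: true)
  allow(BillingService).to receive(:new).and_return(billing_service)

  post :create, params: { subscription: { plan: "premium" } }

  expect(response).to have_http_status(:ok)
  expect(Subscription.count).to eq(1)
  expect(billing_service).to have_received(:charge).with(plan: "premium")
end
```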
Testing things once only, unless necessary otherwise
In BDD it is not natural to test the same things twice, but sometimes it might be reasonable. By default, edge cases and non-major execution paths should be handled by lower-level tests, in which it does not seem necessary to repeat the general testing as well; the default behaviour was tested by higher-level tests after all. Still, if some class becomes a sort of “library” class that might be used directly in other parts of the system, it might be beneficial to provide full unit testing for such an entity. This way a potential user of the class will have a place to check its capabilities and usage examples.
Skipping low-value tests
Again, following the BDD approach helps in skipping low-value tests, but to restate: testing relations, attribute accessors, migrations and similar things provides very little value and in many cases makes refactoring less comfortable. All of those aspects should be tested indirectly by some higher-level testing. Skipping them will save us a few lines of code as well.
Testing multiple edge cases within one test
Even though it contradicts the earlier recommendation about reducing the setup section size, testing multiple edge cases in one test can greatly reduce the total number of tests and the size of the suite. It is a matter of finding the right balance and always taking readability concerns into account. The more edge cases we handle in one test, the more difficult it might be to find the correlations between setup and assertions. A good test description might help, but only to some extent and not in all cases.
Taking advantage of integration testing in all kinds of tests
Last but not least, it is worth realizing how a certain level of integration testing at all levels affects the size of the test suite. The more integration testing we allow in each spec, the fewer specs we will need altogether. Therefore, most of our tests, besides the lowest-level ones, are usually not completely isolated from their underlying dependencies.
Summary
Focusing on test readability has a great impact on the size of those tests, which in turn can erode the very readability we fought so hard for. Having the techniques described above in our toolbox can significantly help mitigate this unwanted effect, making the test suite slimmer and each test even more comprehensible.