An Opinionated Guide To Readable RSpec (part 1 of 2)


Importance of testing

It is hard to overestimate the value and importance of automated tests. Building confidence that the code works the right way, and thus enabling safe refactoring, is just one benefit. Another is that a test suite can act as documentation of behavior and, especially when approached the BDD way, can drive the code design and architecture. We cannot forget that tests are just code, and as with any kind of code, they can be written in a better or worse way.

There is only one way

The only way is the TDD way. Seriously. Obviously, there are many advocates of writing tests after the implementation. It might feel easier (sometimes it is) and faster (sometimes it also is), but the fact remains: it will backfire sooner or later. If the feature or application we implement is really trivial, then writing tests for it might be overkill. This is rarely the case though, as our apps tend to grow larger and larger, and then we suddenly realize we are missing the safety net of a reliable test suite. The sad fact is that adding good quality tests after the implementation is much harder. Not only might we have already introduced code (an architecture) that is hard to test, but reverse-engineering all the test cases we need to cover might be a very time-consuming undertaking. What is more, it is an investment that might be hard to justify to our customer. To make a long story short, in this article we assume that all new code is driven by writing tests first.

Outside-in (BDD) versus Inside-out

One of the decisions we need to make when introducing a test suite to our application is which tests to start with when implementing a new feature. Without going into too many details, there are two major approaches. The first is to test “inside-out”, focusing on “how”, which usually correlates with more up-front planning of the implementation and results in more low-level, unit-style testing. The second approach is so-called behavior driven development (testing “outside-in”), which focuses more on “why”. For this opinionated guide we will follow the BDD way, as it tends to put fewer constraints on the actual implementation and results in a smaller number of tests covering the same number of cases. It also works better in the context of the next issue we are after: readability.

DRY-oriented versus readability-oriented

Another decision to make before diving into writing tests is more closely related to code organization. A very common approach is to treat tests like regular code and optimize them as much as possible. Such optimization can involve lots of shared code, abstractions, helper methods and so on. With this approach we can write tests quickly and keep them compact, yet there is one significant downside: reduced readability. Those tests might be perfectly understandable to us while we are writing them, but the next time we see them and need to untangle the context of each one, it becomes much more difficult, especially if it was not us who created them. It might be necessary to navigate through many places in and outside of a given test file to sort out what some neat and tidy one-liner actually verifies. I intend to present a slightly different approach that focuses on maximizing the readability of the tests we write. Despite some costs of such an approach, I find the resulting test suite much easier and more pleasant to work with.

In pursuit of readability

To sum up, our goal is to produce readable specs the BDD way. Vastly improving the readability of the tests we write is a matter of following a few simple rules.

Identify the subject

The first step in identifying the subject is to realize what the outermost layer of the feature we are going to implement is. Usually this is some user-facing functionality, in which case feature or view specs are the place to start. If we just need to expose an API endpoint, then request or controller specs will be our starting point. If our job is to write a script, then maybe rake, seed, datafix or similar tests are the right way to go. Sometimes we will just end up writing some lower-level unit tests first, but the important thing is to start at the highest reasonable level of abstraction. This way we leave ourselves more freedom during the implementation phase, and it will also be easier to specify the initial expectations.
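
For example, when the outermost layer is an API endpoint, a request spec is a natural starting point. A minimal sketch, assuming a Rails app with FactoryBot helpers included; the /api/projects endpoint, factory and payload are hypothetical:

```ruby
require "rails_helper"

RSpec.describe "Projects API", type: :request do
  describe "GET /api/projects" do
    it "returns the list of projects" do
      create(:project, name: "Website redesign")

      get "/api/projects"

      expect(response).to have_http_status(:ok)
      expect(response.parsed_body.first["name"]).to eq("Website redesign")
    end
  end
end
```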

Organize the stages of a test

To improve the readability of a given test, try to organize its code around three separate stages: setup, subject and assertions. This approach, also referred to as arrange-act-assert or given-when-then, suggests putting all the code responsible for setting up the context first, then executing the subject of the test, and verifying the results as the last step. Separating those stages with a blank line further helps in visually identifying each of them.
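
A minimal sketch of the three stages; the Order class and its API are hypothetical:

```ruby
RSpec.describe Order do
  describe "#total" do
    it "sums the prices of all line items" do
      # setup (given)
      order = Order.new
      order.add_item(price: 10)
      order.add_item(price: 15)

      # subject (when)
      total = order.total

      # assertions (then)
      expect(total).to eq(25)
    end
  end
end
```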

Keep everything in “it”

This is where the readability-oriented approach differs the most from the optimization-oriented one. The key is to keep most, if not all, of the code required to execute a test within the individual it block. This means we should drop all usages of let, subject and described_class, and significantly limit the usage of both hooks (before, around and after) and shared contexts / examples. As a result, more code will be duplicated and the test setup will grow. The key advantage, however, is that while reading a test, a developer will be fully aware of the context and will immediately see how the test setup affects the assertions. With heavily optimized spec code, it might require a lot of navigation within the spec file and external files to fully understand the environment in which the assertions are checked.
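
A sketch of a self-contained example; ProjectPolicy and the factories are hypothetical:

```ruby
RSpec.describe ProjectPolicy do
  describe "#destroy?" do
    # Instead of:
    #   let(:user)    { create(:user) }
    #   let(:project) { create(:project, owner: user) }
    # the whole setup lives inside the example:
    it "allows the owner to destroy the project" do
      user = create(:user)
      project = create(:project, owner: user)

      policy = ProjectPolicy.new(user, project)

      expect(policy.destroy?).to be(true)
    end
  end
end
```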

Be explicit about data

To make identifying correlations between the test setup and the assertions easier, it is beneficial to be as explicit about data as possible. One way is to limit the use of local variables and just use explicit (yes, duplicated) values directly. Those values should also be more or less realistic in the context of the domain represented, e.g. assigning a user the age of 200 isn’t a good idea. Furthermore, only introduce data to a spec if it is actually relevant to the assertions; the less there is to grasp, the easier it will be to comprehend the whole test. Also, using tools like ffaker to generate fake, random data might be cool when seeding a development database, but in specs it just adds ambiguity without any significant advantages.
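
A sketch of asserting against explicit literals; GreetingPresenter is hypothetical:

```ruby
RSpec.describe GreetingPresenter do
  describe "#message" do
    it "greets the user by full name" do
      user = create(:user, first_name: "Anna", last_name: "Kowalska")

      greeting = GreetingPresenter.new(user).message

      # An explicit, duplicated literal instead of interpolating
      # "#{user.first_name} #{user.last_name}" or using random ffaker data.
      expect(greeting).to eq("Hello, Anna Kowalska!")
    end
  end
end
```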

We should also resign from using fixtures in favor of factories. Fixtures, usually stored in separate files, hide important context of a spec that is necessary to understand it to the fullest extent. One exception to this rule is recorded HTTP communication (e.g. using the vcr gem): when the amount of data that needs to be stubbed is as massive as in that case, it might actually be better to move it out of the spec file and thus improve readability.
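
A sketch of the factory approach, assuming FactoryBot; the User#minor? predicate is hypothetical. Sensible defaults live in the factory, while the spec makes only the attribute its assertion depends on explicit:

```ruby
# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    first_name { "Jane" }
    last_name  { "Doe" }
    age        { 30 }
  end
end

# spec/models/user_spec.rb
RSpec.describe User do
  describe "#minor?" do
    it "returns true for a user below the age of 18" do
      # Only the relevant attribute is overridden explicitly.
      user = create(:user, age: 16)

      expect(user.minor?).to be(true)
    end
  end
end
```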

Put extra effort into naming contexts and expectations

Last but not least, we can waste all the efforts outlined in the previous points by not documenting our tests properly and effectively. There are three major blocks dedicated to documenting our tests: describe, context and it. describe helps us define our subject: usually the top-level describe is either a feature name or a class name, while lower-level describes define methods, constants etc. To emphasize the type of subject we are going to test, it is useful to establish some ground rules related to naming (put together in the sketch after this list):

  • Prepend a dot when testing class-level methods, e.g. ".send_notifications!"
  • Prepend a hash when testing instance methods, e.g. "#full_name"
  • Prepend a colon when testing specific parts of a hash, e.g. ":data"
  • Use upper case letters when describing constants, e.g. "MAX_PROJECTS"
  • Use actual class names when describing classes, e.g. ProjectMembership
  • Use regular text when describing features and everything else, e.g. "User dashboard"
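
An illustrative skeleton combining several of the conventions above (bodies omitted; the class and its methods are hypothetical):

```ruby
RSpec.describe ProjectMembership do
  describe "MAX_PROJECTS" do
    # specs for the constant
  end

  describe ".send_notifications!" do
    # specs for the class-level method
  end

  describe "#full_name" do
    # specs for the instance method
  end
end
```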

Another documentation-related block is context. Contexts should clearly describe… the context (or environment) in which our assertions will be verified. It is good practice to always start context descriptions with “when” and to keep those descriptions concise. Each context should describe only one aspect of the environment (e.g. "when a user is an admin and the current day is Sunday" should be split into two nested contexts), and we should refrain from including contexts in expectations (e.g. it "sends notification when email is provided" should have "when email is provided" extracted to a context).
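
A sketch of splitting those aspects into nested contexts; the Notifier subject is hypothetical:

```ruby
RSpec.describe "Notifier" do
  describe "#send_notification" do
    context "when a user is an admin" do
      context "when the current day is Sunday" do
        it "skips the notification" do
          # ...
        end
      end
    end

    context "when email is provided" do
      it "sends a notification" do
        # ...
      end
    end
  end
end
```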

The last block is it. We already know we should “keep everything in it”, but a proper description that defines what it means is equally important. Good practice is to use the present simple tense to describe expectations and to keep them concise, precise and unambiguous. If an expectation’s description is too long, it might indicate we are testing too much in one example, and it might be a good idea to split it.

The hypothetical User#full_name spec below shows how the descriptions correlate with the stages of a test.

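```ruby
RSpec.describe User do
  describe "#full_name" do
    context "when both first and last names are present" do
      it "joins them with a single space" do
        # setup
        user = User.new(first_name: "Anna", last_name: "Kowalska")

        # subject
        full_name = user.full_name

        # assertions
        expect(full_name).to eq("Anna Kowalska")
      end
    end
  end
end
```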

Wrap up

Focusing on test readability does involve some costs, like code duplication and more constraints on how tests are composed. In my opinion, however, the benefits outweigh those costs, and we end up with a much more comprehensible, maintainable and reliable test suite. Are we ultimately fated to write massive amounts of almost identical code though? Fortunately, there are a couple of techniques that will make your suite slimmer without sacrificing readability. Those will be tackled in the second part of this article.
