With Ruby on Rails as our primary backend technology, we have developed a set of practices for testing our applications. Our goal was to maximize the efficiency of writing tests that cover the most critical parts of the software built. Some means to that end are presented below.
TEST DRIVEN DEVELOPMENT
We usually use TDD to keep decent test coverage and preserve an easy-to-follow, maintainable application design. Still, it is not our goal to achieve maximum test isolation, as we acknowledge so-called Test-Induced Design Damage, where producing easily testable code can result in a more complex architecture than is actually necessary. This is also why we do not stick to TDD 100% of the time; on occasion we decide to experiment with the code first to find the most readable, comprehensible and optimal solution. Then we either TDD it from scratch or cover the code resulting from such experimentation with the missing tests.
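As a condensed sketch of the red-green-refactor cycle, consider a hypothetical `Discount` class (the class, its behavior, and the plain assertions standing in for a full RSpec setup are all illustrative):

```ruby
# Hypothetical TDD example: the failing check is written first, then just
# enough code to make it pass, then the code is refactored.

# Step 2 (green): a minimal implementation satisfying the checks below.
class Discount
  def initialize(percent)
    raise ArgumentError, "percent must be 0..100" unless (0..100).cover?(percent)
    @percent = percent
  end

  # Returns the price after applying the discount, rounded to cents.
  def apply(price)
    (price * (100 - @percent) / 100.0).round(2)
  end
end

# Step 1 (red): the desired behavior, expressed as executable checks
# that existed (and failed) before the class did.
raise unless Discount.new(20).apply(100) == 80.0
raise unless Discount.new(0).apply(59.99) == 59.99
```

In a real project these checks would live in a spec file; the point is only the order of operations: the expectation exists before the code that fulfills it.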
We tend to start our testing with high-level feature tests (the so-called outside-in testing approach). Following this practice, we can jump-start the implementation of a new feature very quickly, and we always end up with a test that covers at least the most critical paths of a given feature. This is also where we focus on readability the most. Using RSpec and Capybara, we endeavor to write tests that are easy to follow not only for other developers but also for non-technical people. When we need to put an even greater emphasis on test scenario readability for non-programmers, Cucumber is our tool of choice; for the price of a slightly higher cost of writing and maintaining the suite, we end up with acceptance documentation that the quality assurance team can easily follow. It is also worth mentioning that by covering the big picture with feature tests while covering edge cases and specifics in lower-level tests, we usually end up with fewer tests overall handling the same amount of logic (we strive not to test the same thing multiple times).
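When Cucumber is the tool, a scenario reads close to plain English. The feature below is a hypothetical example; the feature name and step wording are illustrative:

```gherkin
# Hypothetical feature file; names are illustrative.
Feature: Password reset
  Scenario: User requests a password reset link
    Given a registered user with the email "user@example.com"
    When they request a password reset for "user@example.com"
    Then an email with a reset link is sent to "user@example.com"
```

Each step is backed by a Ruby step definition, so the scenario stays readable for QA while remaining executable.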
Driving bug fixes with tests that reproduce the bugs being fixed is an important routine in our testing practice. This way we ensure that a given error is handled in a sustainable way. We sometimes also re-evaluate the logic surrounding a detected bug and cover it with additional low-level tests accounting for edge cases that were not foreseen before.
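A minimal sketch of such a regression test, built around a hypothetical formatting helper that once crashed on `nil` input (the module, method, and scenario are all invented for illustration):

```ruby
# Hypothetical regression example: a helper that crashed on a nil input.
# The reproducing check is written first and kept in the suite so the
# bug cannot silently return.

module NameFormatter
  # Bug: the original version called `strip` on nil and raised NoMethodError.
  # Fix: treat missing name parts as absent instead of assuming strings.
  def self.full_name(first, last)
    [first, last].compact.map(&:strip).reject(&:empty?).join(" ")
  end
end

# Regression check reproducing the original crash scenario:
raise unless NameFormatter.full_name(nil, "Doe") == "Doe"
# Edge cases added after re-evaluating the surrounding logic:
raise unless NameFormatter.full_name("  Jane ", "Doe") == "Jane Doe"
raise unless NameFormatter.full_name(nil, nil) == ""
```

The first assertion fails against the buggy version, which is exactly what confirms the test reproduces the reported error before the fix lands.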
CONTINUOUS INTEGRATION AND DELIVERY
CI is the most important service when it comes to testing software in the long term. As our software grows, so does the test suite, which takes longer and longer to execute. To stay efficient, we delegate executing the whole suite to specialized services like Travis CI or CircleCI. All the tests need to pass on CI before a given feature is integrated into the application. We use the power of the CI service not only to run our tests, but also to ensure that our code style is consistent (by running linters like RuboCop or ESLint), that the code does not include simple security vulnerabilities (e.g. by using Brakeman) and that its overall quality does not drop (e.g. by using RubyCritic). This way it is not uncommon for us to deliver new features on a daily basis rather than in weekly (or longer) iterations.
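Combining these checks into one pipeline might look like the following sketch of a `.travis.yml` (the Ruby version and exact commands are illustrative assumptions, not a prescribed setup):

```yaml
# Hypothetical .travis.yml sketch; versions and commands are illustrative.
language: ruby
rvm:
  - 3.2
script:
  - bundle exec rubocop              # consistent code style
  - bundle exec brakeman -q          # basic security scan
  - bundle exec rubycritic app lib   # code quality report
  - bundle exec rspec                # the full test suite
```

A feature branch is merged only when every step in `script` passes.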
When necessary, we also add cross-service integration tests that verify the interfaces of the services a given application depends on. Thus we can react quickly if a service provider changes their API or simply stops handling our requests. This is usually beneficial in applications where high availability is a critical factor.
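The core of such a test is a contract check on the provider's response. A minimal sketch, assuming a hypothetical payment provider whose payload must keep certain fields (the field names and sample payload are invented; in a real suite the payload would come from a live or recorded API call):

```ruby
require "json"

# Hypothetical contract check: assert that a provider's payload still
# contains the fields our application depends on.
REQUIRED_FIELDS = %w[id status amount].freeze

def contract_satisfied?(payload)
  # The contract holds when no required field is missing.
  (REQUIRED_FIELDS - payload.keys).empty?
end

sample = JSON.parse('{"id": "pay_123", "status": "succeeded", "amount": 1000}')
raise unless contract_satisfied?(sample)
raise if contract_satisfied?({ "id" => "pay_123" }) # missing fields detected
```

Run periodically against the real endpoint, a failing check flags a breaking API change before end users notice it.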
Besides, if performance is crucial, we can also provide load/stress testing scripts and analyses using tools such as JMeter, loadtest, siege, New Relic, etc. Due to the added cost of maintaining such tests, we add them only when the cost is justified by the benefits, and only for the most critical parts of the system.
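For a rough idea of what such scripts boil down to, here are two command sketches (the URL, concurrency levels and request counts are illustrative, not recommendations):

```sh
# Hypothetical invocations; the endpoint and numbers are illustrative.
# siege: 25 concurrent users hammering a critical endpoint for one minute
siege -c 25 -t 1M https://staging.example.com/api/orders
# loadtest: 200 requests, paced at 20 requests per second
loadtest -n 200 --rps 20 https://staging.example.com/api/orders
```

The resulting latency and error-rate figures are then compared against previous runs or monitoring data (e.g. from New Relic) to spot regressions.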
Manual testing is an integral part of the feature delivery process. Before a given functionality is handed over to end users or the quality assurance team, we manually test it in a staging and/or production environment. Verifying a solution against its acceptance criteria significantly limits the number of rejections and makes the delivery process smoother and more efficient. This is usually done by the person responsible for implementing a given feature, but on larger teams it may be handled by a dedicated team member or simply another developer, to simulate the end-user experience more closely.
Writing software with an adequate amount of tests, introduced at the right time, allows us to deliver code that is not only reliable but also comprehensible. This is especially important when new developers join the team. Following the testing practices listed above also ensures that the velocity at which new features or changes are delivered is not crippled by accumulating technical debt. It also makes system-wide optimizations and refactorings possible without sacrificing overall application stability. If you would like to explore the related topic of keeping your application healthy, feel free to check out The four indicators of a healthy Ruby on Rails project.