Dariusz Pieńczykowski, Technology


The first day of the workshops. On the first day, we had workshops about the basics of React Native, led by three guys: Mike Grabowski, Michał Chudziak and Nader Dabit. They gave us useful advice about best practices, how to organize static methods in JS files, and conventions for naming folders and splitting them sensibly. While talking about Expo, routes, styling and writing unit tests as well as e2e tests, they helped everyone with kindness and calmness. The workshop had something for everyone: it helped beginners start their native adventure, standardized knowledge, and offered new information and tips to people already working with React Native.


The second day of the workshops. On the second day, there were two workshop paths. I chose the navigation and animations workshops. In the first one, I heard about the navigation libraries available in the community, the differences between them and which one suits which kind of app best. The workshop was led by Ferran Negre Pizarro & Raúl Gómez Acuña, who had excellent knowledge of the topic. We created the same project twice, using react-navigation and react-native-navigation. Both are good to use but, for me, react-navigation is better, easier and more comfortable. We learned how to style the application header and the navigation buttons. This workshop was more technical than the day before and many people got lost; a repository with code prepared by the presenters helped them catch up. The second workshop, on animations, was led by Jani Eväkallio & Phil Plückthun – very good, funny and optimistic guys. We implemented animations for an app created by the leaders, who in the meantime asked people for ideas on how to fix animation-related issues in apps. For me, it was the best workshop I have ever taken part in, due to the way it was guided, the professionalism, and because I'm very interested in this topic.



Conference 1st day. At the conference, I heard a lot of interesting presentations. I don't want to describe each of them; I'll just give a quick brief of what I saw and heard. Presenters gave me a lot of interesting pieces of information about RN, JS and native development. They talked about their experience, giving good examples and hints. They showed how to build a simple application using Ignite as a starting template for RN, talked about the Facebook layout engine – Yoga – and the differences between web dev, react-native and art. They also shared their ideas about connecting react-native with a Raspberry Pi. I heard about new improvements that make starting development easier, like Slack RN. They mentioned solutions for debugging performance and memory. I learned how to build new plugins for RN and how to style it using a better solution than StyleSheet.create. Here are the most exciting (for me as a frontend developer specialized in styling) things that I'll remember from the conference:

The day finished with the evening party. It was great: a nice and beautiful place, with skewers, sausages, beer and drinks free for everyone. I had time to talk to others and spent a nice time with the react-native community. I listened to people from other companies talking about their own projects, their joys and their sorrows.


Conference 2nd day. On the second day, the conference was opened by a guy from FB with an interesting presentation about animations and hacking them. After that, I heard about cross-platform development using RN and alternatives to it – Weex and Flutter – and about a good replacement for the JS language – OCaml (ReasonML). There were talks on networking and connecting things to the Internet, making an offline version of an app using redux, implementing payments in RN in a few minutes, and a snapshot plugin for running multiple simulators simultaneously and generating screenshots for iOS. And many, many other things which were more or less interesting, right up to the Q&A session as the last point of the program.

Me with Mike Grabowski – the conference originator and co-founder.

I'm very glad I was at this event. Everything was buttoned up. It was fascinating and motivated me even more in my work. I hope there will be a continuation of it next year ;)


Łukasz Gaweł, Technology

AscendHIT – a client of ours – needed to establish which speech recognition solution to use in order to replace the old Dragon speech recognition software, which can run only on desktop computers. The goal of the research project we undertook for them was to find a robust web-based SR tool which could – among other things – support medical vocabulary and which could be speaker independent (i.e., the system does not need to learn to recognize specific individuals). 


Classics from major market players 

We started the research by exploring the most popular tools which could be used for the purpose. Unsurprisingly, we first turned to check the ML-based Google Cloud Speech API, i.e. the Google speech-to-text conversion tool. Together with the client we decided to postpone further investigation when it turned out that the tool does not support medical vocabulary and that it is impossible to teach the system new words (as of July 2017).

Another major player who created and markets a customizable speech-to-text converter is IBM with its IBM Watson. We chose to test the latter, hoping that the plethora of features it comes with would cover our requirements; the free trial period offered would also help to limit the cost of the research project for the client. Having deployed an instance on an IBM Watson server, we were impressed by the speed of speech processing and by the fact that the solution was speaker independent. It turned out, though, that Watson does not support medical vocabulary; we did not give up immediately but tried to resolve the problem somehow. Watson instances are very easy to teach – you do not need to generate audio recordings of new words to teach the system. You just need to prepare and deliver text documents and – since Watson “knows” the language phonetics – it will learn the new words by processing the text files alone, without the accompanying audio recordings. We actually imported some medical reports containing specialized vocabulary and launched the Watson teaching process. The generated results improved, but we faced another issue: medical jargon is rich in words which can only be understood in specific contexts, and we had the impression we were not in a position to feed the system with sufficient input to teach Watson the medical vocabulary well enough. We would need a much richer body of medical texts to implement this scenario.
Having weighed the pros and cons, we decided to drop Watson and check some other solutions. The next solution we decided to check was the Microsoft Bing Speech Recognition Interface. The latter also comes with a free trial period and the speed of speech processing is very impressive too. Unfortunately, the tool does not support medical vocabulary and – more importantly – it is very hard to teach as you need to generate both sound files and corresponding text files in order to do so. The associated obstacles of teaching the system medical vocabulary would be much harder to overcome. 


The best is not always the fastest 

In the next phase of the research project we moved on to scrutinize the Nvoq SayIt speech processing tool. It is rather hard to find on the net. Interestingly enough, the review of their documentation revealed that the solution had actually been designed specifically for use in medical contexts. It recognises a large number of medical vocabulary items and – on top of that – it is also very easy to teach, just like IBM Watson. What is more, the provider supports using smartphones as external microphones. Though we were not able to find a free trial version, when we contacted their support team, they willingly provided us with credentials to access their servers for testing purposes.

With some help from the Nvoq support team, we were able to create two prototypes based on their API. The first was a web application created in Ruby on Rails which streams audio data from the computer microphone to the Nvoq server and then returns the proper dictation text to the user. The other prototype was a web application integrated with the Nvoq Mobile Microphone App, which fetches mobile phone audio data, processes it and returns the output to the web app to be displayed as text to the user. On the whole, the prototypes helped us achieve our project objectives; the only remaining issue is the processing speed, but that was not of vital importance for this project. There is hope for some positive change in the near future though – the Nvoq team plan to support the WebSockets technology (as of July 2017), so we should be able to get the output much faster when they succeed in doing so. All in all, the client decided to use Nvoq SayIt, in particular because of the specialized support for medical vocabulary.


All is relative 

The key lesson we learnt from the research project is that the best way to discover and choose a suitable solution is to find, explore and compare many different alternatives, assessing them with a very specific context in mind. Initially, the solutions often look very similar, and it is only after a more detailed analysis that the differences start to emerge, along with a better understanding of each tool's suitability for a specific application. The solutions you discard in the research process may not necessarily be worse; they may just lend themselves better to other contexts. If – for example – you need to support multiple speakers (without training the system to recognize their individual speech patterns) but you do not need to support custom words, the Google solution would probably be the best in the set of tools we researched. The IBM Watson tool may in turn be the best choice when you need to support custom words and you can provide learning materials which include these words in many different contexts. To sum up: identify a number of alternative solutions, then analyze and compare them keeping the peculiarities of your specific application context in mind.



Requirements / Features         | Google Cloud Speech API | IBM Watson | Nvoq SayIt | Microsoft Bing Speech Recognition
medical vocabulary support      | no | no | yes | no
ease of teaching new vocabulary | not supported | easy | easy | very hard
speech processing speed         | very fast | very fast | slow | fast
free trial period               | yes | yes | available only for big companies | yes
pricing                         | $0.024 per audio minute transmitted | $0.015 per audio minute transmitted | calculated individually | $4.00 per 1,000 transactions (1 transaction = any request sent to the server which contains an audio recording)
* tool parameters and characteristics described as of July 2017

Błażej Kosmowski, Technology

With Ruby on Rails as our primary backend technology, we have developed a set of practices for testing our applications. Our goal was to maximize the efficiency of writing tests that cover the most critical parts of the software built. Some means to that end are presented below.


We usually use TDD to keep decent test coverage and preserve an easy-to-follow, maintainable application design. Still, it is not our goal to achieve maximum test isolation, as we acknowledge the so-called Test Induced Design Damage, where producing easily testable code can result in a more complex architecture than is in fact necessary. This is also why we do not stick to TDD 100% of the time — on occasion we decide to experiment with the code first to find the most readable, comprehensible and optimal solution. Then we either TDD it from scratch or cover the code resulting from such experimentation with the missing tests.
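The TDD cycle mentioned above can be sketched in miniature. The slugify helper below is hypothetical, and plain Ruby assertions stand in for the RSpec specs we would actually write:

```ruby
# Red: the assertion at the bottom fails until slugify exists.
# Green: the simplest implementation that makes it pass.
# Refactor: tidy the method while the assertion keeps passing.

def slugify(title)
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
end

raise "red: not implemented correctly" unless slugify("  Hello, World!  ") == "hello-world"
```

In a real project the assertion would live in a spec file and run on every build, so the refactor step can never silently break the behaviour.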


We tend to start our testing with high-level feature tests (the so-called outside-in testing approach). Following this practice we can jump-start implementing a new feature very quickly and we always end up with a test that covers at least the most critical paths of a given feature. This is also where we focus on readability the most. Using RSpec and Capybara we endeavor to write tests in a way that is easy to follow not only for other developers but also for non-technical people. When we need to put an even greater emphasis on test scenario readability for non-programmers, we resort to Cucumber as our tool of choice; for the price of a slightly higher cost of writing and maintaining the suite, we end up with acceptance documentation that can easily be followed by the quality assurance team. It is also worth mentioning that by covering the big picture with feature tests while handling edge cases and specifics in lower-level tests, we usually end up with a smaller overall number of tests covering the same amount of logic (we strive not to test the same thing multiple times).


Driving bug fixes with tests that reproduce bugs being fixed is an important routine in our testing practices. We thus ensure that a given error is handled in a sustainable way. We sometimes also re-evaluate the logic surrounding a detected bug to cover it with even more low level tests accounting for edge cases that were not foreseen before.
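For illustration, here is a hypothetical bug fix driven by a reproducing test — the order_total helper and its nil-price bug are invented for this sketch, with plain Ruby assertions standing in for the spec:

```ruby
# Suppose a bug report showed order_total raising NoMethodError when a line
# item arrived with a nil price. The fix treats a missing price as zero, and
# the input copied from the report becomes a permanent regression check.

def order_total(line_items)
  line_items.sum { |item| (item[:price] || 0) * item[:quantity] }
end

# Input copied from the bug report — this used to raise before the fix.
reported_input = [{ price: nil, quantity: 2 }, { price: 5, quantity: 3 }]
raise "regression" unless order_total(reported_input) == 15
```

Writing the failing reproduction first guarantees the fix actually addresses the reported input, and the check stays in the suite so the bug cannot quietly return.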


CI is the most important service when it comes to testing software in the long term. As our software grows, so does the test suite, which takes longer and longer to execute. To stay efficient we delegate executing the whole suite to specialized services like TravisCI or CircleCI. All the tests need to pass on CI before a given feature is integrated with the application. We use the power of the CI service not only to run our tests, but also to ensure that our code style is consistent (by running linters like RuboCop or ESLint), that the code does not include simple security vulnerabilities (e.g. by using Brakeman) and that its overall quality does not drop (e.g. by using RubyCritic). In this way it is not uncommon for us to deliver new features on a daily basis rather than in weekly (or other kinds of) iterations.


If necessary, we also add cross-service integration tests that verify the interfaces of the services we use in a given application. Thus we can react fast if a given service provider applies changes to their API or simply stops handling our requests. This is usually beneficial in applications where high availability is a critical factor.

Besides, if performance is crucial, we can also provide load/stress testing scripts and analysis using such tools as JMeter, loadtest, siege, New Relic, etc. Due to the added cost of maintaining such tests, we add them only if the cost is justified by the benefits and only for the most critical parts of the system.


Frontend frameworks and libraries like ReactJS, EmberJS, etc. play a big role in web application development nowadays. It is usually much more efficient to cover the logic that resides in the frontend part of the application with dedicated JavaScript tests than with much slower feature specs; such an approach is even more important when developing single page applications that are highly decoupled from backend code. We use a variety of tools that support us in maintaining proper test coverage in this context, like QUnit, Mocha, Chai, Jest or Enzyme.


Manual testing is an integral part of the feature delivery process. Before a given functionality is handed over to end users or the quality assurance team, we manually test it on staging and/or production environment. Verifying a solution against acceptance criteria significantly limits the number of rejections and makes the delivery process smoother and more efficient. This is usually accomplished by a person responsible for implementing a given feature, but on larger teams it may be handled by a dedicated team member or just another developer to simulate the end-user experience to a greater extent.


Writing software with an adequate amount of tests that are introduced at the right time allows us to deliver code that is not only reliable but also comprehensible. This is especially important when new developers join the team. Following the testing practices listed above also ensures that the velocity at which new features or changes are applied is not crippled by accumulating Technical Debt. It also renders system-wide optimizations and refactorings possible without sacrificing overall application stability. If you would like to explore the related topic of keeping your application healthy, feel free to check out The four indicators of a healthy Ruby On Rails project.


Tomasz Bąk, Technology

The most popular approach to testing React components is to use either Mocha+Chai+Enzyme or Jest+Enzyme. In this article, we will describe our React components testing practices with Jest+Enzyme which are also applicable to Mocha+Chai.

If you are new to testing React components, you should also read:



In larger JavaScript projects we put tests close to the implementation, in a __tests__ subfolder. Usually, tests for a component are grouped by structure, with behaviour added on top of it, like:



Minimal component tests verify that the component renders properly — aka smoke testing or “Build Verification Testing”. This can be done with Enzyme or a Jest snapshot:


 describe('MainSection Component', () => {
   test('render', () => {
     const { wrapper } = setup()
     expect(wrapper).toMatchSnapshot()
   })
 })

The latter generates a __snapshots__/MainSection.spec.js.snap file.

Changes in snapshots are confirmed locally via ‘u’ in the Jest CLI and committed to the git repository, so the PR reviewer can see them. You can read more on Snapshot Testing.

At the moment we limit the usage of snapshots to component rendering and complex JSON (e.g. chart configurations).



You have to keep in mind that tests are something you have to write and maintain. Writing good tests requires as much craft as creating the application code.

Tests are automated quality assurance and documentation for developers. The larger the project and the team, the more detailed tests you need.

I try to think of the future you getting back to this component or refactoring it — what would your expectations of the tests be?

  • Isolated — all interactions with external services should be mocked
  • Specific — if you change a small piece of functionality, you would like to get a specific test failure message
  • They describe what the system does, not how, so that you can easily refactor

Let’s go through some practices that we find helpful in achieving those goals.



The benefit of using an explicit setup() function is that in any test it is clear how the component was initialized. The setup object is also a good place to hook some helper functions that interact with the wrapper, e.g.


 const setup = propOverrides => {
   const props = Object.assign({
     completedCount: 0,
     activeCount: 0,
     onClearCompleted: jest.fn(),
   }, propOverrides)

   const wrapper = shallow(<Footer {...props} />)

   return {
     props,
     wrapper,
     clear: wrapper.find('.clear-completed'),
     count: wrapper.find('.todo-count'),
   }
 }

 describe('count', () => {
   test('when active count 0', () => {
     const { count } = setup({ activeCount: 0 })
     expect(count.text()).toEqual('No items left')
   })

   test('when active count above 0', () => {
     const { count } = setup({ activeCount: 1 })
     expect(count.text()).toEqual('1 item left')
   })
 })



In practice, testing behaviour comes down to checking whether certain inputs and simulated events produce the expected results, e.g.

 describe('clear button', () => {
   test('no clear button when no completed todos', () => {
     const { clear } = setup({ completedCount: 0 })
     expect(clear.exists()).toBe(false)
   })

   test('on click calls onClearCompleted', () => {
     const { clear, props } = setup({ completedCount: 1 })
     expect(clear.text()).toEqual('Clear completed')
     clear.simulate('click')
     expect(props.onClearCompleted).toBeCalled()
   })
 })

You can see how setup() makes writing those tests really fast!



Sometimes we have to write many similar tests with just one input variable changed. This can be addressed with a helper function that generates the tests:

 describe('todo list', () => {
   const testFilteredTodos = (filter, todos) => {
     test(`render ${filter} items`, () => {
       const { wrapper, footer } = setup()
       // (selecting the given filter via the footer is elided in the original snippet)
       expect(wrapper.find(TodoItem).map(node => node.props().todo)).toEqual(todos)
     })
   }

   testFilteredTodos(SHOW_ALL, todos)
   testFilteredTodos(SHOW_ACTIVE, [ /* … */ ])
 })

It reads much better and is easier to maintain.



Practices described in this article:

  • put tests close to the implementation in a __tests__ subfolder
  • always start with a simple component rendering test (aka smoke testing), then test behaviour
  • think of the future you getting back to this component or refactoring it
  • use an explicit setup() function and return common shortcut variables from it
  • use helper functions that generate tests

We hope you found this article helpful. You can find working example code in my fork of the Redux TodoMVC code.


Błażej Kosmowski, Technology



“You have a piece of functionality that you need to add to your system. You see two ways to do it, one is quick to do but is messy — you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place.” — Martin Fowler

I doubt there are many coders out there who have not been asked by a customer at least once in their career about the necessity of refactoring or testing. They will also have encountered another question — “Can we do it later?”, where “later” in practice usually means “never”. It is not (that) hard to justify the necessity of test coverage to the customer, yet justification for refactoring seems to be less tangible, because — as the story goes — “if it works and is tested, why touch it?”. Typical arguments against refactoring are higher development cost and delays in delivery. What customers are often not aware of is not only that the arguments may not be true, but also that skipping refactoring may actually be incurring the so-called Technical Debt and thus laying the foundation for costs and delays further down the road. This debt will be paid sooner or later unless the project development stops. There are multiple high-level consequences of Technical Debt that keeps building up in the system, to name just a few:

  • new features delivered at a much slower pace

  • higher costs of new developers joining the project

  • inaccurate estimations resulting in missed deadlines

  • vendor lock-in where it becomes impossible to change the software vendor without a thorough system rewrite

The thing is that refactoring can be done with different purposes in mind and with different effects on technical debt. Some types of refactoring can, in fact, be postponed or even ice-boxed without significant consequences, yet managing technical debt and mitigating its effects plays a big role in keeping our projects healthy. “Paying technical debt” should never appear in your backlog. If it does, it means the debt was either a tolerated mess or was not intended. Even if it was not taken on consciously, it should be named once identified.




“When you decide to take on a technical debt, you had better make sure that your code stays squeaky clean. Keeping the system clean is the only way you will pay down that debt.” — Uncle Bob

Refactoring for quality is the only type of refactoring that should not even be mentioned or considered as something that might be a subject for extraction into a separate task. This type of refactoring does not prevent technical debt inherently; it prevents something much worse — a mess. These refactoring procedures/activities include but are not limited to:

  • keeping code style consistent

  • following established best practices

  • keeping security in mind

  • keeping code testable

  • keeping code readable

  • conscious use of libraries and design patterns


Why should we not single out refactoring for quality as something special? Because it needs to be an integral part of our development process. The more complex the code is, the more readable it should be, and the more time should be spent on planning, research and refactoring. Not only is refactoring the last step in the Red-Green-Refactor loop of the TDD cycle, but it is also much easier to apply immediately than to pick up later. Here are a couple of reasons why:

  • it is more efficient to refactor while one is still in the scope of the problem

  • if refactoring is not done, it may affect testability and architecture once other developers integrate with the code

  • refactoring immediately does not let developers procrastinate and defer the task forever

  • refactoring tasks are difficult to hand over to other developers

  • refactoring early saves time spent on code review and communication

“The decision to make a mess is never rational, is always based on laziness and unprofessionalism, and has no chance of paying off in the future. A mess is always a loss.” — Uncle Bob

Hence keeping quality high should not be a matter of making a decision. It should be a matter of just doing it. It adds some overhead when the project starts but pays off many times over when development progresses.





“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.” — Donald Knuth

This is the first type of refactoring that might be considered for postponement. It might not even be considered refactoring at all, because performance tuning sometimes happens at the cost of readability or code cleanliness and might even be regarded as counterproductive.

If the cost of making some performance optimization is low, it falls under the Refactoring for Quality category and should be addressed immediately. For instance, using database capabilities to search through a set of records instead of doing it manually outside the database costs almost nothing in development effort, while it may impact performance significantly.
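The database example can be sketched as follows — the ActiveRecord calls in the comments are illustrative, and plain Ruby data stands in for the database table here:

```ruby
# In a Rails codebase the shift looks roughly like this:
#   User.all.select { |u| u.active }   # loads every row, filters in app memory
#   User.where(active: true)           # lets the database engine do the work
#
# Below, an in-memory array plays the role of the table to show the
# application-side filtering that the database query replaces.

users = [
  { name: "alice", active: true },
  { name: "bob",   active: false },
]

active_in_app = users.select { |u| u[:active] }
raise unless active_in_app.map { |u| u[:name] } == ["alice"]
```

The refactoring itself is usually a one-line change, which is exactly why it belongs in the "address immediately" category rather than the backlog.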

In turn, when we identify some important performance problems, resolving those might involve some refactoring tasks (vs. scaling) that need to be scheduled and prioritized separately.




Building code in a decomposed way with consciously applied design patterns inherently leads to a codebase that is reusable. Sometimes, though, preparing a given solution in a reusable way requires a significant time investment. This may collide with the business goal of delivering the feature fast. Even when such code is knowingly intended for reuse in the future, deferring the reusable design may be an introduction of conscious and justified technical debt that can be addressed later if necessary.




This is a situation similar to refactoring for reusability, but performed on a larger scale. Extracting libraries, dividing monolithic applications into micro-services (or augmenting them with micro-services), or introducing an architecture that relies on plugins will in all likelihood require some significant refactoring of the codebase, even well before the actual extraction happens. As the extraction itself is usually a task of its own, it is reasonable to treat the correlated refactoring that renders the extraction possible in the same way.




Refactoring for metrics is a type of refactoring that should be applied with caution. Depending on the project and technology there might be multiple metrics measuring a variety of aspects of our code. They are often useful and indicate, for instance, missing code coverage, unhealthy code due to issues with its complexity or size, etc. In most cases, such matters can be addressed immediately, but if the time investment needed is significant, such refactoring can be postponed as well.

Refactoring for metrics also has its dark side. Sometimes the only result of such refactoring is a degradation of code quality, e.g. the degradation associated with the decomposition introduced, unnecessary tests, etc. We should always remember that automated code analysis may not capture the intentions behind conscious design decisions. In such cases, we should not only refrain from refactoring such code but also add proper exceptions that will stop those false alarms from harassing us.




Refactoring legacy code almost always means refactoring to raise code quality and reusability. While sometimes considered fun, this is the most tedious chore. Sometimes we can treat a chunk of code as legacy even if it was only just added to the codebase — this applies, for example, to low-quality code that was not refactored in an ongoing fashion and piled up in a pull request that nobody wants to approve. In such a scenario, one can see where we lose the most time in comparison to refactoring on the spot: something that might be written in n hours with refactoring incorporated all along the way may take much longer when refactored later on in the process. Still, it needs to be done if compromising code quality is the only alternative.

Refactoring genuine legacy code is a different story — if we take over an existing codebase with questionable quality and outdated libraries, it is something that definitely needs to be managed and usually requires a lot of planning. Refactoring everything in one run or rewriting the whole codebase is rarely an option. Instead, an iterative approach, coupled with a focus on the most problematic areas and balancing refactoring costs against benefits, should guide us in the process. There is usually no quick and simple way here.




Sooner or later we will be forced to provide a hotfix, where delivering a solution apparently has a bigger value than refactoring. In many cases, hotfixes are small changes that will not need any refactoring after all, but sometimes the situation can be different.

It might be a good idea to have some protocol for such circumstances, e.g. opening a Pull Request geared towards refactoring right after integrating the hotfix with the codebase. People tend to spot long-running Pull Requests more easily than low-priority chores in the project management software. Such Pull Requests — tagged properly — might be mentioned during daily stand-ups until the necessary refactoring is applied and merged. This is just one example of a solution; each team might address the problem in a different way.




Refactoring, when applied on time, can help keep our code healthy and easy to work with. It will also benefit the team velocity — the latter will not degrade due to the accumulation of technical debt. On the one hand, optimizing for speed or reusability, improving our metrics or just handling old legacy code might all be managed separately and intentionally postponed. On the other hand, besides some well-justified exceptions, refactoring aimed at keeping code quality high should be applied as part of a regular, ongoing development process and not treated as a special, optional step to be handled after delivering the feature. After all: “Preventing technical debt is what allows development to be agile in the long run.” — Dan Radigan, Senior Agile Evangelist, Atlassian



Dawid Pośliński, Technology

Disclaimer: In this article, the point is not to deprecate other front-end solutions. The main goal is to point out that there is no one single technology that fits every context, i.e. every use case, every project or every development team.


Keep an open mind


If your main goal is just to extend an existing application by adding some extra JavaScript to it and you are looking for a fancy tool to do so, EmberJS might be overkill for the job. Why? In most such cases, you will need a tool for building a dynamic user interface, and for that you can use, for example, ReactJS. It will give you much more freedom. You will not have to spend your time working with a sophisticated solution dedicated to Single Page Applications (SPAs) such as EmberJS, and you can thus deliver a feature faster. If you do not yet know what you will need in the future, the solution should stay as thin as possible, and EmberJS is not the best choice for you. EmberJS is more like a ready-to-use Swiss Army knife, but that does not mean you always have to be prepared for a nuclear war.



If following every single rule is not an option for you, EmberJS is not the best place to be a rebel. Of course, even with EmberJS you can mess things up inside your application, but because the framework has plenty of conventions, it is much more difficult to spoil things than with other solutions. Do you like doing whatever you want? In EmberJS you can’t. With EmberJS there are rules you simply have to follow.



It may sound strange, but thanks to EmberJS your coding is limited to the minimum. As I mentioned before, EmberJS is like a Swiss Army knife. It has tons of built-in solutions, so in many cases you do not have to write much code to achieve your goals. For instance, if you want to start using JSON API, GraphQL or Firebase, all you have to do is replace the application's adapter. The adapter connects your data source (API) to the local data store. When you follow the conventions, all of this happens auto-magically, just like that. If you like writing a lot of code and reinventing the wheel, EmberJS is not the best choice for you.
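Ember's swappable adapters are an instance of the classic adapter pattern. As a rough, language-agnostic sketch (written here in Ruby with entirely hypothetical class names, not Ember's actual API), swapping the data source leaves calling code untouched:

```ruby
# Illustrative only: both adapters expose the same interface, so the
# store neither knows nor cares which backend is behind it.
class JsonApiAdapter
  def find_record(type, id)
    { type: type, id: id, source: "json-api" } # would call a JSON:API backend
  end
end

class FirebaseAdapter
  def find_record(type, id)
    { type: type, id: id, source: "firebase" } # would call Firebase instead
  end
end

class Store
  def initialize(adapter)
    @adapter = adapter
  end

  # Calling code never changes when the adapter is swapped.
  def find(type, id)
    @adapter.find_record(type, id)
  end
end

store = Store.new(JsonApiAdapter.new)
store.find(:post, 1) # => { type: :post, id: 1, source: "json-api" }
```

Replacing `JsonApiAdapter.new` with `FirebaseAdapter.new` is the entire migration from the store's point of view, which is the "auto-magic" Ember's conventions buy you.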

To conclude, we should always look for the right tool for the job at hand. In my experience, EmberJS is currently one of the best choices for building a SPA. But before you pick the tool, ask yourself the key question: am I really building a SPA right now?


Błażej Kosmowski, Technology

Before starting development on a legacy project, we are often asked to do a quick review of its quality. Some projects are easy to analyze, and the task boils down to a look at a couple of classes, the routes file, tests, etc.; yet from time to time we realize that the actual problems might be harder to discover than we thought. Sometimes we do not even have access to the codebase before we sign an agreement and start working on the project. In such situations it pays off to establish the key practices the development team might (or might not) have followed. These practices can not only give us quick and reasonably precise feedback on the project's quality, but can also serve as a checklist for keeping our existing and new projects healthy.

The list below has been designed for projects based on the Ruby on Rails framework, but it can easily be adapted for other frameworks, languages and projects.




“Test Driven Development” features in almost every programming-related job posting, at least in the Ruby on Rails ecosystem. But do we actually practice it? And to what extent? What is our coverage? How much does it cost to maintain a green test suite? These are all valid questions if we intend to follow routines that keep our project in healthy shape.

It is quite possible to write even a medium-sized project without any specs at all, or to cover a tiny project with hundreds of tests of minimal value. Our job is to find the sweet spot in which tests bring a lot of value without costing many extra hours of work just to maintain the suite after each small change. Pure TDD can also cause problems in application and architecture design (the so-called Test-Induced Design Damage that DHH has a great talk about) when our drive to make the code testable makes it less readable and less comprehensible.

It might be a good idea to consider the Outside-In Testing approach to keep our testing practices under control. In this approach, development is driven by high-level tests (i.e. feature specs), and each lower-level concern or edge case is then handled by more specific tests. Thus we do not end up testing the same thing multiple times, and with some occasional experimentation before “TDDing” we can end up with a pretty decent architecture.
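To make the idea concrete, here is a minimal, hypothetical illustration in plain Ruby (bare assertions stand in for feature specs and unit specs, and the `Checkout` class is made up for this example):

```ruby
# A tiny domain object to drive outside-in.
class Checkout
  def initialize
    @items = []
  end

  def add(price)
    @items << price
    self
  end

  def total
    raise ArgumentError, "empty cart" if @items.empty?
    @items.sum
  end
end

# High-level, outside-in test: exercise the feature through its public API.
cart = Checkout.new.add(5).add(7)
raise "totals broken" unless cart.total == 12

# Lower-level test for an edge case discovered along the way;
# the high-level spec does not need to repeat this check.
begin
  Checkout.new.total
  raise "expected ArgumentError for an empty cart"
rescue ArgumentError
  # expected
end
```

The high-level check drives the design, while each edge case gets exactly one focused test, so nothing is asserted twice.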

All in all, the most basic question concerns the presence of tests. If there are no (or few) tests in the codebase, it is a fairly reliable indication that you may have to face some significant, challenging problems.



Well, the actual question should be “Do you use design patterns to keep your code quality high?”. There is no use for design patterns where there is no place for them, so they should only be applied consciously. The times when “you should keep your controllers slim and your models fat” are long gone. Models (in fact, most classes) deserve to be kept slim as well. It virtually boils down to keeping each class responsible for only one thing: the so-called Single Responsibility Principle.
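As a sketch of what this looks like in practice (all class names below are hypothetical), pricing logic can be pulled out of a fat model into its own single-purpose class:

```ruby
# The model keeps only its own data; it has one reason to change.
class Order
  attr_reader :items

  def initialize(items)
    @items = items
  end
end

# The pricing rules live in a small, separately testable class.
class OrderTotalCalculator
  TAX_RATE = 0.23 # arbitrary example rate

  def initialize(order)
    @order = order
  end

  def total
    net = @order.items.sum { |item| item[:price] * item[:quantity] }
    (net * (1 + TAX_RATE)).round(2)
  end
end

order = Order.new([{ price: 10.0, quantity: 2 }, { price: 5.0, quantity: 1 }])
OrderTotalCalculator.new(order).total # => 30.75
```

When the tax rules change, only the calculator changes; `Order` and every other collaborator stay untouched.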

Decorators and service objects, mentioned in an old yet still valid post by @brynary about slimming models, have become quite popular, but that is not all we can do to keep our classes testable while keeping their responsibilities narrow. “Design Patterns in Ruby” by Russ Olsen is a great source for diving into more sophisticated solutions and extending our toolbox with some nifty techniques.
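For instance, a decorator can be sketched with nothing more than Ruby's standard library `SimpleDelegator` (the `User` class and the presentation logic below are made up for illustration):

```ruby
require "delegate"

# A plain model with no presentation concerns.
class User
  attr_reader :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name = last_name
  end
end

# The decorator adds display logic; everything else is delegated
# transparently to the wrapped User.
class UserDecorator < SimpleDelegator
  def full_name
    "#{first_name} #{last_name}"
  end
end

decorated = UserDecorator.new(User.new("Ada", "Lovelace"))
decorated.full_name  # => "Ada Lovelace"
decorated.first_name # delegated to the wrapped User: "Ada"
```

The model stays slim, the view-facing behavior is isolated, and each class can be unit-tested on its own.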

Leaving design patterns aside, it deserves to be mentioned that limiting our controller actions to the RESTful ones can also help keep those classes in good shape. DHH himself has a strong opinion about it, as outlined in this post by Jerome Dalbert. Following Sandi Metz's rules for developers should also do some good to the shape of our classes.

To sum up, the point is not to use as many design patterns as possible; the goal is to keep our classes slim, with narrow responsibilities and public interfaces. If a design pattern is a means to that end, let’s use it! If we add a bit of refactoring here and there in the classes themselves to keep method complexity low, we will end up with comprehensible, reusable and testable code at the cost of some added indirection (which a decent coder can deal and live with). At least, that is the plan. On the other hand, if a project of moderate size uses few design patterns and/or class-slimming techniques, we can expect some of its classes to be huge code dumps that are not very comfortable to work with. This might not be true for projects with very simple logic and requirements, yet it is a warning sign we should be aware of.



Ok, so there we have this magic code: almost no code at all, and everything just works! Awesome! Well, sort of. There are so many ways to keep our code slim (not to be confused with slim classes): gems, libraries, convention-over-configuration-everything! It looks nice at the beginning, yet later it turns out we need to customize something (especially something not foreseen by the library author), or somebody else has to work with our code. The “magic” now becomes a problem, as it makes a given implementation harder to understand and requires diving into multiple libraries and crafting customized solutions to modify the default behavior. The code looks nice, but it is not easy to work with. The key is to decide whether a given solution really gives us much benefit in terms of less code being written, or whether it is better to just write a couple of lines and keep the code comprehensible and easily modifiable. After all, being explicit about a given behavior is advantageous in most cases.

In Ruby on Rails (or for that matter in Ruby itself) there is also one very tempting thing that drives “magic” to a new level: metaprogramming. When you discover it, you just have to love it. This love dies quickly if it was not you who coded the piece. Dynamically creating methods or altering their behavior makes tracking what the code does a living hell and renders stack traces useless. I may be exaggerating a bit, but it definitely can make things much harder to understand.
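A small, contrived example shows why: the methods below exist only at runtime, so neither grep nor a stack trace will lead a newcomer to a plain `def`:

```ruby
# Hypothetical sketch: readers and predicates generated from a hash.
class Settings
  DEFAULTS = { theme: "dark", language: "en" }

  DEFAULTS.each do |key, value|
    define_method(key) { value }             # reader created dynamically
    define_method("#{key}?") { !value.nil? } # predicate, also invisible to grep
  end
end

settings = Settings.new
settings.theme     # => "dark"
settings.language? # => true
# Handy and DRY, but searching the codebase for `def theme` finds nothing.
```

Four lines replace eight hand-written methods, which is exactly the trade: less code for the author, more detective work for everyone who reads it later.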

There is one more place where we should pay even more attention to the trade-off between readability and being DRY (Don’t Repeat Yourself): specs. Keeping tests very DRY can make them much less readable; when we need to jump around a file to locate the “shared” things, aggregate them in our head, find what the subject is all about and only then see what is tested in those “shared examples” and “shared contexts”, the word “shared” becomes our foe. Keeping the test prerequisites/setup, the code being exercised and the assertions all in one place renders the test more comprehensible, yet it does so at the cost of repetition, a cost we should be ready to pay for the benefit that comes with it. The benefit is even more valuable when we need to alter or fix somebody else’s tests or the corresponding code under test.

So should we write everything from scratch and limit the use of libraries to the minimum? By no means! We just need to be aware of how a given solution affects the readability of our code. There are many gems and libs that make coding some aspects of our application easier. Many of them encapsulate useful logic and solutions that would otherwise force us to reinvent the wheel in our project. But some just add the “magic”; let’s be cautious here. Metaprogramming is not a bad thing either, but let’s make sure we use it where it is necessary and more beneficial than alternative approaches. Finally, let’s keep our tests in good, readable shape. If somebody finds it hard to understand the code we have written, the tests will be the first place they look for answers.



One thing that is really painful for a reviewer analysing code is repeating the same comment over and over again: another remark about wrong indentation, yet another comment on using a better method alias, and one more mention of a method not covered with specs… The good news is that if we get tired of repeating the same comments, all sorts of automation tools come to the rescue. Those that might indicate a healthy Rails project can be divided into three categories.

Code style standardization tools and services take care of the basics: how the code is laid out, whether the code style is consistent (e.g. hound, rubocop), whether best practices are applied where necessary (e.g. Rails Best Practices), and even whether there are obvious security holes in our implementation (e.g. brakeman).
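As a sketch of what such standardization looks like in practice, a minimal `.rubocop.yml` might read as follows (the cop names come from RuboCop's standard set, but the thresholds here are arbitrary examples, not recommendations):

```yaml
# Illustrative .rubocop.yml fragment
AllCops:
  Exclude:
    - "db/schema.rb"
    - "vendor/**/*"

Metrics/MethodLength:
  Max: 15        # example threshold; tune per team

Metrics/ClassLength:
  Max: 150       # nudges classes toward narrow responsibilities

Style/Documentation:
  Enabled: false # skip mandatory top-level class comments
```

Once a file like this lives in the repository, the linter (or a service like hound) repeats the nitpicks so the reviewer does not have to.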

The second group is similar to the first in that it also relies on static analysis, but it focuses more on metrics such as test coverage (e.g. simplecov) or code quality (e.g. rubycritic, CodeClimate, CodeBeat, PullReview).

The last group of basic automation tools and services encompasses CI/CD (continuous integration / continuous delivery) solutions. Even the most comprehensive test suite is not worth a dime if nobody runs it. CI services help execute and monitor the current status of our test suite and can even stop a deployment if we go red. There are many players on the market, e.g. CircleCI, TravisCI, Jenkins, CodeShip.

Can we live without those tools and keep our project in good shape? We surely can! It will, however, require much more attention, more thorough code reviews and a good amount of discipline. Why not just make use of these great services and be free to focus on much more interesting stuff?




A project that is driven by Outside-In testing, uses various design patterns to keep classes slim and testable, balances code DRYness against readability, and takes advantage of automated code analysis is most likely a healthy, well-crafted piece of software. Even without looking at the code, if you ask the team about those aspects of the development process, you can get a general understanding of the project's condition. As regards new projects, this list may help you establish a set of practices that will keep the project bullet-proof from the very beginning.



Throughout the week we discuss tech ideas and share interesting technology articles and resources. To show our place on the web, we decided to publish a Weekly Developers Digest with links to the content we were talking about on our internal DEV channel.



Clean Code concepts adapted for JavaScript

Software engineering principles, from Robert C. Martin’s book Clean Code, adapted for JavaScript. This is not a style guide. It’s a guide to producing readable, reusable, and refactorable software in JavaScript.



Lebab transpiles your ES5 code to ES6/ES7. It does exactly the opposite of what Babel does. If you want to understand what Lebab exactly does, try the live demo.


13 best practices to secure your web application

Everyone agrees that web application security is very important but few take it seriously. Here’s a 13-step security checklist that you should follow before deploying your next web application.

I have added links to some npm modules which assist in solving some of these problems, where appropriate.


Grid system

Options for structuring your pages with Bootstrap, including global styles, required scaffolding, grid system, and more.


React’s Five Fingers of Death.

Master these five concepts, then master React. The five key concepts are: Components, JSX, Props & State, The Component API, Component Types



MongoDB ransom attacks soar, body count hits 27,000 in hours



Have you already heard about the shiny new JavaScript framework or library that has been released recently? In the JS ecosystem they spring up like mushrooms, so you can keep your `developer chase mode` always on. Learning something new is by definition a good thing, don’t get me wrong. Still, if you look at JavaScript Fatigue, it is a pretty overwhelming situation.


Pigeon Maps

ReactJS maps without external dependencies


The Crons are Here

Sometimes we need to test our projects not because we have changed anything, but because maybe our dependencies have changed, or we want to have an automatic build to get a nightly release out.


TOP 50 developer tools of 2016

Want to know exactly which tools should be on your radar in 2017? Our 3rd annual StackShare Awards do just that! We’ve analyzed thousands of data points to bring you rankings for the hottest tools.



Mermaid

Generation of diagrams and flowcharts from text, in a similar manner to markdown. Ever wanted to simplify documentation and avoid heavy tools like Visio when explaining your code? This is why mermaid was born: a simple markdown-like script language for generating charts from text via JavaScript.


Announcing Alacritty

Alacritty is a blazing fast, GPU accelerated terminal emulator. It’s written in Rust and uses OpenGL for rendering to be the fastest terminal emulator available. Alacritty is available on GitHub in source form.


Docker in Production

9 Critical Decisions for Running Docker in Production. You’ve got your Rails or Rack-based Ruby app built. It’s even running in Docker off your laptop and other developers on your team have it up-and-running as well. Everything looks good, so time to ship it.


Know IT all

Skills checklist for developers.


How to Build a High Velocity Development Team

Your market — every market — is being disrupted by transformative new technology. Not just better apps, but quantum leaps in tech evolution that will redefine the human experience.



Throughout the week we discuss tech ideas and share interesting technology articles and resources. To show our place on the web, we decided to publish a Weekly Developers Digest with links to the content we were talking about on our internal DEV channel.




The (very) Best of Material Design in 2016

We collected the most loved Material Design inspiration & resources in 2016! Huge congrats to all creatives who surpassed themselves demonstrating the beauty and power of Material Design — from UI concepts to prototypes, tools & libraries.


Programming languages

A list of programming languages that are actively developed on GitHub.


Facebook open-sources Atom in Orbit, a web-based IDE

Facebook developers have crafted a version of the Atom open-source text editor that can be deployed in a web browser. Atom in Orbit, as the new technology is called, is now available on GitHub under a BSD-3 Clause open-source license, and a demo app lets you take the tool for a spin.



Like spacemacs, but for vim.


Why the fuss about serverless?

Serverless will fundamentally change how we build business around technology and how you code. Your future looks more like this (simply take the Co-Evolution of Architectural Practice (platform) map and remove the legacy lines).


Concurrency in Rails 5.0

The new architecture primarily involves ActiveSupport::Reloader (a global instance of which is in Rails.application.reloader) and ActiveSupport::Executor (a global instance of which is in Rails.application.executor), as well as ActiveSupport::Dependencies::Interlock (a global instance of which is at ActiveSupport::Dependencies.interlock).



Datadog

Datadog is a SaaS (Software-as-a-Service) monitoring service for IT, operations and development teams. Datadog enables them to turn the massive amounts of data produced by their applications, tools and services into actionable insights, and ultimately to keep their applications and services up and running. Data is sent to Datadog directly from the customer’s infrastructure and through services like GitHub. Datadog aggregates the data, analyzes it, and delivers it back to the customer through an easy-to-use monitoring dashboard accessible via a web browser.


How the Elastic Stack Changed Goldman Sachs

See how Goldman Sachs leverages Elasticsearch to solve business problems and manages Elasticsearch usages centrally. Deep dive into a business use case that tracks trade flow across multiple systems in real time.


Aggregation features, Elasticsearch vs. MySQL (vs. MongoDB)

To make the X DevAPI, the MySQL Document Store's primary programming interface, a success, we should provide building blocks to solve common web development tasks, for example, faceted search. But is there any chance a relational database bent towards a document store can compete on aggregation (grouping) features? Comparing Elasticsearch, MySQL and MongoDB gives mixed results. GROUP BY on JSON gets you pretty far, but you need to watch out for pitfalls…


The top rising JavaScript trends to watch in 2017

The JS community didn’t hesitate to chime in with a ton of new tech that you should be watching for in 2017. I’ve made everyone’s lives easier and compiled an easy-to-digest list (with context) here.


React Interview Questions

For the record, asking someone these questions probably isn’t the best way to get a deep understanding of their experience with React. React Interview Questions just seemed like a better title than Things you may or may not need to know in React but may find helpful nonetheless.


Thinking Statefully

Getting used to React involves changing how you solve certain kinds of problems. It reminds me a little bit of learning to drive on the other side of the road.


Idiomatic Redux: Thoughts on Thunks, Sagas, Abstraction, and Reusability

I’ve spent a lot of time discussing Redux usage patterns online, whether it be helping answer questions from learners in the Reactiflux channels, debating possible changes to the Redux library APIs on Github, or discussing various aspects of Redux in comment threads on Reddit and HN. Over time, I’ve developed my own opinions about what constitutes good, idiomatic Redux code, and I’d like to share some of those thoughts. Despite my status as a Redux maintainer, these are just opinions, but I’d like to think they’re pretty good approaches to follow.


Animating particles using React Motion

While working on a personal open-source project Container Hive, I faced quite a few challenges to get particles animating correctly between each Docker container. Container Hive tries to help you visualize how everything fits together between your services. Here’s what that currently looks like…


React or Vue: Which Javascript UI Library Should You Be Using?

In 2016 React cemented its position as king of the Javascript web frameworks. This year saw rapid growth of both its web and native mobile libraries, and a comfortable lead over main rival Angular.

But 2016 has been an equally impressive year for Vue. The release of its version 2 made a huge impression on the Javascript community, attested to by the 25,000 extra Github stars it gained this year.



Throughout the week we discuss tech ideas and share interesting technology articles and resources. To show our place on the web, we decided to publish a Weekly Developers Digest with links to the content we were talking about on our internal DEV channel.




Ruby 2.4.0 Released

Ruby 2.4.0 is the first stable release of the Ruby 2.4 series. It introduces many new features.


Performance techniques

10 things I learned making the fastest site in the world


Autodesk Circuits

Autodesk Circuits empowers you to bring your ideas to life with free, easy-to-use online tools.



Generates regular expressions that match a set of strings.


CSS box-decoration-break

In this snippet, we introduce the box-decoration-break property and take a look at the granular control it gives us on inline elements, like creating multi-line padded text.



Recycle — Truly Functional and Reactive way of writing React apps


Brain Freeze

Brainfreeze takes advantage of the redux architecture, which means that a lot of how Brainfreeze works is very similar to how redux (or a redux-like library) works. The basic structure of Brainfreeze follows the pattern: dispatch an action => reducer => update/change state => return a new state.





Add redux scaffolding into your folder


Introduction to React

Facebook’s React is “a library for user interfaces”. Its single role is to manage and render your user interface. On its own it does nothing more… no communication with your API, no central state management, and no extra logic.

That aside, let’s talk about how React works and why it’s taken the UI world by storm.


React Router

React Router keeps your UI in sync with the URL. It has a simple API with powerful features like lazy code loading, dynamic route matching, and location transition handling built right in. Make the URL your first thought, not an after-thought.


Scaling Responsive Animations

Scaling our websites and applications so that they look great on every screen can be difficult. A big portion of that difficulty can be trying to get specific components, particularly ones that have pieces that have to stay a certain size (like animations), to look good regardless of the screen size. In this post, we’ll cover how to help keep our responsive animations sized the way we want them.




The 15 most popular Ruby links of 2016

Here’s a round-up of articles, news and tutorials that readers clicked on in 2016:


Vue in 2016

During the past 12 months, Vue’s growth has been consistently exceeding my expectations — the project has grown from a relatively niche framework to one that is now often compared to the biggest players in the field.