
How we made JavaScript testing 15x faster

Testing is an important pillar in our engineering infrastructure. We have hundreds of A/B experiments running at any given time. To keep these experiments running smoothly, it’s critical to have numerous tests running as part of the build.

Unfortunately, our JavaScript test framework was beginning to creak under the strain of hundreds of test files and dozens of simultaneous experiments. It was slow, taking 15 minutes to run the full suite, and it often broke when experiments changed behavior or when network and browser issues struck. As a result, trust in the system degraded, and the tests were removed from our automated build processes until the system could be repaired.

This was an excellent opportunity to revisit our web testing framework: keep what was working, throw away what wasn’t, and build a testing framework that would scale for us. The result is a new web testing framework we call Affogato (yes, named after the espresso and ice cream beverage, because automated test coverage is sweet!).



Making tests faster

We first tried optimizing our JavaScript tests by running them in multiple headless browsers in parallel. But the full suite still took too long, and the setup led to unpredictable machine resource contention. Browsers are hungry beasts from a resource perspective.

This is where jsdom saved the day. It’s a Node.js library that implements the WHATWG DOM and HTML standards and isn’t concerned with rendering, painting, and the other tasks that make a browser CPU- and memory-hungry. Internal benchmarks showed remarkable 5-20x speedups for most of our tests, with DOM-heavy tests seeing the biggest improvements.

To take advantage of our build system, which leverages an arbitrary number of processor cores, we broke our suite of tests into small chunks for our test runner to consume. As the number of tests scales into the thousands, we can scale up the number of cores the tests run on. While we anticipate eventually having to run the test suite on multiple machines, the speed improvements ensure that one beefy machine is fast enough for the foreseeable future.

Making tests reliable

Ensuring our tests were reliable and trustworthy required a multi-faceted approach. First, we avoided costly data lookups and network transfers by using fixtures: files containing JSON data that describe an object to be tested. The testing framework uses fixtures to create the appropriate mock objects. Fixtures make it easy to test various object states without manually writing the boilerplate code to instantiate mock objects.

Unfortunately, network requests cannot be eliminated completely from web tests without making the tests less powerful or expressive. If a developer wants to test some web code that makes a call to our server-side API, we want to facilitate this impulse, not discourage it. To eliminate test failures caused by network hiccups when making these server-side calls, we wrote an XHR recorder which listens to Ajax requests and saves responses to files for later playback. To avoid having to refactor our web code to support this recorder, we patched the JavaScript XHR object directly. The recorder has the bonus side effect of reducing test runtimes by 30 percent on average.

Making tests pleasant to write

With multiple experiments always running, a single experiment can change an arbitrary number of code paths in both the JavaScript and our API. This means experiments need to be a top-level concept in our testing framework and that setting up the “experiment environment” for a given unit test should be as simple as possible. We wrapped the Mocha framework with some rich syntactic sugar that made it easy to target an arbitrary number of experiments as part of a given test. Using Sinon.JS and its sandboxing, the global environment is automatically cleaned up after every test is finished. An ES6 promise polyfill was used to make writing asynchronous tests simpler.

Finally, for a little bit of whimsy, we went full bore on the coffee and cream metaphors, so writing a test begins with typing “cream.sugar(…)”. We figured no one could object to working with something so sweet.

Tying it all together

The overall test suite now runs an order of magnitude faster than before, down to one minute from 15. In the three months since we started using the framework internally, there haven’t been any reported incidents of a test failure caused by a flaky test. The ability to test experiments has put the nail in the coffin of several recurring experiment-related bugs in difficult-to-understand code. Additionally, internal feedback on the new framework has been positive.

The performance and reliability improvements allow us to run the tests all the time. Every time an engineer saves a web file, the relevant tests run immediately and fire an alert on failure. When a pull request (PR) is submitted to our web repository, we run the tests using Jenkins and deny the PR if there is a failure. This fast feedback makes debugging a test much easier and faster. Time spent debugging test failures has plummeted, trust in our tests has improved, and tests are being regularly written again.

David West is a software engineer on the Web team.

Acknowledgements: The leap forward in testing at Pinterest was made possible with significant contributions from Kelei Xu, Jeremy Stanley and the Web team.

For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.

