Things became not so good once I started checking in code and realised I had to wait ALMOST 2 HOURS for the whole CI build flow to go through. The problem was compounded by the fact that some tests were flaky, meaning that if one failed by chance you'd have to start the whole process again.
In my mind these sorts of problems can be tackled from 2 angles:
- Fix the root cause of the problem: many tests take a long time to run.
- Fix the symptom of the problem: CI takes a long time to complete.
In this post we'll look at the first point. For the second point I've had great success using the likes of Docker to partition functional tests and run them in parallel, but I'll perhaps address that in a separate post.
After a few more weeks on the project I started to get a good idea of what could be slowing the tests down. The main culprit was the long-running functional tests, so I focussed my efforts there.
- Don't open the browser when it isn't needed. The tests had been split into 2 categories - unit and functional. All functional tests included a Grails mixin that caused a browser window to open, even though a good proportion of them were only testing REST API calls. In reality there were 3 categories of tests, each requiring its own distinct setup procedure.
- Don't include mixins you don't need. The version of Grails we were using at the time, 1.3, seemed to perform terribly when the number of mixins was large (i.e. more than 5). Careful but tediously slow inspection of all the tests revealed that for many the number of mixins could be reduced, because some of them were simply not used. Perhaps the result of copy-and-paste errors?
- Setup once. Many of the functional tests included expensive setup code that ran before each individual test case rather than once per test class (e.g. in Spock's `setup` rather than `setupSpec`). A typical example is adding a particular user to the database so that the test can log in to the system. If every test case in the class uses this user, that action only needs to happen once.
- Check your connection code. I had a look through the code responsible for opening connections for testing the REST API and found that, for whatever reason, 2 connections were being opened for each request. This was sloppy but didn't take long to fix, as all tests went through the same method.
- Stop waiting around. Some of the browser tests included suspicious `sleep` and `waitFor` statements which were there because apparently the tests "don't pass otherwise". We all know browsers have their quirks, and you have to work with them rather than against them in your tests. It's worth going over your `sleep` and `waitFor` statements to see whether they're really needed any more.
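To make the first point concrete, here's roughly what the split can look like in Spock: API-only specs extend a lightweight base class and only browser specs pay for the browser setup. All class and helper names here are hypothetical, not the project's actual code.

```groovy
import spock.lang.Specification

// Hypothetical base classes: only browser specs pay the cost of a browser window
abstract class ApiFunctionalSpec extends Specification {
    // REST-only setup: an HTTP client, no browser
}

abstract class BrowserFunctionalSpec extends Specification {
    // Browser mixin and window setup live here, and only here
}

// An API test no longer opens a browser just to hit an endpoint
class UserApiSpec extends ApiFunctionalSpec {
    def "lists users over REST"() {
        expect:
        get('/api/users').status == 200   // hypothetical helper from the base class
    }
}
```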
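On the mixin point, the fix is mostly deletion. A sketch using Groovy's `@Mixin` annotation, with made-up helper names standing in for the real ones:

```groovy
import spock.lang.Specification

// Before (copy-and-paste accretion; only RestHelpers was actually used):
//   @Mixin([RestHelpers, BrowserHelpers, FixtureHelpers, MailHelpers, ReportHelpers])

// After: keep only what the spec calls, which also sidesteps the
// Grails 1.3 slowdown once the mixin count creeps past 5
@Mixin(RestHelpers)
class UserApiSpec extends Specification {
    def "lists users"() {
        expect:
        restGet('/api/users').status == 200   // provided by RestHelpers (hypothetical)
    }
}
```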
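The `setup`-vs-`setupSpec` change is easy to show. One wrinkle: in Spock, fields written from `setupSpec` must be marked `@Shared`. The fixture helper below is hypothetical.

```groovy
import spock.lang.Shared
import spock.lang.Specification

class DashboardSpec extends Specification {
    @Shared def adminUser

    // Runs once per class instead of before every feature method,
    // so the expensive user creation happens a single time
    def setupSpec() {
        adminUser = createUser('admin')   // hypothetical fixture helper
    }

    def "admin can log in"() { /* uses adminUser */ }
    def "admin can see the dashboard"() { /* uses adminUser */ }
}
```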
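The connection fix amounted to making the shared request helper open exactly one connection per request. A minimal sketch of the shape using `HttpURLConnection` (details assumed, not the project's actual code):

```groovy
// One connection per request: open, read, disconnect.
// The broken version opened a second connection for the same URL.
String get(String url) {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection()
    conn.setRequestProperty('Accept', 'application/json')
    try {
        return conn.inputStream.text
    } finally {
        conn.disconnect()
    }
}
```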
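For the browser waits, the usual replacement for a blind `sleep` is a condition-based wait. With Geb that looks roughly like this (the selector is illustrative):

```groovy
// Before: a blind pause that costs 5 seconds on every run, passing or failing
// sleep(5000)
// $('#results a', 0).click()

// After: Geb's waitFor polls the condition and returns as soon as it holds,
// so the fast case is fast and the slow case still passes
waitFor { $('#results').displayed }
$('#results a', 0).click()
```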
I think these issues are common to many projects, but obviously you'll have to prioritise them yourself based on cost and value added.