Friday, 30 October 2015

Java/Spring vs. Grails


I've just come off a project where I spent a year working with Grails. I remember when I joined the project I wasn't over the moon about the choice of technology, having worked with Grails and Groovy on and off a few times before and not loved it. However, I was looking forward to seeing a 'proper' project written in Grails and perhaps having my mind changed.

Here is a list of the things I liked and didn't like:

Default scaffolding

You can choose whether a domain object should have the basic CRUD pages and actions available. There is also some configurability if you'd like to deviate from the norm, but not as much as in other frameworks I've seen (e.g. Django).

Useful collections code

In the Java world a lot of time is spent writing code to navigate and perform actions on collections. Groovy adds many useful methods to the Collection interface, such as each, find, and collect, so you can do things like this:

def list = [5, 10, 15, 20, 25, 30]
def result = list.find { it > 10 }  // result is 15, the first match
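For comparison, here is a sketch of the closest Java 8 equivalent, using a stream pipeline (the class and method names are mine, just for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class FindExample {
    // Java 8 equivalent of Groovy's list.find { it > 10 }:
    // filter keeps matching elements, findFirst takes the first one.
    static Integer findFirstGreaterThanTen(List<Integer> list) {
        return list.stream()
                .filter(i -> i > 10)
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(5, 10, 15, 20, 25, 30);
        System.out.println(findFirstGreaterThanTen(list)); // prints 15
    }
}
```

Streams closed a lot of this gap, but the Groovy one-liner still wins on noise.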


Plugins

Plugins integrate very closely into your application to change the way that it runs or builds. For example, the HTML resources plugin can be added to make it easier to reference resources in your Grails views.

Testing tools

Spock is a unit testing tool that uses the dynamic nature of Groovy to make every test declare given, when, and then blocks. Combined with the fact that you can use spaces in test names, the unit tests were generally very easy to read.
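The same given/when/then structure can be approximated in plain Java, though only as comments rather than language-level blocks — a sketch, with a made-up Calculator class standing in for the real subject under test:

```java
public class CalculatorSpec {
    // A trivial made-up class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // In Spock this would be: def "adding two numbers returns their sum"() { ... }
    // with given:, when:, and then: as enforced blocks. In plain Java the
    // sections survive only as comments and the spaces in the name are gone.
    static boolean addingTwoNumbersReturnsTheirSum() {
        // given
        Calculator calculator = new Calculator();

        // when
        int result = calculator.add(2, 3);

        // then
        return result == 5;
    }

    public static void main(String[] args) {
        System.out.println(addingTwoNumbersReturnsTheirSum()); // prints true
    }
}
```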

Geb is for testing through the browser, and its tests follow the same format as Spock tests while letting you interact with the browser very easily using the page object pattern. Once again you end up with very clear tests.
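The page object pattern itself is framework-agnostic; stripped of Geb's conveniences it boils down to something like the sketch below, where Browser is a hypothetical stand-in for a real driver (Geb wraps WebDriver), backed here by an in-memory fake so the sketch is runnable:

```java
import java.util.HashMap;
import java.util.Map;

public class PageObjectSketch {
    // Hypothetical stand-in for a real browser driver.
    interface Browser {
        void type(String selector, String text);
        String text(String selector);
    }

    // The page object: CSS selectors live here, so tests speak in domain
    // terms (loginAs, welcomeMessage) rather than element lookups.
    static class LoginPage {
        private final Browser browser;
        LoginPage(Browser browser) { this.browser = browser; }

        void loginAs(String username) { browser.type("#username", username); }
        String welcomeMessage() { return browser.text("#welcome"); }
    }

    // In-memory fake browser: "logging in" just makes the welcome
    // banner greet whatever username was typed.
    static Browser fakeBrowser() {
        Map<String, String> dom = new HashMap<>();
        return new Browser() {
            public void type(String selector, String text) { dom.put(selector, text); }
            public String text(String selector) {
                return "#welcome".equals(selector)
                        ? "Hello, " + dom.get("#username")
                        : dom.get(selector);
            }
        };
    }

    public static void main(String[] args) {
        LoginPage page = new LoginPage(fakeBrowser());
        page.loginAs("alice");
        System.out.println(page.welcomeMessage()); // prints Hello, alice
    }
}
```

The payoff is that when a selector changes, only the page object changes, not every test that touches that page.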

IDE integration

I was using IntelliJ IDEA, which is apparently the best IDE for Grails, but it was far from perfect. I'm not sure whether it was down to the version of Grails we were using, but running a unit test that might take less than a second in Java could take up to 10 seconds in Grails. At some points the IDE would decide that it needed to rebuild the whole project first, despite my having changed only one line of a test.

Slow startup speed

Even for a simple 'Hello world!' application Grails takes 28 seconds to start up on my machine, whereas a basic Spring Boot application takes 7 seconds. Enough said.

Flexibility breeds inconsistency

Grails does have multiple ways of doing the same thing, such as how you define a method in your controller or how you declare a variable. When I joined the team this had been taken full advantage of and there were many inconsistencies in the code base that we had to iron out.

Code generation

Of course it's better to generate code and config than to write it yourself, but why not do away with it altogether, as Spring Boot does? My hello world app in Grails contains reams of configuration that I may want to tweak in the future, but until then it's just adding to the noise.

Overall Grails is a bit of a mixed bag for me, and I'd probably think twice before jumping onto another project with it. It has definitely got its plus points, but for now I'm going to look at some other frameworks to see if there's anything better.

Tuesday, 27 October 2015

Why are these tests so damn slow? Part 2

In a previous post I discussed the first of two approaches to tackling a slow-running CI build, namely to attack the source of the problem and figure out why there are so many tests that take so long to run. In this post I'll discuss the second approach, which is essentially one of brute force.

If after making the effort to really optimise your tests, speed them up individually, and remove any duplication you are still not satisfied with the performance, then perhaps this approach can help you. Here are some ideas of where to start:
  1. Improve the hardware of the machine on which you are running the tests. This may sound obvious but it's commonly overlooked, especially when using a cloud-based Jenkins server which a team might assume is highly specced. I remember being frustrated by this point on one particular project when the operations team refused to add more hardware. Suffice it to say that they didn't like my suggestion of instead running Jenkins on an old laptop that was lying around, even though it would have been faster!
  2. Run your tests in separate phases. It's very common to have unit and functional test phases, but for really slow-running builds you might want to consider adding more. For example, if your project has functional tests covering both a CMS in the browser and a REST API, you could set these up to run in CI as separate Jenkins jobs. One benefit of this is that if you know you've only changed code in the front-end CMS, once those tests pass you should have a high level of confidence that your code hasn't broken anything, without necessarily waiting for the REST API tests to finish.
  3. Parallelise. Running unit tests in parallel is fairly common, and Maven makes it pretty easy. Unfortunately, on a long running CI build unit tests are not normally the biggest problem, and we need to look at functional tests instead. 
Let's look at point 3 in more detail now.
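For the unit-test side, the Maven configuration really is a small tweak to the surefire plugin — a sketch, with a thread count you'd tune per project:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Run test methods in parallel across a fixed thread pool -->
    <parallel>methods</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```

This only helps if your unit tests don't share mutable state, which is one more reason to keep them isolated in the first place.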

The problem you might have with parallelising your functional tests is that in many cases they run on one server against a system that's deployed on another.


This was the case on a recent project, where we had a Jenkins master, a Jenkins slave that ran the functional tests, and a server to which the system under test was deployed on every build. If you want to parallelise here it suddenly becomes very expensive. Consider the case where you want to split your tests in two: you'll need:

  1. Another Jenkins slave to run the second set of tests
  2. Another server on which to deploy the system. We can't use the same instance of the system under test as then tests may interfere with each other. Other servers that this system uses may also need to be duplicated (e.g. database).
  3. Some way of orchestrating the process.


This quickly becomes a headache especially when you decide to split your tests again, and again. However, I've had great success using Docker to remedy this particular problem, as it allows you to very quickly create images and run containers. Hopefully I'll talk more about it in another post.
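As a sketch of the Docker approach, a Compose file can stand up an instance of the system under test (plus its database) per shard of tests — all image names and the shard-splitting command here are hypothetical, and depend on what your test runner supports:

```yaml
# Two identical app instances, each with its own database,
# and one test-runner container per shard of the test suite.
version: "2"
services:
  app-1:
    image: myorg/system-under-test   # hypothetical image name
    depends_on: [db-1]
  db-1:
    image: postgres:9.4
  tests-1:
    image: myorg/functional-tests    # hypothetical image name
    command: run-tests --shard 1/2   # however your runner splits tests
    depends_on: [app-1]

  app-2:
    image: myorg/system-under-test
    depends_on: [db-2]
  db-2:
    image: postgres:9.4
  tests-2:
    image: myorg/functional-tests
    command: run-tests --shard 2/2
    depends_on: [app-2]
```

Splitting again is then a copy-paste of three services rather than provisioning another Jenkins slave and another server.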

Monday, 26 October 2015

Why are these tests so damn slow? Part 1

On joining a recent Grails project at a media organisation I was generally impressed by the level of test coverage. Decent quality unit tests written with the Spock framework and a suite of functional tests running in the browser using Geb. All good, right?

Things became not so good once I started checking in code, and realised I had to wait ALMOST 2 HOURS for the whole CI build flow to go through. This problem was compounded by the fact that some tests were flaky, meaning that if one failed by chance you'd have to start the whole process again.

In my mind these sorts of problems can be tackled from 2 angles:

  1. Fix the root cause of the problem: there are many tests that take a long time to run.
  2. Fix the symptom of the problem: CI is taking a long time to run.
In this post we'll look at the first point. For the second point I've had great success using the likes of Docker to partition functional tests and run them in parallel, but I'll perhaps address that in a separate post.

After a few more weeks on the project I started to get a good idea of what could be slowing the tests down. The main problem was the long running functional tests, so I focussed my efforts there. 

  1. Don't open the browser when it isn't needed. The tests had been split into 2 categories - unit and functional. All functional tests included a Grails mixin that essentially caused a browser window to open, despite the fact that a good proportion of the functional tests were only testing REST API calls. In reality there were 3 categories of tests, each requiring its own distinct setup procedure.
  2. Don't include mixins you don't need. The version of Grails we were using at the time, 1.3, seemed to perform terribly when the number of mixins was large (i.e. more than 5). Careful but tediously slow inspection of all the tests revealed that for many the number of mixins could be reduced because some of them were simply not used. Perhaps the result of a copy and paste error?
  3. Setup once. Many of the functional tests included expensive setup code that was executed before each individual test case rather than once per test class (e.g. in Spock's setup rather than setupSpec). An example of such a requirement might be to add a particular user to the database so that the test can log in to the system. If all test cases in a test are using this user, then this action only needs to happen once.
  4. Check your connection code. I had a look through the code that was responsible for opening up connections for testing the REST API and found that for whatever reason 2 connections were being used for each request. This seemed sloppy and didn't take long to fix as all tests were using the same method.
  5. Stop waiting around. Some of the browser tests included suspicious sleep and waitFor statements which were there because apparently the tests "don't pass otherwise". We all know that browsers have their quirks, and you have to work with them rather than against them in your tests. It's worth going over your sleep and waitFor statements to see if they are really needed any more.
I think these issues could be common to many projects, but obviously you'll have to prioritize them yourself based on cost and value added.
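Point 3 is worth a tiny illustration. Spock's setup vs setupSpec mirrors JUnit's @Before vs @BeforeClass, and the saving is simply multiplicative — a sketch, with a counter standing in for an expensive database call:

```java
public class SetupOnceSketch {
    static int perTestSetups = 0;
    static int perClassSetups = 0;

    // Stands in for the expensive work, e.g. inserting the login user
    // into the database before the test can log in.
    static void createTestUser() { }

    static void runSuite(int testCount, boolean setupOncePerClass) {
        if (setupOncePerClass) {      // Spock setupSpec / JUnit @BeforeClass
            createTestUser();
            perClassSetups++;
        }
        for (int i = 0; i < testCount; i++) {
            if (!setupOncePerClass) { // Spock setup / JUnit @Before
                createTestUser();
                perTestSetups++;
            }
            // ... the test itself logs in as that user ...
        }
    }

    public static void main(String[] args) {
        runSuite(50, false);
        runSuite(50, true);
        System.out.println("per-test setups:  " + perTestSetups);  // prints 50
        System.out.println("per-class setups: " + perClassSetups); // prints 1
    }
}
```

With 50 tests and a setup that hits the database, moving it to setupSpec turns 50 round trips into one.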