
Random testing large Rails applications

January 07, 2024

4 min read


We recently embarked on the gigantic task of splitting our large Rails application into engines across our domain boundaries. Whilst the overall goal is to move to a Service-Oriented Architecture (SOA) with loosely coupled interfaces, one of the first steps was to ensure we moved our files into the correct module namespaces.

DomainOne::SomeClass
DomainTwo::SomeOtherClass
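
To make that concrete, here's a rough sketch of what the move looks like for a single class. The paths and names are purely illustrative (they assume a conventional engine layout), not our actual code:

# Before: app/services/some_class.rb
class SomeClass
  def call
    # business logic unchanged
  end
end

# After: engines/domain_one/app/services/domain_one/some_class.rb
module DomainOne
  class SomeClass
    def call
      # business logic unchanged
    end
  end
end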

As you can imagine, this isn't exactly a trivial task. With over 500k lines of code and thousands of files in the application, this is quite a large undertaking. One of the real issues is that you can't easily run your specs locally. A rapid feedback loop is very important when developing, and sitting around waiting for tests to run is never a fun task. Now, I know what you are thinking. Why not just push up to the CI and let it handle the testing?

Leveraging the CI

Firstly, it's important to mention that we do heavily use the CI already. In fact, we have 48 GitHub runners just for running our tests. If you were to run all the specs locally it would take about 60 minutes on a decent MacBook, and even on the CI with those parallel executions it can take 12-15 minutes (assuming nothing fails and you don't need to re-run).

What I find myself doing is pushing my branch frequently, looking in the CI for any failures and then working on those failures locally. This works fine most of the time, but it can be quite time-consuming waiting for your job to start because other people's jobs are running. I also do feel bad sometimes that I'm overusing the CI. These cloud compute resources are not free, and whilst many people argue that this shouldn't be an issue, I still have a laptop here which is more than capable of running a select number of tests without too much trouble.

So what's the solution?

Well, whilst I am quite happy with this workflow, I do need to add that it's not really a first-class solution. Eventually, when everything is in its own engine, this shouldn't even be an issue. That being said, I want something that will give me some quick feedback on how the moving of classes into modules is going, which is why I came up with the following shell functions.

function rspec_rand() {
    # Default to 20 if no argument is provided
    local number_of_files=${1:-20}
    # Store the list of files in a variable
    SPEC_FILES=$(ls spec/**/*_spec.rb | sort -R | head -n "$number_of_files")
    # Print out the files and then run each one
    echo "$SPEC_FILES" | tee /dev/tty | xargs rspec
}

function rspec_last() {
    if [ -n "$SPEC_FILES" ]; then
        # Assuming there has been an execution previously in this shell session, re-run those again
        echo "$SPEC_FILES" | tee /dev/tty | xargs rspec
    else
        echo "No previous spec files found. Run rspec_rand first."
    fi
}

If you add these functions to your ~/.zshrc file (or equivalent) and reload your shell session, you will be able to run a given number of random specs.

# Use the default of 20
$ rspec_rand 

# Override the default
$ rspec_rand 10

# Re-run the last specs (useful when fixing any issues)
$ rspec_last

For a sample of, say, 20 spec files, this probably takes about 10-30 seconds, depending on the number of test cases. This is perfectly manageable in a development scenario.

Conclusion

Is this going to be useful for everyone? No, probably not.

For me, I have a very particular use case. Although I'm not changing any business logic (unless it relies on a class name for some reason), moving files into a namespace does require a lot of changes across the application. With this method I am easily able to get a flavour for how the process is going.
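
To make that knock-on effect concrete, here's a hypothetical before/after of a call site (the class names are just the illustrative ones from earlier, not real code from our application):

# Before the move, callers reference the top-level constant
SomeClass.new.call

# After the move, every caller (and the specs covering it) needs the namespaced constant
DomainOne::SomeClass.new.call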

I will still push up my branch and leverage the CI, especially as it gets closer to all the tests passing, since it's easy to find out which tests need attention, but I will personally keep using this method as a sort of random sampler.