Integration Testing Rake Tasks using RSpec Feature Specs

I have a set of business-critical cron jobs that run overnight, once a week. If they don't run... it's bad news; really bad news. There's nothing worse than getting into the office, only to find out that one of the jobs threw an error and didn't complete. You get to a point where testing all the pieces in isolation just isn't good enough. I need to be confident that all of the pieces will work together. Time to get some integration tests in there (or feature specs, as we call them in the world of RSpec).

A feature spec is a high-level test that walks through an entire process to ensure that all of the pieces work together as expected. In a standard Rails application, that means using a headless browser to hit the UI, manipulate it in some way, and then check for an expected output. It touches all the layers: controller, model, view, and any other classes in between. It simulates a user initiating a process, interacting with an interface in a consistent, repeatable way, and expecting to get the same result each time.

What do you do when the thing initiating isn't a user? What if the UI isn't a web application? These might sound out of place for Rails, but if you're anything like me, you see them all the time. Crontab initiates multiple processes every day that run through a command line interface. These are rake tasks. I consider them features of my application, no different than the various features available through the web UI.

EDIT: Given that there has been some confusion in the comments, I just want to clarify my position a bit up front.

  1. The point here is about end-to-end tests. Nothing more, nothing less. Call them feature specs, call them integration tests. It's about being confident that your application will run, as expected, in your production environment.
  2. I'm not advocating that Rake tasks should be unit tested. I'm not talking about testing Rake itself or treating a Rake task like a class. I'm talking about ensuring that every entry point into your application has test coverage that exercises the entire application, from end to end. (See my response to the first comment for more info)
  3. I'm not advocating for raw code to be put in Rake tasks. I use a simple Rake task in the examples and raw code is the easiest way to illustrate the intention of the code, without required pages of code samples. In general, I do advocate for using a Ruby class to encapsulate the functionality of a Rake task. Regardless of the content of a Rake task, be it raw code or an object with a single method being called, end-to-end testing is still relevant. (See my response to the second comment for a detailed example, with code)
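To make that third point concrete, here is a minimal sketch of the pattern I'm describing: a thin Rake task that only delegates to a plain Ruby class. The names (ProductInventoryUpdater, products:update_inventory) and the in-memory product hashes are purely illustrative, standing in for real models; only the rake gem is assumed.

```ruby
require "rake"

# Hypothetical service class: all of the real work lives here, so it can be
# unit tested directly and reused from a controller, a console, or a script.
class ProductInventoryUpdater
  def initialize(products)
    @products = products
  end

  def run
    updated = @products.count { |p| p[:needs_update] }
    "Inventory updated for #{updated} products"
  end
end

# The Rake task becomes a thin shell that only delegates to the class.
Rake::Task.define_task("products:update_inventory") do
  products = [{ needs_update: true }, { needs_update: false }, { needs_update: true }]
  puts ProductInventoryUpdater.new(products).run
end

Rake::Task["products:update_inventory"].invoke
# prints "Inventory updated for 2 products"
```

Even with the logic extracted like this, the end-to-end spec below still applies unchanged: it invokes the task and matches on the output, without caring whether the body is raw code or a delegating call.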

Now that we've gotten that out of the way... :)

Like any feature, to be able to test them, we need to be able to initiate the process and check for an expected output. Let's start with an example: say we are building an e-commerce web application that has a nightly cron job to update the inventory for all of the products. You might expect the structure of the rake task to look something like this:

namespace :products do
  desc "Update the inventory for all products"
  task :update_inventory => :environment do
    #...code to update inventory goes here...
  end
end

Looks all too familiar, I'm sure. Now let's look at a feature spec to test this task.

require "rails_helper"
require "rake"

feature "crontab updates the inventory" do
  before do
    load Rails.root.join("lib/tasks/products.rake")
    Rake::Task.define_task(:environment)
  end

  after { Rake.application.clear }

  scenario "nightly at 12am" do
    create_list :product, 3, has_inventory: false
    task = Rake::Task["products:update_inventory"]

    expect {
      task.invoke
    }.to output(/Inventory updated for 3 products/).to_stdout
  end
end

Let's break down what's going on here: 

The first thing that is different from a standard feature spec is that we need to require the rake gem. This allows the Rake::Task to be instantiated and invoked. 

There are two hooks, a before and an after. The before hook loads the rake task under test into memory and defines the :environment task (that's what loads the Rails environment so that we can access our models and such). The after hook clears the rake task from memory, to ensure that subsequent tests do not inherit any state.
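The effect of that after hook can be seen outside of Rails entirely. This is a minimal sketch using only the rake gem, with a made-up :demo task standing in for the real one:

```ruby
require "rake"

# Define an :environment stand-in and a task that depends on it, just as the
# before hook does for the real spec.
Rake::Task.define_task(:environment)
Rake::Task.define_task(demo: :environment) { puts "running" }

puts Rake::Task.task_defined?("demo")  # true: the task is loaded in memory

# Clearing the application removes every defined task, so one spec's tasks
# cannot leak into the next.
Rake.application.clear
puts Rake::Task.task_defined?("demo")  # false: no state is inherited
```

Without the clear, a second spec that loads the same .rake file would define the task a second time, and Rake would run the accumulated actions together.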

Then we have the scenario that we're testing. Like any good test, it has the setup first, where it creates a list of 3 products (using FactoryGirl) and initializes the Rake::Task that we want to exercise as part of this test.

Since the rake task sends its output to STDOUT, we need to use RSpec's "expect with block" syntax. The block captures the output from the rake task and allows us to match against it. Finally, calling the #invoke method on the Rake::Task is what runs the task, just as if it had been run via the command line using the bundle exec rake products:update_inventory syntax.
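One gotcha worth knowing about #invoke: a Rake task only runs once per process, so invoking the same task a second time silently does nothing unless it has been reenabled (or re-loaded, which is what the before/after hooks accomplish between scenarios). A standalone sketch, with a made-up :inventory task for illustration:

```ruby
require "rake"

run_count = 0
Rake::Task.define_task(:inventory) { run_count += 1 }

task = Rake::Task["inventory"]
task.invoke            # runs the task (and any prerequisites)
task.invoke            # no-op: the task already ran in this process
task.reenable          # mark the task as not-yet-run
task.invoke            # runs again
puts run_count         # prints 2
```

In the spec above this isn't strictly needed, because the after hook clears the tasks and the before hook re-loads them for each scenario, but it matters if you ever invoke the same task twice within one example.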

Conceptually this is the same as the feature specs that we all know and love. In practice the real difference is that we're invoking a rake task instead of loading up a web page, and matching against STDOUT instead of against an HTTP response. 

Finally, we just need to write the code to make the test pass. A trivial implementation might look something like this:

namespace :products do
  desc "Update the inventory for all products"
  task :update_inventory => :environment do
    products_to_update = Product.where(has_inventory: false)
    updated = 0
    products_to_update.each do |product|
      if product.update_inventory
        updated += 1
      end
    end
    puts "Inventory updated for #{updated} #{'product'.pluralize(updated)}"
  end
end

A passing feature spec gives us a high-level guarantee that all of the pieces work together from end to end. It only covers the specific use case of the task and relies on the assumption that the Product class has its own unit tests to cover all possible cases. In the end, it gives us confidence that when crontab initiates the task, it will run as expected.

4 responses
I think this way you're testing something that you shouldn't be testing: rake. You can assume rake is working, as it has its own spec suite; therefore, you can wrap your rake task body in a service object (e.g. a `ProductInventoryUpdater`) with a single `#run` method, then test just the service object. Also, such a service object could be reused in many places - a controller, another script, and so on - so binding its test to rake is not so wise IMHO.
Hi Andrea, thanks for the comment. I think you're misunderstanding what the SUT is here. It's not Rake itself that's being tested. Consider it more like an integration test. A standard Rails integration test doesn't test Rails, it tests that all the pieces of the application work together. This type of testing is no different. It's testing a slice through the entire stack, not Rake. Your mention of service objects is more akin to what should be tested at the unit level. As an example, if you use a service object in a controller, you should still use an integration test to make sure that the entire application acts as expected. Just having unit tests on a service object is not enough to give you confidence that all the pieces of the application are working together. Hopefully that adds some clarity.
I agree with Andrea. Countless times developers add Rake tasks and put raw code in them. They don't necessarily write tests. But having that raw code in a Rake task means it can only be invoked with Rake. It also means you can't make any modifications at runtime, and it isn't reusable within your application. Putting all of that code into a service object makes it possible to: invoke that code with rake, use that code in a controller, run that code via `rails runner`, or manually invoke it from the Rails console. Additionally, if you invoke it via the Rails console, you can override instance methods in order to change its behavior. This is an especially useful feature, given the nature of many Rake tasks that are used to fix up data after a migration, or backpopulate a table, or something similar. Sometimes (esp. due to developers' tendency not to write specs for one-time tasks) once we get into production we realize we forgot something and need to fix it. Written as a Rake task, we must make the fix, commit, and redeploy. Written as a service object, we have the ability to make the change dynamically from a Rails console and employ the fix immediately, rather than going through a rushed but time-consuming deploy cycle. In production, of course, these minutes matter. That's why I always advocate that any Rake task be written first as a service object (which also makes them easier to test) and then only invoked by the new rake task.
Hey Jake, thanks for the comment! I wrote a whole response and accidentally deleted it. I'm going to try to do it justice and re-write an, albeit distilled, version of it.

I'm not advocating putting raw code into Rake tasks. That's not the point. The only reason there is code in the Rake task example is to illustrate the example clearly. My point still stands whether it's raw code or what you call a service object. All of my production Rake tasks use a class, something akin to what you call a service object.

We'll just jump straight into a somewhat contrived example. We have an application with a class called ProductInventoryUpdater, which has a single method called .run. The .run method is invoked in 2 places: a controller, in the update method; and a Rake task.

class InventoryController < ApplicationController
  def update
    ProductInventoryUpdater.run
  end
end

namespace :products do
  desc "Update the inventory for all products"
  task :update_inventory => :environment do
    ProductInventoryUpdater.run
  end
end

Question: would you write an integration test that exercises the controller and the ProductInventoryUpdater? If your answer is no, then we disagree about what is important to test in a Rails application. If your answer is yes, then we agree that integration tests that slice through the entire application are important.

Scenario: the .run method on the ProductInventoryUpdater gets renamed to .call. Both the controller and the rake task now fail. They're calling .run, but it doesn't exist anymore. Unit tests on the ProductInventoryUpdater class *will not* catch these errors. An integration test through the browser (i.e. a feature spec) will catch the error in the controller (granted, a controller spec would also catch it, but the whole point here is integration tests). Given your comment, the Rake task is untested. The change to the ProductInventoryUpdater causes it to fail, though your test suite remains green. CI passes and that bug gets deployed into production.

That's the context and the motivation for my initial post.
We agree that integration tests through the browser are important. I'm saying that integration tests are just as important when the UI is the command line.