Saturday 19 January 2013

Layered Builds

The builds we do on our desktop are optimised for speed. We want that cycle of write test, write code, build, run test to be as fast as possible. In compiled languages such as C++, C# and Java we can’t do a “clean” build every time as we’d spend most of our time waiting whilst files are deleted and the same libraries are built over and over again. The downside is that we can occasionally waste time debugging what we think is a problem with our code, only to discover that it was a bad build[1].

In contrast, when we’re producing a build on the build server, no doubt via a Continuous Integration (CI) server, we want reliability and repeatability above all else. At least, we do for the build that produces the final artefacts we’re going to publish and/or deploy to our customers.

Build Sequence

As a rule of thumb, these are the high-level steps that any build (of any type of target) generally goes through (a sketch of a script that chains them together follows the list):-

  1. Clean the source tree
  2. Compile the binaries
  3. Run the various test suites
  4. Package the deployment artefacts
  5. Publish the artefacts
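
To put some flesh on the bones, here’s a minimal sketch, in Python, of a top-level script that might chain those steps together. The solution name and the helper scripts (clean.py, run_tests.py, package.py, publish.py) are purely illustrative.

```python
import subprocess

def run(cmd):
    """Run a single build step and stop the pipeline if it fails."""
    print(">> " + " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_pipeline():
    run(["python", "clean.py"])                                   # 1. clean the source tree
    run(["msbuild", "Product.sln", "/p:Configuration=Release"])   # 2. compile the binaries
    for suite in ("unit", "component", "integration"):
        run(["python", "run_tests.py", suite])                    # 3. run the test suites
    run(["python", "package.py"])                                 # 4. package the artefacts
    run(["python", "publish.py"])                                 # 5. publish them

if __name__ == "__main__":
    build_pipeline()
```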

For repeatable builds you need to start with a clean slate. If you can afford it, that means starting from an empty folder and pulling the entire source tree from the version control system (VCS). However, if you’re using something like ClearCase with snapshot views[2] that could take longer than the build itself! The alternative is to write a “clean” script that runs through the source tree deleting every file not contained in the VCS, such as .obj, .pdb, .exe, etc. Naturally you have to be careful about what you delete, but at least you have the VCS as a backup whilst you develop it.
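
As an illustration, a bare-bones clean script might look something like the sketch below; the list of extensions is just an example and you’d want to grow and test it carefully against your own tree.

```python
import os

# Build detritus we're happy to delete; this list is illustrative, not exhaustive.
JUNK_EXTENSIONS = {".obj", ".pdb", ".exe", ".dll", ".ilk", ".idb"}

def clean(root):
    """Walk the source tree and remove anything that looks like build output."""
    for folder, _, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in JUNK_EXTENSIONS:
                path = os.path.join(folder, name)
                print("deleting " + path)
                os.remove(path)

if __name__ == "__main__":
    clean(".")
```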

Once the slate is clean you can invoke the build to create your binaries (if your language requires them). For a database you might build it from scratch or apply deltas to a copy of the current live schema. Once that’s done you can start running your various test suites. There is a natural progression from unit tests, which have the fewest dependencies, through component tests to integration tests. The run time of each suite, which may be determined by the work required to get its dependencies in play, will be a factor in how often you run it.
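
For the database case, the “apply deltas” approach might look something like the following sketch; the use of sqlcmd, the trusted connection and the folder layout are all assumptions for the example.

```python
import glob
import subprocess

def apply_deltas(database, scripts_dir):
    """Apply the schema change scripts, in order, to a copy of the live database."""
    # Relies on the scripts sorting lexically, e.g. 001-create-customers.sql, 002-add-index.sql
    for script in sorted(glob.glob(scripts_dir + "/*.sql")):
        subprocess.run(["sqlcmd", "-E", "-d", database, "-i", script], check=True)
```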

With the tyres well and truly kicked you can move on to packaging the artefacts up for formal testing and/or deployment to the customer. The final step is then to formally publish the packages, such as by copying them to a staging area ready for deployment. You’ll probably also keep a copy in a safe place as a record of the actual product that was built.
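
A minimal sketch of those last two steps, assuming a zip archive and a couple of invented network shares for the staging area and the permanent record:

```python
import shutil

def package_and_publish(version):
    """Zip up the build output, push it to the staging area and keep a copy for the record."""
    archive = shutil.make_archive("Product-" + version, "zip", "build/output")
    shutil.copy(archive, r"\\fileserver\staging")   # ready for deployment/formal testing
    shutil.copy(archive, r"\\fileserver\archive")   # the safe copy of what was actually built
```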

There will of course be many other little steps, like publishing symbols to your symbol server and generating documentation from the source code, and these will sit in and around the other major steps. How often you run these other steps may well be determined by how much grunt your build server has. If you’re lucky enough to have a dedicated box you might churn through as much of this as possible every time, but if all you’re allowed is a Virtual Machine, where the hardware is shared with a dozen other VMs, you’ll have to pick and choose[3]. And this is where layered builds come in.

Continuous Build

As I described earlier, your desktop build and test run will likely cut corners on the basis that you want the fastest feedback you can get whilst you’re making your changes. Once you’re ready to publish them (e.g. to an integration branch) you integrate your colleagues’ latest changes, build and test again and then commit.

At this point the CI server takes over. It has more time than you do so it can wipe the slate clean, pull all the latest changes, build everything from scratch and then run various tests. The main job of the Continuous Build is to watch your back. It makes sure that you’ve checked everything in correctly and not been accidentally relying on some intermediate build state. Finally it can run some of your test suites. How many and of what sort depends on how long they take and whether any other subsystem dependencies will be in a compatible state (e.g. services/database).
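
By way of illustration, here’s a rough sketch of what that continuous build job might do. The Subversion checkout, the solution name and the helper script are assumptions, and the slower suites are deliberately left for a later layer.

```python
import shutil
import subprocess

def continuous_build(workspace, repo_url):
    """Wipe the slate clean, pull the latest sources, build from scratch, run the quicker suites."""
    shutil.rmtree(workspace, ignore_errors=True)
    subprocess.run(["svn", "checkout", repo_url, workspace], check=True)
    subprocess.run(["msbuild", "Product.sln", "/p:Configuration=Release"],
                   cwd=workspace, check=True)
    # Only the cheaper suites run here; the integration tests wait for the deployment build.
    for suite in ("unit", "component"):
        subprocess.run(["python", "run_tests.py", suite], cwd=workspace, check=True)
```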

The trade-off is that the more tests you run, the longer the feedback cycle between builds (if you’ve only got a single build agent). Ideally the build server shouldn’t be a bottleneck, but sadly it’s not the kind of thing the bean counters always understand is essential. One corner you can choose to cut is, say, only doing a debug or release build and therefore only running those tests. Given that developers normally work with debug builds it makes sense to do the opposite, as the release build is what you’re going to deliver in the end.

Deployment Build

Next up is the deployment build. Whereas the continuous build puts the focus on the development team, the deployment build looks towards what the external testers and the customer ultimately need. Depending on what your deliverables are, you’ll probably be looking for that final peace of mind before letting the product out of your sight. That means building whatever else you missed earlier and then running the remainder of your automated tests.

At this point the system is likely to go into formal testing (or release) and so you’ll need to make sure that your audit trail is in place. That means labelling the build with a unique stamp so that any bugs reported during testing or release can be attributed to an exact revision of the source code and packages. Although you should be able to pull down the exact sources used in the build to reproduce a logic problem, you might still have to deploy the actual package to a test machine if the problem could be with the build or packaging process.
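
As a sketch of what that stamping might involve, assuming Subversion and an invented version numbering scheme, the build could be tagged with a label that ties the packages back to the exact revision:

```python
import subprocess

def label_build(build_number):
    """Tag the sources with a label that ties the packages back to the exact revision."""
    revision = subprocess.run(["svnversion", "."], check=True,
                              capture_output=True, text=True).stdout.strip()
    label = "1.0.{0}.{1}".format(build_number, revision)
    subprocess.run(["svn", "copy", "^/trunk", "^/tags/build-" + label,
                    "-m", "Label build " + label], check=True)
    return label
```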

You may still choose to cut some corners at this point, or have a set of automated tests that you simply cannot run because the other necessary subsystems are not part of the same build.

Full System Build

If the entire product contains many subsystems, e.g. database, services, client, etc., you probably partition your code and build process so that you can build and deploy each subsystem independently. Once a codebase starts to settle down and the interfaces are largely fixed you can often get away with deploying just one part of the overall product to optimise your system testing.

The one thing you can’t do easily when your codebase is partitioned into large chunks is run automated tests against the other subsystems if they are not included within the same build. Each commit to an integration branch should ideally be treated as atomic, even if it crosses subsystems (e.g. database and back-end services)[4], so that both sides of the interfaces are compatible. If you’ve built each subsystem from the same revision and they all pass their own test suites then you can reliably test the connections between them. For example, the database that you’ve just built and unit tested can be reused to run the tests that check the integration between any subsystems that talk to it.
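
A sketch of that layering, with the subsystem list and the helper scripts invented for the example:

```python
import subprocess

def full_system_build(revision):
    """Build and test each subsystem from the same revision, then test the joins between them."""
    for subsystem in ("database", "services", "client"):
        # Each subsystem builds itself and runs its own unit/component suites first.
        subprocess.run(["python", "build.py", subsystem, "--revision", revision], check=True)
    # Only now, with every artefact coming from one revision, run the tests that cross
    # the seams, e.g. reuse the freshly built database for the services' integration tests.
    subprocess.run(["python", "run_tests.py", "system", "--revision", revision], check=True)
```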

My 2012 ACCU conference presentation “Database Development Using TDD” has some slides near the end, in the Continuous Integration & Deployment section, that show what this looks like.

Further Reading

Roy Osherove is currently putting together a book called Beautiful Builds and has been posting some useful build patterns on his blog.

 

[1] Ah, “Incremental Linking” and the “Edit & Continue” feature of Visual C++, now there’s something I turn off by default as it has caused me far too much gnashing of teeth in the past. OK, so it was probably fixed years ago, but just as I always turn on /W4 /WX for a new project, I make sure everything ever known to botch builds and crash VC++ is turned off too.

[2] Dynamic views aren’t suitable for repeatable builds as by their very nature they are dynamic and you can pick up unintentional changes or have to force a code freeze. With a snapshot view you get to control when to update the source tree and you can also be sure of what you’re labelling. The alternative would be to adopt a Branch For Release policy and then use due diligence (i.e. code freeze again) to not update the branch when a build is in progress. Personally that sounds a little too volatile and disruptive.

[3] I discussed this with Steve Freeman briefly at the ACCU conference a few years ago and he suggested that perhaps you should just keep performing a full build every time with the expectation that there will be some lag, but then you can always deploy the moment another build pops out. I’d like to think that everyone commits changes with the Always Be Ready to Ship mentality but I’d be less trusting of an intraday build on a volatile branch like the trunk.

[4] When the team is split along technology lines this becomes harder as you might be forced to use Feature/Task Branches to allow code sharing, or your check-ins become non-atomic as you “pass the baton”.
