Monday 8 October 2012

Putting the Cost of Change Into Perspective

One of the topics in Programming Pearls by Jon Bentley that I still find most valuable on a daily basis is the Back of the Envelope Calculation. Often just a rough calculation is enough to show you the orders of magnitude involved in a task and to let you decide whether to categorise it as “small stuff” that’s not worth sweating about or as “potentially costly” and therefore deserving of more thought.
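
To put some purely illustrative numbers on that: renaming a few badly named types might cost 15 minutes, while a fortnightly release cycle contains roughly 10 days × 8 hours × 60 ≈ 4,800 working minutes, so the tidy-up is around 0.3% of the cycle - firmly in the “small stuff” camp. Run the same sum for, say, reworking a persistence layer and the answer comes out in days rather than minutes, which is the point at which it deserves the extra thought.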

Refactoring is a good case in point. Sometimes there is a direct need to refactor code to allow you to make a change safely, such as when dealing with legacy code. But there is also the “boy scout” bit, where you might have a little extra time to do some litter picking and delete some unused code, or improve the names of some related types/methods. The conscientious programmer always feels uneasy about doing this because the value of the change is far less tangible, and it can therefore be seen by some as approaching the realm of gold-plating.

On a different, but vaguely related note, I read an article in the June 2012 edition of MSDN Magazine about the benefits of the new code-based configuration support in MEF. Now, I still struggle to see the advantages of an Inversion of Control (IoC) container, and this article did nothing to show me what significant savings I would make by adopting such a technology[*]. Anyway, about halfway through the piece the author suggests that if he needed to add support for “a new weather service provider” he could implement the service and then plumb the changes into the app with just a line or two of code. His use of “That’s It!” may well be intended to be ironic, because implementing most new functionality is clearly non-trivial in comparison to the mechanism needed to plumb it in; but it is not an isolated case of an IoC container being presented in the same light as a Silver Bullet.
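
For context, the kind of plumbing being described looks roughly like the sketch below. I’m assuming MEF’s RegistrationBuilder from .NET 4.5 here (which I believe is the code-based configuration the article covers), and names such as IWeatherProvider and MetOfficeProvider are entirely hypothetical - the point is only that the registration really is a line or two, while the provider itself is where the real work lives.

    using System;
    using System.ComponentModel.Composition.Hosting;
    using System.ComponentModel.Composition.Registration;

    // Hypothetical domain types - names invented purely for illustration.
    public interface IWeatherProvider
    {
        string GetForecast(string location);
    }

    // Implementing (and testing) this class is where the real effort goes.
    public class MetOfficeProvider : IWeatherProvider
    {
        public string GetForecast(string location)
        {
            // ...call the remote service, parse the response, map it to our model...
            throw new NotImplementedException();
        }
    }

    public static class Bootstrapper
    {
        // The "line or two" of plumbing the article is talking about.
        public static CompositionContainer Compose()
        {
            var builder = new RegistrationBuilder();
            builder.ForTypesDerivedFrom<IWeatherProvider>().Export<IWeatherProvider>();

            return new CompositionContainer(
                new AssemblyCatalog(typeof(MetOfficeProvider).Assembly, builder));
        }
    }

Pulling the provider back out is then just a call to container.GetExportedValue<IWeatherProvider>(), but, as the rest of this post argues, that convenience says nothing about the cost of everything else surrounding the new provider.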

What both of these cases have in common, though, is that the costs are often presented purely from the perspective of the “development time” of a feature - the time taken to “write the code” and, hopefully (at least), unit test it. But there can be so much more to making a feature not just done, but “done done”. Even if you’re using Continuous Integration and Continuous Deployment so that your build and deployment times are minimal, the system testing, coupled with the original discussions that led up to the decision to implement the feature, may cost orders of magnitude more than the time you’ll spend doing a little cleaning up or hand-crafting a factory[#].
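
A back-of-the-envelope tally makes the point. The numbers below are purely illustrative, but they are not wildly different from what I’ve seen on a typical in-house system:

    Discussions leading to the decision:        ~2 days
    Writing and unit testing the code:          ~1 day
    Build and automated deployment:             ~30 minutes
    System testing the release:                 ~1 day
    Hand-crafting a factory (or tidying up):    ~15 minutes

On numbers like these the factory or the clean-up comes in at well under 1% of the end-to-end cost, which is exactly the kind of ratio the rough sum is there to expose.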

Of course, no change is made in isolation, and so it could be said that the build, deployment and system testing costs are shared across all the features currently being developed. Even then I still think doing the maths is worth it, as it may well show that the effort is just a drop in the ocean. Where you do need to be more careful is in holding up a build & deployment to get “one more tiny change” in, as this can be more disruptive than productive. It also suggests your process is probably not as fluid as it could be.

By way of an opposite example, a project manager once asked me to look into making it easier to add support for new calculation types. In his mind it should be simple to configure the inputs, parse the output and then persist it without needing a code change; and in essence he was right. We could have re-designed the code to allow the whole process to be “more dynamic” - swapping code complexity for configuration complexity. My personal objection, though, was not on technical grounds (it sounded entirely useful, after all) but on the grounds that we spent most of our time elsewhere: trying to work out why the structure of the inputs and outputs didn’t match our expectations, having a few round trips with our 3rd party as we pointed out the variety of cases where it failed, and writing test cases to make sure we could detect when that structure changed unexpectedly again in the future. In other words, changing the production code was just a small part of the overall cost of getting the entire feature into production.
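
To make the trade-off concrete, the kind of re-design being floated would have looked something like the sketch below. Everything here is hypothetical - the names, the shape of the configuration and the seams onto the 3rd party and the persistence layer are invented purely to illustrate what swapping code complexity for configuration complexity might mean in practice.

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical description of a calculation, loaded from configuration.
    public class CalculationDefinition
    {
        public string Name { get; set; }                            // e.g. "BondYield"
        public List<string> Inputs { get; set; }                    // fields to send to the service
        public Dictionary<string, string> OutputMap { get; set; }   // result field -> storage column
    }

    // Hypothetical seams onto the 3rd party service and our persistence layer.
    public interface ICalculationService
    {
        IDictionary<string, object> Calculate(string name, IDictionary<string, object> inputs);
    }

    public interface IResultStore
    {
        void Save(string calculation, string column, object value);
    }

    // One generic runner replaces a per-calculation code change.
    public class CalculationRunner
    {
        private readonly ICalculationService _service;
        private readonly IResultStore _store;

        public CalculationRunner(ICalculationService service, IResultStore store)
        {
            _service = service;
            _store = store;
        }

        public void Run(CalculationDefinition definition, IDictionary<string, object> marketData)
        {
            // Gather only the configured inputs, invoke the service, then
            // persist only the outputs the configuration asked for.
            var inputs = definition.Inputs.ToDictionary(name => name, name => marketData[name]);
            var results = _service.Calculate(definition.Name, inputs);

            foreach (var mapping in definition.OutputMap)
                _store.Save(definition.Name, mapping.Value, results[mapping.Key]);
        }
    }

Adding a new calculation type then becomes a new CalculationDefinition entry in configuration rather than a code change - which is precisely why the idea sounded useful, and precisely why it addressed so little of where the time actually went.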

I guess the flipside to all this is the old saying that if you take care of the pennies, the pounds will take care of themselves - after all, nobody goes out of their way to be inefficient on purpose. Of course, if it’s the same penny every time then the usual rules about automation apply. Either way, just be sure that you know what it is you’re giving up in return - the lunch will almost certainly not be free and the best-before date will eventually arrive.

 

[*] I’ve worked on both desktop applications and back-end systems and I still don’t see the allure. But I’ve not worked on a modern large-scale web site or a heavily “screens”-based desktop app, so that may explain it. My only real experience is helping someone remove their dependency on one as a by-product of making their code more unit testable, so I’m definitely not “feeling” it. However, it’s still lodged in my Conscious Incompetence, ready to be put to good use when/if that day finally comes.

[#] That’s one of the things about IoC containers that seems snake-oil-like to me - the age-old GoF design patterns such as Abstract Factory, Factory Method & Facade are trivial to implement and have the added bonus of not forcing any of my code to take a direct dependency on a framework. One of the techniques I used in the work mentioned in the footnote above, when improving unit test coverage, was a Facade to factor out the IoC code; a rough sketch of the idea follows.
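
To be clear about what I mean by that, here is a minimal sketch of the shape of it, reusing the hypothetical IWeatherProvider and MetOfficeProvider from the earlier sketch. The names (IProviderFactory and friends) are again invented for illustration; the point is that the application depends only on a small interface it owns, while the single class behind it is free to delegate to a container, or to hand-crafted construction, without the rest of the codebase knowing.

    using System.ComponentModel.Composition.Hosting;

    // The Facade the application code depends on - it owns this interface,
    // so nothing else needs to reference the container framework.
    public interface IProviderFactory
    {
        IWeatherProvider CreateWeatherProvider();
    }

    // One implementation delegates to the IoC container...
    public class ContainerBackedFactory : IProviderFactory
    {
        private readonly CompositionContainer _container;

        public ContainerBackedFactory(CompositionContainer container)
        {
            _container = container;
        }

        public IWeatherProvider CreateWeatherProvider()
        {
            return _container.GetExportedValue<IWeatherProvider>();
        }
    }

    // ...while another wires the object graph up by hand, which is often
    // all a hand-crafted Abstract Factory amounts to.
    public class HandCraftedFactory : IProviderFactory
    {
        public IWeatherProvider CreateWeatherProvider()
        {
            return new MetOfficeProvider();
        }
    }

In the unit tests a stub implementation of IProviderFactory stands in for either of these, which is the testability benefit this footnote alludes to.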
