Tuesday, 11 November 2014

Relentless Refactoring

During the first decade of my programming career refactoring wasn’t “a thing”, so to speak [1]. Obviously the ideas behind writing maintainable code and keeping it in tip-top shape were there, but mostly as an undercurrent. As a general rule “if it ain’t broke, don’t fix it” was a much stronger message and so you rarely touched anything you weren’t supposed to. Instead you spent time as an “apprentice” slowly learning all the foibles of the codebase so that you could avoid the obvious traps. You certainly weren’t bold enough to suggest changing something to reflect a new understanding. One major reason for this mindset was almost certainly the distinct lack of automated tests watching your back.

No Refactoring

The most memorable example I have of this era is a method called IsDataReadyToSend(). The class was part of an HTTP tunnelling ISAPI extension which supported both real-time data, through abuse of the HTTP protocol, and later a more traditional polling method. At some point this method, which was named in the query style (and that generally implies no side effects), suddenly gained new behaviour: in certain circumstances it sent data too!
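
To show the kind of trap this sets, here’s a minimal sketch of the smell; the class and its members are invented for illustration, only the method name comes from the real code:

    #include <string>

    class Connection
    {
    public:
        // Named like a side-effect-free query...
        bool IsDataReadyToSend()
        {
            if (m_pending.empty())
                return false;

            // ...but in real-time mode it actually transmits the data!
            if (m_realTimeMode)
            {
                SendData(m_pending);
                m_pending.clear();
            }

            return true;
        }

    private:
        void SendData(const std::string& data);

        std::string m_pending;
        bool m_realTimeMode;
    };

Any caller innocently checking whether data is ready may have just caused a send, which is precisely the sort of cognitive dissonance described next.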

I didn’t write this method. I had to badger the original author, after being bitten by the cognitive dissonance for the umpteenth time, to change it to better reflect the current behaviour. Of course, in retrospect, the odour was so strong that it needed more than a name change to fix the real design problem, but we were already haemorrhaging time due to HTTP weirdness as it was.

It seems funny now to think that I didn’t just go in and change this myself, but you didn’t do that back then. Changing “somebody else’s” code was seen as disrespectful, especially when that person was still actively working on the codebase, even if you had worked with them for years.

Invisible Refactoring

As I became aware of the more formal notion of refactoring around the mid-2000s I still found myself in an uncomfortable place. It was hard enough to convince “The Powers That Be” that fixing code problems highlighted by static code analysis tools was a worthy pursuit, even though they could be latent bugs just waiting to appear (see “In The Toolbox – Static Code Analysis”). If they didn’t see the value in that, then changing the structure of the code just to support a new behaviour would be seen as rewriting, and that was never going to get buy-in as “just another cost” associated with making the change. The perceived risks were almost certainly still down to a lack of automated testing.

Deciding to do “the right thing” will only cause you pain if you do not have some degree of acceptance from the project manager. My blog posts from a few years ago pretty much say it all in their titles about where I was at that time: “Refactoring – Do You Tell Your Boss” and “I Know the Cost but Not the Value”. These were written some time later, on the back of a culture shift that appeared to go from “refactoring is tolerated” to “stop with all this refactoring and just deliver some features, now”, despite the fact that we were hardly doing any and there was so much we obviously didn’t understand about what we were building.

Prior to that project I had already started to see the benefits when I took 3 days out to do nothing but rearrange the header files in a legacy C++ system, just so that I could make a tiny change to one of the lowest layers [2]. Up to that point it was impossible to test even a tiny change without having to connect to a database and the message queuing middleware! The change I had to make was going to be tortuous unless I could make some internal changes and factor out some of the singletons.
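
The gist of that exercise, as footnote [2] describes, was something like the following sketch; all the names here are hypothetical stand-ins for the real classes:

    // Record.h / Key.h - value types, details elided for the sketch.
    struct Record { };
    struct Key { };

    // IDatabase.h / IMessageQueue.h - pure abstract base classes (aka
    // interfaces) standing in for the concrete dependencies.
    class IDatabase
    {
    public:
        virtual ~IDatabase() {}
        virtual Record Fetch(const Key& key) = 0;
    };

    class IMessageQueue
    {
    public:
        virtual ~IMessageQueue() {}
        virtual void Send(const Record& record) = 0;
    };

    // PriceFeed.h - a hypothetical low-level class. Forward declarations
    // replace the old #includes of the concrete classes, so including
    // this header no longer drags in the database client or the message
    // queuing middleware.
    class IDatabase;
    class IMessageQueue;

    class PriceFeed
    {
    public:
        PriceFeed(IDatabase& database, IMessageQueue& queue);
        void Publish();

    private:
        IDatabase& m_database;
        IMessageQueue& m_queue;
    };

    // PriceFeed.cpp - the implementation depends only on the interfaces.
    PriceFeed::PriceFeed(IDatabase& database, IMessageQueue& queue)
        : m_database(database), m_queue(queue)
    {
    }

    void PriceFeed::Publish()
    {
        m_queue.Send(m_database.Fetch(Key()));
    }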

Deciding to embark on such a pursuit, which I did not believe would take 3 whole days when I started, was incredibly hard. I knew that I was going to have to lie to my manager about my progress, and that felt deeply uncomfortable, but I also knew it would pay huge dividends in the end. And it did. Once I could get unit tests around the code I was going to change I found 5 bugs in the original implementation. My change then became trivial and I could unit test the hell out of it. This was hugely significant because testing at the system scale could easily have taken well over a week: the change was right in the guts of the system and would have required a number of services to be running. My code change integrated first time and I was spared the thought of trying to debug it in situ.
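
A sketch of what those unit tests could then look like, reusing the hypothetical names from the sketch above and a bare assert in place of whatever test framework was to hand:

    #include <cassert>

    // Hand-rolled fakes satisfy the interfaces without touching any
    // real infrastructure.
    class FakeDatabase : public IDatabase
    {
    public:
        virtual Record Fetch(const Key& /*key*/)
        {
            return Record(); // canned test data
        }
    };

    class FakeMessageQueue : public IMessageQueue
    {
    public:
        FakeMessageQueue() : m_count(0) {}
        virtual void Send(const Record& /*record*/) { ++m_count; }
        int MessageCount() const { return m_count; }
    private:
        int m_count;
    };

    void TestPublishSendsOneMessage()
    {
        FakeDatabase database;          // no database connection needed
        FakeMessageQueue queue;         // no middleware needed either
        PriceFeed feed(database, queue);

        feed.Publish();

        assert(queue.MessageCount() == 1);
    }

    int main()
    {
        TestPublishSendsOneMessage();
        return 0;
    }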

Those 3 days of hard slog also paved the way for unit testing to become a viable proposition for the infrastructure aspects of the codebase. Sadly, the way COM was overused meant there was still a long way to go before the rest of the stack could be isolated (see “Don’t Rewrite, Refactor”).

Visible Refactoring

After hiding what I had been doing for a few years it was nice to finally be open about it. The project that had taken a turn for the worse slowly recovered once it had gone live, and the technical debt was in desperate need of being repaid. However, this was really about architectural refactoring rather than refactoring in the small. Even so, the effects on the design of not doing it were so apparent that refactoring became tolerated again, and so we could openly talk about changing the design to iterate it towards providing the basis for some of the features that we knew were important and still on the backlog. This is far less desirable than just doing the work when it’s actually required, but at least it meant we could make forward progress without being made to feel too guilty.

Relentless Refactoring

The last few projects I’ve worked on have taken refactoring to a new level (for me): it’s no longer something to feel guilty about; it’s now just part of the cost of implementing a feature. As our understanding of the problem grows, so the design gets refined. If the tests start to contain too many superfluous details they get changed. The build process gets changed. Everything is up for change; the only brake on refactoring is our own need to deliver real functionality, and therefore direct value, first, without letting the codebase drift too far into debt. When everything “just works” the speed at which you can start to turn features around becomes immense.

Because we’re transparent about what we’re doing, our clients are completely aware of what we’re up to. More importantly, they have the ability to question our motives and to channel our efforts appropriately should the schedule dictate more important matters. I don’t have any empirical evidence that what we’re doing ensures we deliver better quality software, but the codebase sure feels far more friction-free than most others I’ve ever worked on, and I don’t believe that is solely down to the quality of the people.

Today I find myself readily tweaking the names of interfaces, classes, methods, test code, etc., all in an attempt to ensure that our codebase most accurately reflects the problem as we understand it today. My most recent post on this topic was “Refactoring Or Re-Factoring”, which shows how I believe the message behind this technique has changed over time. In my new world order a method like IsDataReadyToSend() just wouldn’t be given the time to gain a foothold [3], because I now know what a mental drain that can be and, quite frankly, I have far more important things my customer would like me doing.


[1] This was the mid-1990s, so it probably was “a thing” to the likes of Beck, Fowler, etc., but it wasn’t a mainstream practice.

[2] This was largely an exercise in replacing #includes of concrete classes with forward declarations, but also introducing pure abstract base classes (aka interfaces) to allow some key dependencies to be mocked out, e.g. the database and message queue.

[3] Unless of course it’s made its way into a published interface.
