Tuesday, 7 October 2014

Who’s Maintaining the 100 Foot View?

Last year I watched Michael Feathers give the keynote at Agile Cambridge 2013. It was another one of his Software Archaeology-based talks, and he touched on a few of the usual topics, such as Technical Debt and the quaint, old-fashioned notion of Big Design Up-Front (BDUF) via an all-encompassing UML model. We all chuckled at the prospect of generating our system from The Model and then “just filling in the blanks”.

Whilst I agree wholeheartedly with what he had to say, it got me thinking a bit more about the level of design that sits between the Architect and the Programmer. Sadly I only got to run my thoughts briefly past Michael as he wasn’t able to hang about. I think I got a “knowing nod of agreement”, but then I may also have been given the “I’m going to agree so that you’ll leave me alone” look too :-).

What I’ve noticed is that teams are often happy to think about The Big Picture and make sure that the really costly aspects are thought through, but less attention is paid to the design as we start to drill down into the component level. There might be a couple of big architecture diagrams hanging around that illustrate the overall shape of the system, but no medium or small diagrams that home in on the more “interesting” internal parts of the system.

In “Whatever Happened to UML?” I questioned why the tool fell out of favour, even just as a notational convenience, which is how I use it [1]. I find that once a codebase starts to acquire functionality, especially if done in a test-first manner, it is important to put together a few rough sketches to show how the design is evolving. Often the act of doing this is enough to point out inconsistencies in the design, such as a lack of symmetry in a read/write hierarchy or a bunch of namespaces that perhaps should be split out into a separate package.
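
As a sketch of the first of those smells (all the type names here are invented for illustration, not taken from any real codebase), this is the kind of asymmetry that drawing the hierarchy tends to expose: the read side has grown into a proper polymorphic family while writing is still funnelled through one catch-all class.

    using System.IO;

    public class Document { }

    // Reads have evolved into a polymorphic hierarchy, one reader per format...
    public abstract class DocumentReader
    {
        public abstract Document Read(Stream input);
    }

    public class XmlDocumentReader : DocumentReader
    {
        public override Document Read(Stream input) { /* parse XML */ return new Document(); }
    }

    public class CsvDocumentReader : DocumentReader
    {
        public override Document Read(Stream input) { /* parse CSV */ return new Document(); }
    }

    // ...but writes still go through a single class that switches on the
    // format internally. Sketching both sides makes the missing writer
    // hierarchy jump out long before it becomes a maintenance headache.
    public class DocumentWriter
    {
        public void Write(Document document, Stream output, string format) { /* switch on format */ }
    }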

In C# the class access modifier “internal” is worth nothing if you bung all your code into a single assembly. Conversely, having one assembly per namespace is a different kind of maintenance burden, so the sweet spot is somewhere in between. I often start with namespaces called “Mechanisms” and “Remote” in the walking skeleton, which are used for technical bits-and-bobs and proxies respectively. At some point they will usually be split off into separate assemblies to help enforce the use of “internal” on any interfaces or classes. Similar activities occur for large clumps of business logic when it’s noticed that the common dependencies between them are getting thin on the ground, i.e. the low cohesion can be made clearer by partitioning the codebase further.
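
A minimal sketch of what that split buys you (the assembly, interface and class names are my own inventions): while everything lives in one assembly “internal” is effectively public, but once Mechanisms becomes its own assembly the implementation genuinely disappears from view.

    using System.Collections.Generic;

    // Compiled into its own assembly, e.g. a hypothetical MyApp.Mechanisms.dll.
    namespace MyApp.Mechanisms
    {
        // The contract stays public so the rest of the system can code against it.
        public interface ICache
        {
            object Get(string key);
            void Set(string key, object value);
        }

        // Internal: in a single monolithic assembly this modifier changes
        // nothing, but after the split no other package can see or
        // instantiate the implementation directly.
        internal class SimpleCache : ICache
        {
            private readonly Dictionary<string, object> entries = new Dictionary<string, object>();

            public object Get(string key) { return entries[key]; }
            public void Set(string key, object value) { entries[key] = value; }
        }

        // A public factory is the one sanctioned way to obtain an instance.
        public static class Caches
        {
            public static ICache Create() { return new SimpleCache(); }
        }
    }

If the unit tests still need to reach those internal types, C#’s [assembly: InternalsVisibleTo(…)] attribute lets a named test assembly in without having to widen the modifier for everyone else.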

To me refactoring needs to happen at all levels in the system - from architecture right down to method level. Whilst architectural refactorings have the potential to be costly, especially if some form of data migration is required, the lower levels can usually be done far more cheaply. Moving code around, either within a namespace in the same package or by splitting it off into separate packages, should be fairly painless if the code was well partitioned in the first place and the correct access modifiers (i.e. internal and private) were used and adhered to.
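
To make the “fairly painless” claim concrete, here’s a hedged example (again with invented names): because callers only ever depended on the public interface, lifting the Remote namespace out into its own assembly changes a project reference and nothing in the calling code.

    // This namespace can move verbatim into its own assembly; the internal
    // proxy means no caller could have taken a direct dependency on it, so
    // the move cannot break anyone.
    namespace MyApp.Remote
    {
        public interface IOrderService
        {
            decimal GetTotal(int orderId);
        }

        internal class HttpOrderServiceProxy : IOrderService
        {
            public decimal GetTotal(int orderId) { /* remote call elided */ return 0m; }
        }
    }

    // The calling code is untouched by the move: the using directive and the
    // interface stay the same, only the reference in the project file changes.
    namespace MyApp.Billing
    {
        using MyApp.Remote;

        public class Invoicer
        {
            private readonly IOrderService orders;

            public Invoicer(IOrderService orders) { this.orders = orders; }

            public decimal Total(int orderId) { return orders.GetTotal(orderId); }
        }
    }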

And yet I see little of this kind of thinking going on. What worries me is that in the rush to embrace being “agile” and to adhere to the mantra of doing “the simplest thing that could possibly work”, we’ve thrown the proverbial baby out with the bath water. In our desire to distance ourselves from being seen to be designing far too much up front, we’ve lost the ability to even design in the small as we go along.


[1] Interestingly, Simon Brown, in his talk at Agile on the Beach 2014 (Agility and the essence of software architecture), questioned whether there was any real value even in the UML notation as a common convention. It’s a good point, and I guess as long as you make it clear whether it’s a dependency diagram or a data-flow diagram, you’ll know what the arrowheads correspond to.
