Thursday, 30 June 2016

Turning Technical Tasks Into User Stories

When working on a mature system it’s all too easy to fall into describing the work as a bunch of technical tasks. Those in the trenches who have been working on the system for years will no doubt know how to implement many of the features that come up, and so what happens is that they get put on the backlog as technical stories.

Whilst in the end everything will no doubt end up as a technical task to be done by a member of the team anyway, you don’t want to get to that point too quickly. The problem with putting technical stories straight on the backlog is that it often locks the product owner out of the development process due to its solution-oriented nature.

A common malaise in the software development industry is the weak excuse of “we’ve always done it this way”. Breaking features down into technical stories plays right into this way of thinking because there is a focus on trying to leverage the skills you already have without looking at the bigger picture first. Instead of just launching straight in you need to take a step back and survey the landscape to see if it’s changed since the last time you had a look around.


One of the things that can go out the window when you frame a story around the implementation is the ability of the product owner to prioritise. The developers, who know the ramifications of it not being done, can end up flannelling the product owner because they mistake the PO’s intent: the PO wants to discover “when”, not “if”, it will be done.

I remember a story at a company which basically described configuring a bunch of stuff. This was not an unusual task, and the developers knew what needed doing, but the highly technical nature of the story description sat uneasily with me. So, along with the team, I tried to see if we could elevate it to a user-focused story and eke out the essence of it.

I started by asking what would happen if this task wasn’t done. We had already identified plenty of waste elsewhere and I wanted to make sure this wasn’t also just waste. The conversation focused on the effects on the primary user (an advisor), but I eventually managed to get them to explain what it meant for the actual end user (i.e. the customer).

At this point the truth started to unfold. Firstly it became apparent that the behaviour enabled by this work might be invoked the day the feature went live, but that was highly unlikely because of the nature of the product (think warranty). Also, if it wasn’t done at go-live time the end user would still get a service, albeit a degraded level of service. There was probably even a manual way of mitigating the risk should the task not get completed before the feature went live and the unlikely scenario materialise.

At this point the light-bulb went on and the developers could now see that this work, whilst probably required in the longer term, was not essential now. What we had done by turning the technical task into a user story was unlock the product owner so that they could now evaluate the work in the context of the overall business plan and prioritise it appropriately, along with understanding the risk of later delivery.

This is a common problem when teams adopt a more agile approach. They assume that the same amount of work needs to be done, which may well be true, but at the very least the order in which it needs to be done could be radically different. Traditionally you get the product when it’s ready – all the value comes at the end – but an agile approach looks to unlock value sooner. To achieve that you need to elevate your thinking and look at the problem from the customer’s point-of-view. Essentially you need to be able to help the product owner weigh up the alternatives (“I Know the Cost but Not the Value”).

Problem Solving

The second issue I’ve noticed around the “we’ve always done it this way” mentality is that you end up with tasks that assume there is only one solution [1]. Once again it may turn out to be the eventual solution, but you owe it to yourselves to at least consider whether this time things might be different. Perhaps there are other constraints or changes that mean it’s not applicable this time around; just because we know one way of solving the problem does not imply it’s the best, or most cost-effective.

One example I had of this was a user story I saw which involved coding some help text into a system. Once again I tried to get the development team to view user stories not as an absolute specification of what to build, but more generally as a problem that needs to be solved. In particular I wanted them to start thinking about whether there were any low-tech or even non-IT related options.

During the discussion it became apparent that the primary user (once again an advisor) only needed the text whilst they were learning about the new feature, once they knew it the text would be redundant. My first suggestion was to just print the text out on some paper and leave it on their desks or stick it on the side of the monitor. When they were confident they knew it they could bin it or put it in their drawer for safe keeping.

Sadly there were some bizarre rules in place about not being allowed stuff on their desk. So instead I suggested pasting the text into an HTML page that was accessed from their desktop when they needed it. This was discarded for other reasons, but what it did do was cause someone in the team itself to mention another help system that the users accessed which was easier to update. The existence of this other, newer system, along with a whole raft of other nuggets of information about the users and the environment they worked in were unearthed through that one conversation. Consequently the team began to understand that their job was not to simply “write code” but was in fact to solve people’s problems through whichever means best satisfied the constraints.

I don’t know which solution was eventually chosen, but once again the wider conversation allowed the product owner to make a better judgement call about if, and when, it needed doing.

Training Wheels

Like so many aspects of adopting an agile development process this is just a technique, not a hard-and-fast rule. What I detected was a “smell” in each of these stories and so I made the team work hard to get to the crux of what really needed delivering. Like the classic story format “as a user, I want…” it’s all about trying to discover who wants what and why. Once you start thinking that way more naturally you can ditch the training wheels and use whatever techniques work best for the team. If you feel confident cutting to the chase sometimes then do it.

[1] “There's nothing more dangerous than an idea if it's the only one you have.” -- Émile Chartier

Friday, 24 June 2016

Confusing Assert and Throw

One of the earliest programming books I read that I still have a habit of citing fairly often is Steve Maguire’s Writing Solid Code. This introduced me to the idea of using “asserts” to find bugs where programming contracts had been violated and to the wider concept of Design by Contract. I thought I had understood “when and where” to use asserts (as opposed to signalling errors, e.g. by throwing an exception) but one day many years back a production outage brought home to me how I had been lured into misusing asserts when I should have generated errors instead.

The Wake Up Call

I was one of a small team developing a new online trading system around the turn of the millennium. The back end was written in C++ and the front-end was a hybrid Java applet/HTML concoction. In between was a custom HTTP tunnelling mechanism that helped transport the binary messages over the Internet.

One morning, not long after the system had gone live, the server hung. I went into the server room to investigate and found an assert dialog on the screen telling me that a “time_t value < 0” had been encountered. The failed assert had caused the server to stop itself to avoid any potential further corruption and wait in case we needed to attach a remote debugger. However all the server messages were logged and it was not difficult to see that the erroneous value had come in via a client request. Happy that nothing really bad had happened we switched the server back on, apologised to our customers for the unexpected outage, and I contacted the user to find out more about what browser, version of Java, etc. they were using.

Inside the HTML web page we had a comma-separated list of known Java versions that appeared to have broken implementations of the Date class. The applet would refuse to run if it found one of these. This particular user had an even more broken JVM version that was returning -1 for the resulting date/time value represented in milliseconds. Of course the problem within the JVM should never have managed to “jump the air gap” and cause the server to hang; hence the more important bug was how the bad request was managing to affect the server.

Whilst we had used asserts extensively in the C++ codebase to ensure programmatic errors would show up during development we had forgotten to put validation into certain parts of the service - namely the message deserialization. The reason the assert halted the server instead of silently continuing with a bad value was because we were still running a debug build as performance at that point didn’t demand we switch to a release build.

Non-validated Input

In a sense the assert was correct in that it told me there had been a programmatic error; the error being that we had forgotten to add validation on the message which had come from an untrusted source. Whereas something like JSON coming in over HTTP on a public channel is naturally untrusted, when you own the server, client, serialization and encryption code it’s possible to get lured into a false sense of security that you own the entire chain. But what we didn’t control was the JVM and that proved to be our undoing [1].
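The lesson can be sketched in C++ (the function names below are mine, not from the original system): data arriving from an untrusted source is validated and rejected with an error, whilst asserts are reserved for internal contract violations that should never happen once validation is in place.

```cpp
#include <cassert>
#include <ctime>
#include <stdexcept>

// Hypothetical sketch: validate untrusted input at the boundary by
// throwing, and reserve assert for internal contract violations.
std::time_t parseTimestamp(long long raw)
{
    // The value came from a client, so reject bad input explicitly;
    // an assert here would either be compiled out or halt the server.
    if (raw < 0)
        throw std::invalid_argument("timestamp must be non-negative");

    return static_cast<std::time_t>(raw);
}

void scheduleExpiry(std::time_t when)
{
    // Internal contract: callers must have validated the value already,
    // so a negative time_t here really is a programming error.
    assert(when >= 0);
    // ... use the value ...
}
```

Had the deserialization code taken the first shape rather than relying on an assert further in, the broken JVM would have produced a rejected request instead of a halted server.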

Asserts in C++ versus C#

In my move from C++ towards C# I discovered there was a different mentality between the two cultures. In C++ the philosophy is that you don’t pay for what you don’t use and so anything slightly dubious can be denounced as “undefined behaviour” [2]. There is also a different attitude towards debug and release builds where the former often performs poorly due to more rigorous checking around contract violations. In a (naïve) sense you shouldn’t have to pay for the extra checks in a release build because you don’t need them, having (in theory) removed any contract violations via testing with the debug build.
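The debug/release split shows up with the standard assert macro itself: compile with NDEBUG defined (the usual release setting) and the checks vanish entirely, so you pay nothing for them in production. A minimal sketch:

```cpp
#include <cassert>
#include <cstddef>

// In a debug build this assert fires on a contract violation; compile
// with -DNDEBUG (the usual release setting) and the check compiles
// away to nothing.
int sum(const int* values, std::size_t count)
{
    assert(values != nullptr || count == 0); // contract: no nulls unless empty

    int total = 0;
    for (std::size_t i = 0; i != count; ++i)
        total += values[i];
    return total;
}
```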

Hence it is common for C++ libraries such as STLport to highlight entering into undefined behaviour using an assert. For example iterators in the debug build of STLport are implemented as actual objects that track their underlying container and can therefore detect if they have been invalidated due to modifications. In a release build they might be nothing more than an alias for the built-in pointer type [3].

However in C# the contract tends to be validated in a build-agnostic manner through the use of checks that result in throwing an explicit ArgumentException / ArgumentNullException type exception. If no formal error checking is used up front then you’ll probably encounter a terser NullReferenceException or IndexOutOfRangeException when you try and dereference / index on the value sometime later.

Historically many of the assertions that you see in C++ code are validations around the passing of null pointers or index bounds. In C# these are catered for directly by the CLR and so it’s probably unsurprising that there is a difference in culture. Given that it’s (virtually) impossible to dereference an invalid pointer in the CLR, and dereferencing a null one has well-defined behaviour, there is not the same need to be so cautious about memory corruption caused by dodgy pointer arithmetic.

Design by Contract

Even after all this time the ideas behind Design by Contract (DbC) still do not appear to be very well known or considered useful. C# 4 added support for Code Contracts but I have only seen one codebase so far which has used them, and even then its use was patchy. Even the use of Debug.Assert as a very simple DbC mechanism never appears to be used, presumably because there is so little use made of debug builds in the first place (so they would never fire anyway).

Consequently I have tended to use my own home-brew version which is really just a bunch of static methods that I use wherever necessary:

Constraint.MustNotBeNull(anArgument, "what it is");

When these trigger they throw a ConstraintViolationException that includes a suitable message. When I first adopted this in C# I wasn’t sure whether to raise an exception or not, and so I made the behaviour configurable. It wasn’t long before I stripped that out along with the Debug.Assert() calls and just made the exception throwing behaviour the default. I decided that if performance was ever going to be an issue I would deal with it when I had a real example to optimise for. So far that hasn’t happened.
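For comparison, a similar always-enabled check could be sketched in C++ too (all the names here are made up): because it throws rather than asserting, the check survives a release build and cannot be silently compiled out.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical C++ analogue of the home-brew Constraint helper: an
// always-enabled check that throws instead of asserting.
class ConstraintViolationException : public std::logic_error
{
public:
    explicit ConstraintViolationException(const std::string& what)
        : std::logic_error(what) {}
};

template <typename T>
void mustNotBeNull(const T* value, const std::string& description)
{
    if (value == nullptr)
        throw ConstraintViolationException(description + " must not be null");
}
```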

Always Enabled

If I was in any doubt as to their value I got another reminder a while back when developing an in-house compute grid. I added a whole bunch of asserts inside the engine management code, which strictly speaking weren’t essential as long as the relationship between the node manager, engine and task didn’t change.

One day someone added a bit of retry logic to allow the engine to reconnect back to its parent manager, which made perfect sense in isolation. But now the invariant was broken as the node manager didn’t know about engines it had never spawned. When the orphaned task finally completed it re-established the broken connection to its parent and tried to signal completion of its work. Only it wasn’t the same work dished out by this instance of the manager. Luckily an internal assert detected the impedance mismatch and screamed loudly.

This is another example where you can easily get lulled into a false sense of security because you own all the pieces and the change you’re making seems obvious (retrying a broken connection). However there may be invariants you don’t even realise exist that will break because the cause-and-effect are so far apart.

Confused About the Problem

The title for this post has been under consideration for the entire time I’ve been trying to write it. I’ve twice thought about changing it to “Confusing Assert and Validation”, as essentially the decision to throw or not is about how to handle the failure of the scenario. But I’ve stuck with the title because it highlights my confused thinking.

I suspect that the reason for this confusion is because the distinction between a contract violation and input validation is quite subtle. In my mind the validation of inputs happens at the boundary of a component, and it only needs to be done once. When you are inside that component you should be able to assume that validation has already occurred. But as I’ve described earlier, a bug can allow something to slip through the net and so input validation happens at the external boundary, but contract enforcement happens at an internal boundary, e.g. a public versus internal class or method [4].
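That split might look something like this (a hedged C++ sketch; the class and method names are invented): the public entry point validates and throws, while the internal method merely asserts the contract it can now rely on.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical sketch: validation at the component's external boundary,
// contract enforcement (assert) at the internal one.
class MessageProcessor
{
public:
    void process(const std::string& payload)
    {
        // External boundary: input may come from anywhere, so validate
        // it and signal errors back to the caller.
        if (payload.empty())
            throw std::invalid_argument("payload must not be empty");

        processValidated(payload);
    }

private:
    void processValidated(const std::string& payload)
    {
        // Internal boundary: the public method has already validated,
        // so an empty payload here is a genuine bug.
        assert(!payload.empty());
        // ... actual work ...
    }
};
```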

If the input validation triggers, its significance is probably lower because the check is doing the job it was intended to do, but if a contract violation fires it suggests there is a genuine bug. Also, by throwing different exception types it’s easier to distinguish the two cases, and then any Big Outer Try Blocks can recover in different ways if necessary, e.g. continue vs shutdown.

Another way to look at the difference is that input validation is a formal part of the type’s contract and therefore I’d write tests to ensure it is enforced correctly. However I would not write tests to ensure any internal contracts are enforced (any more than I would write tests for private methods).


I started writing this post a few years ago and the fact that it has taken this long shows that even now I’m not 100% sure I really know what I’m doing :o). While I knew I had made mistakes in the past and I thought I knew what I was doing wrong, I wasn’t convinced I could collate my thoughts into a coherent piece. Then my switch to a largely C# based world had another disruptive effect on my thinking and succeeded in showing that I really wasn’t ready to put pen to paper.

Now that I’ve finally finished I’m happy where it landed. I’ve been bitten enough times to know that shortcuts get taken, whether consciously or not, and that garbage will end up getting propagated. I can still see value in the distinction of the two approaches but am happy to acknowledge that it’s subtle enough to not get overly hung up about. Better error handling is something we can always do more of and doing it one way or another is what really matters.

[1] Back then the common joke about Java was “write once, test everywhere”, and I found out first-hand why.

[2] There is also the slightly more “constrained” variant that is implementation-defined behaviour. It’s a little less vague, if you know what your implementation will do, but in a general sense you still probably want to assume very little.

[3] See “My Kingdom for a C# Typedef” for why this kind of technique isn’t used.

[4] The transformation from primitive values to domain types is another example, see “Primitive Domain Types - Too Much Like Hard Work?”.

Wednesday, 15 June 2016

Mocking Framework Expectations

I’ve never really got on with mocking frameworks as I’ve generally preferred to write my own mocks, when I do need to use them. This may be because of my C++ background where mocking frameworks are a much newer invention as they’re harder to write due to the lack of reflection.

I suspect that much of my distaste comes from ploughing through reams of test code that massively over specifies exactly how a piece of code works. This is something I’ve covered in the past in “Mock To Test the Outcome, Not the Implementation” but my recent experiences unearthed another reason to dislike them.

It’s always a tricky question when deciding at what level to write tests. Taking an outside-in approach and writing small, focused services means that high-level acceptance tests can cover most of the functionality. I might still use lower-level tests to cover certain cases that are hard to test from the outside, but as tooling and design practices improve I’ve felt less need to unit test everything at the lowest level.

The trade-off here is the ability to triangulate when a test fails. Ideally when one or more tests fail you should be able to home in on the problem quickly. It’s likely to be the last thing you changed, but sometimes the failure lies slightly further back and you’ve only just unearthed it with your latest change. Or, if you’re writing your first test for a new piece of code and it doesn’t go green, then perhaps something in the test code itself is wrong.

This latter case is what recently hit the pair of us when working on a simple web API. We had been fleshing out the API by mocking the back-end and some other dependent services with NSubstitute. One test failure in particular eluded our initial reasoning and so we fired up the debugger to see what was going on.

The chain of events that led to the test failure was ultimately the result of a missing method implementation on a mock. The mocking framework, instead of doing something sensible like throwing an exception such as NotImplementedException, chose to return the equivalent of default(T), which for a reference type is null.

While a spurious null reference might normally result in a NullReferenceException soon after, the value was passed straight into JArray.Parse() which helpfully transformed the bad value into a good one – an empty JSON array. This was a valid scenario (an empty set of things) and so a valid code path was then taken all the way back out again to the caller.

Once the missing method on the mock was added the code started behaving as expected.

This behaviour of the mock feels completely wrong to me. And it’s not just NSubstitute that does this; I’ve seen it at least once before with one of the older C# mocking frameworks. In my mind any dependency in the code must be explicitly catered for in the test. The framework cannot know what a “safe” default value is and therefore should not attempt to hide the missing dependency.
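By way of illustration, a hand-rolled mock can fail loudly when a method hasn’t been explicitly configured, rather than returning a default (a C++ sketch with invented names, not any framework’s actual API):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical hand-rolled mock: any method the test did not explicitly
// configure throws, instead of silently returning a default value.
class BackEnd
{
public:
    virtual ~BackEnd() = default;
    virtual std::string fetchOrders(const std::string& userId) = 0;
};

class MockBackEnd : public BackEnd
{
public:
    std::string cannedOrders;
    bool ordersConfigured = false;

    std::string fetchOrders(const std::string&) override
    {
        if (!ordersConfigured)
            throw std::logic_error("fetchOrders called but not configured");
        return cannedOrders;
    }
};
```

A test that forgets to set up the dependency then fails at the point of the omission, rather than sending a null off on a journey through the production code.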

I suspect that it’s seen as a “convenience” to help people write tests for code which has many dependencies. If so, then that to me is a design smell. If you have a dependency that needs to be satisfied but plays no significant role in the scenario under test, e.g. validating a security token, then create a NOP style mock that can be reused, e.g. an AlwaysValidIdentityProvider and configure it in the test fixture SetUp so that it’s visible, but not dominant.

A tangential lesson here is one that Sir Tony Hoare is personally burdened with, which is the blatant overloading of a null string reference as a synonym for “no value”. One can easily see how this can happen because of the way that String.IsNullOrEmpty() is used with scant regard for what a null reference should symbolise. My tale of woe in “Extension Methods Should Behave Like Real Methods” is another illustration of this malaise.

I did wonder if we’d shot ourselves in the foot by trying to test at too high a level, but I don’t believe so because you wouldn’t normally write unit tests for 3rd party components like a JSON parser. I might write characterisation tests for a 3rd party dependency such as a database API if I wanted to be sure that we really knew how it would handle certain types of data conversion or failure, or if we rely on some behaviour we might suspect is not clearly defined and could therefore silently change in a future upgrade. There did not feel like there should have been anything surprising in the code we had written and our dependencies were all very mature components.

Of course perhaps this behaviour is entirely intentional and I just don’t fit the use case, in which case I’d be interested to know what it is. If it’s configurable then I believe it’s a poor choice of default behaviour and I’ll add it to those already noted in “Sensible Defaults”.

Friday, 3 June 2016

Showcasing APIs – The Demo Site

Developers that work on front-ends have it easy [1]. When it comes to doing a sprint demo or showcasing your work you’ve got something glitzy to talk about. In contrast, writing a service API is remarkably dull.

At the bottom end of the spectrum you have good old reliable curl, if you want a command-line feel to your demo. Just above this, but still fairly dry by comparison with a proper UI, is using a GUI-based HTTP tool such as the Postman or Advanced REST Client extensions for Chrome. Either way it’s still pretty raw. Even tools like Swagger still put you in the mind-set of the API developer rather than the consumer.

If you’re writing a RESTful API then it’s not much effort to write a simple front-end that can invoke your API. But there’s more to it than just demoing which “feature” of an API you delivered, it has other value too.

Client Perspective

Certainly one of the hardest things I’ve found when working on service APIs is elevating yourself from being the implementer to being the consumer. Often the service I’m working on is one piece of a bigger puzzle and the front-end either doesn’t exist (yet) or is being developed by another team elsewhere, so collaboration is sparse.

When you begin creating an API you often start by fleshing out a few basic resource types and then begin working out how the various “methods” work together. To do this you need to start thinking about how the consumer of the API is likely to interact with it. And to do that you need to know what the higher-level use cases are (i.e. the bigger picture).

This process also leads directly into the prioritisation of what order the API should be built. For instance it’s often assumed the Minimum Viable Product (MVP) is “the entire API” and therefore order doesn’t matter, but it matters a lot as you’ll want to build stuff your consumers can plug-in as soon as it’s ready (even just for development purposes). Other details may not be fleshed out and so you probably want to avoid doing any part of it which is “woolly”, unless it’s been prioritised specifically to help drive out the real requirements or reduce technical risk.

A simple UI helps to elevate the discussion so that you don’t get bogged down in the protocol but start thinking about what the goals are that the customer wants to achieve. Once you have enough of a journey you can start to actually play out scenarios using your rough-and-ready UI tool instead of just using a whiteboard or plugging in curl commands at the prompt. This adds something really tangible to the conversation as you begin to empathise with the client, rather than being stuck as a developer.

This idea of consuming your own product to help you understand the customer’s point-of-view is known as “eating your own dog food”, or more simply “dog-fooding”. This is much easier when you’re developing an actual front-end product rather than a service API, but the demo UI certainly helps you get closer to them.

Exploratory Testing

I’ll be honest, I love writing tools. If you’ve read “From Test Harness To Support Tool” you’ll know that I find writing little tools that consume the code I’ve written in ways other than what the automated tests or end user needs is quite rewarding. These usually take the form of test tools (commonly system testing tools [2]), and as that post suggests they can take on a life later as a support or diagnostic tool.

Given that the remit of the demo site is to drive the API, it can therefore also play the role of a testing tool. Once you start having to carry over data from one request to the next, e.g. security token, cookie, user ID, etc. it gets tedious copying and pasting them from one request to the next. It’s not uncommon to see wiki pages written by developers that describe how you can do this manually by copy-and-pasting various steps. But this process is naturally rather error prone and so the demo site brings a little automation to the proceedings. Consequently you are elevated from the more tedious aspects of poking the system to focus on the behaviour.

This also gels quite nicely with another more recent post, “Man Cannot Live by Unit Testing Alone”, as I have used this technique to find some not-insignificant gaps in the API code, such as trivial concurrency issues. Whilst they would probably have been found later when the API was driven harder, I like to do a little poking around in the background to make sure we’ve got the major angles covered early on, while the code is still fresh in our minds.

This can mean adding things to your mocks that you might feel are somewhat excessive, such as ensuring any in-memory test data stores (i.e. dictionaries) are thread-safe [3]. But it’s a little touch that makes exploratory testing easier and more fruitful, as you don’t get blasé and just ignore bugs because you believe the mocks used by the API are overly simple. If you’re going to use a bit of exploratory testing to try and sniff out any smells, it helps if the place doesn’t whiff to begin with.

3rd Party Test UI

One of the problems of being just one of many teams in the corporate machine is that you have neighbours which you need to play nicely with. In the good old days everyone just waited until “the UAT phase” to discover that nothing integrated correctly. Nowadays we want to be collaborating and integrating as soon as possible.

If I’m building an API then I want to get it out there in the hands of the consumers as soon as possible so that we can bottom out any integration issues. And this isn’t just about fixing minor issues in the contract, but they should have input into what shape it should take. If the consumers can’t easily consume it, say for technical reasons, then we’ve failed. It’s too easy to get bogged down in a puritanical debate about “how RESTful” an API is instead of just iterating quickly over any problems and reaching a style that everyone can agree on.

The consumers of the service naturally need the actual API they are going to code against, but having access to a test UI can be useful for them too. For example it can give them the power to play with and explore the API at a high level, potentially in ways you never anticipated. If the service is backed by some kind of persistent data store the test UI can often provide a bit of extra functionality that avoids them having to get your team involved. For example, if your service sends them events via a queue, mocking and canned data will only get them so far. They probably want you to start pushing real events from your side so that they can see where any impedance mismatches lie. If they can do this themselves it decouples their testing from the API developers a little more.

Service Evangelism

When you’re building a utility service at a large company, such as something around notifications, you might find that the product owner needs to go and talk to various other teams to help showcase what you’re building. Whilst they are probably already in the loop somewhere having something tangible to show makes promoting it internally much easier. Ideally they would be able to give some immediate feedback too about whether the service is heading in a direction that will be useful to them.

There is also an element of selling to the upper echelons of management and here a showcase speaks volumes just as much as it does to other teams. It might seem obvious but you’d be amazed at how many teams struggle to easily demonstrate “working software” in a business-oriented manner.

How Simple Should It Be?

If you’re going to build a demo UI for your service API you want to keep it simple. There is almost certainly less value in it than the tests so you shouldn’t spend too long making it. However once you’ve got a basic demo UI up you should find that maintaining it with each story will be minimal effort.

In more than one instance I’ve heard developers talk of writing unit tests for it, or they start bringing in loads of JavaScript packages to spice it up. Take a step back and make sure you don’t lose sight of why you built it in the first place. It needs to be functional enough to be useful without being so glitzy that it detracts from its real purpose, which is to explore the API behaviour. This means it will likely be quite raw, e.g. one page per API method, with form submissions to navigate you from one API method to the next to ease the simulation of user journeys. Feel free to add some shortcuts to minimise the friction (e.g. authentication tokens) but don’t make it so slick that the demo UI ends up appearing to be the product you’re delivering.

[1] Yes, I’m being ironic, not moronic.

[2] I’ve worked on quite a few decent sized monolithic systems where finding out that’s going on under the hood can be pretty difficult.

[3] Remember your locking can be as coarse as you like because performance is not an issue and the goal is to avoid wasting time investigating bugs due to poorly implemented mocks.

Thursday, 2 June 2016

My Kingdom for a C# Typedef

Two of the things I’ve missed most from C++ when writing C# are const and typedef. Okay, so there are a number of other things I’ve missed too (and vice-versa) but these are the ones I miss most regularly.

Generic Types

It’s quite common to see a C# codebase littered with long type names when generics are involved, e.g.

private Dictionary<string, string> headers;

public Constructor()
{
  headers = new Dictionary<string, string>();
}

Dictionary<string, string> FilterHeaders(…)
{
  return something.Where(…).ToDictionary(…);
}

The overreliance on fundamental types in a codebase is a common smell known as Primitive Obsession. Like all code smells it’s just a hint that something may be out of line, but often it’s good enough to serve a purpose until we learn more about the problem we’re trying to solve (See “Refactoring Or Re-Factoring”).

The problem with type declarations like Dictionary<string, string> is that, as their very name suggests, they are too generic. There are many different ways that we can use a dictionary that maps one string to another and so it’s not as obvious which of the many uses applies in this particular instance.

Naturally there are other clues to guide us, such as the names of the class members or variables (e.g. headers) and the noun used in the method name (e.g. FilterHeaders), but it’s not always so clear cut.

The larger problem is usually the amount of duplication that can occur both in the production code and tests as a result of using a generic type. When the time comes to finally introduce the real abstraction you were probably thinking of, the generic type declaration is then littered all over the place. Duplication like this is becoming less of an issue these days with liberal use of var and good refactoring tools like ReSharper, but there is still an excess of visual noise in the code that distracts the eye.

Type Aliases

In C++ and Go (amongst others) you can help with this by creating an alias for the type that can then be used in its place:

typedef std::map<std::string, std::string> headers_t;

headers_t headers; 
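The Go counterpart is a type definition (the lower-case identifier here is just my choice of name):

```go
package main

import "fmt"

// headers is a defined type whose underlying type is
// map[string]string -- Go's equivalent of the C++ typedef above.
type headers map[string]string

func main() {
	// the defined type behaves just like the map it names
	h := headers{"location": ""}
	fmt.Println(len(h)) // prints 1
}
```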

There is a special form of using declaration in C# that actually comes quite close to this:

using Headers = Dictionary<string, string>;

private Headers headers;

public Constructor()
{
  headers = new Headers();
}

Headers FilterHeaders(…)
{
  return something.Where(…).ToDictionary(…);
}

However it suffers from a number of shortcomings. The first is that it’s clearly not a very well known construct – I hardly ever see C# developers using it. But that’s just an education problem and the reason I’m writing this, right?

The second is that by leveraging the “using” keyword it seems to confuse tools like ReSharper that want to manage all your namespace imports for you. Instead of keeping the imports at the top and the alias-style using directives separately below, it lumps them all together, or sees the manually added ones later and appends new imports after them. Essentially it doesn’t see a difference between these two use cases for “using”.

A third problem with this use of “using” is that you cannot compose aliases, i.e. you can’t use one alias inside another. For example, imagine that you wanted to provide a cache of a user’s full name (first & last) against some arbitrary user ID; you wouldn’t be able to do this:

using FullName = Pair<string, string>;
using NameCache = Dictionary<int, FullName>;

It’s not unusual to want to compose dictionaries with other dictionaries or lists and that’s the point at which the complexity of the type declaration really takes off.
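For contrast, Go’s type definitions compose freely. This sketch mirrors the hypothetical FullName/NameCache example above (the struct stands in for the Pair<string, string>):

```go
package main

import "fmt"

// FullName stands in for the Pair<string, string> of the C# example.
type FullName struct {
	First, Second string
}

// NameCache uses one defined type inside another -- exactly the
// composition the C# using-alias cannot express.
type NameCache map[int]FullName

func main() {
	cache := NameCache{42: {First: "John", Second: "Doe"}}
	fmt.Println(cache[42].First, cache[42].Second) // prints John Doe
}
```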

Finally, and sadly the most disappointing aspect, is that the scope of the alias is limited to just the source file where it’s defined. This means that you have to redefine the alias in every file where it’s used if you want to be truly consistent. Consequently I’ve tended to stick to using this technique for the implementation details of a class, where I know the scope is limited.


One alternative to the above technique that I’ve seen used is to create a new class that inherits from the generic type:

public class Headers : Dictionary<string, string>
{ }

This is usually done with collections and can be a friction-free approach in C# because the collection-initializer syntax relies on the mutable Add() method:

var headers = new Headers
{
  { "location", "" },
};

However, if you wanted to use the traditional constructor syntax you’d have to manually re-declare all the base type’s constructors on the derived type so that they can chain through to the base.

What makes me most uncomfortable about this approach though is that it uses inheritance, and that has a habit of leading some developers down the wrong path. By appearing to create a new type it’s tempting to start adding new behaviours to it. Sometimes these behaviours will require invariants which can then be violated by using the type through its base class (i.e. violating the Liskov Substitution Principle).

Domain Types

Earlier on I suggested that this is often just a stepping stone on the way to creating a proper domain type. The example I gave above was using a Pair to store a person’s first and last name:

using FullName = Pair<string, string>;

Whilst this might be palatable for an implementation detail it’s unlikely to be the kind of thing you want to expose in a published interface as the abstraction soon leaks:

var firstName = fullName.First;
var lastName = fullName.Second;

Personally I find it doesn’t take long for some generics like Pair and Tuple to leave a nasty taste and so I quickly move on to creating a custom type for the job (See “Primitive Domain Types - Too Much Like Hard Work?”):

public class FullName
{
  public string FirstName { get; }
  public string LastName { get; }
}

This is another one of those areas where C# is playing catch-up with the likes of F#. Creating simple immutable data types like these is the bread-and-butter of F#’s record type; you can do it with anonymous types in C# but not easily with named types.

Strongly-Typed Aliases

One other aside before I sign off about typedefs: in C, C++, and the “using” approach in C#, they are not strongly-typed types, just simple aliases. Consequently you can freely mix-and-match the alias with the underlying type:

var headers = new Headers();
Dictionary<string, string> dictionary = headers;

In C++ you can use templates to work around this [1], but Go goes one step further and makes the aliases strongly-typed. This means that there is no implicit conversion even when the underlying type is the same, which makes them just a little bit more like a real type. Once again F#, with its notion of units of measure, also scores well on this front.
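A minimal Go sketch of that behaviour, using two hypothetical types that share the same underlying map:

```go
package main

import "fmt"

// Headers and Params have the same underlying type, but as two
// distinct defined types there is no implicit conversion between them.
type Headers map[string]string
type Params map[string]string

func main() {
	p := Params{"location": ""}

	// var h Headers = p  // compile error: cannot use p (type Params) as Headers
	h := Headers(p) // an explicit conversion is required
	fmt.Println(len(h)) // prints 1
}
```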

[1] The first article I remember about this was Matthew Wilson’s “True Typedefs”, but the state of the art has possibly moved on since then. The Boost template metaprogramming book also provides a more extensive discussion on the topic.