Friday, 24 June 2016

Confusing Assert and Throw

One of the earliest programming books I read that I still have a habit of citing fairly often is Steve Maguire’s Writing Solid Code. This introduced me to the idea of using “asserts” to find bugs where programming contracts had been violated and to the wider concept of Design by Contract. I thought I had understood “when and where” to use asserts (as opposed to signalling errors, e.g. by throwing an exception) but one day many years back a production outage brought home to me how I had been lured into misusing asserts when I should have generated errors instead.

The Wake Up Call

I was one of a small team developing a new online trading system around the turn of the millennium. The back end was written in C++ and the front-end was a hybrid Java applet/HTML concoction. In between was a custom HTTP tunnelling mechanism that helped transport the binary messages over the Internet.

One morning, not long after the system had gone live, the server hung. I went into the server room to investigate and found an assert dialog on the screen telling me that a “time_t value < 0” had been encountered. The failed assert had caused the server to stop itself to avoid any potential further corruption and wait in case we needed to attach a remote debugger. However all the server messages were logged and it was not difficult to see that the erroneous value had come in via a client request. Happy that nothing really bad had happened, we switched the server back on, apologised to our customers for the unexpected outage, and I contacted the user to find out more about what browser, version of Java, etc. they were using.

Inside the HTML web page we had a comma-separated list of known Java versions that appeared to have broken implementations of the Date class. The applet would refuse to run if it found one of these. This particular user had an even more broken JVM version that was returning -1 for the resulting date/time value represented in milliseconds. Of course the problem within the JVM should never have managed to “jump the air gap” and cause the server to hang; hence the more important bug was how the bad request was managing to affect the server.

Whilst we had used asserts extensively in the C++ codebase to ensure programmatic errors would show up during development, we had forgotten to put validation into certain parts of the service - namely the message deserialization. The reason the assert halted the server instead of silently continuing with a bad value was because we were still running a debug build as performance at that point didn’t demand we switch to a release build.

Non-validated Input

In a sense the assert was correct in that it told me there had been a programmatic error; the error being that we had forgotten to add validation on the message which had come from an untrusted source. Whereas something like JSON coming in over HTTP on a public channel is naturally untrusted, when you own the server, client, serialization and encryption code it’s possible to get lured into a false sense of security that you own the entire chain. But what we didn’t control was the JVM and that proved to be our undoing [1].

Asserts in C++ versus C#

In my move from C++ towards C# I discovered there was a different mentality between the two cultures. In C++ the philosophy is that you don’t pay for what you don’t use, and so anything slightly dubious can be denounced as “undefined behaviour” [2]. There is also a different attitude towards debug and release builds, where the former often performs poorly in exchange for more rigorous checking around contract violations. In a (naïve) sense you shouldn’t have to pay for the extra checks in a release build because you don’t need them; in theory you have already removed any contract violations through testing with the debug build.

Hence it is common for C++ libraries such as STLport to highlight entering into undefined behaviour using an assert. For example iterators in the debug build of STLport are implemented as actual objects that track their underlying container and can therefore detect if they have been invalidated due to modifications. In a release build they might be nothing more than an alias for the built-in pointer type [3].

However in C# the contract tends to be validated in a build agnostic manner through the use of checks that result in throwing an explicit ArgumentException / ArgumentNullException type exception. If no formal error checking is done up front then you’ll probably encounter a more terse NullReferenceException or IndexOutOfRangeException when you try to dereference or index on the value sometime later.
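For example, a typical up-front guard clause looks something like this (a minimal sketch; the method and the headers field are purely illustrative):

public void Add(string name, string value)
{
  // Validate the arguments up front and fail with a descriptive exception...
  if (name == null)
    throw new ArgumentNullException(nameof(name));
  if (value == null)
    throw new ArgumentNullException(nameof(value));

  // ...rather than waiting for a terse NullReferenceException when the
  // name is finally dereferenced somewhere further down the line.
  headers.Add(name.ToLowerInvariant(), value);
}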

Historically many of the assertions that you see in C++ code are validations around the passing of null pointers or index bounds. In C# these are catered for directly by the CLR and so it’s probably unsurprising that there is a difference in culture. Given that it’s (virtually) impossible to dereference an invalid pointer in the CLR and dereferencing a null one has well-defined behaviour there is not the same need to be so cautious about memory corruption caused by dodgy pointer arithmetic.

Design by Contract

Even after all this time the ideas behind Design by Contract (DbC) still do not appear to be very well known or considered useful. C# 4 added support for Code Contracts but I have only seen one codebase so far which has used them, and even then its use was patchy. Even Debug.Assert, as a very simple DbC mechanism, hardly ever seems to be used, presumably because so little use is made of debug builds in the first place (so the asserts would never fire anyway).

Consequently I have tended to use my own home-brew version which is really just a bunch of static methods that I use wherever necessary:

Constraint.MustNotBeNull(anArgument, "what it is");

When these trigger they throw a ConstraintViolationException that includes a suitable message. When I first adopted this in C# I wasn’t sure whether to raise an exception or not, and so I made the behaviour configurable. It wasn’t long before I stripped that out along with the Debug.Assert() calls and just made the exception throwing behaviour the default. I decided that if performance was ever going to be an issue I would deal with it when I had a real example to optimise for. So far that hasn’t happened.
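For what it’s worth the guts of it are nothing clever; it boils down to something along these lines (a simplified sketch rather than the exact code):

public static class Constraint
{
  public static void MustNotBeNull(object value, string description)
  {
    // A violation is a bug, so throw rather than assert.
    if (value == null)
      throw new ConstraintViolationException(description + " must not be null");
  }
}

public class ConstraintViolationException : Exception
{
  public ConstraintViolationException(string message)
    : base(message)
  { }
}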

Always Enabled

If I were in any doubt as to their value I got another reminder a while back when developing an in-house compute grid. I added a whole bunch of asserts inside the engine management code, which strictly speaking weren’t essential as long as the relationship between the node manager, engine and task didn’t change.

One day someone added a bit of retry logic to allow the engine to reconnect back to its parent manager, which made perfect sense in isolation. But now the invariant was broken as the node manager didn’t know about engines it had never spawned. When the orphaned task finally completed it re-established the broken connection to its parent and tried to signal completion of its work. Only it wasn’t the same work dished out by this instance of the manager. Luckily an internal assert detected the impedance mismatch and screamed loudly.

This is another example where you can easily get lulled into a false sense of security because you own all the pieces and the change you’re making seems obvious (retrying a broken connection). However there may be invariants you don’t even realise exist that will break because the cause-and-effect are so far apart.

Confused About the Problem

The title for this post has been under consideration for the entire time I’ve been trying to write it. At least twice I’ve thought about changing it to “Confusing Assert and Validation”, as essentially the decision to throw or not is about how to handle the failure of the scenario. But I’ve stuck with the original title because it highlights my confused thinking.

I suspect that the reason for this confusion is because the distinction between a contract violation and input validation is quite subtle. In my mind the validation of inputs happens at the boundary of a component, and it only needs to be done once. When you are inside that component you should be able to assume that validation has already occurred. But as I’ve described earlier, a bug can allow something to slip through the net and so input validation happens at the external boundary, but contract enforcement happens at an internal boundary, e.g. a public versus internal class or method [4].

If the input validation triggers, its significance is probably lower because the check is doing the job it was intended to do; but if a contract violation fires it suggests there is a genuine bug. Also by throwing different exception types it’s easier to distinguish the two cases, and then any Big Outer Try Blocks can recover in different ways if necessary, e.g. continue vs shutdown.
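To illustrate the split, here is a rough sketch (the Request and ValidationException types and the handler methods are invented purely for this example) of validation at the external boundary, contract enforcement at an internal one, and a Big Outer Try Block treating the two failures differently:

public void HandleRequest(Request request)
{
  // External boundary: the input is untrusted, so validate it once here.
  if (request.TimestampMs < 0)
    throw new ValidationException("timestamp must not be negative");

  Process(request);
}

internal void Process(Request request)
{
  // Internal boundary: validation is assumed to have happened already,
  // so a failure here points to a genuine bug rather than bad input.
  Constraint.MustNotBeNull(request, "request");
  // ...
}

// Somewhere near the top of the call stack the two cases can then be
// recovered from in different ways.
try
{
  HandleRequest(request);
}
catch (ValidationException e)
{
  SendErrorResponse(e);   // bad input: reject the request and carry on
}
catch (ConstraintViolationException e)
{
  Shutdown(e);            // contract violation: fail fast
}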

Another way to look at the difference is that input validation is a formal part of the type’s contract and therefore I’d write tests to ensure it is enforced correctly. However I would not write tests to ensure any internal contracts are enforced (any more than I would write tests for private methods).

Epilogue

I started writing this post a few years ago and the fact that it has taken this long shows that even now I’m not 100% sure I really know what I’m doing :o). While I knew I had made mistakes in the past and I thought I knew what I was doing wrong, I wasn’t convinced I could collate my thoughts into a coherent piece. Then my switch to a largely C# based world had another disruptive effect on my thinking and succeeded in showing that I really wasn’t ready to put pen to paper.

Now that I’ve finally finished I’m happy with where it landed. I’ve been bitten enough times to know that shortcuts get taken, whether consciously or not, and that garbage will end up getting propagated. I can still see value in the distinction between the two approaches but am happy to acknowledge that it’s subtle enough not to get overly hung up about. Better error handling is something we can always do more of, and doing it one way or another is what really matters.

[1] Back then the common joke about Java was “write once, test everywhere”, and I found out first-hand why.

[2] There is also the slightly more “constrained” variant that is implementation-defined behaviour. It’s a little less vague, if you know what your implementation will do, but in a general sense you still probably want to assume very little.

[3] See “My Kingdom for a C# Typedef” for why this kind of technique isn’t used.

[4] The transformation from primitive values to domain types is another example, see “Primitive Domain Types - Too Much Like Hard Work?”.

Wednesday, 15 June 2016

Mocking Framework Expectations

I’ve never really got on with mocking frameworks as I’ve generally preferred to write my own mocks, when I do need to use them. This may be because of my C++ background where mocking frameworks are a much newer invention as they’re harder to write due to the lack of reflection.

I suspect that much of my distaste comes from ploughing through reams of test code that massively over specifies exactly how a piece of code works. This is something I’ve covered in the past in “Mock To Test the Outcome, Not the Implementation” but my recent experiences unearthed another reason to dislike them.

It’s always a tricky question when deciding at what level to write tests. Taking an outside-in approach and writing small, focused services means that high-level acceptance tests can cover most of the functionality. I might still use lower-level tests to cover certain cases that are hard to test from the outside, but as tooling and design practices improve I’ve felt less need to unit test everything at the lowest level.

The trade-off here is an ability to triangulate when a test fails. Ideally when one or more tests fail you should be able to home in on the problem quickly. It’s likely to be the last thing you changed, but sometimes the failure lies slightly further back and you’ve only just unearthed it with your latest change. Or, if you’re writing your first test for a new piece of code and it doesn’t go green, then perhaps something in the test code itself is wrong.

This latter case is what recently hit the pair of us when working on a simple web API. We had been fleshing out the API by mocking the back-end and some other dependent services with NSubstitute. One test failure in particular eluded our initial reasoning and so we fired up the debugger to see what was going on.

The chain of events that led to the test failure was ultimately the result of a missing method implementation on a mock. The mocking framework, instead of doing something sensible like throwing an exception such as NotImplementedException, chose to return the equivalent of default(T), which for a reference type is null.

While a spurious null reference might normally result in a NullReferenceException soon after, the value was passed straight into JArray.Parse() which helpfully transformed the bad value into a good one – an empty JSON array. This was a valid scenario (an empty set of things) and so a valid code path was then taken all the way back out again to the caller.
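Stripped right down, the problem looks something like this (the IBackEnd interface is made up, but the behaviour is the crux of it); an unconfigured call on an NSubstitute substitute silently returns a “safe” default value, null or empty depending on the return type, rather than failing fast:

using NSubstitute;

public interface IBackEnd
{
  string FetchOrdersAsJson(string userId);
}

var backEnd = Substitute.For<IBackEnd>();

// No .Returns(...) has been configured for FetchOrdersAsJson, so the call
// below quietly yields a default value instead of throwing, and the missing
// setup only surfaces much further down the line (if at all).
var json = backEnd.FetchOrdersAsJson("12345");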

Once the missing method on the mock was added the code started behaving as expected.

This behaviour of the mock feels completely wrong to me. And it’s not just NSubstitute that does this, I’ve seen it at least once before with one of the older C# mocking frameworks. In my mind any dependency in the code must be explicitly catered for in the test. The framework cannot know what a “safe” default value is and therefore should not attempt to hide the missing dependency.

I suspect that it’s seen as a “convenience” to help people write tests for code which has many dependencies. If so, then that to me is a design smell. If you have a dependency that needs to be satisfied but plays no significant role in the scenario under test, e.g. validating a security token, then create a NOP style mock that can be reused, e.g. an AlwaysValidIdentityProvider and configure it in the test fixture SetUp so that it’s visible, but not dominant.
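Something along these lines is usually all it takes (the IIdentityProvider interface is obviously invented for the sake of the example):

public interface IIdentityProvider
{
  bool IsValid(string securityToken);
}

// A NOP style mock that can be reused wherever the security check plays
// no significant role in the scenario under test.
public class AlwaysValidIdentityProvider : IIdentityProvider
{
  public bool IsValid(string securityToken)
  {
    return true;
  }
}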

A tangential lesson here is one that Sir Tony Hoare is personally burdened with, which is the blatant overloading of a null string reference as a synonym for “no value”. One can easily see how this can happen because of the way that String.IsNullOrEmpty() is used with scant regard for what a null reference should symbolise. My tale of woe in “Extension Methods Should Behave Like Real Methods” is another illustration of this malaise.

I did wonder if we’d shot ourselves in the foot by trying to test at too high a level, but I don’t believe so because you wouldn’t normally write unit tests for 3rd party components like a JSON parser. I might write characterisation tests for a 3rd party dependency such as a database API if I wanted to be sure that we really knew how it would handle certain types of data conversion or failure, or if we rely on some behaviour we might suspect is not clearly defined and could therefore silently change in a future upgrade. It did not feel as though there should have been anything surprising in the code we had written and our dependencies were all very mature components.

Of course perhaps this behaviour is entirely intentional and I just don’t fit the use case, in which case I’d be interested to know what it is. If it’s configurable then I believe it’s a poor choice of default behaviour and I’ll add it to those already noted in “Sensible Defaults”.

Friday, 3 June 2016

Showcasing APIs – The Demo Site

Developers that work on front-ends have it easy [1]. When it comes to doing a sprint demo or showcasing your work you’ve got something glitzy to talk about. Writing a service API is remarkably dull by comparison.

At the bottom end of the spectrum you have got good old reliable curl, if you want a command-line feel to your demo. Just above this, but also still fairly dry by comparison with a proper UI, is using a GUI based HTTP tool such as the Postman or Advanced REST Client extensions for Chrome. Either way it’s still pretty raw. Even using tools like Swagger still puts you in the mind-set of the API developer rather than the consumer.

If you’re writing a RESTful API then it’s not much effort to write a simple front-end that can invoke your API. But there’s more to it than just demoing which “feature” of an API you delivered; it has other value too.

Client Perspective

One of the hardest things I’ve found working on service APIs is how you elevate yourself from being the implementer to being the consumer. Often the service I’m working on is one piece of a bigger puzzle and the front-end either doesn’t exist (yet) or is being developed by another team elsewhere, so collaboration is sparse.

When you begin creating an API you often start by fleshing out a few basic resource types and then begin working out how the various “methods” work together. To do this you need to start thinking about how the consumer of the API is likely to interact with it. And to do that you need to know what the higher-level use cases are (i.e. the bigger picture).

This process also leads directly into the prioritisation of the order in which the API should be built. For instance it’s often assumed the Minimum Viable Product (MVP) is “the entire API” and therefore order doesn’t matter, but it matters a lot as you’ll want to build stuff your consumers can plug in as soon as it’s ready (even just for development purposes). Other details may not be fleshed out and so you probably want to avoid doing any part of it which is “woolly”, unless it’s been prioritised specifically to help drive out the real requirements or reduce technical risk.

A simple UI helps to elevate the discussion so that you don’t get bogged down in the protocol but start thinking about what the goals are that the customer wants to achieve. Once you have enough of a journey you can start to actually play out scenarios using your rough-and-ready UI tool instead of just using a whiteboard or plugging in curl commands at the prompt. This adds something really tangible to the conversation as you begin to empathise with the client, rather than being stuck as a developer.

This idea of consuming your own product to help you understand the customer’s point of view is known as “eating your own dog food”, or more simply “dog-fooding”. This is much easier when you’re developing an actual front-end product rather than a service API, but the demo UI certainly helps you get closer to them.

Exploratory Testing

I’ll be honest, I love writing tools. If you’ve read “From Test Harness To Support Tool” you’ll know that I find writing little tools that consume the code I’ve written in ways other than what the automated tests or end user needs is quite rewarding. These usually take the form of test tools (commonly system testing tools [2]), and as that post suggests they can take on a life later as a support or diagnostic tool.

Given that the remit of the demo site is to drive the API, it can therefore also play the role of a testing tool. Once you start having to carry data over from one request to the next, e.g. a security token, cookie, user ID, etc., it gets tedious copying and pasting it all by hand. It’s not uncommon to see wiki pages written by developers that describe how you can do this manually by copy-and-pasting various steps. But this process is naturally rather error prone and so the demo site brings a little automation to the proceedings. Consequently you are elevated from the more tedious aspects of poking the system to focus on the behaviour.
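By way of illustration, the kind of plumbing the demo site takes care of for you looks roughly like this (the endpoints, header name and payload are entirely made up), which soon becomes a chore when done by hand:

using (var client = new HttpClient { BaseAddress = new Uri("https://localhost:8080/") })
{
  // Authenticate first and capture the security token from the response...
  var login = await client.PostAsync("login", new StringContent("{ \"user\": \"test\" }"));
  var token = await login.Content.ReadAsStringAsync();

  // ...then remember to attach it to every subsequent request in the journey.
  client.DefaultRequestHeaders.Add("X-Auth-Token", token);
  var orders = await client.GetAsync("orders?userId=12345");
}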

This also gels quite nicely with another more recent post “Man Cannot Live by Unit Testing Alone” as I have used this technique to find some not-insignificant gaps in the API code, such as trivial concurrency issues. Whilst they would probably have been found later when the API was driven harder, I like to do just a little bit of poking around in the background to make sure that we’ve got the major angles covered early on when the newer code is fresher in your mind.

This can mean adding things to your mocks that you might feel are somewhat excessive, such as ensuring any in-memory test data stores (i.e. dictionaries) are thread-safe [3]. But it’s a little touch that makes exploratory testing easier and more fruitful as you don’t get blasé and just ignore bugs because you believe the mocks used by the API are overly simple. If you’re going to use a bit of exploratory testing to try and sniff out any smells it helps if the place doesn’t whiff to begin with.
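For example, a coarse-grained lock around the store is plenty (a rough sketch; the Customer type is hypothetical):

public class InMemoryCustomerStore
{
  private readonly Dictionary<string, Customer> customers = new Dictionary<string, Customer>();
  private readonly object mutex = new object();

  public void Save(Customer customer)
  {
    // Coarse locking is fine here; performance is not a concern for a mock,
    // not wasting time chasing phantom bugs during exploratory testing is.
    lock (mutex)
    {
      customers[customer.Id] = customer;
    }
  }

  public Customer TryFind(string id)
  {
    lock (mutex)
    {
      Customer customer;
      customers.TryGetValue(id, out customer);
      return customer;
    }
  }
}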

3rd Party Test UI

One of the problems of being just one of many teams in the corporate machine is that you have neighbours which you need to play nicely with. In the good old days everyone just waited until “the UAT phase” to discover that nothing integrated correctly. Nowadays we want to be collaborating and integrating as soon as possible.

If I’m building an API then I want to get it out there in the hands of the consumers as soon as possible so that we can bottom out any integration issues. And this isn’t just about fixing minor issues in the contract; they should also have input into what shape it should take. If the consumers can’t easily consume it, say for technical reasons, then we’ve failed. It’s too easy to get bogged down in a puritanical debate about “how RESTful” an API is instead of just iterating quickly over any problems and reaching a style that everyone can agree on.

The consumers of the service naturally need the actual API they are going to code against, but having access to a test UI can be useful for them too. For example it can give them the power to play and explore the API at a high level as well, potentially in ways you never anticipated. If the service is backed by some kind of persistent data store the test UI can often provide a bit of extra functionality that avoids them having to get your team involved. For example if your service sends them events via a queue, mocking and canned data will only get them so far. They probably want you to start pushing real events from your side so that they can see where any impedance mismatches lie. If they can do this themselves it decouples their testing from the API developers a little more.

Service Evangelism

When you’re building a utility service at a large company, such as something around notifications, you might find that the product owner needs to go and talk to various other teams to help showcase what you’re building. Whilst they are probably already in the loop somewhere, having something tangible to show makes promoting it internally much easier. Ideally they would be able to give some immediate feedback too about whether the service is heading in a direction that will be useful to them.

There is also an element of selling to the upper echelons of management and here a showcase speaks volumes just as much as it does to other teams. It might seem obvious but you’d be amazed at how many teams struggle to easily demonstrate “working software” in a business-oriented manner.

How Simple Should It Be?

If you’re going to build a demo UI for your service API you want to keep it simple. There is almost certainly less value in it than the tests so you shouldn’t spend too long making it. However once you’ve got a basic demo UI up you should find that maintaining it with each story will be minimal effort.

In more than one instance I’ve heard developers talk of writing unit tests for it, or they start bringing in loads of JavaScript packages to spice it up. Take a step back and make sure you don’t lose sight of why you built it in the first place. It needs to be functional enough to be useful without being so glitzy that it detracts from its real purpose, which is to explore the API behaviour. This means it will likely be quite raw, e.g. one page per API method, with form submissions to navigate you from one API method to the next to ease the simulation of user journeys. Feel free to add some shortcuts to minimise the friction (e.g. authentication tokens) but don’t make it so slick that the demo UI ends up appearing to be the product you’re delivering.

[1] Yes, I’m being ironic, not moronic.

[2] I’ve worked on quite a few decent sized monolithic systems where finding out what’s going on under the hood can be pretty difficult.

[3] Remember your locking can be as coarse as you like because performance is not an issue and the goal is to avoid wasting time investigating bugs due to poorly implemented mocks.

Thursday, 2 June 2016

My Kingdom for a C# Typedef

Two of the things I’ve missed most from C++ when writing C# are const and typedef. Okay, so there are a number of other things I’ve missed too (and vice-versa) but these are the ones I miss most regularly.

Generic Types

It’s quite common to see a C# codebase littered with long type names when generics are involved, e.g.

private Dictionary<string, string> headers;

public Constructor()
{
  headers = new Dictionary<string, string>();
}

Dictionary<string, string> FilterHeaders(…)
{
  return something.Where(…).ToDictionary(…);
}

The overreliance on fundamental types in a codebase is a common smell known as Primitive Obsession. Like all code smells it’s just a hint that something may be out of line, but often it’s good enough to serve a purpose until we learn more about the problem we’re trying to solve (See “Refactoring Or Re-Factoring”).

The problem with type declarations like Dictionary<string, string> is that, as their very name suggests, they are too generic. There are many different ways that we can use a dictionary that maps one string to another and so it’s not as obvious which of the many uses applies in this particular instance.

Naturally there are other clues to guide us, such as the names of the class members or variables (e.g. headers) and the noun used in the method name (e.g. FilterHeaders), but it’s not always so clear cut.

The larger problem is usually the amount of duplication that can occur both in the production code and tests as a result of using a generic type. When the time comes to finally introduce the real abstraction you were probably thinking of, the generic type declaration is then littered all over the place. Duplication like this is becoming less of an issue these days with liberal use of var and good refactoring tools like ReSharper, but there is still an excess of visual noise in the code that distracts the eye.

Type Aliases

In C++ and Go (amongst others) you can help with this by creating an alias for the type that can then be used in its place:

typedef std::map<std::string, std::string> headers_t;

headers_t headers; 

There is a special form of using declaration in C# that actually comes quite close to this:

using Headers = Dictionary<string, string>;

private Headers headers;

public Constructor()
{
  headers = new Headers();
}

Headers FilterHeaders(…)
{
  return something.Where(…).ToDictionary(…);
}

However it suffers from a number of shortcomings. The first is that it’s clearly not a very well known construct – I hardly ever see C# developers using it. But that’s just an education problem and the reason I’m writing this, right?

The second is that by leveraging the “using” keyword it seems to mess with tools like ReSharper that want to help manage all your namespace imports for you. Instead of keeping all the imports at the top whilst keeping alias-style usings separately below, it just lumps them all together, or sees the manual ones later and just appends new imports after them. Essentially they don’t see a difference between these two different use cases for “using”.

A third problem with this use of “using”, though, is that you cannot compose aliases. This means you can’t use one alias inside another. For example, imagine that you wanted to provide a cache of a user’s full name (first & last) against some arbitrary user ID; you wouldn’t be able to do this:

using FullName = Pair<string, string>;
using NameCache = Dictionary<int, FullName>;

It’s not unusual to want to compose dictionaries with other dictionaries or lists and that’s the point at which the complexity of the type declaration really takes off.

Finally, and sadly the most disappointing aspect, is that the scope of the alias is limited to just the source file where it’s defined. This means that you have to redefine the alias wherever it is used if you want to be truly consistent. Consequently I’ve tended to stick with only using this technique for the implementation details of a class where I know the scope is limited.

Inheritance

One alternative to the above technique that I’ve seen used is to create a new class that inherits from the generic type:

public class Headers : Dictionary<string, string>
{ }

This is usually used with collections and can be a friction-free approach in C# due to the way that the collection-initializer syntax relies on the mutable Add() method:

var header = new Headers
{
  { "location", "http://chrisoldwood.com" },
};

However if you wanted to use the traditional constructor syntax you’d have to manually expose all the base type’s constructors on the derived type so that they can chain through to the base.
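In other words, something like this (a sketch showing just a couple of the pass-through constructors):

public class Headers : Dictionary<string, string>
{
  public Headers()
  { }

  // Each constructor you need just forwards straight to the base class.
  public Headers(IDictionary<string, string> other)
    : base(other)
  { }

  public Headers(IEqualityComparer<string> comparer)
    : base(comparer)
  { }
}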

What makes me most uncomfortable about this approach though is that it uses inheritance, and that has a habit of leading some developers down the wrong path. By appearing to create a new type it’s tempting to start adding new behaviours to it. Sometimes these behaviours will require invariants which can then be violated by using the type through its base class (i.e. violate the Liskov Substitution Principle).

Domain Types

Earlier on I suggested that this is often just a stepping stone on the way to creating a proper domain type. The example I gave above was using a Pair to store a person’s first and last name:

using FullName = Pair<string, string>;

Whilst this might be palatable for an implementation detail it’s unlikely to be the kind of thing you want to expose in a published interface as the abstraction soon leaks:

var firstName = fullName.First;
var lastName = fullName.Second;

Personally I find it doesn’t take long for some generics like Pair and Tuple to leave a nasty taste and so I quickly move on to creating a custom type for the job (See “Primitive Domain Types - Too Much Like Hard Work?”):

public class FullName
{
  public FullName(string first, string last) { FirstName = first; LastName = last; }

  public string FirstName { get; }
  public string LastName { get; }
}

This is another one of those areas where C# is playing catch-up with the likes of F#. Creating simple immutable data types like these is the bread-and-butter of F#’s Record type; you can do it with anonymous types in C# but not easily with named types.

Strongly-Typed Aliases

One other aside before I sign off about typedefs is that in C, C++, and the “using” approach in C#, they are not strongly-typed; they are just simple aliases. Consequently you can freely mix-and-match the alias with the underlying type:

var headers = new Headers();
Dictionary<string, string> dictionary = headers;

In C++ you can use templates to work around this [1] but Go goes one step further and makes the aliases more strongly-typed. This means that there is no implicit conversion even when the fundamental type is the same, which makes them just a little bit more like a real type. Once again F# with its notion of units of measure also scores well on this front.
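The closest you can get in C# is a hand-rolled wrapper type, which buys the stronger typing at the cost of far more ceremony than a one-line alias (a rough sketch; CustomerId is just an example):

public struct CustomerId
{
  private readonly int value;

  public CustomerId(int value)
  {
    this.value = value;
  }

  // Conversion back to the underlying type is deliberately explicit so a
  // raw int cannot be passed by accident where a CustomerId is expected.
  public static explicit operator int(CustomerId id)
  {
    return id.value;
  }
}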

[1] The first article I remember about this was Matthew Wilson’s “True Typedefs”, but the state of the art has possibly moved on since then. The Boost template metaprogramming book also provides a more extensive discussion on the topic.

Sunday, 29 May 2016

The Curse of NTLM Based HTTP Proxies

I first crossed swords with an NTLM authenticating proxy around the turn of the millennium when I was working on an internet based real-time trading system. We had chosen to use a Java applet for the client which provided the underlying smarts for the surrounding web page. Although we had a native TCP transport for the highest fidelity connection, when it came to putting the public Internet and, more importantly, enterprise networks between the client and the server it all went downhill fast.

It soon became apparent that Internet access inside the large organisations we were dealing with was hugely different to that of the much smaller company I was working at. In fact it took another 6 months to get the service live due to all the workarounds we had to put in place to make the service usable by the companies it was targeted at. I ended up writing a number of different HTTP transports to try and get a working out-of-the-box configuration for our end users as trying to get support from their IT departments was even harder work. On Netscape it used something based on the standard Java URLConnection class, whilst on Internet Explorer it used Microsoft’s J/Direct Java extension to leverage the underlying WinInet API.

This last nasty hack was done specifically to cater for those organisations that put up the most barriers between its users and the Internet, which often turned out to be some kind of proxy which relied on NTLM for authentication. Trying to rig something similar up at our company to develop against wasn’t easy or cheap either. IIRC in the end I managed to get IIS 4 (with MS Proxy Server?) to act as an NTLM proxy so that we had something to work with.

Back then the NTLM protocol was a proprietary Microsoft technology with all the connotations that come with that, i.e. tooling had to be licensed. Hence you didn’t have any off-the-shelf open source offerings to work with and so you had a classic case of vendor lock-in. Essentially the only technologies that could (reliably) work with an NTLM proxy were Microsoft’s own.

In the intervening years both clients (though mostly just the web browser) and servers have required more and more access to the outside world, both for the development of the software itself and for its ultimate presence as hosting has moved from on-premises to the cloud.

Additionally the NTLM protocol was reverse engineered and tools and libraries started to appear (e.g. Cntlm) that allowed you to work (to a degree) within this constraint. However this appears to have sprung from a need in the non-Microsoft community and so support is essentially the bare minimum to get you out of the building (i.e. manually presenting a username and password).

From a team collaboration point of view tools like Google Docs and GitHub wikis have become more common as we move away from format-heavy content, and Trello for a lightweight approach to managing a backlog. Messaging in the form of Skype, Google Hangouts and Slack also plays a wider role as more and more people sit outside the corporate network, not only due to remote working but also to bring strategic partners closer to the work itself.

The process of developing software, even for Microsoft’s own platform, has grown to rely heavily on instant internet access in recent years with the rise of The Package Manager. You can’t develop a non-trivial C# service without access to NuGet, or a JavaScript front-end without Node.js and NPM. Even setting up and configuring your developer workstation or Windows server requires access to Chocolatey unless you prefer haemorrhaging time locating, downloading and manually installing tools.

As the need for more machines to access the Internet grows, so the amount of friction in the process also grows as you bump your head against the world of corporate IT. Due to the complexity of their networks, and the various other tight controls they have in place, you’re never quite sure which barrier you’re finding yourself up against this time.

What makes the NTLM proxy issue particularly galling is that many of the tools don’t make it obvious that this scenario is not supported. Consequently you waste significant amounts of time trying to diagnose a problem with a product that will never work anyway. If you run out of patience you may switch tack long before you discover the footnote or blog post that points out the futility of your efforts.

This was brought home once again only recently when we had developed a nice little tool in Go to help with our deployments. The artefacts were stored in an S3 bucket and there is good support for S3 via the AWS Go SDK. After building and testing the tool we then proceeded to try and work out why it didn’t work on the corporate network. Many rabbit holes were investigated, such as double and triple checking the AWS secrets were set correctly, and being read correctly, etc. before we discovered an NTLM proxy was getting in the way. Although there was a Go library that could provide NTLM support we’d have to find a way to make the S3 library use the NTLM one. Even then it turned out not to work seamlessly with whatever ambient credentials the process was running as, so it pretty much became a non-starter.

We then investigated other options, such as the AWS CLI tools that we could then script, perhaps with PowerShell. More time wasted before again discovering that NTLM proxies are not supported by them. Finally we resorted to using the AWS Tools for PowerShell which we hoped (by virtue of them being built using Microsoft’s own technology) would do the trick. It didn’t work out of the box, but the Set-AWSProxy cmdlet was the magic we needed and it was easy to find now that we knew what question to ask.

Or so we thought. Once we built and tested the PowerShell based deployment script we proceeded to invoke it via the Jenkins agent and once again it hung and eventually failed. After all that effort the “service account” under which we were trying to perform the deployment did not have rights to access the internet via (yes, you guessed it) the NTLM proxy.

This need to ensure service accounts are correctly configured even for outbound-only internet access is not a new problem; I’ve faced it a few times before. And yet every time it shows up it’s never the first thing I think of. Anyone who has ever had to configure Jenkins to talk to private Git repos will know that there are many other sources of problem aside from whether or not you can even access the internet.

Using a device like an authenticating proxy has that command-and-control air about it; it ensures that the workers only access what the company wants them to. The alternate approach which is gaining traction (albeit very slowly) is the notion of Trust & Verify. Instead of assuming the worst you grant more freedom by putting monitoring in place to ensure people don’t abuse their privilege. If security is a concern, and it almost certainly is a very serious one, then you can stick a transparent proxy in between to maintain that balance between allowing people to get work done whilst also still protecting the company from the riskier attack vectors.

The role of the organisation should be to make it easy for people to fall into The Pit of Success. Developers (testers, system administrators, etc.) in particular are different because they constantly bump into technical issues that the (probably somewhat larger) non-technical workforce that the policies are normally targeted at does not experience on anywhere near the same scale.

This is of course ground that I’ve covered before in my C Vu article Developer Freedom. But there I lambasted the disruption caused by overly-zealous content filtering, whereas this particular problem is more of a silent killer. At least a content filter is pretty clear on the matter when it denies access – you aren’t having it, end of. In contrast the NTLM authenticating proxy first hides in the ether waiting for you to discover its mere existence and then, when you think you’ve got it sussed, you feel the sucker punch as you unearth the footnote in the product documentation that tells you that your particular network configuration is not supported.

In retrospect I’d say that the NTLM proxy is one of the best examples of why having someone from infrastructure in your team is essential to the successful delivery of your product.

Monday, 23 May 2016

The Tortoise and the Hare

[This was my second contribution to the 50 Shades of Scrum book project. It was also written back in late 2013.]

One of the great things about being a parent is having time to go back and read some of the old parables again; this time to my children. Recently I had the pleasure of re-visiting that old classic about a race involving a tortoise and a hare. In the original story the tortoise wins, but surely in our modern age we'd never make the same kinds of mistakes as the hare, would we?

In the all-too-familiar gold rush that springs up in an attempt to monetize any new, successful idea a fresh brand of snake oil goes on sale. The particular brand on sale now suggests that this time the tortoise doesn't have to win. Finally we have found a sure-fire way for the hare to conquer — by sprinting continuously to the finish line we will out-pace the tortoise and raise the trophy!

The problem is it's simply not possible in the physical or software development worlds to continue to sprint for long periods of time. Take Usain Bolt, the current 100m and 200m Olympic champion. He can cover the 100m distance in just 9.63 secs and the 200m in 19.32 secs. Assuming a continuous pace of 9.63 secs per 100m the marathon should have been won in ~4063 secs, or 1:07:43. But it wasn't. It was won by Stephen Kiprotich in almost double that — 2:08:01. Usain Bolt couldn't maintain his sprinting pace even if he wanted to over a short to medium distance, let alone a longer one.

The term "sprint" comes loaded with expectations, the most damaging of which is that there is a finish line at the end of the course. Rarely in software development does a project start and end in such a short period of time. The more likely scenario is that it will go on in cycles, where we build, reflect and release small chunks of functionality over a long period. This cyclic nature is more akin to running laps on a track where we pass the eventual finishing line many times (releasing) before the ultimate conclusion is reached (decommissioning).

In my youth my chosen sport was swimming; in particular the longer distances, such as 1500m. At the start of a race the thought of 16 minutes of hard swimming used to fill me with dread. So I chose to break it down into smaller chunks of 400m, which felt far less daunting. As I approached the end of each 400m leg I would pay more attention to where I was with respect to the pace I had set, but I wouldn't suddenly sprint to the end of the leg just to make up a few seconds of lost time. If I did this I’d start the next one partially exhausted which really upsets the rhythm, so instead I’d have to change my stroke or breathing to steadily claw back any time. Of course when you know the end really is in sight a genuine burst of pace is so much easier to endure.

I suspect that one of the reasons management like the term "sprint" is for motivation. In a more traditional development process you may be working on a project for many months, perhaps years before what you produce ever sees the light of day. It’s hard to remain focused with such a long stretch ahead and so breaking the delivery down also aids in keeping morale up.

That said, what does not help matters is when the workers are forced to make a choice about how to meet what is an entirely arbitrary deadline – work longer hours or skimp on quality. And a one, two or three week deadline is almost certainly exactly that — arbitrary. There are just too many daily distractions to expect progress to run smoothly, even in a short time-span. This week alone my laptop died, internet connectivity has been variable, the build machine has been playing up, a permissions change refuses to have the desired effect and a messaging API isn't doing what the documentation suggests it should. And this is on a run-of-the-mill in-house project only using supposedly mature, stable technologies!

Deadlines like the turn of the millennium are real and immovable and sometimes have to be met, but allowing a piece of work to roll over from one sprint to the next when there is no obvious impediment should be perfectly acceptable. The checks and balances should already be in place to ensure that any task is not allowed to mushroom uncontrollably, or that any one individual does not become bogged down or "go dark". The team should self-regulate to ensure just the right balance is struck between doing quality work to minimise waste and ensuring the solution solves the problem without unnecessary complexity.

As for the motivational factors of "sprinting" I posit that what motivates software developers most is seeing their hard work escape the confines of the development environment and flourish out in production. Making it easy for developers to continually succeed by delivering value to users is a far better carrot-and-stick than "because it's Friday and that's when the sprint ends".

In contrast, the term "iteration" comes with far less baggage. In fact it speaks the developer's own language – iteration is one of the fundamental constructs used within virtually any program. It accentuates the cyclic nature of long term software development rather than masking it. The use of feature toggles as an enabling mechanism allows partially finished features to remain safely integrated whilst hiding them from users as the remaining wrinkles are ironed out. Even knowing that your refactorings have gone live without the main feature is often a small personal victory.

That doesn’t mean the term sprint should never be used. I think it can be used when it better reflects the phase of the project, e.g. during a genuine milestone iteration that will lead to a formal rather than internal release. The change in language at this point might aid in conveying the change in mentality required to see the release out the door, if such a distinction is genuinely required. However the idea of an iteration "goal" is already in common use as an alternative approach to providing a point of focus.

If continuous delivery is to be the way forward then we should be planning for the long game and that compels us to favour a sustainable pace where localized variations in scope and priority will allow us to ride the ebbs and flows of the ever changing landscape.

Debbie Does Development

[This was my initial submission to the 50 Shades of Scrum book project, hence its slightly dubious-sounding title. It was originally written way back in late 2013.]

Running a software development project should be simple. Everyone has a well-defined job to do: Architects architect, Analysts analyse, Developers develop, Testers test, Users use, Administrators administer, and Managers manage. If everyone just got on and did their job properly the team would behave like a well-oiled machine and the desired product would pop out at the end — on time and on budget.

What this idyllic picture describes is a simple pipeline where members of one discipline take the deliverables from their predecessor, perform their own duty, throw their contribution over the wall to the next discipline, and then get on with their next task. Sadly the result of working like this is rarely the desired outcome.

Anyone who believes they got into software development so they could hide all day in a cubicle and avoid interacting with other people is sorely mistaken. In contrast, the needs of a modern software development team demands continual interaction between its members. There is simply no escaping the day-to-day, high-bandwidth conversations required to raise doubts, pass knowledge, and reach consensus so that progress can be made efficiently and, ultimately, for value to be delivered.

Specializing in a single skill is useful for achieving the core responsibilities your role entails, but for a team to work most effectively requires members that can cross disciplines and therefore organize themselves into a unit that is able to play to its strengths and cover its weaknesses. My own personal preference is to establish a position somewhat akin to a centre half in football — I'm often happy taking a less glamorous role watching the backs of my fellow team mates whilst the "strikers" take the glory. Enabling my colleagues to establish and sustain that all important state of "flow" whilst I context-switch helps overall productivity.

To enable each team member to perform at their best they must be given the opportunity to ply their trade effectively and that requires working with the highest quality materials to start with. Rather than throwing poor-quality software over a wall to get it off their own plate, the team should take pride in ensuring they have done all that they can to pass on quality work. The net effect is that with less need to keep revisiting the past they have more time to focus on the future. This underlies the notion of "done done" — when a feature is declared complete it comes with no caveats.

The mechanics of this approach can clearly be seen with the technical practices such as test-driven development, code reviews, and continuous integration. These enable the development staff to maintain a higher degree of workmanship that reduces the traditional burden on QA staff caused by trivial bugs and flawed deployments.

Testers will themselves write code to automate certain kinds of tests as they provide a more holistic view of the system which covers different ground to the developers. In turn this grants them more time to be spent on the valuable pursuits that their specialised skills demand, like exploratory testing.

This skills overlap also shows through with architects who should contribute to the development effort too. Being able to code garners a level of trust that can often be missing between the developer and architect due to an inability to see how an architecture will be realized. This rift between the "classes" is not helped either when you hear architects suggesting that their role is what developers want to be when they "grow up".

Similar ill feelings can exist between other disciplines as a consequence of a buck-passing mentality or mistakenly perceived job envy. Despite what some programmers might believe, not every tester or system administrator aspires to be a developer one day either.

Irrespective of what their chosen technical role is, the one thing everyone needs to be able to do is communicate. One of the biggest hurdles for a software project is trying to build what the customer really wants and this requires close collaboration between them and the development team. A strict chain of command will severely reduce the bandwidth between the people who know what they want and the people who will design, build, test, deploy and support it. Not only do they need to be on the same page as the customer, but also the same page as their fellow team mates. Rather than strict lines of communication there should be ever changing clusters of conversation as those most involved with each story arrive at a shared understanding. It can be left to a daily stand-up to allow the most salient points of each story to permeate out to the rest of the team.

Pair Programming has taken off as a technique for improving the quality of output from developers, but there is no reason why pairing should be restricted to two team members of the same skill-set. A successful shared understanding comes from the diversity of perspectives that each contributor brings, and that comes about more easily when there is a fluidity to team interactions. For example, pairing a developer with a tester will help the tester improve their coding skills, whilst the developer improves their ability to test. Both will improve their knowledge of the problem. If there are business analysts within the team too this can add even further clarity as "the three amigos" cover more angles.

One only has to look to the recent emergence of the DevOps role to see how the traditional friction between the development and operations teams has started to erode. Rather than have two warring factions we've acknowledged that it’s important to incorporate the operational side of the story much earlier into the backlog to avoid those sorts of late surprises. With all bases covered from the start there is far more chance of reaching the nirvana of continuous delivery.

Debbie doesn't just do development any more than Terry just does testing or Alex only does Architecture. The old-fashioned attitude that we are expected to be autonomous to prove our worth must give way to a new ideal where collaboration is king and the output of the team as a whole takes precedence over individual contributions.