Thursday 6 November 2014

AOP: Attributes vs Functional Composition

A colleague was looking at the code in an ASP.NET controller class I had written for marshalling requests [1] and raised a question about why we weren’t using attributes. The code looked something like this:

public class ThingController : ApiController
{
  public HttpResponseMessage Get(string id)
  {
    return Request.HandleGet(() =>
    {
        return _service.FetchThing(id);
    });
  }
}

. . .

public static class RequestHandlers
{
  public static HttpResponseMessage HandleGet<T>(
                     this HttpRequestMessage request,
                     Func<T> operation)
  {
    try
    {
      // Do other stuff
      . . .
      T content = operation();
      return request.FormatResponse(content);
    }
    catch(. . .)
    {
      // Translate various known exceptions, etc.
    }
    . . .
  }
}

The HandleGet() method effectively wraps the internal call with a Big Outer Try Block and does some other stuff to help ensure that the response is suitably formatted in the face of an exception. At some point it will undoubtedly instrument the internal call and add some extra nuggets of logging so that we can see the thread ID, request causality ID, etc. I’ve used this technique for a long time and wrote about it in more detail a few years back in “Execute Around Method - The Subsystem Boundary Workhorse”.
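
By way of illustration, here is a rough sketch of the direction that wrapper might take once the instrumentation is layered in. It is only a sketch: the logging and timing calls are hypothetical placeholders, and I have used the stock CreateResponse() / CreateErrorResponse() extensions rather than our own FormatResponse() purely to keep it self-contained:

using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading;

public static class RequestHandlers
{
  public static HttpResponseMessage HandleGet<T>(
                     this HttpRequestMessage request,
                     Func<T> operation)
  {
    // Illustrative instrumentation only; the real code would use whatever
    // logging framework is already in play.
    var threadId = Thread.CurrentThread.ManagedThreadId;
    var timer = Stopwatch.StartNew();

    try
    {
      T content = operation();
      return request.CreateResponse(HttpStatusCode.OK, content);
    }
    catch (Exception e)
    {
      // The real code translates various known exceptions into specific
      // status codes; this is just the catch-all fallback.
      return request.CreateErrorResponse(HttpStatusCode.InternalServerError, e.Message);
    }
    finally
    {
      Trace.TraceInformation("GET handled on thread {0} in {1} ms",
                             threadId, timer.ElapsedMilliseconds);
    }
  }
}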

As my colleague rightly pointed out, the behaviour of the HandleXxx() methods is really a spot of Aspect Oriented Programming (AOP), something I also noted myself at the end of that blog post. The question was whether it would be better to lift all that out, use a set of attributes on the method instead, and wire them up into the HTTP pipeline. I guess it would look something like this:

public class ThingController : ApiController
{
  [TranslateErrors][InstrumentCall][LogCall]
  public HttpResponseMessage Get(string id)
  {
      return _service.FetchThing(id);
  }
}
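
By way of comparison, this is roughly what one of those attributes might look like as a Web API exception filter. It is a sketch of my own rather than code we actually have, and the domain exception is hypothetical:

using System;
using System.Net;
using System.Net.Http;
using System.Web.Http.Filters;

// Hypothetical domain exception, purely for illustration.
public class ResourceNotFoundException : Exception
{
}

public class TranslateErrorsAttribute : ExceptionFilterAttribute
{
  public override void OnException(HttpActionExecutedContext context)
  {
    // e.g. map a missing resource onto a 404 rather than let it surface as a 500.
    if (context.Exception is ResourceNotFoundException)
    {
      context.Response = context.Request.CreateErrorResponse(
                             HttpStatusCode.NotFound, context.Exception.Message);
    }
  }
}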

I wasn’t convinced; I don’t personally believe this makes the code considerably better. If anything it makes it harder to test, because all that logic has been lifted out of the call chain and pushed up the call stack, where it becomes part of the infrastructure, which I guess is the point. Yes, you can unit test the implementation of the attribute’s behaviour just as you can the HandleXxx() methods, but you can’t easily test it in-situ. Whilst testing happy paths via acceptance tests is easy, testing recovery from failure scenarios is often much harder, and that is where I would look to use component-level tests with fault-injecting mocks.
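
For the record, this is the sort of component-level test I have in mind. The fake service, the ResourceNotFoundException from the earlier sketch and the constructor-injected IThingService are all assumptions about the shape of the real code; the test framework is NUnit:

using System.Net;
using System.Net.Http;
using System.Web.Http;
using NUnit.Framework;

// Assumed abstractions; the real code base will have its own equivalents.
public class Thing
{
}

public interface IThingService
{
  Thing FetchThing(string id);
}

[TestFixture]
public class ThingControllerTests
{
  // A hand-rolled fault-injecting fake; a mocking framework would do equally well.
  private class FaultyThingService : IThingService
  {
    public Thing FetchThing(string id)
    {
      throw new ResourceNotFoundException();
    }
  }

  [Test]
  public void a_missing_thing_is_reported_as_not_found()
  {
    var controller = new ThingController(new FaultyThingService())
    {
      Request = new HttpRequestMessage(HttpMethod.Get, "http://localhost/things/42"),
      Configuration = new HttpConfiguration()   // needed if the handler negotiates content
    };

    var response = controller.Get("42");

    Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.NotFound));
  }
}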

Any argument about forgetting to wrap the internal call with HandleXxx() can largely be dismissed, as you could just as easily forget to add the attributes. Perhaps there would be some benefit if the behaviour were mandatory, so that you didn’t have to do anything at all on an API entry point, but testing any non-happy path that shouldn’t return a 500 would highlight a missing HandleXxx() right away. And if it’s a genuine concern that there are so many entry points one could be forgotten, I would suggest the more prominent smell is a lack of Separation of Concerns.

Most of the services I’ve worked on in recent years are small (by design) and so only have a handful of operations to implement. Those services might be fronted by ASP.Net, WCF, a message queue or even a classic socket listener, yet the pattern remains pretty much the same across all of them. One might argue, though, that whilst the pattern is similar it’s not idiomatic ASP.Net.
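
To show what I mean by the pattern surviving a change of transport, here is a sketch of the equivalent wrapper for a queue-based front end. The Reply type and its factory methods are entirely hypothetical; the point is only that the Execute Around shape stays the same:

using System;

// Hypothetical reply type for a message queue front end.
public class Reply
{
  public bool Succeeded;
  public string Body;

  public static Reply Success(string body)
  {
    return new Reply { Succeeded = true, Body = body };
  }

  public static Reply Error(string reason)
  {
    return new Reply { Succeeded = false, Body = reason };
  }
}

public static class MessageHandlers
{
  // The same Execute Around shape as HandleGet(), minus anything HTTP specific.
  public static Reply HandleRequest<T>(Func<T> operation)
  {
    try
    {
      T content = operation();
      return Reply.Success(content.ToString());
    }
    catch (Exception e)
    {
      // As before, translate failures into a well-formed reply rather than
      // letting the exception escape to the transport layer.
      return Reply.Error(e.Message);
    }
  }
}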

My main argument though is that I’ve watched in amusement as developers tie themselves in knots trying to debug services made all the more complex by frameworks that allow all manner of things to be injected here and there, with very little diagnostic support for when things go wrong. Although composing the functions might add a little extra cognitive load to the handler when you aren’t interested in the tangential concerns, it’s always there as an explicit reminder that servicing a request has a lot of potentially hidden costs around it. If you ever need to profile those concerns (which I have in the past) it is trivial to host the class in a noddy test harness and profile it the same as any other code. This is an aspect of code reuse and low coupling that I personally value highly.
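
For what it’s worth, the kind of noddy harness I mean is nothing more than a console application along these lines; the stub service and the controller’s constructor are the same assumptions as in the earlier test sketch:

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Web.Http;

public static class Program
{
  // A stub that always succeeds, so we only measure the handler plumbing.
  private class HappyThingService : IThingService
  {
    public Thing FetchThing(string id)
    {
      return new Thing();
    }
  }

  public static void Main()
  {
    var controller = new ThingController(new HappyThingService())
    {
      Request = new HttpRequestMessage(HttpMethod.Get, "http://localhost/things/42"),
      Configuration = new HttpConfiguration()
    };

    var timer = Stopwatch.StartNew();

    for (var i = 0; i != 10000; ++i)
      controller.Get("42");

    Console.WriteLine("10,000 requests handled in {0} ms", timer.ElapsedMilliseconds);
  }
}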

I’m sure once again that I’m just being a Luddite and should embrace more of the underlying technology. But as I get older I find I have a lower tolerance for wrestling with a complex tool when I know there is a simpler solution available. This quote from Brian Kernighan always sits in the back of my mind:

“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?”

I also know we can always refactor to a better position in the future without much effort, should the need arise, because the tests will already be in place, and it’s the lack of them that often makes this kind of refactoring so time consuming. This makes me wonder whether that’s what’s really behind my caution: confidence in my own tests?

 

[1] To me the ASP.Net controller classes are purely for marshalling data into the underlying service call and marshalling the data back out again for the response [2]. Actually, most of that is handled by the framework, so it’s really just exception translation and mapping responses, e.g. translating a missing resource to a 404. Whilst all the logic could go in the controller, I find Visual Studio web projects very clunky in comparison to consuming simple class libraries and console applications / services.

[2] As I wrote that I wondered why the same was not true of using model binders to aid in validating the input arguments. I concluded that the validation logic lives in the domain types, not the binder, which is merely an adapter. If required, it would be fairly trivial to replace the typed parameters with strings and invoke the conversion logic in the handler body. In fact this is exactly how it was done in the first place.
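
A sketch of what that would look like, with a hypothetical ThingId domain type doing the parsing (and therefore the validation):

public class ThingController : ApiController
{
  public HttpResponseMessage Get(string id)
  {
    return Request.HandleGet(() =>
    {
      // Conversion, and therefore validation, lives in the domain type
      // rather than a model binder; ThingId is a hypothetical example.
      var thingId = ThingId.Parse(id);

      return _service.FetchThing(thingId);
    });
  }
}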
