There is an interesting development on my current project involving a minor technology choice that I’m keen to see play out because it wouldn’t be my preferred option. What makes it particularly interesting is that the team is staffed mostly by freelancers, and so training is not in scope per se for those not already familiar with the choice. Will it be embraced by all, by some, only supported by its chooser, or left to rot?
Past Experience
We are creating a web API using C# and ASP.Net MVC 4, which will be my 4th web API in about 18 months [1]. For 2 of the previous 3 projects we created a demo site as a way of showing our stakeholders how we were spending their money, and to act as a way of exploring the API in our client’s shoes to drive out new requirements. These were very simple web sites, just some basic, form-style pages that allowed you to explore the RESTful API without having to manually crank REST calls in a browser extension (e.g. Advanced Rest Client, Postman, etc.).
Naturally this was because the project stakeholders, unlike us, were not developers. In fact they were often middle managers and so clearly had no desire to learn about manually crafting HTTP requests and staring at raw JSON responses - it was the behaviour (i.e. the “journey”) they were interested in. The first demo site was built client-side using a large dollop of JavaScript, but we ran into problems [2], and so another team member put together a simple Razor (ASP.Net) based web site that suited us better. This was then adopted on the next major project and would have been the default choice for me based purely on familiarity.
Back to JavaScript
This time around we appear to be going back to JavaScript, with the ASP.Net demo service only really acting as a server for the static JavaScript content. The reasoning, which I actually think is pretty sound, is that it allows us to drive the deliverable (the web API) from the demo site itself, instead of via a C# proxy hosted by the demo site [3]. By using client-side AJAX calls and tools like Fiddler we can even use it directly as a debugging tool, which means that what we’ve built is really a custom REST client with a more business-friendly UI. This all sounds eminently sensible.
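To make that concrete, below is a minimal sketch (illustrative only, not the project’s actual code) of the kind of client-side call such a demo page might make; the endpoint, payload fields and element IDs are all hypothetical, and jQuery is assumed simply because it is one of the libraries mentioned later.

    // Illustrative sketch only - the endpoint, fields and element IDs are made up.
    // The demo page calls the web API directly from the browser, so the request
    // captured in Fiddler is the real one rather than one routed via a C# proxy.
    $('#create-order').on('click', function () {
        var request = {
            customerId: $('#customer-id').val(),     // hypothetical form field
            quantity: Number($('#quantity').val())   // hypothetical form field
        };

        $.ajax({
            url: '/api/orders',                      // hypothetical API resource
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(request)
        })
        .done(function (response) {
            // Show the outcome in business-friendly terms rather than raw JSON.
            $('#result').text('Order created: ' + response.orderId);
        })
        .fail(function (xhr) {
            $('#result').text('Request failed: ' + xhr.status);
        });
    });

Because the browser issues the request itself, whatever shows up in Fiddler is exactly what the web API received, which is what lets the demo site double as a debugging tool.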
Skills Gap
My main concern, and I had voiced this internally already, is that as a team our core skills are in C# development, not client-side JavaScript. Whilst you can argue that skills like JavaScript, jQuery, Knockout, Angular, etc. are essential for modern-day UI development, you should remember that the demo site is not a deliverable in itself; we are building it to aid our development process. As such it has far less value than the web API itself.
The same was true for the C#/Razor based web site, which most of us had not used before either. The difference, of course, is that JavaScript is a very different proposition. Experience from my other team’s first attempt at using it was not good - we ended up wasting time sorting out JavaScript foibles, such as incompatibilities with IE9 (which the client uses), instead of delivering useful features. The demo site essentially became more of a burden than a useful tool. With the C#/Razor approach we had no such problems (after adding the meta tag in the template for the IE9 mode), which meant making features demonstrable actually became fun again and allowed the site to become valuable once more.
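As a purely illustrative example (not necessarily the foible we actually hit), IE9 only defines the console object while the F12 developer tools are open, so a stray console.log call that works while you are debugging can throw once the tools are closed; a common defensive shim looks like this:

    // Illustrative only - a well-known IE9 quirk: window.console does not exist
    // until the developer tools have been opened, so stray console.log calls throw.
    if (typeof window.console === 'undefined') {
        window.console = { log: function () {} };   // no-op fallback
    }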
The Right Tool
I’m not suggesting that Razor in itself was the best choice; for all I know a WinForms approach may have been equally successful. The same could be true of the JavaScript approach; perhaps we were not using the right framework(s) there either [4]. The point is that ancillary technology choices can be more important than the core ones. For the production code you have a definite reason to be using that technology and therefore feel obligated to put the effort into learning it inside and out. But with something optional you could quite easily deny its existence and just let the person who picked it support it instead. I don’t think anyone would be that brazen about it; what is more likely is that only the bare minimum will be done and, because there are no tests, it’s easy to get away without ensuring that part of the codebase remains in a good state. Either that, or the instigator of the technology will be forever called upon to support its use.
I’ve been on the other end of this quandary many times. In the past I’ve wanted to introduce D, Monad (now PowerShell), F#, IronPython, etc. to a non-essential part of the codebase to see whether it might be a useful fit (e.g. build scripts or support tools initially). However, I’ve only ever wanted to do it with the backing of the team, because I know that as a freelancer my time will be limited and the codebase will live on long after I’ve moved on. I’ve worked before on a system where there was a single Perl script used in production for one minor task and none of the current team knew anything about Perl. In essence it sat there like a ticking bomb waiting to go off, and no one had any interest in supporting it either.
As I said at the beginning, I’m keen to see how this plays out. Having already picked up books on JavaScript and jQuery first time around I’m not exactly enamoured with the prospect, but I also know that there is no time like the present to learn new stuff, and learning new stuff is important in helping you think about problems in different ways.
[1] My background has mostly been traditional C++ based distributed services.
[2] When someone suggests adding unit tests to your “disposable” code, you know you’ve gone too far.
[3] This proxy is the same one used to drive the acceptance tests and so it didn’t cost anything extra to build.
[4] The alarming regularity with which new “fads” seem to appear in the JavaScript world makes me even more uncomfortable. Maybe it’s not really as volatile as it appears on the likes of Twitter, but the constant warnings I get at work from web sites about using an “out of date browser” don’t exactly inspire me with confidence (See “We Don’t Use IE6 Out of Choice”).
You mentioned F# in passing.
Interestingly I'm using F# to try and counter many of the problems you've gone into here.
The main reason I think it can do so (it's too early to say definitively yet, but the signs are good) is that F# is one of a new breed of "scalable" languages (which is where Scala gets its name; Swift, for Apple platforms, is even more ambitious in this regard).
A scalable language, like F#, can be used from the lowest levels (where you'd typically use a systems language like C++ - and these days I'd include C# in that definition, although you probably wouldn't write an OS in it) right up to the highest levels - the domain of scripting languages.
Because of this range you can use it in more places. If you're going to use auxiliary languages it makes sense to limit them to just one, if possible.
Over time I've been using F# to replace Python scripts, Perl scripts, batch files, JavaScript, XML configuration files, NAnt files, C# utilities and regression test tools, parsers and code generators - right up to peripheral parts of the production codebase itself.
Of course it helps that F# is an incredibly productive language that eliminates whole classes of bugs - and makes many others less likely. Scripts look dynamic, yet benefit from static type checking - and while its feature set marches on with innovations like Type Providers, the core language is stable and mature - as are the frameworks (which have the advantage of being familiar to C# developers).