Saturday, 27 November 2010

Exception Safe By Design

I’ve recently gone back to my simple database query tool to add a few essential usability features and like any legacy codebase there were a couple of surprises in store. Now, I’m not making excuses for myself, but most of the code is the best part of 10 years old and like any journeyman I’ve learnt a lot since it was written. Also, the recent episodes of Matthew Wilson’s Quality Matters column in Overload (one of the ACCU journals) focused on exception handling and so these two examples became even more prominent.

Resource Management

One of the new features I added was to properly support non-query type statements, e.g. INSERT/DELETE (it was originally written to be a query tool but it’s been useful to run ad-hoc SQL as well). This involved me digging into the ODBC facade and eventually I came across this piece of code:-

CSQLCursor* CODBCSource::ExecQuery(...)
{

  CODBCCursor* pCursor = new CODBCCursor(*this);

  try 
  { 
    ExecQuery(pszQuery, *pCursor); 
  } 
  catch (const CODBCException&) 
  { 
    delete pCursor; 
    throw; 
  } 

  return pCursor;
}

Now clearly you’ll have to put aside your disgust at Hungarian Notation for a moment and focus on the real issue which is the sheer verbosity of the code. It’s also wrong. On the one hand I’ve been a good boy and only caught what I can knowingly recover from (a specific CODBCException type exception) but I’ve also missed the point that any other exception type will cause a memory leak. Clearly I should have used catch(…) because the exception type is irrelevant in this case.

Only I shouldn’t, because the try/catch block is entirely unnecessary. Although now I’m firmly in the camp that believes RAII is probably the single most important idiom within C++, I obviously wasn’t then. I try really hard to write code that ensures there are no memory leaks and I use tools like the Debug CRT Heap in Visual C++ and BoundsChecker to point out my mistakes. And yet this piece of code highlights how hard it is to do that, especially in the face of exceptions; if you don’t have 100% code coverage in your testing it’s just too easy to miss something. What is all the more ridiculous though is that the correct solution is less work anyway:-

CSQLCursor* CODBCSource::ExecQuery(...)
{

  Core::UniquePtr<CODBCCursor> pCursor(new CODBCCur…);

  ExecQuery(pszQuery, *pCursor);

  return pCursor.detach();
}

Assuming I can make no changes to the method signature, this ticks the box of being exception safe by design. The Core::UniquePtr smart pointer class is analogous to boost::scoped_ptr which ensures single ownership until it’s given away at the end. But there’s still a bad code smell which is that the method returns a bald pointer so the caller could still leak it. What we should return instead is the smart pointer object so that we explicitly transfer ownership to the caller:-

Core::SharedPtr<CSQLCursor> CODBCSource::ExecQuery(..)
{

  Core::SharedPtr<CODBCCursor> pCursor(new CODBCCur…);

  ExecQuery(pszQuery, *pCursor);

  return pCursor;
}

One unpleasant taste in the mouth is that because of the change in scope we cannot just return a Core::UniquePtr because that would require temporarily copying the object[*] so instead we use a smart pointer that allows shared ownership – Core::SharedPtr. Funnily enough this transfer of ownership from the callee back to the caller is actually right up std::auto_ptr’s street and for the knowledgeable this would be preferable from a performance perspective. But sadly you end up worrying about the caller and whether they know the pitfalls of auto_ptr and so perhaps you go back to the safer option.
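
For comparison, here’s a rough sketch of what the C++0X std::unique_ptr version mentioned in [*] below might look like – ownership is moved rather than copied, so neither the try/catch nor shared ownership is needed (the signature change and the explicit std::move are my own embellishments, not the actual codebase):-

std::unique_ptr<CSQLCursor> CODBCSource::ExecQuery(...)
{
  std::unique_ptr<CODBCCursor> pCursor(new CODBCCursor(*this));

  ExecQuery(pszQuery, *pCursor);

  return std::move(pCursor); // explicitly transfer ownership to the caller
}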

Side Effects

Another feature I wanted to add was to avoid requesting credentials when the connection doesn’t need them and this put me right into the client method responsible for opening a new database connection:-

void CAppCmds::Connect(. . .)
{

  try 
  { 
    OnDBDisconnect(); 
    . . . 
    App.m_oConnection.Open(strConnection); 
    . . . 
  } 
  catch(CSQLException& e) [#] 
  { 
    App.AlertMsg(TXT("%s"), e.m_strError); 
    OnDBDisconnect(); 
  }
}

This piece of code I found puzzling because I couldn’t obviously see why it called OnDBDisconnect() both at the start of the method and also in the exception handler; the connection will either open successfully or throw, and in the latter case we would have already cleaned up at the start. So I looked at what OnDBDisconnect() did to see why it might be necessary:-

void CAppCmds::OnDBDisconnect()
{

  if (App.m_oConnection.IsOpen()) 
    App.m_oConnection.Close(); 

  App.m_pCurrConn = NULL; 
  . . .
}

Nothing earth-shattering there – just run-of-the-mill cleanup code. And so I went back to the Connect() method and looked more closely. And then I spotted it:-

  OnDBDisconnect(); 
  . . . 
  App.m_pCurrConn = App.m_apConConfigs[nConnection]; 
  . . . 
  App.m_oConnection.Open(strConnection);

There are side-effects that occur between the point when the previous connection is closed and when we attempt to open a new one. A quick scoot back through the version control repository[+] to see what changes had been made to this method showed that it was checked in as a fix for another problem after failing to open a connection. Clearly my mindset was “fix the broken exception handling” instead of “fix the poorly written method”. The first rule of writing exception safe code is to perform all side-effects off to one side and then use exception safe mechanisms like std::swap()[^] to “commit” the changes.

So, for my example code above we need to introduce some temporaries to hold the unstable state, perform the Open and then persist the new state:-

  OnDBDisconnect(); 
  . . . 
  Core::SharedPtr<…> pConfig(App.m_apConConfigs[…]); 
  Core::SharedPtr<…> pConnection(new CODBCConnection); 
  . . . 
  pConnection->Open(strConnection); 
  . . . 
  App.m_oConnection.swap(pConnection); 
  App.m_pCurrConn.swap(pConfig);

Taking a cue from the preceding section about resource management I have used my Core::SharedPtr[$] smart pointer type as the connection will live on after this method returns. The catch handler remains but only to report the error message to the user, no cleanup now needs to be performed.

But we can do even better. If we take a closer look at what happens as a result of using swap() we see that the call to OnDBDisconnect() may actually be superfluous. In fact it may be better not to do it at all. The call to swap() effectively means that the new connection will live on as App.m_oConnection and the old one will get destroyed at the end of the scope (the try/catch block). Now if we look at what OnDBDisconnect() does:-

void CAppCmds::OnDBDisconnect()
{

  App.m_oConnection.Close(); 
  App.m_pCurrConn = NULL;

  UpdateScriptsMenu(); 
  UpdateUI();
}

It closes the existing connection and resets m_pCurrConn (these both now happen automatically near the end of Connect()) and it resets the UI. If we look at the tail end of the Connect() method we see this:-

  UpdateScriptsMenu(); 
  UpdateUI();

What this all means is that if an exception is thrown then the user loses their current connection as well. If we perform all the work off to the side and don’t modify any state (i.e. don’t call OnDBDisconnect()) up front then the user at least gets to keep their current connection as a consolation prize. If the new connection does work then we just save the new state and let the old one disappear at the end of the try/catch block. Is that just being clever or an attempt to write the simplest code possible? A topic for another day…

[*] I need to go back and read up on the new std::unique_ptr type in C++0X as I’m sure this was designed for this kind of scenario – and also the std::vector<std::auto_ptr<T>> gotcha.

[#] Using an exception to signal failure when opening a database connection falls right into the “Using Exceptions For Flow Control” category in Matthew Wilson’s Quality Matters column and would almost certainly fall into the realm of “Not Exceptional” for Kernighan & Pike too. In this case I’m inclined to agree and will be adding a TryOpen() method at the next opportunity.

[+] Lone developers need Source Configuration Management (SCM/VCS) too. At least, if you want to perform any software archaeology to avoid regressions, that is.

[^] It would be great if std::swap() had a ‘nothrow’ guarantee as that would make it excellent for this idiom. But I’m not sure it is guaranteed by the standard - is it just a convention like not throwing exceptions from destructors? My Google skills found lots of chatter but no quote citing chapter & verse.

[$] Naturally you wouldn’t use the full type name everywhere but use a typedef instead. Given how prevalent the use of the SharedPtr type is (especially with types stored in containers) I tend to add a typedef right after a new entity class declaration for the smart pointer wrapper, e.g. class Xxx { . . . }; typedef Core::SharedPtr<Xxx> XxxPtr;

Tuesday, 23 November 2010

Reacquainting Myself With Sed & Awk

I’ve always felt somewhat uncomfortable using Visual Studio as my main text editor because there seem to be legions of programmers out there who swear by vi and emacs’ text processing prowess. For me (a C++/C# Windows server-side programmer) I find that day-to-day all I need is syntax highlighting, cursor key movement and basic search & replace; plus the basic code navigation aid “Go to definition”. But that’s pretty much it. So I posed a question on accu-general to find out if I’m some kind of programming Neanderthal or whether all these supposedly fancy text processing aids are actually used day-to-day. My straw poll suggests not and so I don’t feel quite so pathetic now.

Still, I had one of those moments the other day when I needed to perform a really simple text processing task – I needed to add 6 tab characters to the end of every line in a large (10’s of MB) text file. This kind of thing (modifying[*] a file that large) is somewhat unusual for me. Instinctively I opened the file in my (already running) Visual Studio and tried to work out how I was going to do it. The two options I would normally use are Find & Replace and a macro (which I then run repeatedly by holding the ‘run macro’ key down :-). Given the size of the file I didn’t have all day for the macro to run so it had to be a replace using a regex. The one I came up with replaced the \n character with \t\t\t\t\t\t\n and then I waited for VS to do its thing…

Whilst waiting I thought to myself how much easier that probably would have been with Sed and/or Awk. I could even hear the sniggers of those Unix programmers around me. But I haven’t used either tool in 20 years and I don’t have a copy installed by default because I’m running Windows of course. I remember trying ports of the major Unix tools way back when I started programming professionally and it was painful because of the fundamental difference between who is responsible for handling wildcard expansion (the shell in Unix vs the program in DOS). Plus the memory constraints of DOS meant they weren’t altogether reliable[#]. Time has obviously marched on but I’m still jaded by the experience of the very distant past and I’ve also discovered various alternatives that have made up the shortfall in many respects.

And then I ran into another classic issue – trying to work out which SQL scripts had been saved in ASCII and which in Unicode (SQL Server Management Studio annoyingly uses Unicode by default). This was causing my colleague’s (Cygwin[^] based) grep not to match various files. Once again I remembered the old Unix ‘file’ tool that attempts to detect what format a file is and whilst searching for a Windows port of that I came across the UnxUtils project on SourceForge. This looked to be a native port of the most commonly found Unix tools such as awk, sed, grep, egrep etc. It didn’t contain a version of ‘file’ but I got one of those from somewhere else. Anyway, I unzipped the UnxUtils package, added it to my PATH, and then waited for a moment to use them.

That moment came sooner than I expected. I needed to analyse our on disk trade repository and provide some idea of its size and daily growth. My instinct this time was somehow to use Excel as that is pretty much what the financial world uses for all its analysis and reporting. But we were running an old version that still has the 65K row limit and our repository was well over 100K files per day. So I Googled for an Awk based solution which (adapted for Windows) initially became:-

c:\> dir /s *.trade | gawk "{ sum += $3 } END { print sum }"

This was pretty close, but I needed to strip the chod out of the dir listing for the daily comparison. Also, recursively searching the repository whilst I was getting this right was going to be way too slow so I captured the output from dir once and applied a sprinkling of Sed to remove the rubbish:-

c:\> dir /s *.trade > filelist.txt
c:\> sed -n /^[0-9][0-9]/p < filelist.txt | gawk "{ sum += $3 } END { print sum }"

And that was it for the current repository size. The day-on-day growth just involved extra calls to Sort and Diff to trim out the common items - it was a textbook example of pipeline processing!

Of course I got stuck for ages not knowing that Sed has a limited regex capability or that the ‘END’ in the gawk script had to be in uppercase. But it inspired me to put the effort in to learn the tools properly and so I popped onto Amazon and found a second hand copy of Sed & Awk (2nd Edition) by Dougherty & Robbins for under £10 and also picked up the O’Reilly Sed & Awk Pocket Reference for a couple of quid to leave in my drawer at work.

In the last couple of months I’ve read the majority of the book so that I feel more equipped to notice when this pair of tools would be a better choice. One of the most useful features I discovered about Sed (that I don’t believe VS can do easily) is to apply a transformation to repeating sections of a file (which are specified by an ‘address range’, i.e. a pair of regexes that denote the start and end of the region). I’ve also found it easier to use Awk to generate, say, SQL statements where I have the output from a query that then needs to be turned back into a bunch of stored procedure calls with the previous result set as the parameters. This is doable in VS with regex based search & replacing but it’s probably not the most efficient use of my time.
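
For example, something like the following (the markers and the substitution are entirely made up) applies the replacement only to the lines lying between each pair of BEGIN/END markers rather than to the whole file:-

c:\> sed "/^BEGIN/,/^END/ s/old/new/" input.txt > output.txt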

The only real problem I have is that the Sed regex syntax is a subset of the normal POSIX one, so some common features are not available. It’s also another regex syntax to learn (along with Visual Studio’s own warped variation).

By this point I suspect that I’m preaching to the converted and there is much nodding of heads. There has also probably been the quiet whispering of “the right tool for the job” and all that. So yes, I’m incredibly late to the party, but I’m most pleased with my re-acquaintance. I don’t expect to ever be writing multi-line scripts but even if it makes the one-liners easier it will have been ten quid well spent.

 

[*] I search and analyze much larger log files every day with BareTail, BareGrep and LogParser. It’s just that I don’t need to modify them or generate other files from them.

[#] When it came to searching my entire (400 MB) hard disk it was actually quicker to close Windows 3.1, reboot into Linux (0.91pl15 IIRC) and use find + grep, save the results and then reboot back into 16-bit Windows than it was to do the search directly in Windows (or via a DOS prompt) in the first place.

[^] I want my tools to behave naturally on Windows - I have no desire to simulate a Unix environment.

Monday, 27 September 2010

My ACCU 2011 Session Proposal - Using xUnit As a Swiss Army Testing Toolkit

I’ve decided to be brave and submit a session proposal for next year’s ACCU Conference. I know if I get accepted that means I’ll have to curtail the relentless late nights in the bar – at least until I’ve given my presentation. Anyway, the catalyst for my proposal was my blog back in March titled Integration Testing with NUnit. Not long after posting that I watched Paul Grenyer give a presentation to the London branch of the ACCU that touched on similar ground. My session is intended to take the idea further still and cover use of the xUnit style in other areas where perhaps custom test scripts and tools might have been written such as API exploration and verifying external systems.

Session Summary

Title: Using xUnit As a Swiss Army Testing Toolkit
Type: Tutorial/Case Study
Duration: 90 minutes
Speaker: Chris Oldwood

Speaker Biography

Chris started out as a bedroom coder in the 80s writing assembler on 8-bit micros; these days it’s C++ and C# on Windows in big plush corporate offices. His career has covered both shrink wrapped applications and in-house systems with the past 5 years focusing on grid-based distributed systems in the Finance industry. When not attached to a keyboard and screen he has a wife and four children to entertain, dips his toe in the local swimming pool and provides the commentary for the annual Godmanchester Gala Day Duck Race.

Session Description

Modern Unit Testing practices act as a conduit for improved software designs that are more amenable to change and can be easily backed by automation for fast feedback on quality assurance. The necessity of reducing external dependencies forces us to design our modules with minimum coupling which can then be leveraged at the module, component and subsystem levels in our testing. As we start to integrate our units into larger blocks and interface our resulting components with external systems we find ourselves switching nomenclature as we progress from Unit to Integration testing. But is a change in mindset and tooling really required?

The xUnit testing framework is commonly perceived as an aid to Unit Testing but the constraints that it imposes on the architecture mean that it is an excellent mechanism for invoking arbitrary code in a restricted context. Tests can be partitioned by categorisation at the test and fixture level and through physical packaging, leading to a flexible test code structure. Throw in its huge popularity and you have a simplified learning curve for expressing more than just unit tests.

Using scenarios from his current system Chris aims to show how you can use a similar format and tooling for unit, component and integration level tests; albeit with a few liberties taken to work around the inherent differences with each methodology.

Friday, 10 September 2010

Implementing Constants & Enumerations in a Database

How often have you come across this kind of code in a stored procedure:-

select abc
from   xyz
where  Status = 3

…or if you’re lucky (or perhaps unlucky if the comment is out-of-date):-

select abc
from   xyz
where  Status = 3 --Failed

One of the downsides of SQL (at least with T-SQL) is a lack of support for constants, and by extension enumerations.

No Magic Numbers

It’s long been a best practice that you avoid the use of so called Magic Numbers like the ‘3’ in the examples above in favour of some more symbolic name. This aids both readability and maintainability because you don’t have to go and look up what the number represents (in the case of an enumeration) and if the enumeration value needs to change you only have to change the relationship with the symbol and not the code where the symbol is referenced.

Local Constants

If you only need a constant for use within a single stored procedure then you can just use a normal variable:-

declare @Failed int
set     @Failed  = 3
. . .
select abc
from   xyz
where  Status = @Failed

This will make the latter SQL more readable and hopefully more maintainable - it should be far easier to grep for “Failed” than “3” if the numeric value does need to change[*].

Global Constants

Sadly the need for one-off constants is pretty rare. What is more likely is the need for a globally defined constant so that we don’t violate the DRY/SPOT principle by defining local constants in every procedure where it’s used. The best solution I could come up with[#] was to use a User Defined Function:-

create function Failed()
  returns int
as
begin
  return 3
end
go

This means that you can use it like so:-

select abc
from   xyz
where  Status = Failed()

However it’s not always possible to use a UDF in the same place that you can a constant or variable (I’ve yet to read up on why) and so sometimes in a stored procedure you still need to use a local variable as well:-

declare @Failed int
set     @Failed = Failed()
. . .
exec SomeOtherProc @Status = @Failed

Lack of Namespaces

Although some databases have support for schemas, they are often quite simplistic and not designed for nesting the way namespaces are in C++ and C#. This means that to avoid conflicts we need to resort to the old style of using some form of prefix or suffix on the name:-

select abc
from xyz
where Status = TaskStatus_Failed()

This becomes a little less distracting with enumerations as in C# you are forced to prefix the enumeration value with its type name anyway (unlike C++, although many people nest C++ enumerations within a separate struct to get the same effect, as the sketch below shows) so it’s not too unnatural to read. Of course you could still create a separate namespace, called say ‘constants’, in which you put all these UDFs if you felt that it helped matters.
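
The C++ nesting trick looks roughly like this (the type and values are invented for illustration); wrapping the enum in a struct forces every use to be qualified, just as C# does:-

struct TaskStatus
{
    enum Enum
    {
        Completed = 1,
        Failed    = 3,
    };
};

// usage: if (status == TaskStatus::Failed) ...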

An Alternative - Lookup Tables

The main thing I like about the UDF solution is that the function name means the symbol appears as code rather than data. One alternative I’ve seen is to use a small lookup table that has the integer value and a string for the symbol. This makes the numbers less ‘magic’ but it means you have to invoke a query to get the enumeration value:-

declare @Failed int
select  @Failed = EnumValue
from    TaskStatusEnum
where   EnumSymbol = 'Failed'

I’ll be honest and admit that I’ve done no performance comparisons between this and the UDF idea. It also still feels to me like you’re expressing the enumeration with data rather than code, but that’s probably nonsense.

Client-Side Use

The other benefit of using a UDF is that you can also use it client side should you happen to need to embed queries within your code so your magic numbers don’t leak further afield. We use proper enumerations in our C# code that are kept in sync manually but tested automatically as part of a build. Ideally we should generate the UDFs and C# enums from a single source, but for the moment the automated tests act as a nice safety net.
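
Such a test needn’t be complicated; a minimal sketch (assuming NUnit and ADO .Net, with the connection string, UDF and enum names all invented) might look something like this:-

[Test]
public void failed_enum_value_matches_database_udf()
{
    // TaskStatus is the client-side C# enum; TaskStatus_Failed() is the UDF.
    using (var connection = new SqlConnection(ConnectionString))
    {
        connection.Open();

        using (var command = new SqlCommand("select dbo.TaskStatus_Failed()", connection))
        {
            Assert.AreEqual((int)TaskStatus.Failed, (int)command.ExecuteScalar());
        }
    }
}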

 

[*] Of course in the real world you would never rely on the consistent use of constants like this and would actually search for all references to the table and/or column name instead, but hey this is theory :-)

[#] Remembering that I’m not a SQL expert by any stretch of the imagination.

Wednesday, 8 September 2010

What’s Exceptional Depends on Context

The latest edition of Overload (one of the journals from the ACCU) has another episode of Matthew Wilson’s excellent Quality Matters column. The latest topic is exception handling in which he makes a reference to The Practice of Programming by Kernighan & Pike – a book which I’ve only ever skimmed so far. Hence I thought I should read the relevant chapter and somewhat surprisingly I found the following statement:-

“Exceptions are often overused. Because they distort the flow of control, they can lead to convoluted constructions that are prone to bugs. It is hardly exceptional to fail to open a file; generating an exception in this case strikes us as over-engineering. Exceptions are best reserved for truly unexpected events, such as file systems filling up or floating-point errors.”

My gut reaction to that was “No it’s not – it depends on context”…

External Resources Always Fail

Messrs Kernighan & Pike are obviously far more knowledgeable and experienced than me so why did they make such a bold statement? Generalising somewhat, I presume that they were suggesting that any interaction with an external resource should be handled with due care as you should expect it to fail – after all we always encounter problems with files, sockets, databases etc. And as a general rule this makes perfect sense.

Full Environmental Control

But… In the world of in-house server-side software you are often completely in control of the entire environment in which your code runs. Even for in-house desktop software the environment may be a little more hostile but you can still ensure certain invariants. Taking the “opening a file” example again and given the following conditions, would failing to open a file still be considered unexceptional?

  1. The file exists (say we just created it and control its lifetime)
  2. The permissions on the folder are correct (our server installation script set them)
  3. The server has plenty of internal resources (memory, handles etc)
  4. The process has not suffered any unrecoverable errors

In this situation I would be on the side of it being exceptional. In my experience failures under these kinds of scenarios usually point to a system configuration error [1 or 2], which means it would never have worked, or the server/process itself has become unstable [3 or 4] due to some earlier (and hence unexpected) issue.

Recoverability

Of course the issue is perhaps moot as all errors fall into two categories – recoverable and unrecoverable. In the scenario described above the recovery is effectively manual[*] – fix the system configuration or restart the process/server. In the real world I can’t see how you could justify the cost of designing a system to automatically cope with some/all of those potential edge cases[+]. Even the last one (process instability) can be hard to handle fully via process recycling because too much badly written code masks nasty errors like Out Of Memory or Access Violations by sinking everything so you don’t get to react to it.

In contrast the times where I can see you might be able to recover from failing to open a file are when the choice of file is controlled by the user, such as in a desktop application, or when the contents of the file can be considered optional, such as user settings.

Applying the issue wider afield to databases and remote services we end up at clustering/replication as the obvious recovery option. Even so, given the general reliability of in-house networks and (non-blade) hardware these days, I would still consider it pretty exceptional for a failure to occur. By-and-large misconfiguration is the most common reason I see for failures involving resources.

Do Or Do Not (Or There May Be a Try)

The chapter of The Practice of Programming in question was about designing interfaces so do we assume that this message is aimed more at the writers of libraries than applications? Given that you don’t know the context in which your code is being called do you have to choose between error codes or exceptions? Would your interfaces be better or worse if you somehow supported both so that the caller could decide? I’m sure there are many developers out there right now wincing as they stare at their legacy COM-heavy codebases that have a mixture of wrappers generated by Visual C++’s #import feature where some methods throw and others return HRESULTs, some have a Chk prefix and others don’t. What I’m suggesting has more moderation to it…

In the .Net world there is a common pattern where methods by default succeed or throw an exception and a parallel set of methods named TryXxx return a bool and use an output parameter instead. The latter ‘overloads’ allow you to avoid the cost[#] of handling an exception, such as when parsing numbers and DateTimes that you can expect to be iffy (format-wise), but they will still throw other exceptions which are orthogonal to the task such as Out of Memory.
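
For instance (input, Process and ReportBadInput all being hypothetical stand-ins for whatever the caller has to hand and would do next):-

// int.TryParse signals the expected 'domain' failure (a badly formatted
// string) via its return value so no try/catch is needed, whereas
// int.Parse would throw a FormatException for the same input. Orthogonal
// errors, e.g. OutOfMemoryException, still surface as exceptions in both.
int value;

if (int.TryParse(input, out value))
    Process(value);
else
    ReportBadInput(input);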

Was It Just a Poke At Java?

Or maybe I’m just being too literal? The book was written over 10 years ago (back when exceptions were still not exactly mainstream) and it follows an example in Java so perhaps they were just having a pop at the choice Java had made. After all, at that time Java was still popular as a client-side technology – even outside the browser.

The fact that people like Matthew Wilson are still regurgitating this argument a decade later means there is still much to learn. I think the TryXxx pattern is a nice compromise as you get the flexibility of handling a domain failure without resorting to an unnecessary try/catch block and you also don’t accidentally mask orthogonal failures by ignoring the return code or catching an exception of too generic a type. I wonder whether this would also satisfy Kernighan & Pike or whether they would find it muddies the waters even further and stand by their original statement?

 

[*] I say manual because what normally seems to happen is that the system monitoring software gets bombarded with errors and someone has to make sense of them. Once that’s done it usually involves bouncing something…

[+] In the embedded world, where lives may be at risk, there are forces way outside the normal risk/reward trade offs of an in-house corporate system.

[#] I’m referring to both the performance and maintenance costs of using exceptions.

Wednesday, 18 August 2010

IDisposables Should Assert In Their Finalizer

Paradoxically[*] I’m finding that C# makes resource management much harder than C++. In C++ life is so much easier because you always have to deal with resource management and RAII is the tool that makes it a no-brainer – you either use stack based variables or heap allocated variables and a smart-pointer class such as scoped_ptr/shared_ptr. C# on the other hand makes resource management ‘optional’ through the use of the IDisposable interface[%].

What Is and Isn’t Disposable?

And that is my first problem: you don’t always know what does and doesn’t need disposing – you have to go and find out. Of course IntelliSense can help you out here a little but it still means checking every object to see if it has a Dispose() method[#]. The other alternative is to hope you get it right and rely on a static analysis tool like FxCop to point out those ‘occasional’ mistakes. Personally I’ve yet to get anything really useful out of FxCop outside the usual stylistic faux pas, which seem to be more the domain of StyleCop.

IDisposable is Viral

OK, that’s a little harsh on FxCop as I’m still learning to use it effectively. But after years of using C++ tools like Lint and BoundsChecker to watch my back I was more than a little disappointed. It does seem to point out if I aggregate a type that needs disposing and I haven’t implemented the Dispose pattern, which is nice. However Dispose() is like ‘const correctness’ in C++ - it’s viral - once you start correctly applying IDisposable to your types it then mushrooms and you now need to fix the types that aggregated those and so on.

Should Interfaces Inherit IDisposable?

This leads me to my first question – should interfaces inherit from IDisposable if you know that at least one implementation needs it? On the face of it the answer seems to be no as disposing is purely an implementation detail; but the whole point of interfaces is to avoid ‘programming to an implementation’. If the answer is no then the moment you cast down to an interface you hide the disposing behaviour. COM essentially has to deal with the same problem and its solution is to make AddRef() and Release() a fundamental requirement of every interface. Of course C# has RTTI built in through the use of the ‘as’ and ‘is’ keywords and so you can always attempt a cast to IDisposable from any other interface. However surely this way lies madness as your code would be littered with seemingly random ‘usings’ just in case an implementation later needed it. Here’s an example where this issue has cropped up most often to date…

We are using the Gateway Pattern extensively in our middle tier services to talk to other systems and so the gateway implementation often requires a WCF proxy which requires calling Close() (or a Socket, database connection etc). So, do I expose the disposing requirement through the gateway interface?

public interface ITradeGateway : IDisposable
{
. . .
}

public class TheirBackEnd : ITradeGateway
{
. . .
}

…or just implement IDisposable on the concrete type?

public interface ITradeGateway
{
. . .
}

public class TheirBackEnd : ITradeGateway, IDisposable
{
. . .
}
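
With the second option any consumer that is only handed an ITradeGateway has to probe for disposal at runtime using exactly the kind of speculative cast described earlier (gateway here being any ITradeGateway reference):-

var disposable = gateway as IDisposable;

if (disposable != null)
    disposable.Dispose();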

In my mind the first is more ‘discoverable’ than the second and it gives any static code analysis tools a fighting chance in pointing out where you might have forgotten to call Dispose(). Some might argue that at the point of creation you know the answer anyway as you have the concrete type so why does it matter? Well, I tend to wrap the creation of these kinds of services behind a factory method that returns the object via the intended interface so that you are not inclined to rely on the concrete type unnecessarily:-

public static class TradeGatewayFactory
{
    public static ITradeGateway CreateTradeGateway()
    {
        return new TheirBackEnd();
    }
}

Most of our factory methods are not quite this simple as they tend to take a configuration object that further controls the construction so that they can hide whether the ‘service’ is hosted in-proc (which is useful for testing and debugging) or remotely via a proxy[+].

Does It Matter If You Forget?

I mostly work on distributed systems where scalability and reliability are major concerns and perhaps I’m being overly pessimistic about the memory consumption of my services but I think it’s important that for certain kinds of resources that their lifetime is managed optimally[$]. At the moment I’m dealing with a managed wrapper over an in-house native library that is used to manipulate the key custom data container that the organisation traffics in. The underlying native implementation uses reference-counted smart pointers for efficiency and naturally this has leaked out into the managed wrappers so that many aspects of the API return objects that implement IDisposable. In fact it’s all too easy to use one of the helper methods (such as an index property) and find yourself leaking a temporary that you didn’t know about and bang goes your careful attempts to control the lifetime of the container, e.g.

// Good. 
using (var section = container.GetSection("Wibble"))
{
   var entry = section.Value; 
   . . . 
}

// Leaky. 
var entry = container["Wibble"].Value; 
. . .

I definitely think this scenario should be picked up by a static analysis tool and if I’ve read the blurb on FxCop 10.0 (that ships with VS2010) correctly then I have high hopes it will watch more of my back.

Assert In The Finalizer

So can we do anything else other than rely on tooling? I think we can and that would be to put a Debug.Assert in the Finalizer - after all if the object is being consumed correctly then you should honour the contract and call Dispose() at the relevant point in time. I think it’s safe to say that the Garbage Collector does a pretty good job of hiding most mistakes by running frequently enough, but as Raymond Chen pointed out on his blog last week (which was “CLR Week”) - you should not rely on the Garbage Collector running at all.

For my own types that don’t manage any native resources themselves it could be implemented like this:-

public class ResourceManager : IDisposable
{
#if DEBUG
    ~ResourceManager()
    {
        Debug.Assert(false);
    }
#endif
    . . .
    public void Dispose()
    {
        m_member.Dispose();

#if DEBUG
        GC.SuppressFinalize(this);
#endif
    }
}

So basically we’re saying that if Dispose() is not invoked then, when a Garbage Collection does finally occur, at least we’ll know we forgot to do it. Sadly we can’t rely on being able to inspect the members in the debugger to work out which instance of an object was forgotten because finalizers can be run in any order; but maybe we’ll get lucky.

If you start from a clean slate then you can write a unit or integration test that forces a full garbage collection right after exercising your code to ensure any errant finalizers run and get instant feedback about your mistake:-

[Test]
public void should_not_leak_resources()
{
    var consumer = new ResourceConsumer();

    consumer.consumeResources();

    GC.Collect();
    GC.WaitForPendingFinalizers();
}

I’ll be honest and point out that I’ve put off actually trying this out in earnest until I have had time to investigate how to tap into the Asserting mechanism so that I can avoid hanging the test runner with a message box unless I’m running under the debugger. I’ve done this plenty of times with the Debug MS CRT (_CrtSetReportHook) so I’m sure there must be a way (I’ve only scratched the surface of the TraceListener class but I’m guessing it plays a part).
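
For the record my current best guess (untested, as I say) is that it involves replacing the DefaultTraceListener, whose Fail() method is what displays the message box, with one that throws instead:-

public class ThrowingTraceListener : TraceListener
{
    public override void Fail(string message, string detailMessage)
    {
        throw new InvalidOperationException("Assert failed: " + message);
    }

    // TraceListener's abstract methods; nothing useful to do here.
    public override void Write(string message) { }
    public override void WriteLine(string message) { }
}

// e.g. in the test assembly's setup:
// Trace.Listeners.Clear();
// Trace.Listeners.Add(new ThrowingTraceListener());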

Debug Builds – Not The Done Thing?

Back in an earlier post Debug & Release Database Schemas I suggested there must be times when a debug build is used in the C#/.Net world. Unlike the C++ world, this beast does not seem to be at all prevalent. In fact I’ve yet to come across any 3rd party (or in-house teams) promoting a debug build. Visual Studio and C# support the concept, but I wonder if teams only expect it to be used for internal testing? Jeffrey Richter briefly mentioned “Managed Debugging Assistants” in his book CLR via C# but I’ve yet to read up on how to use them effectively, i.e. tap into the mechanism programmatically so that I can log these failures whenever the services are running unattended; not just under the debugger.

[*] It’s not really a paradox as 15 years C++ vs 1 year C# isn’t exactly a fair comparison.

[%] Optional in the sense that not every type requires it.

[#] or Close() in the case of the thread synchronization types which is a nice inconsistency.

[+] I’m still not convinced by the use of an off-the-shelf Inversion of Control (IoC) framework as it only seems to save the odd line or two of code at the expense of managing another 3rd party dependency. I also much prefer creating immutable types that are fully constructed via the ctor rather than providing multi-phase construction via mutators, which IoC frameworks seem to require. Maybe I just don’t work on the kind of systems they’re aimed at?

[$] The obvious question here I suppose is “Why are you using C# then?”. And the answer [for now] is “because we are”. I was expecting this to scale up further than it has, but we can still scale out further if needs be.

Monday, 9 August 2010

Stored Procedures Are About More Than Just Performance

A colleague who is new to the team and working on a new GUI project asked me whether we should be using an ORM tool like Entity Framework for the impending GUI Data Access Layer. Our existing services all invoke stored procedures via manually[*] crafted ADO .Net code which is largely due to our inexperience with ORM tools, but also because we hardly have any DB code so it’s not a burden – unlike what the GUI is likely to need. This in turn led to a question about why we use stored procedures in the first place when the performance of modern SQL query optimisers is such that the benefits are less considerable. What I found interesting about this question was that we had never chosen to communicate with the database through stored procedures for performance reasons; on the contrary it has always been about creating an abstraction over the database to ease maintenance.

OO Data Models

I have been practising OO for long enough that I naturally see rows in a table as instances of objects and stored procedures as their methods. You can use OO techniques with procedural languages like C and SQL, it just takes a little artistic license in the naming scheme to work around some of the inherent limitations - although simulating virtual functions is a little trickier :-). There are a number of patterns available to deal with the ORM problem (the oldest ones I know of were in James Rumbaugh’s book Object-Oriented Modeling and Design from 1991) and in particular the problem of mapping polymorphic types to tables but I find the vast majority of cases fall into the simple Table == Type category.

A classic example would be a table of customers with ‘methods’ to add a new customer or find an existing one by its unique ID:-

create table Customer
(
    Id      int not null,
    Name    varchar(100) not null
)

exec Customer_Insert 1234, 'name'
exec Customer_FindById 1234

The most common scheme I’ve seen is to just prefix procedures with the ‘type’ name, which can seem a little odd to long-term SQL’ists and so sometimes it feels more natural to them to treat the prefix as a namespace and repeat the ‘type’ in the ‘method’ name:-

exec Customer_InsertCustomer 1234, 'name'
exec Customer_FindCustomerById 1234

This prefix also has the nice side-effect of causing related procedures to be grouped together within the file-system/VCS and database IDEs like SQL Server Management Studio (SSMS).

Of course for simple INSERTs and SELECTs this probably feels like overkill as all you’ve done is to wrap a one-line query and make more work for yourself. But where it starts to pay off is when your one-liner turns into a two or three-liner, requires tuning because the query optimiser needs help or you want to weave in other aspects such as debug code (which I covered recently in Debug & Release Database Schemas).
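
By way of illustration, even the simple find-by-id ‘method’ for the Customer table above only amounts to something like this to begin with (column list invented):-

create procedure Customer_FindById
    @Id int
as
    select Id, Name
    from   Customer
    where  Id = @Id
go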

Where’s the Public Interface?

In another post about database development, xUnit Style Database Unit Testing, I briefly referred to the ‘Public Interface’ of the database and what might constitute it. Obviously in SQL there is no direct equivalent of ‘public’ and ‘private’ that you can use to restrict access to objects on a logical level, but there is a real security mechanism instead that you can leverage. By following the “Principle of Least Privilege” and only granting access (e.g. EXECUTE permissions) on those objects that the client needs to use you can enforce a real public/private barrier.
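
In practice that just means granting EXECUTE on the public procedures to the client’s login (the names here are hypothetical) and nothing else:-

grant execute on Customer_Insert to ServiceLogin
grant execute on Customer_FindById to ServiceLogin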

Sadly this is not altogether intuitive from a maintenance perspective and can easily lead to the erroneous exposure of private procedures without some other convention to highlight the intended accessibility of the code. Even if you also follow the principle that you only write unit tests that target the public interface, it’s still not enough. One option would be to adorn the names of private procedures with another prefix or suffix, e.g.

exec __Customer_VerifyState @current, @intended
exec Customer_VerifyState_private @current, @intended
exec Customer_internal_VerifyState @current, @intended

Using a prefix disrupts the nice grouping in the IDE/file-system, whilst a suffix may be invisible if it’s too subtle. Placing it between the ‘Type’ and ‘Method’ names adds a little speed bump that could provide enough of a clue without adversely affecting readability, although it feels more intrusive than the suffix. If you are purely relying on naming conventions then a more intrusive adornment may be better as you can then use a tool like grep to find violations in your server and client code.

The only other option I see available would be to use schemas, e.g.

exec private.Customer_VerifyState @current, @intended

I’ve not played much with schemas, although on my current project we have already used them to distinguish between the core production code and other procedures that aid in unit testing (via a ‘test’ schema) or support utilities (via a ‘util’ schema). Our inexperience with schemas has also meant that we’ve run into a few minor permissioning issues with temporary tables and user-defined types that makes us a little wary of sprinkling them too liberally.

SQL Maintenance

All the teams I’ve worked in during the last dozen or so years have had developers whose main role is to maintain the database code. That doesn’t mean that they didn’t have other skills as well, but effectively they were hired to maintain the SQL side of the codebase. That also doesn’t mean that other developers did not contribute to the SQL code (although in some cases a more rigid structure was in place) as SQL is often a core skill, but the quality of SQL generated by the generalists may not be sufficiently high for the volume of data being manipulated.

I definitely put myself in the category of developers that can write basic SQL queries and stored procedures and can handle many table joins, but when it comes to concepts like nested sub-queries I start to question whether I’m writing an efficient query or not. I’ve also read around the subject enough to know that cursors have their place and it’s not usually in the kinds of queries I write so having a SQL expert around to turn my functional query into an efficient one really helps.

Of course the query optimisers built into SQL databases now are way more advanced than when I first started using them, but there still seems to be a need for SQL experts who can tweak a query to correct the optimiser’s bad judgment calls and give orders of magnitude performance gains, or add some hints to avoid deadlocks caused by lock escalation. This kind of maintenance is so much easier if the SQL is contained within stored procedures as you can leverage the faster deployment mechanism.

 

[*] Naturally much of the boiler-plate code has been refactored out by wrapping the underlying SQL classes and using Execute Around Method so that it has minimal excess baggage.

Tuesday, 27 July 2010

Where’s the PowerShell/Python/IYFSLH*?

An obvious question to many might be “why are you still writing batch files when PowerShell superseded them years ago?”. Even VBScript is way more powerful than cmd.exe and then there’s everyone’s pretender to the sysadmin throne – Python (and by extension IronPython in the Windows world).

Not Corporate Policy, Again

I’ve been following PowerShell since it was called Monad, and IronPython too during its pre-DLR days (especially after a pretty cool demo by Michael Foord at the ACCU Conference in 2008). Once again the problem in the corporate world is that Application Servers are often very tightly controlled and organisations generally don’t like teams jumping on every bandwagon when it has the potential to create conflicts and therefore downtime; I’m sure it was Raymond Chen[#] who said “If the solution starts with ‘First Install X’, then now you have two problems”. PowerShell, Python, Ruby, PERL etc. all involve installing runtimes and libraries which may themselves rely on other shared 3rd party components – DLL Hell is still a problem today. Given that many large organisations only recently (i.e. in the last couple of years) migrated to XP on the desktop and Windows Server 2003 for their servers you can see why the uptake of PowerShell has been slow – it’s just not been readily available in this environment; plus there’s a mountain of legacy COM and VBScript which still does the job perfectly well.

Drive-By Programming

As a freelance developer I also feel I have the need to act responsibly, which means I can’t go off writing code in whatever takes my fancy and then leave my client with a load of stuff they can’t support; of course some see this kind of behaviour as job security… For me to see a technology as worthy of production use means that there must be a critical mass of people (ideally within the team) who can support it. I’ve worked in a medium sized team where we prayed the two production PERL scripts didn’t fail because no one really knew what they did. I’ve also seen an intern write some production code in PowerShell during his 6 month rotation when there was little to no knowledge of it within the team. I raised the obvious question about who would be able to support it when his rotation ended, which prompted the usual discussion about code reviews. It also prompted a deeper one about whether we should try and standardise on one ‘glue’ language going forward and where (in the development process) it would be useful to introduce it to allow time to gain the necessary skills for its eventual promotion to production status. One alternative is for the management to be made aware of the impending cost and stump up for the training to acquire the critical mass; but training often seems to be the first to go when money gets tight.

Ok, that sounds overly dramatic I know because anyone can perform simple maintenance, such as replacing database connection strings, file-system paths, server names etc. But there is a very real problem there if something more serious goes wrong as it’s highly likely that there won’t be any tests in place to guide any significant development. You also have to factor in that full-time experienced developers often write this kind of code initially but as the team grows and a separate support team comes on board it is them that are faced with picking up this maintenance burden.

The Right Tool For the Job

We discussed right back at the start of my current project what we wanted to pick as our ‘auxiliary’ languages to make sure that we were all singing from the same hymn sheet. The server and client side coding was largely to be done in C#, but we were also going to need a language to act as the ‘glue’ to bind all the disparate Batch Processing tools together; plus we thought we might need another language to aid in development & testing. Once again PowerShell and Python seemed to be at the top of the list. But whereas on a previous project it was Python’s interop with COM and C++ that made it look attractive, it was PowerShell’s interop with .Net (due to the C# bias) that made it favourable this time; it was also part of the standard app server and desktop build now so we had no deployment concerns. Somewhat interestingly a few months later the department did a fact-finding mission to see how many people had Python skills as it seems that they wanted to promote the Python/IronPython approach to the development teams instead. I’m looking forward to seeing how this initiative pans out as we’ve only just started on this area of our system so we have very little to throw away.

Proving a Point

I fired up a command prompt on this 9-month-old netbook, which is fully patched to XP SP3, to knock up some PowerShell equivalents to the cmd shell examples I gave last time and what do you know… PowerShell isn’t installed!

 

[*] IYFSLH = Insert Your Favourite Scripting Language Here

[#] I’m sure it was him, but my Google Fu is unable to locate this quote.

Monday, 19 July 2010

Simple Database Build & Deployment With SQLCMD

The SQL Server client tools have shipped for years with a console based utility for executing SQL. Historically this was OSQL.EXE, but that has been replaced more recently by SQLCMD.EXE which supports pretty much the same syntax. It’s also available as a stand-alone component that requires no shenanigans like COM registration and so could be stored in the 3rd party area of your VCS and packaged along with your SQL scripts during deployment[+].

So, what’s the difference between osql and sqlcmd? Not a great deal when running simple scripts from batch files or the occasional one liner. The big one I see is the ability to pass environment variables into the script where they can then be referenced as “$(variable)” in the SQL. This is ideal for passing in flags, e.g. Debug/Release. Here’s a blog post that lists further differences.
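
For example, you can set a variable in the shell and branch on it inside the script (BUILD being an invented name here):-

C:\> set BUILD=Debug
C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i script.sql

-- inside script.sql:
if '$(BUILD)' = 'Debug'
    print 'running with debug instrumentation'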

Examples

If you want to see the full set of switches you can do “sqlcmd -?”; however I’m just going to point out the handful that I use regularly so the later examples make sense. You should be aware that SQLCMD’s switches are case-sensitive, so “-Q” is not the same as “-q”, but fortunately the behaviour of the ‘overloads’ is usually very similar.

-E Use Windows authentication[*]
-S <server\instance> The hostname (and instance) of SQL Server
-d <database> The database name
-Q <query> The query to execute
-i <filename> The name of a file containing the query to execute

So putting these to work, here are a few common examples, starting with dropping a database:-

C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -Q "drop database MYDB"

Creating a database:-

C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -Q "create database MYDB"

Executing a query stored in a file against the “MYDB” database:-

C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i "C:\SQL\My Script.sql"

Because it’s a command line tool you can pipe the output to other command line tools. Here’s how you can see what connections are open to your database (that are stopping you from dropping it perhaps):-

C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -Q "sp_who2" | findstr "MYDB"

If you want to toast those connections, this will do the trick (you need the SPID from the previous query results):-

C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -Q "kill <spid>"

SQL Code Structure

If you’re storing your SQL scripts in a VCS then you may have a structure somewhat similar to this:-

\Src
    . . .
    \ForeignKeys
    \Indexes
    \StoredProcs
    \Tables
    . . .

(An obvious alternative would be to store the object type in the extension, e.g. Customer.table.sql, Orders_Customer.ForeignKey.sql. Just so long as the objects are grouped in some way that you can easily discover via folders or file wildcards.)

One of the biggest problems when building a database is dealing with all the dependencies. Tables can have foreign keys into other tables which means that they need to be created in a particular order or SQL Server will complain. The alternatives are to not use foreign keys at all (which seems a little excessive) or split them out into separate scripts as shown above. There is still an order, but it is far more manageable because it’s at the object ‘type’ level not at the individual script level. But then we already had to consider that as we’re defining users, user-defined types, schemas etc.

What this means is that we can (re)build our database from scratch using the following plan:-

  1. drop existing database
  2. create new database
  3. . . .
  4. create Tables
  5. create Foreign Keys
  6. create Stored Procedures
  7. . . .
  8. apply permissions
  9. . . .

The drop and create[#] were shown above. The schema and object scripts could be done with a sprinkling of simple FOR magic:-

C:\> for /r %i in (*.sql) do sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i "%i"

Obviously this is too general as we have no control over the order that scripts will be executed, but with a little repetition we can:-

C:\> for /r %i in (Tables\*.sql) do sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i "%i"
C:\> for /r %i in (ForeignKeys\*.sql) do sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i "%i"
C:\> for /r %i in (StoredProcs\*.sql) do sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i "%i"

(Or if you choose to encode the object type in the filenames your ‘in’ wildcards would be (*.Table.sql), (*.ForeignKey.sql) etc.)

Also, don’t forget that %i at the command line becomes %%i when used in a batch file and to enclose the loop variable in quotes (“%i”) to cope with filenames with spaces in.

However, if you just have a big ball of scripts then a more painful way would be to create a text file containing a list of relative paths that defines the order. Naturally this would be horrendous to maintain, but maybe you have an easy way of generating it? Then you could use an alternate form of the FOR statement to read the text file and pass those lines (i.e. the filenames) to sqlcmd, e.g.

filelist.txt:-

Scripts\Customer.sql
Scripts\Orders.sql
Scripts\Customer to Orders.sql (the foreign keys)

C:\> for /f "delims=" %i in (filelist.txt) do sqlcmd -E -S MYSERVER\SQLEXPRESS -d MYDB -i "%i"

Once again you’ll need to watch out for filenames with spaces as the FOR /F statement will by default use the space and tab characters as field separators (which would cause the filename to be split across the variables %i, %j, %k etc) so we use “delims=” to treat each line as a whole filename.

Deployment From an Application Server

Your Continuous Integration server can happily invoke a script to run the automated tests on the build server using a local copy of SQL Express with the source code applied directly from the VCS view/working copy. But when it comes to Continuous Deployment to something like a DEV environment you really want to be packaging this code up (along with SQLCMD and the scripts) and installing from another box. This way you have a solid record of what was deployed (which hopefully ties up with the label in your VCS) and you’re also not tempted to patch code directly in the build server views.

I’ve found that a simple .zip file does the job nicely and there are plenty of command line tools for generating these, such as 7-Zip. You can use the same techniques you used to invoke SQLCMD to invoke the archiver to add the scripts to your package. The built-in Windows unzipper is painfully slow (and only available via the GUI I believe) so I would also publish the archiver alongside the SQL package so that you don’t have another server configuration issue to think about.

A Cautionary Tale

There is one rather nasty gotcha that you should be wary of with sqlcmd[^]. If the native SQL driver is not installed then you’ll not know about it – in the sense that you’ll get no errors reported. So, if your scripts just create objects and execute no queries you’ll be none the wiser. Running a simple query like this:-

C:\> sqlcmd -E -S MYSERVER\SQLEXPRESS -Q "select getdate()"

will result in no output whatsoever. I somewhat naively assumed that the native SQL driver would be part of our standard application server build, if not Windows itself. Fortunately you can still download an MSI containing just the driver from Microsoft Downloads.

 

[+] I don’t know what your licensing agreement says (IANAL) but I would hope it allows you to use it with your Continuous Integration server to allow it to run your automated unit tests and additionally bundle it along with your scripts in your package to act as your deployment mechanism.

[*] Does anyone still use username/password (or mixed mode) authentication with SQL Server? I would expect anyone using SQL Server to be a Windows ‘shop’ and therefore be all Active Directory and single sign-on these days…

[#] It’s possible that for your UAT and Production databases your “create database” will involve custom hardware configuration stuff like where you put the data files and other careful tuning, but for automated unit testing purposes letting SQL Server use the defaults is perfectly adequate.

[^] I don’t think this affects osql because it uses a different transport (db-library?).

Monday, 12 July 2010

Unit Testing Evolution Part I - Structure

[This post has been under construction for over 6 months. Every time I think I’ve got it sorted I gain some new insight from somewhere and feel the need to re-evaluate my strategy. The last of these was Kevlin Henney’s ACCU London session which covers similar ground to this post but naturally in a far more intelligent fashion; this post essentially covers my journey towards similar conclusions.]

I’ll be perfectly honest and admit that the unit testing strategy employed within my personal class libraries can leave much to be desired. As I indicated in my previous post “My Unit Testing Epiphany” I see this whole area of the development process as a Rite of Passage. Just like many others I have written my own Unit Testing framework, mostly because I saw the standard xUnit ports as too verbose, but also because I wanted to derive the practices myself from first principles. I’ve no doubt also made many of the same errors and trodden the same paths in a mistaken belief that I can somehow ‘short-circuit’ the process by using a simpler style. Sadly “Real Time” has to pass for many of the consequences of these actions to sink in and the lessons to be learnt.

Genesis - Test Chaos

Initially I saw the essence of unit testing as the act of having tests in place to verify behaviour. In effect it was the regression testing aspect that grabbed me most due to my previous faux pas. Hence I slipped into the habit of just writing a bunch of asserts, which works adequately for very simple free functions with a blindingly obvious interface. The following examples are tests from my template-based string parsing function:-

TEST_EQUALS(parse<int>("1234"), 1234);
TEST_EQUALS(parse<int>("-1234"), -1234);
TEST_THROWS(parse<int>("9999999999"));
. . .
TEST_THROWS(parse<double>("XYZ"));
. . .

They should require no explanation as they have no setup, teardown or state. They are grouped by the template instantiation <type> and the tests run in a fairly obvious order: correct input first, then the edge cases like MAX and MAX+1, additional whitespace, and finally malformed input. Every time I’ve gone back to read them I can understand what is going on in little to no time.

However, the same cannot be said for my CommandLineParser class tests. These fall straight into the “write-only” category because they only made sense at the time I wrote them. Here is the simplest example:-

{
    static const char* args[] = { "test", "--switch" };

    CommandLineParser parser(args);

    TEST_TRUE(parser.isSwitchSet(FLAG_SWITCH));
}

For testing class methods I used scopes to break the tests up rather than writing separate functions, as I felt that writing a function came with the burden of coming up with a useful test function name[#]. I knew that each test needed to avoid side effects, so using scopes also allowed me to ensure the test object was recreated each time, and meant I could use copy-and-paste to write the tests because the variable names could be reused without clashing. At the time I really believed this was a more efficient strategy. Probably the worst CommandLineParser test I wrote was this:-

{
    static const char* args[] = { "test",
            "--single", "value", "/s:value",
            "--multi", "value", "value", "-f" };

    CommandLineParser parser(args);

    TEST_TRUE(parser.isSwitchSet(SINGLE_SWITCH));
    TEST_TRUE(parser.isSwitchSet(MULTI_SWITCH));
    TEST_TRUE(parser.isSwitchSet(FLAG_SWITCH));
    TEST_TRUE(parser.arguments[SINGLE_SWITCH].size() == 1);
    . . .
}

This single test highlights the biggest problem with my strategy - that it’s impossible to tell what features are intentionally being tested. This means that changing it would be incredibly difficult as I may accidentally remove unobvious test paths; no wonder I can’t think of a name for the test method! It’s the Single Responsibility Principle again - each test should only test one feature at a time[#].

Just as an aside these are the behaviours that the test was verifying (I think!):-

  1. A switch can be provided using the Windows “/switch” style
  2. A switch can be provided using the Unix “--switch” style (with full switch name)
  3. A switch can be provided using the Unix “-s” style (with short switch name)
  4. A switch and its value can be separated by whitespace
  5. A switch and its value can be separated by a “:” if provided using the Windows “/switch” style
  6. A switch can be defined as single value but multiple occurrence (hence --single)
  7. A switch can be defined as multi value (hence --multi)
  8. A switch defined as allowing multiple values consumes all values up to the next switch or end of input.

As a final note, the original test did not contain the trailing “-f” parameter. It was added when I discovered the implementation contained a bug in the handling of switches that took a list of values. In the rush to get a test in place through which I could fix the code I committed the unforgivable crime of extending an existing test with yet more responsibilities.

Using Comments to Document Tests

My initial reaction when revisiting these tests some time later was to use the natural tool for documenting code – the comment. The hardest part about this refactoring was working out what each test was supposed to be doing. However, once again I was pleased that I had avoided introducing separate functions for each test, as I felt that naming functions would be far more limiting than a free text comment and redundant in many simpler cases. Here’s how that earlier test now looked:-

{
    // Boolean switches shouldn't need a value.
    static const char* args[] = { "test", "--switch" };

    CommandLineParser parser(args);

    TEST_TRUE(parser.isSwitchSet(FLAG_SWITCH));
}

Strangely I still thought it was ok to test multiple features because I believed the comments would add the necessary documentation for when the time came to deal with a test failure. Of course sometimes the comments still ended up being terse because I struggled to describe succinctly what it was doing[#].

The comments wouldn’t appear on the test output without more macro magic. But I wasn’t really bothered because if a failure does occur the first thing I do is to run the test runner under the debugger and wait for the failing ASSERT to trigger it. Eyeballing the test and code seems pointless when one of the key benefits of unit tests is that they run very quickly.

Decomposing the Big Ball of Mud

As the collection of tests started to grow I ran into a few niggling problems. The first was that each test was not protected by its own try/catch block, so one stray exception caused the entire set of tests to fail. Although not a massive problem, because I would fix every failure ASAP, it did feel as though each test should be better isolated. The second was that as I started practicing TDD I felt it would be beneficial to use the command line to restrict the set of tests run on each change to just those of the class I was developing. I wasn’t sure, but I suspected (for debugging reasons) that I may even want to limit the run to an even smaller subset of tests. This led to the following new style of test definition:-

TEST_SET(StringUtils)
{
    const tstring MAGIC_VALUE = "<something common>";
    . . .
TEST_CASE(StringUtils, intFormatAndParse)
{
    TEST_TRUE(Core::format<int>(INT_MIN)
              == TXT("-2147483648"));
    TEST_TRUE(Core::parse<int>(TXT(" -2147483648 "))
              == INT_MIN);
    . . .
    TEST_THROWS(Core::parse<int>(TXT("1nv4l1d")));
}
TEST_CASE_END
. . .
TEST_CASE(StringUtils, skipWhitespace)
{
    . . .
}
TEST_CASE_END
. . .
}

The TEST_CASE/_END macros added a try/catch block around each test so each set would always run to completion now. Although you might think it, TEST_CASE does not declare a function, it still just defines a scope which means that you can declare all your common immutable test variables at the top and they will be useable throughout the entire set of tests – still no separate SetUp/TearDown needed. This also gets around the restrictions in C++ when declaring and initializing static const members in class declarations[*].

The example above shows the new smaller grouping of tests. The TEST_CASE macro takes two arguments which I thought could be used as subcategories to help both in the filtering at runtime and to aid in the format of the output. I still wasn’t sure that I wouldn’t need to turn these into real functions at some point (as that’s what most other frameworks seem to do), so the TEST_CASE arguments are short and still adhere to class/function conventions. The multi-paradigm nature of C++ meant that whatever naming scheme I came up with I felt it would need to suit both free function and class/method based uses.

One other thing this refactoring sorted out was a minor problem of accounting. I counted test results based on the number of passed and failed asserts, rather than the number of actual test cases as individual cases weren’t defined originally. This meant the summary at the bottom could give you a much exaggerated view of how many tests you were writing (not that “number of tests” should be seen as a good measure of test coverage).

No Assert Macro Changes

I originally created only three assert macros out of laziness. I decided that the bare minimum would be TEST_TRUE() and TEST_THROWS(); I added TEST_FALSE() purely for symmetry. I know other test frameworks have a plethora of assert functions and support fluent syntaxes for writing clearer asserts, but with C & C++ you can capture the assert expression with the pre-processor, which is also powerful. Given that I never really paid much attention to the output for each test anyway, I was happy just dumping the raw assert expression out. Plus, as I said before, I go straight for the debugger the moment a failure occurs instead of staring at the console output looking for inspiration.

 

And so this is how things stayed whilst I started getting the hang of TDD at which point some more ‘test smells’ started developing and that’s the focus of part II.

 

[#] This will be the focus of part II.
[*] I’m not sure what the exact rules are but I believe you’re effectively limited to a subset of the primitive types, e.g. float and double are also excluded.

Monday, 28 June 2010

Windows Batch File Template

The following code is the template I have started using for new Windows batch files[#]:-

@echo off
rem ****************************************
rem Script template
rem ****************************************

:help_requested
if /I "%1" == "-?"     call :usage & exit /b 0
if /I "%1" == "--help" call :usage & exit /b 0

:validate_args
if "%1" == "" call :missing_arg & exit /b 1

:body
echo SUCCESS...

:success
exit /b 0

:missing_arg
    echo ERROR: One or more arguments are missing
    echo.
    call :usage
goto :eof

:usage
    echo USAGE: %~n0 [option ^| options...]
    echo.
    echo e.g. %~n0
    echo.
goto :eof

Once again plenty of credit must go to Raymond Chen, as his blog has shown some pretty neat little tricks that have challenged my opinions on how little I thought you could do with batch files. Also there is a fantastic web site run by Rob van der Woude that tells you everything you could ever want to know about batch file programming, and more. Although I’ve written makefiles in the dim and distant past, virtually all the build and deployment scripts I’ve written more recently have been batch files, as I prefer the imperative style of this “language” to the more functional nature of makefiles; plus I’ve never really needed that amount of power as Visual Studio does most of the heavy lifting. Anyway, here are a few notes about the script that might be useful…

Labels Instead of REMarks

The REM statement is the built-in tool for writing comments, but personally I have found that you can also use a label for much the same purpose without the extra noise of the ‘rem’ prefix. OK, so you can’t have spaces in the label (you have to use underscores), but as a long-time C/C++ programmer I’m used to this style. The other reason for using them this way is that once I start adding functionality I naturally find myself needing flow control anyway, and the labels are already there.

Specifying an Exit Code

The EXIT statement, with the /B switch, can be used to specify an exit code. This switch is essential if you want to invoke one batch file from another, such as in a build or deployment process; without it you will exit the command interpreter - even if you’ve been invoked via the CALL statement. By default I tend to just use a single error code “1” unless I’m going to perform some custom error recovery in a calling script.

Chaining Statements

To aid readability (usually in error handling) I sometimes find it more succinct to chain a CALL and EXIT statement on one line rather than on separate lines, e.g.

if "%1" == "" call :missing_arg & exit /b 1
if "%2" == "" call :missing_arg
& exit /b 1
if "%3" == "" call :missing_arg & exit /b 1

The use of a single “&” means the EXIT will be invoked unconditionally. There are other forms such as “&&” and “||” which allow you to execute subsequent statements based on the result of the previous one, i.e. they’re the moral equivalent of “IF [NOT] ERRORLEVEL 1”.
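For example (the script names here are entirely made up):-

rem Only run the deployment if the build succeeded.
call Build.cmd && call Deploy.cmd

rem Report the failure of the previous statement.
call Deploy.cmd || echo ERROR: deployment failed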

Path Handling

You can parse paths stored in arguments and variables easily with the “%~” syntax. The one I’ve used in the template above, “~n”, gives you just the script filename (variable %0 is the full path of the script, a la argv[0] in C). For some strange reason you need to look up the help for the FOR statement to find the full details of the path handling syntax[*].
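For instance, here are a few of the other modifiers applied to the script’s own path (%0):-

rem %~f0 - the full path, %~dp0 - the drive and folder,
rem %~n0 - the name only, %~x0 - the extension.
echo Full path: %~f0
echo Folder:    %~dp0
echo Name:      %~n0
echo Extension: %~x0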

Invoking Functions

One of the most useful features I’ve only recently come to appreciate is the ability to invoke “functions” within the same script. These are defined by a label and effectively return through use of the “GOTO :EOF” statement. You can also pass arguments to the function, which appear as the variables %1 - %9 just like when the script itself is invoked. To make the appearance of scripts with functions a little more readable I tend to indent the code between the label and the return.
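A minimal sketch (with a made-up function name) looks like this:-

@echo off

call :greet World
call :greet ACCU
exit /b 0

:greet
    echo Hello %1
goto :eof

Note the EXIT /B before the label; without it the main body would fall straight through into the function.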

Escaping The Pipe & Redirection Characters

A common format for usage strings is to use the pipe character “|” when listing alternate options and the less-than/greater-than “<>” characters to enclose argument names. Naturally these have another meaning to the command interpreter, so you have to escape them with the hat “^” symbol.
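For example, this usage line (the switch names are made up) needs all three escaping:-

echo USAGE: %~n0 ^<file^> [--quiet ^| --verbose]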

Testing Batch Files Prior to Automation

One thing you need to test before invoking your script from automation tools like Task Scheduler or TeamCity is the return code. If you forget the /B switch on the EXIT statement, or you get your flow control wrong and just drop through somewhere, you can easily end up returning the default value of 0, which will be treated as success, and your Continuous Integration process will be lying to you. The following two-line script (RunScript.cmd) can be used to invoke another script and display its exit code:-

call %*
echo ExitCode=[%ERRORLEVEL%]

C:\>RunScript.cmd MyScript.cmd arg1 arg2
. . .
ExitCode=[0]

If you don’t see the exit code then you’ve probably invoked EXIT without the /B switch somewhere.

Command Extensions

Although this template uses some command extensions that were added to more “recent” versions of Windows (i.e. 2000+) they are all enabled out-of-the-box and so you don’t need to do anything special to start using them. In fact Delayed Expansion of variables is the only extension I’ve used so far that has required some explicit action – SETLOCAL ENABLEDELAYEDEXPANSION.
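As a sketch of where you’d need it, a variable that is modified and read within a FOR loop’s body has to be expanded at execution time with ! instead of %:-

@echo off
setlocal enabledelayedexpansion

set COUNT=0
for %%i in (*.sql) do (
    rem !COUNT! is expanded each time round the loop, whereas
    rem %COUNT% would be expanded just once when the block is parsed.
    set /a COUNT+=1
    echo !COUNT!: %%i
)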

 

[#] All the build scripts that I currently have in my personal codebase are ancient and were thrown together long before I really had a clue about what you could do inside a batch file, so don’t go looking there for concrete examples… yet.

[*] C:\> HELP FOR