Sunday, 29 November 2009

From SourceSafe to ClearCase to Subversion

My latest contract sees me working with a different Version Control System; this time it's Subversion. Quite how I've managed to avoid learning anything about Subversion, given its prevalence within the Open Source community, is astounding. What's more amusing is that it seems I'm joining the party just as everyone else is leaving - the cool kids have all moved on to Git, Mercurial & Bazaar - Subversion seems to be so last year. Mind you, I've never used CVS either, so I have no idea of the problems Subversion was supposed to solve, and from what I understand it's still growing within the Corporate sector, no doubt because it has reached a certain level of maturity and sits right between the other two VCSs mentioned in the title.

My Background

I started out working at a company that used PVCS, and although I was vaguely aware of branching, merging and some of the other SCM activities, it didn't really sink in as my head was swimming with so many other new Software Engineering concepts. It wasn't until a few years into contracting, when I started using Microsoft's Visual SourceSafe in a much larger team, that some of the concepts started to make sense. In the late 90's I began working at a company that used their own in-house VCS. Unsurprisingly this was fairly simple and didn't support even basic features like branching, merging and labelling. I (some would say foolishly) introduced them to SourceSafe, which due to the small team size was perfectly adequate, and it served the team well - I think it still does over a decade later. In contrast my next shift in company took me to a very large corporation, for which ClearCase was the tool of choice. It was while using this mighty piece of Enterprise software that the concepts of baselines, branching, merging, labelling etc really took shape, as my team was often working on 5 or 6 major development streams simultaneously. Yes, it's slow, but its object model and flexibility were so far in advance of SourceSafe that I couldn't see why anyone would need anything else. Of course I wasn't paying the licensing fees either :-) Still, I know a fair few people that loathe ClearCase (especially UCM), and now that I'm working with something more middle-of-the-road like Subversion, I can see why…

Some Comparisons

There are probably hundreds of differences between these three products, and I've only spent a few weeks with Subversion, but that's been enough to give me an idea of how the run-of-the-mill tasks work. So this list really only covers the major points that have caught my attention, perhaps because they are radically different from the others.

1: Tools

The word visual in “Visual SourceSafe” pretty much sums up the target audience of VSS. It is integrated into Visual Studio and has a separate Explorer-like tool for manipulating the repository outside VS. There are some command line tools, but personally I’ve found them of little use; however I’ve never done any really serious automation with them to be fair. The ease with which you can do a recursive import makes light work of adding 3rd party libraries and is the one feature sorely missing from ClearCase.

That said, ClearCase is nearly all things to all [wo]men. It has a GUI based Explorer-like tool which makes working with local copies very simple. It also has a plethora of other GUI and command line tools that allow you to manipulate and mine data such as version histories, merge paths etc, with some effort. Naturally it’s a complex product, but when you get your head around the object model and the query syntax it can perform some impressive feats – albeit slowly :-)
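
For example, the query syntax lets you mine the repository directly. Something like this - quoted from memory, so treat the exact flags as approximate - finds every version carrying a given label:

cleartool find . -version "lbtype(REL_1.0)" -print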

Being Open Source, and knowing the Movement's general attitude towards power and command lines, I wasn't entirely sure what to expect from Subversion. I was faintly aware of TortoiseSVN but have had bad experiences with poorly written Shell Extensions* in the past - I disabled the ClearCase one due to similar issues. As it turns out, it's been pretty solid, and it offers a very different way of working with the repository than an Explorer-like view. Not that I need to use TortoiseSVN that much, because AnkhSVN provides excellent Visual Studio integration.

2: Branching & Merging

Although I've worked with VSS for well over a decade, and I'm aware that it supports branching, I've never actually done it in earnest. The main problem with SourceSafe is that it doesn't support versioning of directories. This means that although you can label files and revert changes, you can't do the same with folders, and you can't use a label to recreate a previous release if there have been any deletions. The way I use VSS for my personal work, and how I've used it in the past within teams, is to effectively make all changes on the trunk – no feature or release branches, period. Trying to grok branching using VSS as a tool is never going to get you far, and I guess that's why we never did it :-) Merging was limited to those occasions when more than one developer had edited the same file on the trunk simultaneously, which was rare, and the changes hardly ever overlapped.

ClearCase, for me, follows the most logical model. Each file/folder (or 'Element' as ClearCase calls it) can have many versions on many branches. What distinguishes it from the typical Subversion model though is that each file in your workspace can be selected from a different branch. When you define your workspace you are effectively selecting the version of each file and folder, using branches as a common grouping mechanism, i.e. the branch is a selection mechanism within the workspace, not the defining characteristic of the workspace, although that is often the desired effect. However, it's not branches but labels that provide the most common baseline definition for a development stream – and those labels could tag versions from many branches. Subversion talks about cheap copies when branching, but ClearCase avoids even those cheap copies. Of course there is a trade-off, and I guess it is in the workspace definition.

I didn't get the Subversion model at first – just creating a folder with the name of the branch under another folder called 'branches' and then copying the files into that folder - it sounds bizarre after using ClearCase. The magic of course is in the implementation: the copies are really just symbolic links to revisions of the original files. Labelling is a similar affair: it's just a convention that you don't modify the files after copying. The consequence seems to be that merging is conceptually more complex, because you have to perform two different actions depending on whether you're refreshing your branch from the trunk or integrating back again – ClearCase doesn't care either way.
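
To make that concrete, here's roughly what branching and merging look like from the command line, assuming a Subversion 1.5+ server with merge tracking (the repository URL is made up):

svn copy http://svnserver/repo/trunk http://svnserver/repo/branches/feature-x -m "Create feature X branch"

svn merge http://svnserver/repo/trunk

svn merge --reintegrate http://svnserver/repo/branches/feature-x

The first command creates the branch as a cheap copy inside the repository; the second (run in a branch working copy) refreshes the branch from the trunk; the third (run in a clean trunk working copy) integrates the branch back again - two distinct operations where ClearCase just has 'merge'.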

3: Local Workspaces

Unsurprisingly, SourceSafe again has the simplest model, where you map folders in the repository to folders in your local file-system. Usually you just map the root and leave the subfolders to follow the same hierarchy, but if you're using branches I guess you can map each branch to the same local folder for convenience. I don't believe you can have multiple workspaces on the same portion of the repository, because the mapping is stored in the configuration file, which is itself in the repository. SourceSafe also marks all files read-only by default, meaning that you have to "check-out" a file before editing. In single-editor mode this locks the file, but in multi-editor mode it just remembers the version so that it can merge the file on "check-in" if required.

ClearCase requires a similar check-out/check-in pattern to SourceSafe and write-protects files as well. The consequence of this model though is that files you edit without ClearCase's knowledge are deemed to be "Hijacked", and you have to resolve the issue either by checking the file out or reverting the changes. Unlike VSS though, you can have as many workspaces as you like, all with different configurations determined by a "Profile". In reality, to ensure consistent views across developers and the build machine, you configure a Master Profile for each Branch (or Development Stream) and all your developers use that. There are two big problems with ClearCase views – the speed of updating, and the lack of atomic commits when checking in multiple files.

Once again Subversion departs from the other two and does something different, and I'm not sure yet whether I like it or not. A checkout in Subversion is the creation of the entire workspace, and all files are writable by default. This means that Subversion determines what's changed by just looking for files that have been modified. This removes the whole check-out palaver, but at the expense of not being warned when someone else has edited a file and you need to refresh your workspace. My team has been checking in and updating as frequently as possible, which minimises this, but it has caused problems when AnkhSVN updates a Visual Studio project file and a merge conflict causes it not to load. The other thing I miss is the Explorer-like tool. The TortoiseSVN shell extension is nicer than the command line for many tasks, but it doesn't feel quite as integrated as the VSS and ClearCase tools.
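
The day-to-day cycle therefore boils down to a handful of commands (the URL, path and log message are made up): checkout creates the workspace, status lists what you've modified, update pulls down everyone else's changes, and commit sends yours back - atomically, unlike ClearCase:

svn checkout http://svnserver/repo/trunk C:\Dev\MyProject

svn status

svn update

svn commit -m "Fixed the widget rendering"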

Conclusions

I use VSS at home for my personal archive (it has over 12 years of history in it) and it only has to support a team size of 1, which it just about copes with. But as I begin to understand more about The Software Development Process I want to try out techniques at home that I use at work, and for that something like Subversion (or more likely the modern cool tools like Git or Mercurial) would be a far better fit. And of course they’re free. But I still look back with fondness on my time using ClearCase, a behemoth of a version control system, but one which provides huge possibilities and great satisfaction when you finally ‘crack it’.

[*I’ve tried some SourceSafe shell extensions in the past and they’ve often succeeded in locking up Explorer entirely when there are transient network issues with a LAN based repository. I’ve also experienced the same thing with ClearCase, often when the license server was misbehaving.]

Sunday, 22 November 2009

C# + C++ + C++/CLI – Context Switch Hell

During the last week I've been doing some interop development at work. As you might expect from a financial institution, the core analytics are written in C++, which is awkward for my team as our system is entirely C# based. Fortunately I got the chance to decide how we were going to communicate with the C++ library. Although new to C#, I am aware of the various interop mechanisms available and went through them one-by-one to see which was the best fit. The C++ library only exposes a couple of functions, and those only take an argument or two, in the form of a custom type that's not a million miles away from a DataSet. Luckily there was already a Wrapper Facade available for this type, so it really just came down to what was the easiest way to marshal this structure and deal with any exceptions.

C API

C is the interop language - every language provides a C binding, so it's always a pretty safe bet. The P/Invoke mechanism built into the CLR was designed to allow interop with the underlying C based Windows Operating System, so there is a huge amount of support in the language for performing interop using declarative programming. Unfortunately I was calling a C++ API, and I didn't fancy the highly fragile option of binding to mangled function names, plus I still had to pick a C compatible type to serialize the data into. This meant either convincing the owners to expose (and more importantly maintain) a C API themselves, or writing a C based shim myself.
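
For anyone unfamiliar with the declarative style, this is roughly what a P/Invoke binding looks like - the DLL name, function and signature here are entirely hypothetical:

using System.Runtime.InteropServices;

static class NativeAnalytics
{
    // Binds to a hypothetical native function:
    //   int CalculateRisk(const char* input, double* result)
    [DllImport("Analytics.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
    public static extern int CalculateRisk(string input, out double result);
}

The attribute tells the CLR which DLL to load and how to marshal the arguments; the string is converted to a C-style string automatically on the way through.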

COM Inproc Server

Another common technique is to provide a COM based facade, perhaps via a Dual interface. If I were exposing a set of C++ objects this might be a good fit, but as it's just a couple of free functions it seems like overkill. Plus the marshalling cost would have been significant, as I would have marshalled the structure as a BSTR to get it over the COM interface and then constructed another copy on the C++ side. Given the volume of data being passed, and the additional interop layer created by the Runtime Callable Wrapper, this felt like a premature pessimisation, and a lot more work.

C++/CLI

The last choice I considered was in a similar vein to the C based DLL, but using C++/CLI instead. I had a little play with the Managed Extensions when they first appeared and the code looked pretty ugly then, but as we are using VS2008 now I had the opportunity to use the far more palatable C++/CLI instead. You can freely mix-and-match native C++ and managed C++, even within catch blocks, so this looked pretty enticing on the exception handling front.

Stick With What You Know

As far as I could see it was a toss-up between exposing the C++ code via extern "C" functions, marshalling the data as C-style strings and having to learn the P/Invoke syntax, or learning the managed C++ syntax and creating a mixed-mode DLL. I really liked the idea of being able to mix native and managed C++ in the same translation unit, as this would allow me to write one set of try/catch blocks to handle any native or managed errors, which simplifies the code. Also I felt that, given the performance bias of C++, I would have far more opportunities to pass the large amount of data most efficiently, with RAII to back me up.
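
As a sketch of why that appealed, here's the shape of a mixed-mode wrapper - the native CalculateRisk function is invented for the example, and the marshalling uses the msclr helpers that shipped with VS2008:

#include <string>
#include <stdexcept>
#include <msclr/marshal.h>
#include <msclr/marshal_cppstd.h>

using namespace System;
using namespace msclr::interop;

// Hypothetical native entry point exposed by the analytics library.
std::string CalculateRisk(const std::string& input);

public ref class Analytics abstract sealed
{
public:
    static String^ Calculate(String^ input)
    {
        try
        {
            // Native and managed code mixing freely in one translation unit.
            std::string result = CalculateRisk(marshal_as<std::string>(input));
            return marshal_as<String^>(result);
        }
        catch (const std::exception& e)
        {
            // A native exception surfaced to the C# caller as a managed one.
            throw gcnew ApplicationException(marshal_as<String^>(e.what()));
        }
    }
};

One set of catch blocks deals with both worlds, which is exactly what neither of the other two options could give me.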

A Tale of ., ::, ^, -> & ()'s

The following few days' coding were somewhat amusing as I context-switched constantly between the three slightly different C based syntaxes of C#, C++ and C++/CLI. Each is so incredibly similar to the others that I don't think I managed to write a single line of code 100% correctly first time :-). The following is a list of some of the subtle differences that I ran into:-

  • C# uses the single keyword ‘foreach’ whereas in C++/CLI it’s separated into two as ‘for each’.
  • C++ and C# use the ‘new’ keyword whereas C++/CLI uses ‘gcnew’.
  • In C++ and C++/CLI you can invoke a no-argument ctor without parentheses, but in C# you have to provide them.
  • When qualifying types with the enclosing class or namespace you use '::' in C++ & C++/CLI, but just a '.' in C#.
  • C# uses a ‘.’ to invoke methods, but C++/CLI follows C++ and uses the ‘->’ pointer syntax. This is made worse by the use of the term ‘reference type’ to refer to a type that you invoke with pointer, not reference syntax in C++/CLI!
  • Forgetting the ^ either on the collection contained type, or the collection type itself.
  • The empty reference type is called ‘null’ in C#, ‘nullptr’ in C++/CLI and ‘NULL’ in plain C++.
  • Conditional compilation uses '#ifdef' or '#if defined' in C++ and C++/CLI, whereas C# uses the simpler '#if'.

It only amounted to a couple of hundred lines of code in total across all three languages, but I felt like I spent more time staring at the compiler output window than the text editor…

Thursday, 12 November 2009

Adjusting to the C#/.Net Ecosystem

So I've been on The Dark Side (as my C++ biased ex-work colleagues call it) for a little under 2 months now. Most of the first few weeks were spent going over the potential architecture and doing a little up-front modelling to decide what areas, if any, needed prototyping. Now I'm finally into the real thrust of C# development, and as a consequence various new names have entered my consciousness. The old stalwarts of the C++ world such as Andrei Alexandrescu, Scott Meyers and Herb Sutter have been consigned "To Tape" to make room for the new kids such as Jon Skeet (and Tony the Pony, natch), Bill Wagner, Udi Dahan and Ayende Rahien. The first two are authors of books I was advised to seek out, as they seem to be treading the same path as Scott Meyers and Herb Sutter, but in the C# space. Udi is someone whose articles I have read on previous occasions (such as in MSDN Magazine) and who works in a not too dissimilar field. Finally, Ayende has a blog that seems well read. It was his opinions on Unit Testing that caught my eye during the height of the Duct Tape Programmer catfight. I'm hoping his blog will give me further leads and a good background in the popular tools, technologies and sites that a long-standing C# dev will have already acquired.

Although C++ (and Java, which I've also dabbled with on occasion) is pretty similar to C#, there are a number of gaffes I've made, and continue to make, right out of the stalls. The most common so far is forgetting to actually allocate the object and consequently being assaulted with a null reference exception the moment I run a unit test:-

Dictionary<string, string> configuration;

In C++ this would work, but in C# I have to remember to do this:-

Dictionary<string, string> configuration = new Dictionary<string, string>;

Well, not quite, because as the compiler will then remind me, I have to append parentheses, even though I'm not providing any arguments:-

Dictionary<string, string> configuration = new Dictionary<string, string>();

I seem to have been fortunate enough to have started C# at v3.0, with Generics already well established. I don't envy those poor souls who had to cope with the v1.0 collections. Anyone, such as myself, who had the misfortune to use MFC (or some of the other big frameworks) before the days of a C++ compiler that supported Templates will feel their pain. Actually, there are probably many in the embedded C++ world who are still suffering…

Nice though Generics are (and I have only used the collections so far), what I really, really miss is being able to typedef an instantiation* to avoid the DRY (Don't Repeat Yourself) violations. Of course, in C++ the template type names pervade the code base because of the iterator model, and not using typedefs is asking for impenetrable code – especially once you have nested containers or introduce std::pair.
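
The closest thing I've found in C# is a using alias directive, though unlike a typedef it has to be repeated in every source file and can't name an open generic type (the SettingsLoader class is just an invented example):

using Configuration = System.Collections.Generic.Dictionary<string, string>;

class SettingsLoader
{
    // The alias saves spelling out the full type in every signature in this file.
    private Configuration m_settings = new Configuration();
}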

The other main area that I've been getting used to is the tool chain. Naturally we're using Visual Studio 2008, which I'm somewhat comfortable with, having used Visual Studio since v1.0 - in fact I still use the Visual C++ 1.0 key bindings (although they call them Visual C++ 2.0 in the UI). For unit testing we're using NUnit, which is a pleasure after many years of macro abuse. The same goes for the mocking framework – Rhino Mocks. I've only ever hand-rolled mocks before. Well, technically most of them were just stubs, because hand-rolling is so tedious. There is a list of other stuff which I've yet to get my teeth into that completes the full development process, such as CruiseControl, FxCop, StyleCop, NCover etc. When it comes to C# (and Java is probably the same) there is a plethora of tools out there, many of them Open Source, and trying to compare them and make an informed choice about what to pick for the long road ahead is difficult. The main barrier to those kinds of tools in the C++ world has historically been cost, as a recent comment to an earlier post of mine will testify.
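
To give a flavour of the NUnit and Rhino Mocks combination, here's a minimal test - the IPriceFeed interface and PriceFormatter class are invented for the example:

using System;
using NUnit.Framework;
using Rhino.Mocks;

public interface IPriceFeed
{
    decimal GetPrice(string symbol);
}

public class PriceFormatter
{
    private readonly IPriceFeed m_feed;

    public PriceFormatter(IPriceFeed feed) { m_feed = feed; }

    public string Format(string symbol)
    {
        return String.Format("{0}: {1}", symbol, m_feed.GetPrice(symbol));
    }
}

[TestFixture]
public class PriceFormatterTests
{
    [Test]
    public void FormatIncludesSymbolAndPrice()
    {
        // No hand-rolled stub required - Rhino Mocks generates one on the fly.
        IPriceFeed feed = MockRepository.GenerateStub<IPriceFeed>();
        feed.Stub(f => f.GetPrice("VOD.L")).Return(142.5m);

        PriceFormatter formatter = new PriceFormatter(feed);

        Assert.AreEqual("VOD.L: 142.5", formatter.Format("VOD.L"));
    }
}

After years of writing that kind of plumbing by hand, one line to generate a stub feels positively decadent.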

Refactoring is something that C# lends itself to very nicely, and although I've only briefly played with these features in Visual Studio 2008 (and have a Resharper license waiting for me) I feel I'm going to enjoy using them. I'm holding off on Resharper until I know how to do things myself first, as not every client uses the same tools, and Visual Studio itself may satisfy most of my needs. One prevalent C# practice that is frowned upon in C++ is the use of 'using' directives** to bring entire namespaces into scope, and Resharper appears to promote this practice. The reason is no doubt the different use of namespaces in C#. They tend to be much longer (unlike std and boost) and partitioned much smaller (again unlike std), so the chance of a conflict is presumably far less. It still feels dirty.

Although I’ve only scratched the surface of the language and tools so far, I’m already beginning to feel at home.

 

*I know they aren’t called instantiations in C# and that the compilation model is different to C++, but the name for what a generic is where the closed types are known escapes me at present. Sorry Jon, all I can remember from your book at the moment is the names Open and Closed as referring to the types at definition and instantiation.

**That's the C++ name, I'm not sure of the terminology of using declarations and directives in C# yet. I should have brought my 3G dongle with me!

Monday, 9 November 2009

Standard Method Name Verb Semantics

Something that I've always taken for granted when programming is that certain verbs imply certain behaviours. I thought it would be useful to find a set of these on the web and link to it on our development wiki, as although I think it is common sense to native speakers, that is not true of team members for whom English is a second language. However, I'm struggling to track one down. I've seen plenty of posts about the format of interface, class, method and variable names, but not one that attempts to list the expected semantics of the really common verbs we use, like get, create and find. To date I've only come across the PowerShell documentation, as they've always made a concerted effort to standardise both the format of their commands - verb followed by noun - and the semantics of the most common verbs (think Get-Process, Stop-Process, New-Item and Remove-Item, where the verb carries the behaviour and the noun the target).

One other useful resource I discovered was on Ward Cunningham's Wiki. There is a fair bit of discussion about naming conventions (across the board, not just methods) culminating in a concept known as Language Oriented Programming. The Wiki also attempts to establish a lexicon of common terms based on some existing naming conventions, such as the 'str' prefix used in the C runtime library for all the narrow-character C-style string manipulation functions. However, there is a major bias towards abbreviations in the pages I read, which is not what I'm looking for.

So, until I find that resource I’m going to present my own list of common method name verbs and describe what connotations each one conjures up in my mind:-

Create & Delete

These should be easy ones :-) Creation is about making new things, commonly objects, and such methods are often used to implement the Factory Method or Abstract Factory patterns. The objects they create may be fully constructed, but more often they are only default-initialised and the caller has to perform further actions on them. A common example would be a factory function for creating database connections, where the connection still needs to be opened after creation. Common alternatives would be Allocate and Construct.

IDatabaseConnection connection = DatabaseFactory.CreateConnection(connectionString);

If Create is how we bring objects into this world, then Delete is the Grim Reaper. Deletion is final and absolute, no trace should be left of the object afterwards. Any attempt to use something after deletion must surely be a logical error. Common alternatives are Dispose and Destroy.

directory.DeleteFile(filename);

Acquire & Release

Using Create in the name of a method, based on the above definition, could be seen as leaking the abstraction. Acquire has more of a vagueness about it, which can be interpreted as "I'll give you something you can play with; I may create it or I may reuse an existing object, either way you shouldn't care". The implication is that the object may be in a more complete state of initialisation. Using the previous example, the database connection would already have been opened. Object pools are the prime example here.

IDatabaseConnection connection = ConnectionPool.AcquireConnection();

If you’ve acquired something, there is a feeling of only partial ownership. It’s not really yours to do with as you wish. This means that you’re not in a position to delete it, the best you can do is let go. Releasing suggests that the object may live on with another unrelated owner, or perhaps it’s going to be returned to storage to await a new owner at some point later. COM interfaces are the canonical example here as memory allocation is outside the remit of the caller.

connection.Release();

Open & Close

The use of Open and Close on an object is an incredibly common one, generally with real resources like files and database connections. They are very explicit actions, like Create and Delete, and also leak the abstraction to some degree. I find they are useful when you have a heavy resource that needs careful management, but you don’t feel you can hide that responsibility from the caller. Like Acquire and Release, they are slightly more woolly, but whereas Acquire and Release are about construction, Open and Close are actions an object performs (maybe on itself).

connection.Open();

connection.Close();

I have rarely seen these two verbs used alongside a noun in OO; even helper free functions would use overloading on type, because the noun would only repeat the type.

Find & Get

When I go shopping I often have two things in mind, there is stuff I would like and stuff I need. The stuff I would like is optional, or maybe I can choose from various brands and sizes, e.g. cereal*. In contrast the stuff I need is mandatory, maybe something doesn’t work without it, e.g. petrol. Find is analogous to the first case. It says “I think you might have this, and if you don’t I can do something else”, in short, you know that it could fail and you are willing to handle that failure responsibly. One common alternative verb is Lookup.

Image image = cache.Find(url);

Get is like buying petrol from an oil refinery. You absolutely know that they must have some, and if they don't then they had better raise an exception, because if the refinery is out you don't stand much chance of getting any from a petrol station either. Get is sometimes used for construction, such as in the composite verb GetOrCreate. I don't personally like this use; to me, Acquire is far more pleasing.

String name = product.GetName();

Get has unfortunately become such an overused verb because of its association with Getters and Setters. This has the side effect of people becoming blind to it, which makes property accessors read more naturally, e.g. filename.getLength() is subconsciously read as filename.length(). I think most people these days would be surprised to find a Get method doing much more than just returning a private member.
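
In C# the tension largely goes away, because a property hides the verb altogether (the Filename class is just an invented example):

public class Filename
{
    private readonly string m_value;

    public Filename(string value) { m_value = value; }

    // A property reads as filename.Length - no Get prefix in sight.
    public int Length
    {
        get { return m_value.Length; }
    }
}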

Request, Retrieve & Locate

The word Find is so quick to read and say, and with that I feel it often implies a quick scan rather than a long drawn-out search. For remote calls I like to use a verb that has far more complex connotations. Retrieve and Locate both sound like more serious work is involved in the hunt. Likewise, Request always feels like something a remote service will be called upon to handle.

Book book = library.RetrieveBook(title);

Unfortunately the ubiquitous nature of HTTP means that Request is probably more commonly used as a noun, such as in the inseparable pairing Request/Response.

Read & Write/Send & Receive

When it comes to I/O, a very common naming convention appears to be the use of Read and Write - at least as far as file I/O goes. However, when we're talking about network I/O, the old Berkeley Sockets API kicks in and we find the use of Send & Receive favoured instead. If we're restricted to discussing the transfer of small buffers of data this is fine, but once we begin to widen the picture to the overall topic of OO data persistence, a few new terms come into play. The posh-sounding pair is Serialize and Deserialize, which I find incredibly difficult to type (like all American APIs), but I tend to think of them as transport agnostic. The lower-class sounding variant is Load & Save, which evoke memories of floppy disks for me, and I usually expect them to be applied to large objects such as whole documents.

Buffer buffer = textfile.ReadLine();

uint count = socket.SendMessage(message);

Image image = document.LoadImage();

 

I readily accept that these can be seen as very literal interpretations - I'm a very literal kind of chap - but I think that most frameworks I've dealt with tend to follow along similar lines, and I rarely get surprised. But in application code the story is often different, especially when working in a multi-cultural team. What would be nice is to have a global programmers' dictionary and thesaurus. The dictionary could ensure that you've not picked something contradictory, while the thesaurus would grant you some latitude without picking esoteric words that only Shakespeare would stand a chance of understanding.

*Cereal shopping is actually a major event in our house and, as my wife (who will no doubt cast her eye over this post) will point out, does not allow for any deviation to shops' own brands or other lesser substitutes…

Saturday, 7 November 2009

My New Blogging Machine

Much as I love my dear Dell XPS 1200 - it's a lovely laptop to do development on – it's still a little too heavy to just slip into my rucksack and carry around on the commute, even at under 2 kg. And, given that I'm trying to make a concerted effort to do a little more blogging to improve those English skills, I thought I'd check out the current range of netbooks.

As someone who stands over 6 feet tall, with hands the size of spades and fingers fatter than some people's forearms, I've never felt totally comfortable on a laptop keyboard, so I never for a moment thought a netbook would be practical (I still use an IBM Model M keyboard, c1985, at home - the best keyboard ever). However, the other weekend I managed to convince my niece to let me play with her netbook, just to confirm my suspicions. It seems I was a little too hasty in my assumptions. Putting aside the somewhat sticky keys, which I wish to point out were the result of her typing with chocolate-covered fingers and not a design feature, I found it far more comfortable than I'd ever imagined.

A week or so later and here I am, typing this on my new Asus Eee PC 1500HA. Leaving the plug at home, it weighs in just over the kilogram mark, but given that I often carry weighty tomes like the C++ IO Streams book or Enterprise Patterns with me into work, it hardly registers. The keyboard feels slightly cramped, which is to be expected, but I've quickly got used to it. The Home and End keys are at the top-right on my Dell, where I often hit Insert by mistake, whereas they are bottom-right on the Eee PC and need the Fn key as well. I didn't think I was going to like that, but I've already forgotten about it; in fact mapping Home, End, Page Up and Page Down onto the cursor cluster seems so damn logical.

Performance is also surprisingly good. My wife bought a Toshiba laptop some years ago which was very lightweight, but it only had a 1.1 GHz processor and felt very sluggish, at least when you're used to multi-core beasts in both workstation and laptop form. I was expecting a similar experience here, but frankly I'm gobsmacked at not only how quickly it boots up from cold, but also how responsive it is when starting applications. I may have to revisit the notion that I'm only going to be using it for writing.

My biggest concern is my wife though. Her Dell XPS 1310, which is her bread-and-butter machine, has developed another fault after only a short life – no doubt something to do with the large drop it suffered a while back :-) If she gets her hands on this little beauty I fear for its safety. I'm hoping it's going to be small enough to hide at the bottom of my bag, under the packets of tissues and fluff, and will escape her notice. So, if a strange woman comes up to you and asks if you've seen this netbook of mine, deny everything…

Thursday, 5 November 2009

Happy 21st/12th Anniversary

This week marks a palindromic moment in my relationship with my wife. On the one hand we have been married for 12 years, as of Monday. But we've been together for much longer, 21 years as of today. As happens each year, my wife says Happy Anniversary on the 1st, and yet I'm still waiting for the 5th because to me that is our spiritual anniversary. This of course has nothing to do with my complete inability to remember names or dates; obscure numbers like hex addresses and error codes - no problem, but birthdays and appointments - no chance.

And yet she still puts up with me... Even during my six month sabbatical I never really pulled my weight as I could have. She looks after the kids, runs the house, manages the finances and holds down a job! On top of that she still finds time to massage my ego and deal with my many insecurities. Quite frankly I really don't deserve her.

So tonight at the local annual fireworks display, even though I'll have the kids with me, and be surrounded by friends, St Judith's Field will still feel empty and the fireworks will lack a certain sparkle because you're not there with us. Happy Anniversary my love.

Monday, 2 November 2009

A Chip Off the Old Block

My eldest has a presentation to do for his English class at school which I managed to catch him practising. It's about the history of video game consoles. Naturally he has most to say about Nintendo's recent output given that even the N64 was out before he was born! Still, he starts out with the Atari 2600 which is good to see as it's my spiritual starting point in this industry, and we have one of those in the house (a "Heavy Sixer" no less). Of course I waited until he had finished his research before pointing out the copy of High Score!: The Illustrated History of Electronic Games on the bookshelf upstairs.

So, does he display any other tendencies towards following in his old man's footsteps? Obviously he's aware of what I do for a living (to a certain degree) and has shown some interest in programming in the past, starting with the time they were playing around with Logo at school during KS2. Consequently I nudged him towards my Java JLogo applet, which was inspired by my somewhat older nephew when he was learning Logo at that age. The school also promotes applications like Scratch, which I think is pretty neat as it's visual programming with a real sense of fun. These days text I/O is pretty dull, even to diehards like me, so why expect a child to find it interesting?

I've also downloaded Microsoft SmallBasic, which is nothing like the BASIC I grew up with, but I thought that it might be much easier to do something interesting with. He found a Pong example program which he could play around with. I'm also aware of a book aimed at children that uses Python as a first language which seems to have favourable reviews. I might invest in that myself as I've been meaning to learn Python. I guess once he studies Computer Science proper at school, it'll either rekindle his interest in programming or just reinforce a view that videogames are the only aspect of computing worth investing time in :-)

I remember one day, when he was much younger, standing behind me watching me relentlessly debugging some tricky code in Visual C++. All he could see happening on screen though was the yellow bar that marks the 'current source line' jumping around the screen as I entered into and out of functions. He remarked to his mum that all I did all day was "make a yellow line move up and down the screen". I suppose that's not that different from playing the Atari 2600 really…