Tuesday, 24 September 2013

Extension Methods Should Behave Like Real Methods

The other day I was involved in two discussions about Extension Methods in C#. The second was an extension (pun intended) of the first as a by-product of me tweeting about it. My argument, in both cases, is that the observable outcome of invoking an extension method with a null “this” argument should be the same as if that method was directly implemented on the class.

This != null

What kick-started the whole discussion was a change in the implementation of an extension method I wrote for the System.String class that checks if a string value [1] is empty:-

public static bool IsEmpty(this string value)
{
  return (value.Length == 0);
}

It had been replaced with this implementation:-

public static bool IsEmpty(this string value)
{
  return String.IsNullOrEmpty(value);
}

Ironically, the reason it was changed was that it caused a NullReferenceException to be thrown when provided with a null string reference. This, in my opinion, was the correct behaviour.

Whenever I come across a bug like this I ask myself what it is that’s wrong - the caller or the callee? There are essentially two possible bugs here:-

  1. The callee is incorrectly implemented and does not support the scenario invoked by the caller
  2. The caller has violated the interface contract of the callee by invoking it with unsupported arguments

The code change was based on assumption 1, whereas (as implementer of the extension method) I knew the answer to be 2. In this specific case the bug was still mine and down to me misinterpreting how “empty” string values are passed to MVC controllers [2].

My rationale for saying that the “this” argument cannot be null and that the method must throw, irrespective of whether the functionality can be implemented without referencing the “this” argument or not, is that if the method were to be implemented directly in the class in the future, it would fail when attempting to dispatch the method call. This could be classified as a breaking change if you somehow relied on the exception type.
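To make the distinction concrete, here is a minimal sketch of what the caller sees in each case (assuming, purely for illustration, that IsEmpty() lives in a static class called StringExtensions):-

string value = null;

// An extension method compiles down to a static call, i.e.
// StringExtensions.IsEmpty(value), so the invocation itself always
// succeeds and any failure is down to the body of the method.
bool empty = value.IsEmpty();

// A real instance member has to dereference "value" to dispatch the
// call and so throws a NullReferenceException immediately.
int length = value.Length;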

Throw a NullReferenceException

This leads me to my second point and the one that came up via Twitter. If you do decide to check the “this” argument make sure you throw the correct exception type. Whilst it might be common to throw some type of ArgumentException, such as an ArgumentNullException, when validating your inputs, in this case you are attempting to emulate a real method and so you should throw a NullReferenceException as that is what would be thrown if the method were real:-

public static bool MyMethod(this object value, . . .)
{
  if (value == null)
    throw new NullReferenceException();
  . . .
}

The reason I didn’t explicitly check for a null reference in my extension method was that I was going to call an instance method anyway, and so the observable effect was the same. In most cases this is exactly what happens anyway and so doing nothing is probably the right thing to do - putting an additional check in and throwing an ArgumentNullException would be wrong.

Test The Behaviour

As ever if you’re unsure and want to document your choice then write a unit test for it; this is something I missed out originally. Of course you cannot legislate for someone also deleting the test because they believe it’s bogus, but it should at least provide a speed bump that may cause them to question their motives.

In NUnit you could write a simple test like so:-

[Test]
public void is_empty_throws_when_value_is_null()
{
  string nullValue = null;

  Assert.That(() => nullValue.IsEmpty(),
         Throws.InstanceOf<NullReferenceException>());
}

Is It Just Academic?

As I’ve already suggested, the correct behaviour is the likely outcome most of the time due to the very nature of extension methods and how they’re used. So, is there a plausible scenario where someone could rely on the exception type to make a different error recovery decision? I think so, if the module boundary contains a Big Outer Try Block. Depending on what role the throwing code plays, a NullReferenceException could be interpreted by the error handler as an indication that the logic has screwed up somewhere inside the service and that it should terminate itself. In a stateful service this could be an indication of data corruption, and so shutting itself down ASAP might be the best course of action. Conversely an ArgumentException may be treated less suspiciously because it’s the result of pro-active validation.
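As a sketch of the kind of policy decision I have in mind (the ProcessRequest(), Log and ShutDownService() names are entirely made up for the example), the Big Outer Try Block might do something along these lines:-

try
{
  ProcessRequest(request);
}
catch (ArgumentException e)
{
  // Pro-active validation caught a bad input - log it and reject
  // just this one request.
  Log.Error("Invalid request: {0}", e.Message);
}
catch (NullReferenceException e)
{
  // The logic has screwed up somewhere inside the service - play it
  // safe, assume possible state corruption and shut down.
  Log.Fatal("Possible corruption: {0}", e.Message);
  ShutDownService();
}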

 

[1] If you’re curious about why I’d even bother to go to such lengths (another pun intended) then the following older blog posts of mine may explain my thinking - “Null String Reference vs Empty String Value” and “Null Checks vs Null Objects”.

[2] It appears that you get an empty string on older MVC versions and a null reference on newer ones. At least I think so (and it’s what I observed) but it’s quite hard to work out as a lot of Stack Overflow posts and blogs don’t always state what version of MVC they are talking about.

Saturday, 21 September 2013

The Battle Between Continuity & Progress

One of the hardest decisions I think we have to make when maintaining software is whether the change we’re making can be done in a more modern style, or using a new idiom, and if so how to resolve the tension between doing that and following the patterns in the existing code.

Vive la Différence!

In some cases the idiom can be a language feature. When I was writing mostly C++ a colleague started using the newer “bind” helper functions. They were part of the standard and I wasn’t used to seeing them so the code looked weird. In this particular case I still think they look weird and lead to unreadable code (when heavily used), but you have to live with something like that for a while to see if it eventually pans out.

Around the same time another colleague was working on our custom grid control [1] and he came up with this weird way of chaining method calls together when creating a cell style programmatically:-

Style style = StyleManager.Create().SetBold().SetItalic().SetFace("Arial").SetColour(0, 0, 0).Set...;

At the time I was appalled at the idea because multiple statements like this were a nightmare to put breakpoints on if you needed to debug them. Of course these days Fluent Interfaces are all the rage and we use techniques like TDD that make debugging far less of an occurrence. I myself tried to add SQL-like support to an in-memory database library we had written; it relied on overloading operators and other stuff. It looked weird too and was slightly dangerous due to classic C++ lifetime issues, but C# now has LINQ and I’m writing code like that every day.

They say that programmers should try and learn a number of different programming languages because it’s seeing how we solve problems in other languages that we realise how we can cross-pollinate and solve things in our “primary” languages in a better way. The rise in interest in functional programming has probably had one of the most significant effects on how we write code as we learn the hard way how to deal with the need to embrace parallelism. It’s only really through using lambdas in C# that I’ve begun to truly appreciate function objects in C++. Similarly pipelines in PowerShell brought a new perspective on LINQ, despite having written command shell one-liners for years!

Another side-effect is how it might affect your coding style - the way you lay out your code. Whilst Java and C# have fairly established styles, C++ code has a mixture. The standard obviously has its underscores but Visual C++ based code (MFC/ATL) favours the Pascal style. I’ve worked on a few C++ codebases where the Java camelCasing style has been used, and from what I see posted on the Internet from other languages I’d say that the Java style seems to have become the de-facto one. My personal C++ codebase has shifted over the last 20 years from its Microsoft/Hungarian Notation beginnings to a Java-esque style as I finally learnt the error of my ways.

When In Rome…

The counter argument to all this is consistency. When we’re touching a piece of code it’s generally a good idea not to go around reformatting it all as it makes reading diffs really hard. Adopting new idioms is probably less invasive and in the case of something like unique_ptr / make_shared it could well make your code “more correct”. There is often also the time element - we have more “important” things to do; there are always more reasons not to change something than to change it and so it’s easier to stick with what we know.

The use of refactoring within a codebase makes some of this more palatable, and with good test coverage and decent refactoring tools it can be pretty easy much of the time. Unless it’s part of the team culture though, this may well just be seen as another waste of time. For me, when it’s out of place and continues to be replicated, making the problem worse, I find more impetus to change it.

For example, my first C# project was a greenfield one and we had limited tools (i.e. just Visual Studio). A number of the team members had a background in C++ and so we adopted a few classic C/C++ styles, such as using all UPPERCASE names for constants and an “m_” prefix on member variables. Once we eventually started using ReSharper we found that these early choices were beginning to create noise, which was annoying. We wanted to adopt a policy of ensuring there were no serious ReSharper warnings [2] in the source code we were maintaining, and that anything we wrote going forward followed the accepted C# style. We didn’t want to just #pragma stuff to silence R# as we also knew that without the sidebar bugging us with warnings things would never change.

One particularly large set of constants (key names for various settings) continued to remain in all upper case because no one wanted to take the hit and sort them out. My suggestion that we adopt the correct convention for new constants going forward was met with the argument that it would make the code inconsistent. Stalemate! In the end I started factoring them out into separate classes anyway as they had no business being lumped together in the first place. This paved the way for a smaller refactoring in the future.
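To give a flavour of the sort of change involved (the names below are invented for the example, not lifted from the real codebase), it was essentially a move from one grab-bag of keys towards smaller, focused classes in the accepted style:-

// Before: the classic C/C++ style we started with.
public static class SETTINGS
{
  public const string BATCH_START_TIME = "BatchStartTime";
  public const string REPORT_OUTPUT_FOLDER = "ReportOutputFolder";
}

// After: separate classes, named in the accepted C# convention.
public static class BatchSettings
{
  public const string StartTime = "BatchStartTime";
}

public static class ReportSettings
{
  public const string OutputFolder = "ReportOutputFolder";
}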

Whilst this particular example seems quite trivial, others, such as adopting immutability for data model classes or using an Optional<T> style class for cases where a null reference might be used, are both more contentious and probably require a longer period of inconsistency whilst the new world order is adopted. They also tend not to bring any direct functional benefit and so it’s incredibly hard to put a value on them. Consistent code doesn’t stick out like a sore thumb and so it’s less distracting to work with, but there is also no sense in doing things inefficiently or even less correctly just to maintain the status quo.

 

[1] Writing a grid control is like writing a unit test framework - it’s a Rite of Passage for programmers. Actually ours was a lot less flickery than the off-the-shelf controls so it was worth it.

[2] The serious ones that is. We wanted to leverage the static analysis side of R# to point out potential bugs, like the “modified closure” warning that would have saved me the time I lost in “Lured Into the foreach + lambda Trap”.

Wednesday, 18 September 2013

Letting Go - Player/Manager Syndrome

I’ve been following Joe Duffy’s blog for some time and this year he started a series on Software Leadership. In his first post he covered a number of different types of managers, but he didn’t cover the one that I feel can be the most dangerous: the ex-programmer turned manager.

While I think it’s great that Joe wants to maintain his coding skills and act as a mentor and leader towards his team, this is an ideal that would struggle to exist for long in the Enterprise Arena, IMHO. In the small companies where I’ve worked the team sizes have not demanded a full-time project manager, and so whoever takes on the role still has the ability to remain first and foremost a quality developer. In the corporate world the volume of paperwork and meetings takes away the time they might have to contribute and so their skills eventually atrophy. In our fast-paced industry technologies and techniques move on rapidly, and so what might once have been considered cutting-edge skills may now just be a relic of the past, especially when a heavy reliance on tooling is needed to remain productive.

There are some parallels here with the amateur football leagues, e.g. Sunday football [1], which I played in for many years. Someone (usually a long standing player) takes the role of manager because someone needs to do it, but ultimately what they really want to do is play football. As the team gets more successful and rises up the divisions the management burden takes hold and eventually they are spending more time managing and less time playing. At some point the quality of the players in the team changes such that the part-time player/manager cannot justify picking himself any longer because as a player he has become a liability - his skills now lie in managing them instead.

In software teams every full-time developer is “match fit” [2], but the danger creeps in when the team is under the cosh and the manager decides they can “muck in” and help take on some of the load. Depending on how long it’s been since they last developed Production Code this could be beneficial or a hindrance. If they end up producing sub-standard code then you’ve just bought yourself a whole load of Technical Debt. Unless time-to-market is of the utmost priority it’s going to be a false economy as the team will eventually find itself trying to maintain poor quality code.

No matter how honourable this sentiment might be, the best thing they can do is to be the professional manager they need to be and let the professional developers get on and do their job.

[1] Often affectionately known as pub football because it’s commonly made up of teams of blokes whose only connection is that they drink in the same boozer.
[2] Deliberate Practice is a common technique used to maintain a “level of fitness”.

Virtual Methods in C# Are a Design Smell

I came to C# after having spent 15 years writing applications and services in C++. In C++ there is no first-class concept for “interfaces” in the same way that C# and Java have. Consequently they were emulated by creating a class that only contained pure virtual functions:-

class IConnection
{
public:
  virtual void Open() = 0;
  virtual void Send(Message message) = 0;
  virtual void Close() = 0;
};

C# != C++

In C# however the interface is a first-class concept, and so naturally, when I started using C#, I automatically made the implementation of my interface methods virtual [1]:-

interface IConnection
{
  void Open();
  void Send(Message message);
  void Close();
}

public class Connection : IConnection
{
  public virtual void Open()
  { }
  public virtual void Send(Message message)
  { }
  public virtual void Close()
  { }
}

Of course I very quickly discovered though that interfaces in C# do not rely on v-tables in the same way as C++ and that an interface method is altogether another concept over and above what virtual methods give you. Throw in Explicit Interface Implementation and you can seriously mess with the Liskov Substitution Principle (the L in SOLID) [2].
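By way of a contrived example of my own (a variation on the Connection class above, not Ian’s fluent interface case), an explicit implementation is free to behave differently to the public method of the same name, so what the caller gets depends entirely on the static type of the reference they hold:-

public class Connection : IConnection
{
  // What callers holding a Connection reference see...
  public void Open()
  {
    // ...connect eagerly, say.
  }

  // ...whereas callers holding an IConnection reference get this one,
  // which is free to behave quite differently.
  void IConnection.Open()
  {
    // ...defer connecting until the first Send(), perhaps.
  }

  public void Send(Message message)
  { }

  public void Close()
  { }
}

Swapping one static type for the other silently changes the behaviour, which is exactly the kind of surprise substitutability is supposed to protect us from.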

Once I understood that interface methods didn’t need to be virtual I also realised that most of the classes I create, and have been creating, are not actually polymorphic. In C++ they tend to look that way because it’s the only mechanism you have, whereas in C# it seems much clearer because of the more formal distinction between implementation inheritance and just implementing an interface. Putting aside my initial fumblings with C# I think I can safely count on one hand (maybe two) the number of times I’ve declared a method “virtual” in C#.

Testability

A common reason to mark a method as virtual is so you can override it for testing. The last time I remember creating a virtual method was for this exact purpose. I had a class I wanted to unit test but it ran “tasks” on the CLR thread pool which made it non-deterministic and so I factored out the code that scheduled the tasks into a separate protected virtual method:-

public class Dispatcher
{
  . . .
  public void DispatchTasks()
  {
    . . .
    foreach (var task in tasks)
        Execute(task);
    . . .
  }

  protected virtual void Execute(MyTask task)
  {
    ThreadPool.QueueUserWorkItem((o) =>
    {
      task.Execute();
    });
  }
  . . .
}

This allowed me to derive a test class and just override the Execute() method to make the asynchronous call synchronous by using the same thread:-

public class TestDispatcher : Dispatcher
{
  protected override void Execute(MyTask task)
  {
    task.Execute();
  }
}
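The test itself then looks just like any other synchronous one - something along these lines, where the arrange and assert details obviously depend on how the real class finds its tasks:-

[Test]
public void dispatched_tasks_run_on_the_calling_thread()
{
  var dispatcher = new TestDispatcher();
  // . . . arrange whatever tasks DispatchTasks() will find . . .

  dispatcher.DispatchTasks();

  // . . . assert on the outcome, safe in the knowledge that every
  // task ran synchronously on this thread . . .
}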

Favour Composition Over Inheritance

Looking back, what I probably missed was a need to separate some concerns. The dispatcher class was responsible not only for finding the work to do, but also for queuing and scheduling it. My need to mock out the latter concern to test the former should have been a smell I picked up on. In many cases where I’ve seen virtual methods used since, it’s been to mock out a call to an external dependency, such as a remote service. This can just as easily be tackled by factoring out the remote call into a separate class with a suitable interface, which is then used to decouple the caller and callee. This is how you might apply that refactoring to my earlier example:-

public interface IScheduler
{
  void Execute(MyTask task);
}

public class ThreadPoolScheduler : IScheduler
{
  public void Execute(MyTask task)
  {
    ThreadPool.QueueUserWorkItem((o) =>
    {
      task.Execute();
    });
  }
}

public class Dispatcher
{
  public Dispatcher()
    : this(new ThreadPoolScheduler())
  {  }

  public Dispatcher(IScheduler scheduler)
  {
    _scheduler = scheduler;
  }

  public void DispatchTasks()
  {
    . . .
    foreach (var task in tasks)
        _scheduler.Execute(task);
    . . .
  }
  . . .
  private IScheduler _scheduler;
}

I could have forced the client of my Dispatcher class to provide me with a scheduler instance, but most of the time they would only be providing me with what I chose as the default anyway (there is only 1 implementation) so all I’m doing is making it harder to use. Internally the class only relies on the interface, not the concrete type, and so if the default eventually becomes unacceptable I can remove the default ctor and lean on the compiler to fix things up.
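The pay-off comes in the tests, where a trivial fake like this sketch (the names are mine, invented for the example) keeps everything on the test’s own thread without any need for a derived class:-

public class SynchronousScheduler : IScheduler
{
  public void Execute(MyTask task)
  {
    // Run the task directly so the test remains deterministic.
    task.Execute();
  }
}

[Test]
public void dispatching_executes_the_pending_tasks()
{
  var dispatcher = new Dispatcher(new SynchronousScheduler());

  dispatcher.DispatchTasks();

  // . . . assert on the observable outcome of the tasks . . .
}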

Too Much Abstraction?

One argument against this kind of refactoring is that it falls foul of Too Much Abstraction. In this example I believe it adds significant value - making the class testable - and it adds little burden on the client too. In general I’ve worked on codebases where there is too little abstraction rather than too much. Where I have seen excessive layers of abstraction occur it’s been down to Big Design Up-Front because code that is built piecemeal usually has abstractions created out of a practical need rather than idle speculation.

 

[1] Personally I blame COM which had #define interface struct so that you could use the “interface keyword” in your C++ code.

[2] Thanks goes to Ian Shimmings for showing me where using Explicit Interface Implementation goes against LSP for what looks like an acceptable reason - Fluent Interfaces.

Tuesday, 17 September 2013

Feature Branch or Feature Toggle?

One of the great things about joining a new team is having the opportunity to re-evaluate your practices in light of the way other people work. You also have a chance to hear new arguments about why others do things differently to the way you do. One recent discussion came about after I spotted that a colleague pretty much always used a branch for each (non-trivial) feature, whereas I always tend to use main/trunk/master [1] by default.

Martin Fowler wrote about both Feature Branches and Feature Toggles a few years ago, and ever since I worked on a project where there were more integration branches than a banyan tree [2] I’ve favoured the latter approach. Working with a source control system that doesn’t support branching (or does, but very poorly, like SourceSafe) is another way to hone your skills at delivering code directly to the trunk without screwing up the build (or your teammates).

Branching

The reason for branching at all is usually that you want some stability in the codebase whilst you’re making changes. Your view/working copy [3] is clearly isolated from other developers’, but unless you have the ability to commit your changes in stages it’s not exactly the most productive way to work. The most common non-development branch is probably the Release Branch, where changes are made very carefully to maintain stability up to deployment and which then acts as a safe place to create fixes and patches to address any serious problems.

I find myself very rarely wanting to branch these days. It’s more likely that I decide to shelve [4] my changes to reuse my working copy for something else, e.g. fix a build/deployment problem, after which I’ll un-shelve and carry on as if nothing happened. I don’t work in the kind of environments where spikes are very frequent; if anything it’s a potentially messy refactoring that is likely to cause me to branch, especially if the VCS doesn’t handle moving/renaming files and folders very well, like Subversion. Using the Repo Browser directly in Subversion with a private branch is probably the easiest way to remain sane whilst moving/renaming large folders as it saves on all the unnecessary shuffling of physical files around the working copy.

Toggling

My preference for publishing changes directly to the integration branch is borne out of always wanting to work with the latest code and keeping merges to a minimum. That probably sounds like I’m being a little hypocritical after what I said in “What’s the Check-In Frequency, Kenneth?”, but I try and keep my commits (and/or pushes) to a level where each change-set adds something “significant”. This also allows me to fix and commit orthogonal issues, like build script stuff, immediately without having to cherry-pick the changes in some way.

Generally speaking, new code tends to have a natural “toggle” anyway, such as a menu entry or command line verb/switch that you can disable to hide access to the feature until it’s ready. When the changes go a little deeper I might have to invent a switch or .config setting to initially enable access to it. This way the feature can be tested side-by-side with the existing implementation, and then once the switchover has occurred the default behaviour can be changed and the enabling mechanism removed. The need to do this kind of thing comes out of the way some organisations work - a lack of formal acceptance testing means the change practically reaches production before it’s known whether it will go live or not!
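As a sketch of the sort of thing I mean (the feature name, menu and command types are purely illustrative), the enabling mechanism need not amount to much more than a one-liner over the <appSettings> section:-

private static bool IsFeatureEnabled(string name)
{
  // An absent setting means the feature stays switched off.
  return Convert.ToBoolean(
           ConfigurationManager.AppSettings[name] ?? bool.FalseString);
}
. . .
if (IsFeatureEnabled("NewPricingReport"))
  menu.Add(new NewPricingReportCommand());

Once the feature goes live by default the setting and the guard can both be deleted again.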

Waste Until Proven Useful?

What I found interesting in our discussion of the branch vs toggle approach was that my colleague felt he needed to keep his “work-in-progress” code out of the integration branch until it was mature. Looking at it from a lean perspective I guess you could argue that a feature that is not enabled is just waste and so shouldn’t exist - it’s dead code. This kind of makes sense, but I think it turns a feature into an all-or-nothing proposition, and there may be value in the refactoring alone that has been done to get the feature implemented. I would want that refactoring to take effect in the development integration branch as soon as possible to give it time to bed in.

I guess what makes me comfortable with working directly on the trunk is having 20 years’ experience to draw on :-). I have the ability to break down my changes into small tasks, and I know what kinds of changes are highly unlikely to have any adverse effects. This also presupposes the codebase is being developed by other people who also don’t have a habit of doing “weird stuff” that is likely to just break when seemingly unrelated changes occur. All the usual good stuff like decent test coverage, low coupling and high cohesion makes this style of working achievable.

 

[1] Have we reached a general consensus yet on what we call the major “development” integration branch? I know it as “main” from ClearCase, “trunk” from Subversion and “master” from Git. SourceSafe doesn’t have a name for it because it doesn’t exactly make branching easy.

[2] Not sure if they have the most branches, but they sure seem to have a lot!

[3] Once again, I have “view” from ClearCase and “working copy/folder” from Subversion/SourceSafe.

[4] And yet another term - “shelve” or “stash”?

Monday, 16 September 2013

Overcoming the Relational Mindset

My current project is a bit of a departure for me as I’ve left behind the world of the classic SQL RDBMS for a moment and am working on one of those new-fangled NOSQL alternatives - MongoDB. Whilst I haven’t found any real difficulty adjusting to the document-centric world (thanks to too much XML) I have noticed myself slipping back into the relational mindset when making smaller changes to the schema. One such example happened just the other day…

Stop Extending Tables

Imagine you’re working for a retailer that has some form of loyalty card mechanism. Whenever you make a purchase you are told the number of points you have received for the current purchase, plus any accrued up to some date (notionally today). The initial part of the document schema might look something like this:-

LoyaltyBonus:
{
  CardNumber: "1234 5678",
  Points: 100
}

Now, the second part - the accrued points to date - has a slight twist in that the service required to provide this data might not be available and so it’s not always possible to obtain it. Hence that part of the structure is optional. Slipping back into the relational mindset I automatically added two nullable attributes, because what I saw was a need to extend the LoyaltyBonus “table” with two optional values like so:-

LoyaltyBonus:
{
  CardNumber: "1234 5678",
  Points: 100,
  BalancePoints: 999,        // Optional
  BalanceDate: "2013-01-01"  // Optional
}

…and when the loyalty service is not available it might look like this:-

LoyaltyBonus:
{
  CardNumber: "1234 5678",
  Points: 100,
  BalancePoints: null,
  BalanceDate: null
}

Of course the null values can be elided in the actual BSON document but I’m showing them for example’s sake. The two attributes BalancePoints and BalanceDate are also tightly coupled: either they both exist or neither does. That might seem fairly obvious in this case, but it’s not always.

Documents, Not Columns

What I realised a little while later (after peer reviewing someone else’s changes!) was that I probably should have created a nested document for the two Balance related fields instead:-

LoyaltyBonus:
{
  CardNumber: "1234 5678",
  Points: 100,
  Balance:
  {
    Points: 999,
    Date: "2013-01-01"
  }
}

Now the Balance sub-document exists in its entirety or not at all. Also the two values are essentially non-nullable because that’s handled at the sub-document level instead. The other clue, which in retrospect seemed blindingly obvious [1], was the use of the prefix “Balance” in the two attribute names.
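In the C# data model the same shift shows up as swapping a pair of nullable properties for an optional nested type - something like this sketch (not our actual classes, and with any driver-specific mapping attributes left out):-

public class Balance
{
  public int Points { get; set; }
  public DateTime Date { get; set; }
}

public class LoyaltyBonus
{
  public string CardNumber { get; set; }
  public int Points { get; set; }

  // Null when the loyalty service was unavailable; when present, both
  // the points and the date are guaranteed to be there together.
  public Balance Balance { get; set; }
}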

 

[1] It’s never quite that simple in practice because you have probably already gone through a number of other refactorings before you got to this point. In a sense it’s a bit like going through the various Normal Forms in a relational schema - at each step you need to re-evaluate the schema and pull out any more sub-documents until you’ve factored out all the optional parts.

Friday, 6 September 2013

OwnedPtr and AssocPtr - UML in C++

My recent post about “Overdoing the References”, coupled with another recent post from an ex-colleague (Carl Gibbs) titled “Divide in C++ Resource Management” caused me to remember an idea we tossed about around the turn of the millennium for representing UML ownership semantics in C++...

Back then general purpose smart pointers, and in particular reference-counted smart pointers, were still fairly cutting edge as Boost was in its infancy (if you were even aware of its existence). Around the same time UML was also gaining traction, which I personally latched onto as I found the visualisation of OO class hierarchies jolly useful [1]. What I found hard though was translating the Aggregation and Association relationships from UML into C++ when holding objects by reference. This was because a bald (raw) pointer or reference conveys nothing about its ownership semantics by default. References at least had the convention that you don’t tend to delete through them (if you exclude my earlier reference-obsessed phase), but that wasn’t true for pointers.

Unique Ownership

Reference-counted smart pointers like std::shared_ptr<> are the Swiss-Army knife of modern C++. The common advice of not using std::auto_ptr<> with containers is probably what led to their adoption for managing memory everywhere - irrespective of whether the ownership was actually shared or logically owned by a single container, such as std::vector<>. My overly literal side didn’t like this “abuse” - I wanted ownership to be conveyed more obviously. Also, places where shared ownership even occurred were very rare then, because there was always an acyclic graph of objects all the way down from the root “app” object, which meant lifetimes were deterministic.

UML in C++


The canonical example in UML where both forms of ownership crop up is probably a tree structure, such as the nodes in an XML document. A node is a parent to zero or more children and the relationship is commonly bidirectional too. A node owns its children such that if you delete a node all its children, grand-children, etc. get deleted too.

Using bald pointers you might choose to represent this class like so:-

template<typename T>
class Node
{
private: 
  T                  m_value; 
  Node*              m_parent; 
  std::vector<Node*> m_children;
};

However you could argue there is a difference in ownership semantics between the two Node*-based members (m_parent and m_children). The child nodes are owned by the std::vector<> container, whereas the parent node pointer is just a back reference. Naive use of reference-counted smart pointers for both relationships can lead to memory leaks caused by the cyclic reference between parent and child, and so by keeping the child => parent side of the link simple we avoid this.

The Smarter Pointer

So, to deal with the ownership of the children we came up with a std::auto_ptr<> like type called OwnedPtr<>. The idea was that it would behave much like what we now have in the std::unique_ptr<> type, i.e. std::auto_ptr<> style non-shared ownership, but without the container problems inherent in auto_ptr<>.

  Node*                       m_parent; 
  std::vector<OwnedPtr<Node>> m_children;

The Dumbest Pointer

Whilst we could have left the child => parent pointer bald, this meant that it would be hard to tell whether we were looking at legacy code that had yet to be analysed, new code that mistakenly didn’t adhere to the new idiom, or code that was analysed and correct. The solution we came up with was called AssocPtr<> which was nothing more than a trivial wrapper around a bald pointer! Whilst it was functionally identical to a raw pointer, the name told you that the pointer was not owned by the holder.

  AssocPtr<Node>              m_parent; 
  std::vector<OwnedPtr<Node>> m_children;
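To make that concrete, a minimal sketch of AssocPtr<> (just the essence of the idea, not the original source) is little more than this:-

template<typename T>
class AssocPtr
{
public:
  AssocPtr(T* ptr = 0)
    : m_ptr(ptr)
  { }

  // Behaves exactly like the bald pointer it wraps...
  T* operator->() const { return m_ptr; }
  T& operator*() const  { return *m_ptr; }

private:
  // ...but the name alone tells the reader that the holder does not
  // own the pointee, so there is deliberately no delete in here.
  T* m_ptr;
};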

Exit UML / Enter shared_ptr

In the end this idea became just another failed experiment. The OwnedPtr<> type was pretty indistinguishable from a classic reference-counted smart pointer and ultimately it was easier to just share ownership by default rather than decide who was the ultimate owner and who was just a lurker. Once Boost showed up with its various (thread-safe) smart pointer classes the need to crank one’s own variants pretty much evaporated.

[1] I also thought the Use Case, Deployment and Sequence diagrams were neat too. While I still find value in the latter two I got disillusioned with the Use Case aspect pretty quickly.