In my recent post The Public Interface of a Database I talked about using User Defined Types and how I believe they help make the interface more maintainable. Now that we’re in production and have real data stored in our tables, changing them is going to be more difficult. The other day I needed to do just that and I found it’s not quite as simple as I had hoped…
Domain Types (Theory)
Although I’ve never studied database theory in any formal way, I can see how User Defined Types are probably meant to be used - to let you express the commonality between data of the same nature, and to attach constraints that capture restrictions the underlying storage type cannot express by itself.
For example, if you were storing a percentage (as an integer) you could use any of the integer types - tinyint, smallint, int etc, however it makes sense to use the smallest amount of storage required, i.e. tinyint (1 byte). But this type has a range of 0 - 255 which is wider than a percentage (0 - 100) so you need to bind a rule to say the value is actually limited to just (0 <= value <= 100).
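In SQL Server terms that might look something like the sketch below; the type and rule names here are my own invention:-

create type dbo.Percentage_t from tinyint not null
go
-- bind a rule to restrict the valid range of the new type
create rule dbo.Percentage_rule as @value between 0 and 100
go
exec sp_bindrule 'dbo.Percentage_rule', 'dbo.Percentage_t'
go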
Domain Types (Practice)
The problem is that the rule (at least under SQL Server) only applies to a column in a table; it is effectively a Check Constraint[+]. This means that declaring them in your stored procedure parameter list and using them within your SQL code only has a partial effect - you get the benefit of abstracting the underlying storage type, but not the additional validation rules.
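You can see this with the hypothetical percentage type above - assigning an out-of-range value to a variable goes entirely unchecked:-

declare @pct dbo.Percentage_t
set @pct = 200 -- silently accepted; the rule only fires for table columns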
To make matters worse, when you try to change the UDT and go searching for the ALTER TYPE statement, you’ll find it doesn’t exist[*]. The only way you can change a UDT is to drop it and then create it again. But, to drop it you need to have nothing referencing it - no tables, functions or procedures.
Putting aside the difficulties in changing them for the moment let’s see what benefits they can bestow…
Type Aliases
Even though they do not enforce their constraints when used as variables or arguments, they still have a marked effect on the maintainability of your code. Using the kind of example I cited in my previous post, which do you think is more expressive?
declare @name varchar(100)
vs
declare @name dbo.CustomerName_t
With good variable naming it is arguable how much the type adds in this example. If I changed the variable name to “@customerName” then they would both be just as explicit; but at least in the latter case you have the choice to let the type name enhance the readability.
There is also the temptation, especially with varchar columns, to just pick a size that is “big enough”. This means you might see varchar(100) in some places, varchar(128) in others and perhaps a very specific varchar(33) elsewhere. User defined types as simple aliases take all that guesswork away so that you don’t have to think nearly so hard. Other common sources of discrepancy are the various numbers used in financial scenarios, which have a habit of being stored with different precisions and scales, e.g. float(n), decimal(p, s), depending on the paranoia level of the developer.
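Defining the alias itself is a one-liner; here I’m assuming the width of 100 from the earlier example:-

create type dbo.CustomerName_t from varchar(100) not null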
Abstracting Identities
Entities, such as customers and products, are commonly referred to by some integer based surrogate key (perhaps generated by an IDENTITY column), but they may also be based on a natural key if, say, a short code exists. Once again, using a type alias means that you don’t have to think so hard; the example below could be based on an underlying integer or varchar type:-
declare @productId dbo.ProductId_t
This is a big win because unlike some cases where you could argue that you need to know what the underlying storage type is (perhaps you’re going to perform some manipulation of its contents) an entity ID is (in my experience) nearly always a unique & opaque value - its contents are unimportant. I’ve come to find this construct very unsatisfactory (in any language):-
declare @id int
The integer type supports so many operations and allows for unnecessary promotions that it’s an accident waiting to happen[#] when used for the identity of an entity. Sadly, using UDTs will not disable the undesirable arithmetic operations, but it should help avoid any unexpected type mismatches by virtue of not being able to get it wrong in the first place.
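For example, either of these hypothetical definitions would satisfy the earlier declaration without the calling code knowing, or caring, which was chosen:-

create type dbo.ProductId_t from int not null
go
-- or, where a natural short code is the key:
-- create type dbo.ProductId_t from varchar(10) not null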
Mechanisms for Change
Let’s now come back to that gnarly issue of how to change them, because that’s what prompted this post in the first place…
As mentioned earlier there is no magical ALTER TYPE statement and so that means you have two real choices:-
- Drop every referenced object, drop the old type and then recreate everything
- Introduce a new type, migrate everything across to that and then drop the old type
The second option may sound simpler, and it is, slightly; but only because you can recreate objects in an order that is less susceptible to dependency problems. Functions and procedures are the easy part anyway; what’s hard is altering a table’s schema, especially a big one!
Ultimately your choice for updating a UDT that is used in a table depends very much on the change you need to make to the UDT and whether it requires data in the existing table to be touched. For example, if you widen a column from varchar(10) to varchar(20) SQL Server only has to change the table metadata to say the maximum length has increased, because the actual length of each value does not change. Shrinking the column size or changing an integral type will likely cause the data to be touched, or at the very least examined to ensure it still conforms. According to one of the Inside SQL Server books you can tell a priori whether data will need to be touched just by looking at the estimated execution plan.
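For instance, a widening change like this one (the table and column names are made up) should only need a metadata update:-

alter table dbo.Customer alter column CustomerName varchar(20) not null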
So far my needs to change a UDT have only revolved around varchar columns and so I’ve been lucky that the table change has been the simple one. The changes to the functions and procedures were also pretty easy because we have the ability to rebuild the database from scratch (our Continuous Integration process does this each build) so I could ensure I caught and checked every place where the type was used. It’s a somewhat laborious process, but with the right development process around you it can be pretty easy.
Updating Tables
As I understand it the most efficient way to update an existing table schema (where you’re not just appending a new column) is to rename it, create the new one and then copy the data back over. When I first came to change my UDT I knew that I could either try to fix up the existing one or create a new “version” of it. When I considered what actually needed to be done to fix the type, and how big the table was, I buckled and created a new type called “MyType2_t”. This was because it felt like there were far fewer ordering issues and I didn’t have to generate scripts to drop things first. So, instead of this lengthy rigmarole:-
- Drop dependent functions and procedures
- Create copies of tables without the UDT to be modified
- Copy data to backup tables
- Recreate the UDT with the new definition
- Create new tables with the new UDT
- Copy the data back again
- Recreate the functions and procedures
…I could just use ALTER TABLE and switch the types, taking whatever hit was necessary to adjust the data in the tables (none in this case) - a sketch in code follows the list:-
- Create the new version of the UDT
- Alter the tables to use the new version of the UDT
- Recreate the functions and procedures[$]
- Drop the old version of the UDT
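In T-SQL the recipe boils down to something like this; the table, column and type names are all hypothetical, as is the widening from 10 to 20 characters:-

create type dbo.MyType2_t from varchar(20) not null
go
alter table dbo.SomeTable alter column SomeColumn dbo.MyType2_t not null
go
-- recreate the dependent functions and procedures here
drop type dbo.MyType_t
go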
Yes, the deployment was easier, but we’re now left with a wart - a type with a version suffix (e.g. MyType2_t). Having gone through all this, it soon dawned on me that if you could use ALTER TABLE to switch types then surely you could just do this instead (again, a sketch follows the list):-
- Drop dependent functions and procedures
- Alter the table to use a non-UDT based equivalent type
- Recreate the UDT with the new definition
- Alter the table to use the new UDT
- Recreate the functions and procedures
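Assuming the same hypothetical names as before, the middle three steps would look something like this:-

alter table dbo.SomeTable alter column SomeColumn varchar(20) not null
go
drop type dbo.MyType_t
go
create type dbo.MyType_t from varchar(20) not null
go
alter table dbo.SomeTable alter column SomeColumn dbo.MyType_t not null
go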
If SQL Server is smart enough to apply just a metadata change when growing a varchar(n) then it should be able to just replace the metadata when switching between a UDT that is effectively a varchar(n) and a plain varchar(n). This would remove the dependency on the UDT without any upheaval at all. Sadly it’s an experiment I cannot run on this netbook, but I intend to.
Unnecessary Context Switching
Having worked for some time on a codebase that uses UDTs virtually everywhere, when I now see an explicit type I begin to question whether it’s correct, and I find that very distracting. Was any thought put into it, or was the author just being lazy? Perhaps it’s a new type and they just didn’t have time to create a UDT for it? Or maybe they did, but missed fixing up this case? Any way you look at it I’m being diverted from my real goal - but that’s just the age-old argument about being consistent, I suppose.
[+] According to the documentation for CREATE RULE, rules are now deprecated anyway in favour of Check Constraints. Was that because they just weren’t as useful as expected?
[*] There was a “petition” on Microsoft Connect about the lack of an ALTER TYPE statement which I discovered whilst trawling StackOverflow for alternate solutions.
[#] This is one area where C++ enumerations are a boon because they disallow the arithmetic operations and also give the value a type-safe holder. This jolly useful use of enumerations was pointed out to me by fellow ACCU member Jonathan Wakely.
[$] Each function and procedure script has the common “if (object_id(Xxx) is not null) drop Xxx” prologue.