Despite all of the advances in client-side scripting, the wonderful JavaScript libraries like Prototype and Scriptaculous, and the ease of writing AJAXy code in ASP.NET, there’s still one aspect of modern web development that can be a complete pain in the butt: accessibility for users without JavaScript. If you’re lucky – perhaps you’re developing an Intranet application, or the like – a simple <noscript>Error: This application requires JavaScript enabled</noscript> is all it takes. But other times, you need to take that extra step and make it work for those with and without JavaScript enabled.

There are a lot of ways that this can be accomplished, but one of the more popular is the SCRIPT/NOSCRIPT combo...

<script type="text/javascript">
    document.write('Only users with JavaScript will see me.');    
</script>
<noscript>
    Only users without JavaScript will see me.
</noscript>

While this works fine in a lot of scenarios, it can get especially tricky when you want to put server-side controls on the SCRIPT side of things. A lot of developers resort to something like this...

<div id="javaScriptOnly" style="display:none">
    Only users with JavaScript will see me.
    <asp:LinkButton runat="server" ... />
</div>
<div id="noJavaScript" style="display:block">
    Only users without JavaScript will see me.
    <asp:Button runat="server" ... />
</div>
<script type="text/javascript">
    document.getElementById('javaScriptOnly').style.display = 'block';
    document.getElementById('noJavaScript').style.display = 'none';
</script>

... and of course, things quickly get much uglier once you do this in the real world.

One solution that I use is a simple custom control called ScriptOnly. It works just like this...

<inedo:ScriptOnly runat="server">
    Only users with JavaScript will see me.
    <asp:LinkButton runat="server" onClick="doSomething" ... />
</inedo:ScriptOnly>
<noscript>
    Only users without JavaScript will see me.
    <asp:Button runat="server" onClick="doSomething" ... />
</noscript>

JavaScript users see a LinkButton, while non-JavaScript users see a plain old submit button. What’s neat about this technique is that you can put in any type of content - server controls, HTML, script tags, etc. - and that content will only be displayed for JavaScript users. In essence, it works like a reverse NOSCRIPT tag.

Behind the scenes, ScriptOnly is a very simple control...

using System.IO;
using System.Web.UI;

[ParseChildren(false)]
public class ScriptOnly : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        //Render contents to a StringWriter
        StringWriter renderedContents = new StringWriter();
        base.Render(new HtmlTextWriter(renderedContents));

        //write out the contents, line by line
        writer.WriteLine("<script type=\"text/javascript\">");
        StringReader sr = new StringReader(renderedContents.ToString());
        while (sr.Peek() >= 0)
        {
            // This could be optimized to write on one line; but
            // I've found this makes it easier to debug when
            // looking at a page's source
            writer.WriteLine(
                "document.writeln('{0}');",
                jsEscapeText(sr.ReadLine()).Trim());
        }
        writer.WriteLine("</script>");
    }

    private string jsEscapeText(string value)
    {
        if (string.IsNullOrEmpty(value)) return value;

        // This, too, could be optimized to replace character
        // by character; but this gives you an idea of
        // what to escape out
        return value
            /*  \ --> \\ */
            .Replace("\\", "\\\\")
            /*  ' --> \' */
            .Replace("'", "\\'")
            /*  " --> \" */
            .Replace("\"", "\\\"")
            /*  (newline) --> \n */
            .Replace("\n", "\\n")
            /*  (creturn) --> \r */
            .Replace("\r", "\\r")
            /* </script> string */
            .Replace("</script>", "</scri'+'pt>");
    }
}
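To see what that escaping produces, here's the same transformation sketched in plain JavaScript (a hand-written re-implementation for illustration, not code from the control):

```javascript
// Illustrative re-implementation of jsEscapeText in JavaScript.
// The order matters: backslashes must be escaped first, or the
// later escapes would themselves get double-escaped.
function jsEscapeText(value) {
    if (!value) return value;
    return value
        .replace(/\\/g, '\\\\')   //  \  -->  \\
        .replace(/'/g, "\\'")     //  '  -->  \'
        .replace(/"/g, '\\"')     //  "  -->  \"
        .replace(/\n/g, '\\n')    //  (newline)  -->  \n
        .replace(/\r/g, '\\r')    //  (creturn)  -->  \r
        // split up "</script>" so the browser's HTML parser doesn't
        // end the script block in the middle of a document.writeln
        .replace(/<\/script>/g, "</scri'+'pt>");
}

console.log(jsEscapeText('<a href="#">it\'s a link</a>'));
// → <a href=\"#\">it\'s a link</a>
```

Run each line of the control's rendered contents through this, wrap it in document.writeln('...'), and the original markup comes out intact on the JavaScript side.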

When "pre-registered" in your web.config, it works just as well as the NOSCRIPT tag.

[[ Meta-blogging: as you may have noticed from the name/description change (and of course, this article) I’ve decided to shift the focus of this blog back to the “front lines” of Microsoft/.NET development technologies. All other rants and ramblings will go to Alex's Soapbox over at WTF ]]

If you've ever come across this error...

The VirtualPathProvider returned a VirtualFile object with VirtualPath set to '/global/initrode/embeddedControl.ascx' instead of the expected '//global/initrode/embeddedControl.ascx'

... then chances are you're implementing a VirtualPathProvider in order to serve up embedded Page/Control resources or something fun like that. Let's just hope you're not serving pages from a ZIP file. And if you have no idea what a VirtualPathProvider is, do check out that MSDN article I linked to get an idea.

The reason behind this error is identified in Microsoft Bug #307978: ASP.NET is erroneously replacing page compilation errors with the bad virtual path error. While ensuring that your virtual-pathed page will compile is a sure-fire way to fix the error, finding the compilation errors can be a bit of pain.

Fortunately, there's a pretty easy workaround that will let you see some of the compilation errors. First, make sure that your custom VirtualPathProvider has a static method that can determine whether a given virtualPath is on disk or is virtualized (e.g. an embedded resource). Next, create a handler factory that inherits PageHandlerFactory, overrides the GetHandler method, and has a try/catch around the call to base.GetHandler(). In the event that an exception occurs, simply determine if the request's virtual path is "virtual" (through that static method) and, if so, rethrow the exception with only the error message. In other words,

public class MyPageHandlerFactory : PageHandlerFactory
{
    public override IHttpHandler GetHandler(HttpContext context, string requestType, string virtualPath, string pathTranslated)
    {
        try
        {
            return base.GetHandler(context, requestType, virtualPath, pathTranslated);
        }
        catch (Exception ex)
        {
            //TODO: ASP.NET 2.0 Bug Workaround
            // There is an error generating a stack trace for VirtualPathed files, so
            // we have to give up our stack trace if it's a resource file

            if (EmbeddedResourceVirtualPathProvider.IsResourcePath(virtualPath))
                throw new Exception(ex.Message);
            else
                throw; // rethrow, preserving the original stack trace
        }
    }
}

Since we're only wrapping the GetHandler method (as opposed to the IHttpHandler's ProcessRequest method), the only errors you'll see wrapped like this are pre-ProcessRequest errors (e.g. compilation errors). And while this won't give you the full stack trace, at least you'll see something like this instead:

http://server//global/initrode/embeddedControl.ascx(5): error CS1002: ; expected

There's been a buzz going around about a new web startup called Coghead. Heralded by Business 2.0 as one of the "innovations that could reorder entire industries," Coghead is led by former Red Hat executive Paul McNamara and Extricity founder Greg Olsen. El Dorado Ventures, a venture capital firm, recently invested $2.3M in the company. According to McNamara,

Coghead will enable nonprogrammers to rapidly create their own custom business software

In other words, Coghead is the b*stard child of 4GL and Web 2.0; it's the end result of a careful mixture of myth and buzzword; and, I'm sure, it will play an important part in several upcoming The Daily WTF articles. Before we get into the details of Coghead, let's take a look back at the world of 4GL.

There's a bit of ambiguity surrounding what is and isn't a 4GL (Fourth Generation Language), so I'll stick with James Martin's characterization from his 1982 book, Application Development Without Programmers. The book's title should give you a good enough understanding of the goal of a 4GL: the ability to develop complex custom business software through the use of simple-to-use tools.

In the quarter-century since Application Development Without Programmers debuted, let's consider how far we've come on this goal: dBase, Clipper, FileMaker, and Access. It's a pretty far cry from what James Martin and the other 4GL dreamers had in mind. Sure, Jane in Accounting could easily use Microsoft Access to create custom software to manage her music collection, but ask her to develop the General Ledger in Access and you'll find your books in worse shape than Enron's and Tyco's combined.

There's a simple, common-sense reason why custom business software will always require programmers. It's the same reason that brickwork will always require a mason and why woodwork will always require a carpenter. No matter how complex and versatile a tool is, an experienced builder is always required to create something unique.

Like many other common-sense principles, the "software machine" is one that some programmers don't get. Be it with The Tool or The Customer Friendly System, these programmers believe they are so clever and so intelligent that they can program even themselves into obsolescence.

Some businesses don't get it, either. But they will eventually pay the price: the "mission critical" software they developed for themselves in Microsoft Access will become their albatross, costing time, opportunity, and, eventually, lots of money for real programmers to fix their mess.

And this is where we return to Coghead. You see, Coghead is merely another example of this arrogant ignorance, but this time it's web-based and enterprisey. That's right; unlike its desktop counterparts, Coghead is targeted towards big businesses, not small businesses and hobbyists:

anyone who can code a simple Excel macro should have little trouble using Coghead to create even sophisticated enterprise apps like logistics trackers, CRM programs, or project management systems.

Even with a liberal application of AJAX, the fact that Coghead is web-based means that it's less functional and offers a poorer experience than its desktop equivalent. Not only that, but all data and business logic are left in the hands of a third party. That, in and of itself, is a good enough reason to avoid Coghead.


the Coghead web IDE

Twenty years ago, if you developed a dBase application, you "owned" it and knew that so long as you could find a MS-DOS 2.0 disk, your data and business logic were safe. If you developed a Coghead application, what would happen if Coghead went out of business? What if they upgrade their system and it breaks your application? What if you forget to pay the subscription fee and they delete your application? It just isn't worth the risk.

Some might argue that this negative analysis is a knee-jerk reaction to a threat. After all, Business 2.0 claims that Coghead will be a disruptor for "initially, custom software developers, but potentially almost all software-tool makers." But I'm not threatened by Coghead; I'm disappointed.

We've come a long way with software development in the past twenty-five years, from automated build processes to advanced integrated development environments. We've developed effective techniques for communicating with users and bringing them closer to the development process. Perpetuating the myth that programmers are an unnecessary part of the software development process does nothing but alienate users and frustrate them when the (programmer-developed) ÜberTool fails to deliver the results it promised.

UPDATE, Feb 19, 2009 -- and wouldn't you know it, CogHead is no more. And Access '97 still runs strong.

Of all the tools that ship with SQL Server, Enterprise Manager is by far the most feature-packed and widely-used. Nearly every SQL Server developer is familiar with Enterprise Manager. They are comfortable using the wizards and GUI to do everything from creating a new table to adding a scheduled job. But as a project grows to encompass more developers and environments, Enterprise Manager becomes a detriment to the development process.

Most applications exist in at least two different environments: a development environment and a production environment. Promoting code changes from a lower level (development) to a higher level (production) is trivial: you just copy the executable code to the desired environment. Promoting database changes through Enterprise Manager is another story; consider the steps required to replicate a single change - a new Shippers table, with a foreign key from Orders - in a higher-level environment:

  • Click on the desired database.
  • Click on Action, New, then Table.
  • Add a column named "Shipper_Id" with a Data Type "char", give it a length of 5, and uncheck the "Allow Nulls" box.
  • In the toolbar, click on the "Set Primary Key" icon. Then you skip 22 steps.
  • In the toolbar, click on the "Manage Relationships…" button.
  • Click on the New button, and then select "Shippers" as the Foreign key table.
  • Select "Shipper_Id" on the left column and "Shipper_Id" on the right column. Skip the remaining steps.

Not only is this process tedious, but you're prone to making errors and omissions when using it. Such errors and omissions leave the higher-level and lower-level databases out of sync.

Fortunately, you can use an easier method to maintain changes between databases: Data Definition Language (DDL). The change described in the previous example can be developed in a lower-level environment and migrated to a higher-level environment with this simple script:

CREATE TABLE Shippers (
  Shipper_Id CHAR(5) NOT NULL 
    CONSTRAINT PK_Shippers PRIMARY KEY,
  Shipper_Name VARCHAR(75) NOT NULL,
  Active_Indicator CHAR(1) NOT NULL 
    CONSTRAINT CK_Shippers_Indicator 
      CHECK (Active_Indicator IN ('Y','N'))
)
ALTER TABLE Orders
  ADD Shipper_Id CHAR(5) NULL,
  CONSTRAINT FK_Orders_Shippers
    FOREIGN KEY (Shipper_Id)
    REFERENCES Shippers(Shipper_Id)

You can manage all the DDL scripts with a variety of different techniques and technologies, ranging from network drives to source control. Once a system is put in place to manage DDL scripts, you can use an automated deployment process to migrate your changes. This process is as simple as clicking the "Deploy Changes" button.
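As a rough illustration of what such a process might do, here is a small sketch that picks out the not-yet-applied scripts in order (the numbered file-naming convention and the applied-scripts list are assumptions for the example, not something prescribed here):

```javascript
// Hypothetical sketch: given the DDL scripts in a folder and a record of
// which have already been run (e.g. from a tracking table), return the
// ones a "Deploy Changes" button would still need to apply, in order.
function scriptsToApply(allScripts, appliedScripts) {
    const applied = new Set(appliedScripts);
    return allScripts
        .filter(name => name.endsWith('.sql') && !applied.has(name))
        .sort(); // relies on sortable names, e.g. 001_create_shippers.sql
}

console.log(scriptsToApply(
    ['002_alter_orders.sql', '001_create_shippers.sql', 'readme.txt'],
    ['001_create_shippers.sql']
));
// → [ '002_alter_orders.sql' ]
```

A real deployment tool would then run each returned script against the target database and record it as applied.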

The perceived difficulty of switching changes from Enterprise Manager to DDL scripts is one of the biggest hurdles for developers. The Books Online don't help change this perception. A quick look at the syntax for the CREATE TABLE statement is enough to discourage most developers from using DDL.

Enterprise Manager helps you with this transition. Before making database changes, Enterprise Manager generates its own DDL script to run against the database. With the "Save Change Script" button, you can copy the generated DDL script to disk, instead of running it against the database.

But as with any code generator, your resulting T-SQL script is far from ideal. For example, having Enterprise Manager generate the DDL required for the change described in the example involves six different ill-formatted statements. What do you do now? You can add a bit of refactoring to the generated script, and the result looks almost identical to the example script I showed earlier. After a few more rounds of generating and refactoring, you'll want to transition straight to DDL, and never look back at tiresome database development within Enterprise Manager.

There are few emails that one will receive in his lifetime that will render him completely speechless. This past weekend, I received one such email. Its subject read Congratulations on your MVP Award!

I struggle with the words to describe how elated I am to be chosen for this award. Sure, I’ve worked my butt off in microsoft.public.accessories.paint, helping both newbies and vets solve their problems. But I never expected this. For me, it’s always been about my love of the Paint, and sharing my knowledge and expertise of Paint with the world.

I don’t want to bore you with patting myself on the back, so I’ll just use the rest of this space to share my top three tips and tricks. I’ve got plenty more, so if you ever need some help with Paint, don’t hesitate to ask this MVP!


Why are some of my edges jagged?
You’ve discovered one of the dark secrets of digital art: pixelation. Because everything in your Paint image is made of small square blocks, the only way to make a diagonal line or a curve is to arrange the pixels in “steps”; these very steps give the image that ugly, jagged appearance.

Fortunately, we can help smooth out the jagged edges with a technique called anti-aliasing. The trick is to make the jagged edge an in-between color of the two bodies of colors. For our red circle and white background, all we need is pink, applied with the spray paint can tool.

And like magic, the jagged edge is no more!

How do I do shadows?
Shadows in Paint are incredibly easy to do:
1) Draw the shape you want to draw, but instead use black
2) Draw the shape you want to draw, using the colors you really want to use, but draw it at an angle slightly away from the black shape

Look ma, a shadow!

How can I make realistic looking Hair?
This is one of the more difficult things to accomplish in Paint. But it’s certainly doable. First, you need to figure out what hair style you want to use. Once you figure that out, it’s just a matter of using the right tool.

Curl

Believe it or not, this is a simple matter of using the wonderfully handy spray can tool. Just pick the hair color, and go crazy!!!

Baldy

This hairstyle is so ridiculously simple you’ll wonder why more cartoon characters aren’t bald. Simply apply the ellipse tool twice, above each ear, and you’ve got yourself a bald guy!

Side Part

When you want to make your character look neat and orderly, only the polygon tool will do. Here’s something funny: I like to part my own hair on the left, but draw it parted on the right. Funny, see, I told you!

Bed Head

Oh no, caught red handed without a comb! You can easily achieve this look with the use of the paint brush tool. Don’t go too crazy, it’s pretty easy to slip and go through an eye.


Be sure to congratulate Jason Mauss as well. He was awarded this year’s MSN Messenger MVP.

UPDATE: My apologies, but with the advent of relatively inexpensive commercial solutions available, I've decided to suspend this project indefinitely. If I do need a solution for myself, I may take it up again. But until then, I would recommend getting a commercial version (http://www.valesoftware.com/products-express-agent.php is one source) or using the Windows Task Scheduler to run batch files.

UPDATE 2: I no longer "officially" recommend Vale's agent; though I've used the product for well over a year, they were completely non-responsive (via phone or email) to a showstopper bug in their product (it stopped working after 24 hours when a job was set to run every 5 minutes). My workaround was to have a Windows Task stop and then start Vale's SQL Agent service. Also, as a commenter noted, a free version (http://www.lazycoding.com/products.aspx) is out there - I have not used this, however.

I was pretty excited to learn about SQL Server: Express Edition. It is a stripped-down version of SQL Server that is free to get, free to use, and free to distribute. This is great news if you're in the business of building small- and mid-sized database applications but not in a position to fork over five grand for the full edition.

A free, stripped-down version of SQL Server is nothing new; after all, MSDE filled this niche for the previous version of SQL Server. One thing that sets SQL Server Express apart is its branding and accessibility. Not only does Express "feel" like SQL Server, it's easy to install, use, and administer. MSDE did not have these qualities, which kept it out of the reach of many would-be database developers.

The limitations imposed by SQL Server Express do not hinder most small- and mid-sized applications. A single processor and a gigabyte of RAM are enough to run most of these applications, and it certainly takes a *lot* of data to fill a database up to four gigabytes. The one thing that makes Express a deal-killer is the lack of SQL Agent, which runs scheduled jobs and automates backups. That's important in just about all sizes of applications.

I'm developing an application that will fill this functionality gap: Express Agent. I was hoping to have this complete before the launch of SQL Server Express, but other priorities prevented this from happening. Express Agent strives to replace and improve upon the SQL Agent that was left out.

Like the SQL Agent, Express Agent runs as a service. However, Express Agent can also be "plugged in" to a hosted web application as an HttpHandler. This allows Express Agent to run as a background thread, running jobs and sending email as needed.

The jobs are modeled in a similar fashion to the way SQL Server handles them. A job contains a number of tasks (SQL scripts) that are run depending on whether the previous task was successful (no errors) or failed (errors). Jobs can also be scheduled on a one-time, idle, start-up, and recurring basis. The recurring schedule is handled much the same way SQL Server handles jobs as well.
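The success/failure chaining of tasks described above might be sketched like this (a hypothetical illustration in JavaScript, not Express Agent's actual implementation):

```javascript
// Hypothetical sketch of job/task chaining: each task names the task to
// run next on success and on failure (or null to stop the job).
function runJob(firstTask, runTask) {
    let task = firstTask;
    const log = [];
    while (task) {
        let succeeded;
        try {
            runTask(task);   // e.g. execute the task's SQL script
            succeeded = true;
        } catch (e) {
            succeeded = false;
        }
        log.push({ name: task.name, succeeded });
        task = succeeded ? task.onSuccess : task.onFailure;
    }
    return log;
}

// A two-task job: back up, then notify either way.
const notify = { name: 'notify', onSuccess: null, onFailure: null };
const backup = { name: 'backup', onSuccess: notify, onFailure: notify };
console.log(runJob(backup, task => { /* run task's SQL here */ }));
```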

Express Agent also adds database-email capability to Express Edition. Though not as complex as SQL Server's implementation, this should cover just about any emailing you'd need to do from within your stored procedures. The mail feature is used to send success/failure notifications after jobs have been run.

It's difficult for me to show progress, since much of the work I've done is the "behind the scenes" stuff. I'm still working out the UI, HttpHandler, and some other issues, but so far it works great on its own, so long as jobs are added via the stored procedures. No less, here are a few screenshots from the Jobs Manager UI ...

If this app looks like it may be of interest to you, I'd appreciate your feedback. If you're interested in lending a hand with some of the remaining portions, I'd really appreciate that, too. I plan on offering this completed product for free, but most likely not open source.

I saw that Jason Mauss wrote about his experience at the San Francisco 2005 Launch Party, so I thought I'd share my experience at the Detroit venue. Because it wasn't the "real" Launch Party, we didn't have anything fancy like a speech from Steve Ballmer, songs performed by AC/DC, or appearances by the guys from Orange County Choppers. But it was still a good time. Please bear with my lack of actual photographs, as I did not have the foresight to bring a camera.

Although the "doors" to the event opened at 7:30AM, the insatiable desire for inexpensive liquor required a stop at the duty-free shop first. I loaded up on Courvoisier, Chambord Royale, and many other fine spirits, saving easily $80 - $100. The US Customs agent even waived the "required" $2.85 duty per liter. He was surprisingly much nicer than the Canadian Customs agent, who demanded a birth certificate, a certificate certifying the birth certificate, the presence of my parents to certify the certified birth certificate, and a certificate certifying my parents are really my parents. Either that or a passport.

Tax-free Booze

The event was at the Renaissance Center, located in the heart of downtown. Despite being the tallest building within 100 square miles, it was surprisingly difficult to find, especially if you're unfamiliar with Detroitonese, the language of the locals. They call it the "Ren Cen" and I'm surprised that any out-of-towner could find it.


The Ren Cen

Arriving there at 8:30, it was a bit disappointing to have to shell out $12.00 for parking. But such a high price does offer us protection from the Linux Crashers, who have a hard enough time getting a car to go downtown, let alone money to pay for parking. You know who I'm talking about, right? Those basement-dwelling fanboys who go to Microsoft conferences armed with Ubuntu discs and try to dissuade people from attending because the carpet is not open source. They actually used to protest the building being "closed source," until someone pointed them to the city planning department for architectural diagrams.

Coming in so late offered one other large disadvantage: missing out on many of the cooler freebies given out by the vendors. Here's a quick classification/rarity guide on the Detroit Launch Event vendor free stuff:

  • Laser Pen (Rare) - Offered by Berbee, this was by far the coolest give-away. Only a few lucky attendees scored this combination pen/laser pointer. Surprisingly, no one abused these devices during the sessions.
  • Blinking Yo-yo (Rare) - I somehow managed to get one of these. It was really cool until I realized it was not a "sleeper" yo-yo, so I gave it away to a colleague.
  • Blinking HP Necklace (Uncommon) - About a third of the attendees had these, leading to two simultaneous yet conflicting feelings: "those are incredibly tacky" and "I wish I had one."
  • Quest Software Weeble (Uncommon) - I don't know what these were called actually, but it was just a yellow cotton ball with paper feet and plastic eyes glued on. Despite having an uncommon rarity, no one really wanted these.
  • Intel Mints (Uncommon) - These were in a neat, small metal container. They are borderline rare, mostly because you had to actually talk to the rep to get one. They were not just lying out like everything else.
  • Pens (Common) - A handful of vendors were giving these away, making for a good variety of pens. All, however, were cheap and plastic.
  • Post-It Pads (Common) - Surprisingly, only one vendor was giving these away. Probably a good thing, just one less thing to end up in the landfill after the event.

Fortunately, there was plenty of free continental breakfast food. A good variety of bagels, danishes, and other pastries. The most notable thing from breakfast (and possibly even the day) was the itsy-bitsy jars of honey. They are about half the size of the mini-jars of jelly, and a fourth the size of baby-food containers. I was left speechless at the absolute adorableness of these mini-jars. I think I ended up with 12 of these.


Too cute to eat

After breakfast, there was the keynote speech and then a technical session. Not quite sure if there's anything more I can say about those.

Lunch time was absolutely incredible. There were tables and tables *stacked* with boxed lunches. I felt bad that they ordered so many more lunches than attendees, so I took three. My colleague was a bit less generous and had two. We also snapped up a highly-treasured premium: seats at a table. That's right, we were actually sitting down for lunch. I probably would have considered trading the seat for a blinking pendant and two pens, though. The boxed lunches were pretty good, too. I was only able to eat one; the others went in my already-overstuffed bags of goodies.

After lunch, they had another technical session.

I was "wowed" yet again after the second technical session. The community area was filled with lots of snacks: peanuts, pretzels, white-cheddar popcorn, etc. They even had this awful-tasting energy drink called Rockstar. Try as I might, I was only able to down half of it. I was really hoping I'd like it, too, because there were a whole lot of them available. I did consider taking home a few, but I just couldn't imagine ever getting desperate enough to drink one of those again.


Gives you the energy to throw it away

My colleague and I decided to turn in the evaluation forms early and get the T-Shirt and SQL-Server/VS.NET/BizTalk software before there was a rush. And wow, let me tell you, that was quite a moment. It's one thing to experience Visual Studio 2005 through your buddy's work's MSDN subscription, but you really feel alive having your own, fully-licensed copy of the software. I'm still buzzing from that.

Things went a bit downhill from there. I was a bit disappointed at the next break, between sessions three and four. It was very bleak. There were hordes of developers, all hungry and thirsty, scavenging through the remaining pop, water, and Rockstar drinks from the previous breaks. I felt bad for them; some were so desperate that they had to buy drinks from the vending machines. Thankfully, I had about a week's worth of snacks and beverages saved up from the other breaks, so I was good to go.

All in all, it was a fun event. If you weren't able to make it, make sure to catch the "Best Of" tour starting next month. There probably won't be any vendors set up, and I doubt you'll see much free food, but you'll get the software. It's so worth it for that. If anyone is planning on attending the Brooklyn, Ohio "best of" event, I'll see you there!

"Ha," an email from a colleague started, "I think you can finally admit that MySQL is ready to compete with the big boys!" I rolled my eyes and let out a skeptical "uh huh." His email continued, "Check out Version 5. They now have views, stored procedures, and triggers."

My colleague has been a MySQL fan since day one. He loves the fact that it's free and open source and could never quite understand why anyone would spend tens of thousands of dollars on something else. But then again, he has never really had an interest in understanding; data management just isn't his "thing." Thankfully, he readily admits this and stays far, far away from anything to do with databases, leaving all of that "stuff" to the experts. Nonetheless, he'll still cheer whenever there's a MySQL "victory." It is, after all, free and open source.

Data professionals have traditionally dismissed MySQL as a "toy" relational database management system (RDBMS). Don't get me wrong, it's perfectly suitable for blogs, message boards, and similar applications. But despite what its proponents claim, it has always been a non-choice for data management in an information system. This is not a criticism of the "free open source" aspect of the product, but of its creators.

The MySQL developers claim to have built a reliable RDBMS yet seem to lack a thorough understanding of RDBMS fundamentals, namely data integrity. Furthermore, they will often surrogate their ignorance with arrogance. Consider, for example, their documentation on invalid data [emphasis added]:

MySQL allows you to store certain incorrect date values into DATE and DATETIME columns (such as '2000-02-31' or '2000-02-00'). The idea is that it's not the job of the SQL server [sic] to validate dates.

Wait a minute. It's not the job of the RDBMS to ensure data are valid?!? One of the greatest revelations in information systems is that applications are not good at managing their data: they change too frequently and are too bug-prone. It just doesn't work. That's the whole point of a DBMS; it ensures that data are typed and valid according to business rules (i.e. an employee can't have -65 dependents).

But I digress. This is the 5.0 release. They've added views. They've added stored procedures. They've added triggers. Maybe things have changed.

I thought I'd check out MySQL 5.0 first hand, so I visited their website and downloaded the product. I have to say, the installation process was painless. It even defaulted to and recommended "strict mode," which apparently disallows the invalid dates as seen above. This is certainly progress!

After it installed, I fired up the MySQL prompt and started hackin' around.

mysql> CREATE DATABASE ALEXP;
Query OK, 1 row affected (0.00 sec)

mysql> USE ALEXP;
Database changed

mysql> CREATE TABLE HELLO (
    -> WORLD VARCHAR(15) NOT NULL PRIMARY KEY,
    -> CONSTRAINT CK_HELLO CHECK (WORLD = 'Hello World')
    -> );
Query OK, 0 rows affected (0.14 sec)

Wow! I'm impressed! MySQL 5.0 has check constraints! Maybe I was wrong about these guys ...

mysql> INSERT INTO HELLO(WORLD) VALUES('Hi World');
Query OK, 1 row affected (0.05 sec)

Err … umm … wait a minute. You did just see me put that check constraint on the HELLO table, right? It's not a very complicated check; maybe I did it wrong?

mysql> SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    -> WHERE TABLE_NAME='HELLO';
+--------------------+-------------------+-----------------+--------------+------------+-----------------+
| CONSTRAINT_CATALOG | CONSTRAINT_SCHEMA | CONSTRAINT_NAME | TABLE_SCHEMA | TABLE_NAME | CONSTRAINT_TYPE |
+--------------------+-------------------+-----------------+--------------+------------+-----------------+
| NULL               | alexp             | PRIMARY         | alexp        | hello      | PRIMARY KEY     |
+--------------------+-------------------+-----------------+--------------+------------+-----------------+
1 row in set (0.01 sec)

Well how about that. It didn't actually add the constraint. Of course, had I done my research beforehand, I would have known that MySQL 5.0 does not support check constraints: the CHECK clause is parsed and then silently ignored. Apparently, in the MySQL world, one can buy a negative number of items and be hired long before being born.

Alas, maybe I'm being too harsh; I suppose we could implement the data validation logic in triggers. After all, this is often required in other RDBMSes, where CHECK constraints do not allow cross-table tests (e.g., making sure items cannot be added to a shipped order). So, why not just do it here? From the MySQL manual:

The trigger cannot use statements that explicitly or implicitly begin or end a transaction such as START TRANSACTION, COMMIT, or ROLLBACK.

Oh that's just lovely. Leave it to MySQL to drop the most important use of triggers (complex data validation) and encourage their most obnoxious use (difficult to maintain business logic).
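For contrast, here's roughly what that cross-table check from earlier — "items cannot be added to a shipped order" — looks like as a validation trigger in an RDBMS that does allow a trigger to roll back the offending statement. This is a SQL Server sketch against hypothetical ORDERS and ORDER_ITEMS tables:

```sql
-- Hypothetical schema: ORDERS(Order_Id, Shipped_Date, ...),
--                      ORDER_ITEMS(Order_Id, Item_Id, ...)
CREATE TRIGGER TR__ORDER_ITEMS__NoShippedOrders
ON ORDER_ITEMS AFTER INSERT AS
BEGIN
    -- Reject the insert if any new item belongs to an already-shipped order
    IF EXISTS (
        SELECT 1
          FROM inserted I
               INNER JOIN ORDERS O
               ON O.Order_Id = I.Order_Id
         WHERE O.Shipped_Date IS NOT NULL
    )
    BEGIN
        RAISERROR('Cannot add items to a shipped order.', 16, 1)
        ROLLBACK TRANSACTION
    END
END
```

The ROLLBACK is the whole point: the trigger undoes the invalid change so bad data never lands in the table. That's precisely the capability MySQL 5.0's triggers lack.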

As far as the other features added in MySQL 5.0, I think they are definitely a step in the right direction. Stored procedures are a key component in creating a clean, strongly-cohesive interface to the data layer (see Strong vs. Weak Cohesion). Views (virtual tables) are absolutely essential for creating an effective and maintainable data model.

However, the new features do very little to budge MySQL's position as a "toy" RDBMS. As I mentioned before, this does not preclude MySQL from being an excellent tool for niche applications. But without mechanisms to ensure data are valid, MySQL will remain a non-choice for information systems.

No one has asked me when to use SQL Server CLR User Defined Types just yet, but with the release of SQL Server 2005 just around the corner, I'm sure a handful of people will. Unlike regular User Defined Types, CLR UDTs are a new feature of SQL Server 2005 that allows one to create a .NET class and use it as a column datatype. As long as a few requirements are followed, one can create any class, with any number of properties and methods, and use that class as a CLR UDT.

Generally, when a new feature is introduced with a product, it can be a bit of a challenge to know when and how to use that feature. Fortunately, with SQL Server's CLR UDTs, knowing when to use them is pretty clear:

Never.

Let me repeat that. Never. You should never use SQL Server CLR User Defined Types. I'm pretty sure that this answer will just lead to more questions, so allow me to answer a few follow-up questions I'd anticipate.

Why Not?
CLR UDTs violate a fundamental principle of relational databases: a relation's underlying domains must contain only atomic values. In other words, the columns of a table can contain only scalar values. No arrays. No sub-tables. And, most certainly, no classes or structures. Remember all the different levels of normalization? This is the first normal form — you know, the "duh" one.

This is a big thing. One can't just go and fudge a tried-and-true, mathematically-validated, theoretically-sound concept and "add and change stuff to it 'cause it'll be cool." Think of how much your car would love driving on a road made of stained glass blocks three years after it was built by an engineer who thought it'd look better.

Deviating so grossly from the relational model will bring as much joy as a dilapidated glass road. Take Oracle's foray into relational abuse: nested tables. I don't believe that there has ever been a single, successful implementation of that abomination. Sure, it may work out of the box, but after a year or two of use and maintenance, it decays into a tangled mess of redundancy and "synch" procedures -- both completely unnecessary with a normalized relational model.

And if that doesn't convince you, just think of having to change that CLR UDT. How easy do you think it would be to add a property to the class representing a few million rows of binary-serialized objects? And, trust me, it won't be nearly as easy as you think.

But wouldn't I want to share my .NET code so I don't have to duplicate logic?
This is always a noble goal, but an impossible one. A good system (remember, good means maintainable by other people) has no choice but to duplicate, triplicate, or even-more-licate business logic. Validation is the best example of this. If "Account Number" is a seven-digit required field, it should be declared as CHAR(7) NOT NULL in the database and have client-side code to validate that it was entered as seven digits. If the system allows data entry in other places, by other means, that means more duplication of the "Account Number" logic.
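On the database side, the "Account Number" rule is just a column definition plus a constraint. A sketch (the Accounts table and pattern are hypothetical; the LIKE pattern uses SQL Server's bracket wildcards):

```sql
CREATE TABLE Accounts (
  -- Exactly seven digits, required; anything else is rejected
  Account_Number CHAR(7) NOT NULL PRIMARY KEY
    CHECK (Account_Number LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9]'),
  Account_Name VARCHAR(50) NOT NULL
)
```

The client-side validator duplicates this rule, yes — but each copy lives in its own tier, in its tier's own idiom, which is exactly how it should be.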

By trying to share business logic between all of the tiers of the application, you end up with a tangled mess of a system. I have illustrated this in the diagram below.

As you can see, the diagram on the left is a nicely structured three-tier architecture. The system on the right is the result of someone trying to share business logic between tiers: a horribly tangled mess. One can expect to end up with the latter system by using CLR UDTs.

Never?!? How can there never, ever be an application of CLR UDTs?
Though I may not live by the cliché "never say never," I do follow the "never say 'never, ever'" rule. The only possible time where one might possibly want to use this feature is for developing non-data applications. But therein lies the crux: why would one develop a non-data application using SQL Server? There are certainly better tools out there for what the non-data application needs to accomplish. If you can come up with an appropriate use of a CLR UDT in an information system, I'll buy you a t-shirt or a mug.

But what about the samples provided? That's a use, right there!
Allow me to address these ...

Supplementary-Aware String Manipulation / UTF8 String User-Defined Data Type
Both of these samples have to do with UTF-8 character encoding. Without getting into the details, UTF-8 encodes characters as one, two, three, or four 8-bit bytes, meaning you cannot do anything with the characters in a string (length, substring, etc.) unless you read it byte by byte. This works great for preserving "funny characters" while transmitting data, but is a poor choice for storage. UCS-2 — the fixed-size, 16-bits-per-character format behind SQL Server's own NCHAR/NVARCHAR types — is what should be used for storing character data.

Calendar-Aware Date/Time UDTs
Let's think about this. A point in time is a point in time; how it's described varies by culture ("Monday", "Lunes"), time zone (+6:00 GMT, -3:00 GMT), calendar (Gregorian, Aztec), and format (2005-08, Aug '05). Describing a point in time properly is essential when interfacing with people or other systems. The key word in that last sentence was "interface": such description is best done in the interface tier of a system, not in the data tier. Doing this in the database makes as much sense as putting currency conversion and language translation there.

Multi-dimensional Points and Latitude/Longitude
A geospatial location is described with Latitude and Longitude. Not Lati-longi-tude. These are two separate attributes and putting them in the same column violates First Normal Form. The same goes for points and other "array-like" structures.
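In other words, rather than a Point UDT, the relational answer is simply two columns. A sketch (the Locations table is hypothetical; the CHECKs enforce the valid coordinate ranges):

```sql
CREATE TABLE Locations (
  Location_Id INT NOT NULL PRIMARY KEY,
  Latitude  DECIMAL(9,6) NOT NULL
    CHECK (Latitude BETWEEN -90 AND 90),
  Longitude DECIMAL(9,6) NOT NULL
    CHECK (Longitude BETWEEN -180 AND 180)
)
```

Two atomic, typed, range-checked attributes — each independently queryable and indexable, with no serialized blob in sight.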

Imaginary Numbers
Seriously? Correct me if I'm wrong, but about the only practical use for imaginary numbers is solving differential equations. If you're not sure why this invalidates the example, say these two phrases aloud: "solving differential equations" and "relational database." Didn't that feel just like saying "drilling a hole" and "hacksaw"?

But what about if I want to put down "SQL CLR UDTs" on my resume?
What's stopping you now? By reading this article, you know everything you will ever need to know about CLR UDTs. With this on your resume, you will be able to use your expert knowledge on the topic to never use CLR UDTs.

I hope that clears things up about CLR UDTs. Hopefully now you look forward to not using them and strongly opposing anyone who suggests it. Oh, and I really am serious about sending The Daily WTF swag to whoever can come up with a use for these things. So think about a use; you may just get a free t-shirt.

I'm asked that question every now and then by other developers who've played around in SQL Server Enterprise Manager and noticed the "User Defined Data Types" tab under their database. UDTs seem a bit strange and pointless because they do not allow one to define (as one might expect) a data structure with more than one related data element. A UDT consists simply of a name and a base type (INT, VARCHAR(6), etc.).

So why, then, would one use a UDT? It all has to do with a fundamental concept of data known as "domains." I'm not referring to a dot-com type domain, but a domain in the mathematical sense: the set of values a variable is allowed to take. For example, the domain of x in "f(x) = 1/x" is every number except zero.

We don't get domains in C++ / C# / VB / etc.; all we have are types (integer, date, string, etc.). But we're used to not having them; everyone knows you need to check whether "x != 0" before trying to divide by x. Imagine how much less code (and how many fewer bugs) we'd have if trying to assign "0" to "x" threw an exception from the start, instead of in the middle of a calculation. That's exactly what you can (and should) be doing with your databases.

When I start on this same explanation with others, it turns out a lot of them don't quite understand what check constraints are. Basically, check constraints define the domain of a column to ensure that a row can contain only valid data according to the business rules. For example, your Products table should have a check constraint on the Price column requiring it to be greater than zero (this would cause an exception to be raised if you tried to update a price to zero). Here's a fuller example:

CREATE TABLE [Transactions] (
  [Transaction_Id] INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
  [Transaction_Type] VARCHAR(6) NOT NULL
    CHECK ([Transaction_Type] IN ('Debit','Credit','Escrow')),
  [Transaction_Amount] DECIMAL(4,2) NOT NULL
    CHECK ([Transaction_Amount] <> 0),
  [Reference_Code] CHAR(5)
    CHECK ([Reference_Code] LIKE '[A-Z][A-Z][A-Z][A-Z][A-Z]')
)

Get the idea? Each column has a constraint to ensure only valid data is allowed in the table. This way, there is no way that [Reference_Code] could contain anything but a five character string of upper case letters. No need to write code to test it, no need to ever validate it (except maybe on the data entry form so that the user doesn't see an ugly exception message), and no need to assume that it will be anything but that.

Now, imagine that you wanted to have the same [Reference_Code] attribute throughout your database. You'd have to define that check constraint time and time again, and if the rules ever changed, you'd need to change it in every place. That's where UDTs come into play: UDTs are the SQL Server implementation of domains.

If you have a common data element that will be used throughout the system, then it should be a UDT. Account number, username, order number, etc. — all should be UDTs. When you define these types, you can easily bind rules (which are essentially just check constraints that apply wherever the type is used) to the type, and have them automatically enforced throughout the system.

It's really easy to do. I'll use the SQL 2005 syntax, but you can do the same things in 2000 using sp_addtype and sp_addrule:

CREATE TYPE USERNAME FROM VARCHAR(20)
GO

CREATE RULE USERNAME_Domain
    AS @Username = LTRIM(RTRIM(@Username))
   AND LOWER(@Username) NOT IN ('admin','administrator','guest')
GO

EXEC sp_bindrule 'USERNAME_Domain', 'USERNAME'
GO

And that's it. Now you can use the type throughout the database just as you normally would, and you'll never need to check or verify to make sure that someone slipped in an invalid value ...

CREATE TABLE [User_Logons] (
  [Username] USERNAME NOT NULL,
  [Logon_Date] DATETIME NOT NULL,
  [Success_Indicator] CHAR(1) NOT NULL
    CHECK ([Success_Indicator] IN ('Y','N')),
  PRIMARY KEY ([Username],[Logon_Date])
)
