December 2006 - Posts - Jon Galloway


sysobjects, sys.objects... what about INFORMATION_SCHEMA?

SQL Server Management Objects (SMO) is one of the coolest parts of SQL Server 2005, in my opinion. I've written about SMO before, and used it extensively in writing Data Dictionary Creator. In a nutshell, SMO is a collection of objects that are designed for programming all aspects of managing Microsoft SQL Server.

One thing about SMO puzzles me, though - it uses the system tables (sysobjects and sys.objects) instead of INFORMATION_SCHEMA when it checks if tables, views, or procedures exist. There are at least three ways that's evident:

  1. Scripting database objects from SSMS
  2. Scripting database objects using SMO in CaptureSql mode
  3. The new SQL Server Database Publishing Wizard

All the above check for object existence using system table checks rather than INFORMATION_SCHEMA checks. I'm not just guessing how they work, though - take a look at Microsoft.SqlServer.Management.Smo.Scripts..ctor() in Reflector:

Scripts.INCLUDE_EXISTS_TABLE80 = "IF {0} EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'{1}') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)"
Scripts.INCLUDE_EXISTS_TABLE90 = "IF {0} EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'{1}') AND type in (N'U'))"
Scripts.INCLUDE_EXISTS_VIEW90 = "IF {0} EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'{1}'))"
Scripts.INCLUDE_EXISTS_VIEW80 = "IF {0} EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'{1}') AND OBJECTPROPERTY(id, N'IsView') = 1)"
Scripts.INCLUDE_EXISTS_PROCEDURE80 = "IF {0} EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'{1}') AND OBJECTPROPERTY(id,N'IsProcedure') = 1)"
Scripts.INCLUDE_EXISTS_PROCEDURE90 = "IF {0} EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'{1}') AND type in (N'P', N'PC'))"

INFORMATION_SCHEMA views good, System Tables bad

The biggest problem is that internal system tables change between versions, so scripts are generated for one specific SQL Server version. The INFORMATION_SCHEMA views, on the other hand, are part of the SQL92 standard, so they don't change between versions.[1] Here's a pretty good article on INFORMATION_SCHEMA if you'd like to read further.

For example, the following works on both SQL 2000 and SQL 2005:

use northwind

if exists (select * from information_schema.tables where table_name = 'Products' and table_type = 'BASE TABLE') print 'Products table exists'
if exists (select * from information_schema.tables where table_name = 'Invoices' and table_type = 'VIEW') print 'Invoices view exists'
if exists (select * from information_schema.routines where specific_name = 'SalesByCategory') print 'SalesByCategory stored procedure exists'


I know there are some cases where INFORMATION_SCHEMA won't cut it. Since it's a cross-database standard, it has no concept of some advanced and proprietary features. In Data Dictionary Creator, I had to support different syntax for extended property lookup, since SQL Server 2000 uses ::fn_listextendedproperty and SQL Server 2005 uses sys.fn_listextendedproperty.
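For reference, the two lookups differ only in how the function is addressed (the parameter values here are just illustrative):

```sql
-- SQL Server 2000: table-valued system functions take the :: prefix
SELECT * FROM ::fn_listextendedproperty(NULL, 'user', 'dbo', 'table', 'Products', NULL, NULL);

-- SQL Server 2005: the same function is invoked through the sys schema
SELECT * FROM sys.fn_listextendedproperty(NULL, 'user', 'dbo', 'table', 'Products', NULL, NULL);
```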

Fine. That's no reason to drop INFORMATION_SCHEMA - use it for the 90% of the time it will do the job, and drop down to the system tables when you have to.

Workaround: Use Regular Expression replacements on the scripts

You can use regular expression replacements to "fix" SMO generated scripts to run against INFORMATION_SCHEMA.

This regular expression matches a table existence check in SQL 2000 specific syntax:

^IF\ EXISTS\ \(SELECT\ \*\ FROM\ dbo\.sysobjects\ WHERE\ id\ =\ OBJECT_ID\(N'(?<tablename>.+?)'\)\ AND\ OBJECTPROPERTY\(id,\ N'IsUserTable'\)\ =\ 1\)$

Here's the same thing, in SQL 2005 syntax:

^IF\ NOT\ EXISTS\ \(SELECT\ \*\ FROM\ sys\.objects\ WHERE\ object_id\ =\ OBJECT_ID\(N'\[dbo]\.\[(?<tablename>.+?)]'\)\ AND\ type\ in\ \(N'U'\)\)$

We're using a named capture group so we can use it in the replacement. Either match can then be replaced with an equivalent INFORMATION_SCHEMA existence check.
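Here's a quick way to try the swap, sketched in Python. The pattern is the SQL 2000 one from above with the named group translated to Python's (?P&lt;name&gt;) syntax, and the replacement string is my reconstruction of the portable INFORMATION_SCHEMA check:

```python
import re

# The SQL 2000 pattern from above, with the named group translated from
# .NET's (?<tablename>...) to Python's (?P<tablename>...)
find_2000 = (r"^IF\ EXISTS\ \(SELECT\ \*\ FROM\ dbo\.sysobjects\ WHERE\ id\ =\ "
             r"OBJECT_ID\(N'(?P<tablename>.+?)'\)\ AND\ "
             r"OBJECTPROPERTY\(id,\ N'IsUserTable'\)\ =\ 1\)$")

# Hypothetical replacement: the portable INFORMATION_SCHEMA existence check
replacement = (r"IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES "
               r"WHERE TABLE_NAME = '\g<tablename>' AND TABLE_TYPE = 'BASE TABLE')")

script = ("IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = "
          "OBJECT_ID(N'Products') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)")
converted = re.sub(find_2000, replacement, script, flags=re.MULTILINE)
print(converted)
```

The same idea works with .NET's Regex.Replace, where the named group comes back as ${tablename} in the replacement string.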


I'm resisting the urge to write a utility to handle the script conversion since I need to be working on my book. Here's a rough outline for a console application that would read the console input, convert table existence checks to use INFORMATION_SCHEMA, and write the result to the console output. The reason for using console input and output is to allow piping SqlPubWiz output through it. It's not tested, and it doesn't include checks for views, procedures, and indexes; they'd follow the exact same pattern. This isn't the optimal way to do regular expression replacements - given time, I'd do these with MatchEvaluators.

using System;
using System.Text.RegularExpressions;

class SqlScriptCleaner
{
    static void Main(string[] args)
    {
        string script = Console.In.ReadToEnd(); // ToString() wouldn't read the stream
        // The replacement strings are a sketch of the equivalent INFORMATION_SCHEMA checks
        script = Regex.Replace(script,
            @"^IF\ EXISTS\ \(SELECT\ \*\ FROM\ dbo\.sysobjects\ WHERE\ id\ =\ OBJECT_ID\(N'(?<tablename>.+?)'\)\ AND\ OBJECTPROPERTY\(id,\ N'IsUserTable'\)\ =\ 1\)$",
            "IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '${tablename}' AND TABLE_TYPE = 'BASE TABLE')",
            RegexOptions.Multiline);
        script = Regex.Replace(script,
            @"^IF\ NOT\ EXISTS\ \(SELECT\ \*\ FROM\ sys\.objects\ WHERE\ object_id\ =\ OBJECT_ID\(N'\[dbo]\.\[(?<tablename>.+?)]'\)\ AND\ type\ in\ \(N'U'\)\)$",
            "IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '${tablename}' AND TABLE_TYPE = 'BASE TABLE')",
            RegexOptions.Multiline);
        Console.Out.Write(script);
    }
}

Come to think of it, this (theoretical) utility could also take care of my other gripe with SqlPubWiz, which is that it doesn't allow you to select which objects to script. It's inelegant, but it would work - let SqlPubWiz script everything, then clean the output to remove scripts for tables we don't want included.

A more practical workaround: Regex Replacements in SSMS

For now, though, I'm fine with just handling this manually in an editor that supports Regular Expression replacement. Visual Studio does it, as do most of the top Notepad replacement tools (Notepad++, Notepad2, etc.). Now that SQL Server Management Studio uses the Visual Studio IDE, you can do Regex replacements in SSMS as well.

  1. Generate your script and open it in a new query window - if you're just scripting schema, you can right click on the database, select Tasks, and pick the Generate Scripts task (selecting a new query window as the output method). If you're scripting schema and data with SqlPubWiz, output to a SQL file and open it in SSMS.
  2. Bring up the Quick Replace dialog (Edit -> Find / Replace -> Quick Replace, or just hit ctrl-h).
  3. Expand the "Find Options" section and check the "Use" checkbox, then ensure "Regular expressions" is selected.
  4. Enter the correct Regex find and replacements as shown below. Note that Visual Studio has its own bastardized Regex syntax, so you'll need to tweak the regexes.
    Revised Find expression: ^IF\ NOT\ EXISTS\ \(SELECT\ \*\ FROM\ sys\.objects\ WHERE\ object_id\ =\ OBJECT_ID\(N'\[dbo\]\.\[{.+}\]'\)\ AND\ type\ in\ \(N'U'\)\)
  5. Hit the Replace All button.
  6. Make sure the script still compiles by hitting CTRL-F5.

[1] Jamie Thomson reported one issue with INFORMATION_SCHEMA behavior changing between 2000 and 2005 due to an implementation bug in INFORMATION_SCHEMA.COLUMNS.NUMERIC_PRECISION for smalldatetime and datetime.

Posted by Jon Galloway | 2 comment(s)

[tip] Use the Database Publishing Wizard to script your table data

Jeff Atwood recently asked "Is your database under version control?" Well, is it? It's not as easy as it sounds. Until now, there wasn't a good, scriptable solution to the problem. I'll run through a few of the options I've looked at in the past, but let's jump to the punchline:

The Database Publishing Wizard can script data as schema, and it's got a command line mode.

The Problem

Most of the applications I work on run against SQL Server databases. Jeff hit the nail on the head when he said that your database is an integral part of your application. I'll take it a step further and say that your data is often an integral part of your application. Many databases include significant amounts of reference data which are as much a part of your software release as your config files - for instance, a DotNetNuke site without the correct version of the database is just plain shot.

I'm not talking about database backups, which are an important part of your production environment. Database backups are a more appropriate solution to safeguarding your transactional data, but they don't integrate at all with the rest of your application's version control.

I previously posted a script to check a database (MDF) file into Subversion. It works, but it's far from ideal. It's just a binary file, so while you've got history, you can't track differences. No good.

I've used SQL-Scripter, GenDbScripts (had to fix a few bugs), and Enisey Data Scripter. They all work, but none of them was scriptable. 

I set out to see how easy it would be to write something in PowerSMO that would script out the table schema and data. It should have been easy, since SMO has a special "execution mode" that records all commands and outputs the resulting script. Heck, David Hayden has one post showing C# sample code to script a database schema, and another using SMO's Transfer.TransferData() to copy all data from one database to another. No problem, I'll just mash them together to enable CaptureSql and TransferData, and I'm set...

Yeah, not really. I converted it to a PowerShell script. It worked great... but it only scripted the schema, not the data. It looks like TransferData doesn't go through script, so it's not picked up by that cool CaptureSql thing. (In case you care, that PowerShell script is in the footnotes.)

The Solution (well, pretty much...)

Then Scott Guthrie mentioned the new Database Publishing Wizard in an extensive post on deploying a SQL database to a remote hosting environment. Scott's post was pretty beefy and covered the SQL Server Hosting Toolkit, so you might have missed the beautiful screenshots below the fold. The Database Publishing Wizard makes it easy to script your schema and data from within Visual Studio:


The more exciting thing to me is that the Database Publishing Wizard can be run from the command line:

sqlpubwiz script ( -C connection_string | -d local_database_name ) (output_file) [switches]


 Output options:

  [-f] :                       Overwrite existing files
  [-noschemaqualify] :         Output script does not qualify object names with
                             schema name.
  [-schemaonly] :              Only script schema
  [-dataonly] :                Only script data
  [-targetserver] :            Specifies what version of SQL Server the script
                             should target.  Valid versions are "2000", "2005".
                             Default Value: 2005
  [-q] :                       Suppress output
  [-nodropexisting] :           Default Value: False

 Local connection related:

  [-C connection_string] :     Connection string
  [-d local_database_name] :   Database name to script
  [-U local_user_name] :       Username
  [-P local_password] :        Password
  [-S local_server_name] :     Server name


Oops, we're missing two things before my quest is ended:

  1. We need an option to specify which tables to script. I know that complicates things a bit, but we gotta have it. Maybe a parameter that points to a text file containing a list of tables to be scripted. That's important because we want to script and version our reference data, but not our transactional data - most importantly because we don't want to overwrite it, but also because transactional data is usually the most subject to large growth.
  2. Why do the drop scripts need to check sys.objects for SQL 2005 databases and sysobjects for SQL 2000 databases? Why not use the INFORMATION_SCHEMA views so the same script works on both? That's the subject of another post, I guess...

Oh, and that PowerSMO script I mentioned? It'll script the schema, but not the data:

$databaseToScript = "AdventureWorks"
$serverName = "(local)\SQL2005"

#USES POWERSMO - SETUP VOODOO
$executionPolicy = get-ExecutionPolicy
set-ExecutionPolicy "Unrestricted"
.\InitPowerSMO.ps1
cls

#CONNECT TO DATABASE
$server = SMO_Server $serverName
$server.ConnectionContext.SqlExecutionModes = [Microsoft.SqlServer.Management.Common.SqlExecutionModes]"CaptureSql" #ExecuteAndCaptureSql
$database = $server.Databases[$databaseToScript]

$t = SMO_Transfer($database)
$t.CopyAllObjects = $true
$t.DropDestinationObjectsFirst = $true
$t.CopySchema = $true
$t.CopyData = $true #Doesn't get scripted
$t.DestinationServer = $serverName
$t.DestinationDatabase = "just_a_dummy_value"
$t.Options.IncludeIfNotExists = $true
$t.CreateTargetDatabase = $true
#$t.TransferData() #Doesn't help

"Generating script"
$commands = $t.ScriptTransfer()

"Writing to file"
$scriptFile = "{0}_Script.sql" -f $databaseToScript
$commands | out-File $scriptFile
"Done"

#CLEANUP
$server = ''
set-ExecutionPolicy -executionPolicy $executionPolicy

Things I wish I'd known before I installed VS 2005 Service Pack 1

1. Save Time and Space for VS 2005 SP1 by Disabling the Patch Cache (via Heath Stewart)

SP1 requires a lot of disk I/O. You can eliminate 1.3 GB of disk I/O, and cut the installation time significantly, by temporarily disabling the Patch Cache. You can read more about it on Heath's post, but here's the basic idea:

  • Download the patch here.
  • Save the following to a batch file (I called mine InstallVS2005SP1.bat) in the same directory you saved the SP1 install file:
reg export HKLM\Software\Policies\Microsoft\Windows\Installer installer.reg
reg add HKLM\Software\Policies\Microsoft\Windows\Installer /v MaxPatchCacheSize /t REG_DWORD /d 0 /f
net stop msiserver
start /wait VS80sp1-KB926601-X86-ENU.exe
reg delete HKLM\Software\Policies\Microsoft\Windows\Installer /v MaxPatchCacheSize /f
reg import installer.reg
net stop msiserver
del /q installer.reg 2>nul
  • Run the batch file

I installed SP1 twice (see tip 3), and it ran a lot faster with the Patch Cache disabled.

2. If you don't use C++, uninstall it

I always tend to install VC++ when I install Visual Studio, just in case. Now that VC++ Express is available (and free), I think I'll wait until I actually need to write C++ (hopefully never) to install it.

3. You may have to reinstall it if you upgrade to Vista

I installed SP1 on my XP machine, then upgraded to Vista, and was prompted to reinstall SP1. I'm not sure what was going on, and I'm not willing to roll back to XP, uninstall SP1, upgrade to Vista, and install SP1 again just to make this blog post more complete (sorry!), but if you're planning to upgrade to Vista you may want to do it before installing SP1. On the other hand, with the Patch Cache disabled, the second install was really quick!
Posted by Jon Galloway | 6 comment(s)

Freeing up drive space on Windows XP

I just cleaned up my computer in preparation for an upgrade to Vista. Here's how I did it:

  1. Empty Recycle Bin (Yes, I know. Not rocket surgery so far. Keep with me, I'm getting warmed up.)
  2. If you bought a PC with XP preinstalled, consider running the PC Decrapifier. It clears out the useless bundled software the manufacturer included to cut costs through renewal kickbacks.
  3. Uninstall any programs you don't need. My criteria: if I can't remember what it is or haven't used it in 6 months, I don't need it. I use SAFARP (Small and Fast Add Remove Programs), but Windows Add / Remove Programs works here. [UPDATE] A comment reminded me of something here - XP Hotfix Files. I've used (and recommend) Doug Knox's "XP Remove Hotfix Backup" program.
  4. Run CCleaner to clean up temporary files and clear out orphaned registry keys. Registry cleanup used to be simple until Office started stashing unused keys to indicate features which would be installed on first use, after which registry cleaners often did more harm than good. However, CCleaner is regularly updated and has always worked for me. If you're scared to monkey with the registry, CCleaner will still help find unused temporary (permatemp?) files which can be removed.
  5. Start looking for big files to remove. I use FolderSize and WinDirStat for this. Actually, I use FolderSize more for day-to-day stuff, but for machete-style clearing of the underbrush, you can't beat WinDirStat. The treemap really helps you zoom in on the large files or folders; when you click on them, they're automatically selected for you in the folder view above. By the way, the screenshot shows it working in Vista.

I can't tell you what kind of files to look for, but I can tell you what I found on my system. I'm a Microsoft.NET developer, so this list is tilted that way. If you'd like to post some specifics which are geared towards other user types, please do so in the comments.

So, here's what I found:

  • FolderShare Trash (C:\Documents and Settings\USER\Local Settings\Application Data\FolderShare\trash). FolderShare keeps backups of shared files after you've deleted them from the share, so if you've used FolderShare to transfer any large files they're probably still hanging out in your trash.
  • Unneeded ISO and Virtual Machine hard drives. Yeah, semi-obvious, but since this is a checklist of sorts I'd be remiss if I didn't mention them. For Virtual PC and Virtual Server, you're looking for VHD files. For VMWare, I believe you're looking for VMDK files. Also - if you're using VPC, look for Virtual PC Undo files (.vud) files.
  • SQL Server Backup files (.bak). I had several database backups I'd been holding on to "just in case" for a year or so. Gone.
  • SQL Server log files (.ldf). This one deserves a sidebar... or actually another post. Short answer - for a developer machine you can probably just set the recovery model for the database to Simple, then shrink the database from within SSMS.

How about you? How do you clean up a computer?

SQL Server recovery model selection (or, what's this 3GB LDF file?!)


SQL Server recovery models can be a bit tricky. By default, SQL Server databases are set to Full Recovery mode, which means they maintain sufficient logs to restore to any specific point in time. These logs can grow to several GB for a small, lightly used database over time. For a development workstation, especially if your database is under version control, I like to change the recovery mode to Simple (most cases) or Bulk-Logged at most. Then you can shrink the database, which will clear out the log file in Simple mode - you can do this in SQL Server Management Studio by right clicking on the database, selecting Shrink, then Database. Keep in mind that shrinking databases can be a little more complex than I've made it sound here, but that's not normally important on a developer workstation.

Now, that solves the problem for this one database, but it will crop up again the next time you create a database. A good way to deal with this once and for all is to change the recovery mode of your Model database to Simple or Bulk-Logged, since that setting will apply to all future databases you create. Just make sure to update the recovery mode when your database is deployed to a production environment.
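In T-SQL, the whole fix boils down to a few statements (the database name here is just an example, and the ALTER DATABASE form shown is the SQL 2005 syntax):

```sql
-- Switch an existing development database to Simple recovery, then reclaim log space
ALTER DATABASE Northwind SET RECOVERY SIMPLE;
DBCC SHRINKDATABASE (Northwind);

-- Make Simple the default for all databases created from here on out
ALTER DATABASE model SET RECOVERY SIMPLE;
```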

This is just a simplistic solution for SQL Server running on a developer machine - see some more in depth information on the MS Support site.

Posted by Jon Galloway | 1 comment(s)

Reporting Services - Add a logo to the Report Manager

The SQL Server Reporting Services Report Manager is functional, but it's not very customizable. The ASPX pages just reference compiled assemblies, so the only real way to modify them is via CSS.

What makes that more difficult is that the SSRS HTML is poorly constructed, so the tags you'd most want to customize don't have IDs or classes assigned. For instance, the most obvious customization anyone would want to make is to add a corporate logo, and there's no hook for that at all. The folder image is a simple <img> tag, nestled between two other spacer <img> tags, packed inside an unidentified <td>, and all those elements are written out in code[1]. The problem there is that there's no way, even with some descendant selector trickery, to target that folder image.

That's not entirely true, part 1:

IE7 adds support for some combinators (e.g. child, adjacent sibling, first-child) which allow you to directly target elements without classes or IDs. Why I'm not using that approach here:

  • IE6 doesn't support it.
  • IE7's support is kind of quirky - it sees HTML comments as page elements, for instance. That means it's difficult to write cross browser combinator based CSS rules.
  • Targeting deeply nested elements with combinators is pretty difficult.
  • Combinator based rules are inherently unstable, since any changes to the page's structure (including HTML comments, as mentioned above) can really mess up your page. Structure changes can cause your rule to be ignored or - much worse - point it at another page element.

That's not totally true, part 2:

You could replace the image files (C:\Program Files\Microsoft SQL Server\MSSQL.X\Reporting Services\ReportManager\images\48folderopen.jpg, as well as all the other files in that directory named 48*.jpg)[2] with your corporate logo. Problems with that approach:

  • You're constrained to a 48x48 px jpg
  • Subsequent upgrades could overwrite it
  • Those images do serve some use - they change depending on the current function you're performing in the site.

Who needs an Upper Title, anyway?

I went for the next best target - the rather useless "upper title" - since it's a div which actually has a class (div.msrs-uppertitle) assigned. First, let's look at the style changes, then we'll talk about the different ways to add them.

.msrs-uppertitle { BACKGROUND: url( no-repeat; HEIGHT: 35px; WIDTH: 120px; TEXT-INDENT: -5000px; }

Some things to notice:

  1. We set a background image by URL, so the reports stay up to date if the company logo changes in the future. Make sure to pick an image that's an appropriate size for that space.
  2. We set the background to no-repeat so it displays once rather than tiling. That's handy even if the logo image has a border, since we don't have to be so precise with our height and width settings.
  3. We set a height and width which are large enough to display the logo. These should be just larger than your image size; trial and error will show what looks best. This is one of those times where on-the-fly CSS editing tools like the IE DevToolbar or the Firefox Web Developer extension come in really handy.
  4. Now we need to get rid of the site title. I didn't want to mess with trying to blank it out in the SSRS Site Settings since it might be useful elsewhere, or someone might fill it in without knowing that it would conflict with the logo. The easiest solution is to use the Phark Image Replacement technique and hide the text by setting the indent to -5000px.

Done and done.

Oh, wait. I said I'd be talking about where to add this style, didn't I? A few options:

  1. Edit ReportingServices.css (in the C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportManager\Styles folder).
  2. Use another stylesheet - method 1: change the HTMLViewerStyleSheet parameter in RSReportServer.config
  3. Use another stylesheet - method 2: specify a stylesheet in rc:StyleSheet parameter of the report url:

I went with a variation of option 1 - I edited ReportingServices.css, but just to add an import statement that points to another CSS file where I'll put all my Reporting Services CSS customizations:

  1. Add the following to the top of ReportingServices.css:
    @import url(customizations.css);
  2. Save the block of CSS (the bit that starts with ".msrs-uppertitle") to a file called customizations.css in the same folder as ReportingServices.css. You might want to add other customizations to it while you're at it, like this CSS fix for Firefox.

That's it! We're done!

Great, but the logo should link to the home page of our intranet!

What!? Feature creep! Version 2!

Oh, okay. It's not easy, though...

Changing style isn't a piece of cake, but at least there's a supported hook for it. You can't add a link via CSS, though. CSS2 allows for some limited content generation, but it doesn't support adding links; even if it did we'd be out of luck because IE7 doesn't support it.

Bill Vaughn and Peter Blackburn proposed a solution using DHTML Behaviors. That's a pretty slick workaround - behaviors allow you to tie Javascript functionality to DOM elements in CSS. Unfortunately, only IE supports behaviors. Plus, if you're going to hack Javascript, why not edit the ReportingServices.js file...[3]

C:\Program Files\Microsoft SQL Server\MSSQL.n\Reporting Services\ReportManager\js\ReportingServices.js

There it is... the fabled treasure... a way to modify the content of the Reporting Services Manager...

I added the following code to the top of the JS file (after backing it up, of course):

addLoadEvent(SetLogoUrl);

function addLoadEvent(fn)
{
    if (window.addEventListener)
        window.addEventListener('load', fn, false);
    else if (window.attachEvent)
        window.attachEvent('onload', fn);
}

function SetLogoUrl()
{
    var header = document.getElementById('ui_sharedArea');
    if (!header) return;
    var headerDivs = header.getElementsByTagName('div');
    for (var i = 0; i < headerDivs.length; i++)
    {
        if (headerDivs[i].className == 'msrs-uppertitle')
        {
            headerDivs[i].onclick = new Function('top.location="/"');
            headerDivs[i].style.cursor = 'pointer';
            //headerDivs[i].style.backgroundImage = 'url(';
            //headerDivs[i].style.backgroundRepeat = 'no-repeat';
            //headerDivs[i].style.width = '96px';
            //headerDivs[i].style.height = '62px';
        }
    }
}


Let's talk about it:

  1. That addLoadEvent business is there because we can't modify the page until it has finished loading, but this script is included at the top of the page. A less sophisticated solution is to set a timeout to call SetLogoUrl, but addLoadEvent leverages the built in event model to call the function after the page has finished loading. IE uses attachEvent, while everyone else uses addEventListener, so we'll include some logic to make sure the right one gets called.
  2. SetLogoUrl's job is to modify the Upper Title DIV. This would be easier if that DIV had an ID assigned, since Javascript doesn't have built in support for retrieving an element by class name. Most sample code which returns an element by class just iterates all elements in the DOM looking for a class name match, but since we know a bit about the page structure we can be a little more efficient. We narrow the search by grabbing the nearest parent element with an ID assigned and only searching through its children.
  3. Once we've got our DIV, we can just set the onclick behavior to navigate to the correct URL. We'll need to change the cursor to indicate that it's a link, too.

Hey, since we're messing with that DIV, why not just change the image in Javascript and simplify things? Well, I've left some commented code there to get you started if that's what you want to do, but I think it's better to keep the image modification in CSS.

Why? Well, there are a few good reasons. The biggest reason is that the Javascript doesn't execute until the page has finished loading, so the original DIV text is displayed, then changes to the logo. Ugh. Plus, from a maintenance and architectural perspective, it's a much better practice to keep your presentation information in CSS and your behaviors in Javascript.

And that's it.

But, can we change the navigation to use WPF/E with Ajax and pull in some Google Analytics?


[1] reflector://Microsoft.ReportingServices.UI.SharedArea.InitTitleArea()

[2] Reporting Services installs into different directories depending on the order you install SQL Server services. A default installation will put Reporting Services in \Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services, but that MSSQL.3 folder may be different on your machine.

[3] I actually stumbled across the HTC technique after I'd finished making this work as a Javascript include.

A simple example of a fluent interface

Roiy recently released a really nice C# image enhancement filter library on CodeProject. It includes a good collection of easy-to-use image transformation filters - ResizeFilter, RotateFilter, ImageWatermarkFilter, etc. They follow a simple pattern:

Image myImg = Bitmap.FromFile("cat.jpg");
Image transformedImage;

ZRLabs.Yael.BasicFilters.TextWatermarkFilter watermark = new TextWatermarkFilter();
watermark.Caption = "Test";
watermark.AutomaticTextSize = true;

transformedImage = watermark.ExecuteFilter(myImg);
transformedImage.Save("cat_watermark.png", System.Drawing.Imaging.ImageFormat.Png);

I recommended he implement a fluent interface to allow chaining filter operations together, like this:

static void Main(string[] args)
{
    ZRLabs.Yael.Pipeline pipeline = new ZRLabs.Yael.Pipeline("cat.jpg");
    pipeline.Rotate(90)
        .Watermark("Monkey")
        .RoundCorners(100, Color.Bisque)
        .Save("test.png");
}

That's easier than it might sound:

  1. I created a Pipeline class which holds a System.Drawing.Image object to maintain the current image state
  2. I added a bunch of methods which accept parameters for the filter properties a user would be most likely to change (I could have added overloads to allow setting all filter properties if desired).
  3. Each method returns "this", meaning it returns a reference to itself. That's the magic that allows for the chained calls.

Here's a snip of the code:

public class Pipeline
{
    private Image image;

    public Image CurrentImage
    {
        get { return image; }
        set { image = value; }
    }

    public Pipeline(string inputFilename)
    {
        image = Bitmap.FromFile(inputFilename);
    }

    public Pipeline Rotate(float Degrees)
    {
        RotateFilter filter = new RotateFilter();
        filter.RotateDegrees = Degrees;
        image = filter.ExecuteFilter(image);
        return this;
    }

    public Pipeline BlackAndWhite()
    {
        BlackAndWhiteFilter filter = new BlackAndWhiteFilter();
        image = filter.ExecuteFilter(image);
        return this;
    }

    public Pipeline Watermark(string caption)
    {
        TextWatermarkFilter filter = new TextWatermarkFilter();
        filter.Caption = caption;
        filter.AutomaticTextSize = true;
        image = filter.ExecuteFilter(image);
        return this;
    }

    /* more filters here */

    public void Save(string filename)
    {
        image.Save(filename);
    }
}

Since each method returns a Pipeline object and the Pipeline exposes all the basic methods we'll need, we're all set to just keep calling methods until we're done.
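The return-this trick works in any object-oriented language; here it is boiled down to a few lines of Python (the ImagePipeline class and its operations are made up for illustration - they just record the calls rather than doing real image work):

```python
# A minimal, hypothetical ImagePipeline: each method returns self, which is
# what makes the chained calls possible.
class ImagePipeline:
    def __init__(self, filename):
        self.filename = filename
        self.ops = []  # record of operations, standing in for real image state

    def rotate(self, degrees):
        self.ops.append("rotate(%s)" % degrees)
        return self  # returning self is the whole trick

    def watermark(self, caption):
        self.ops.append("watermark(%s)" % caption)
        return self

    def save(self, output):
        # summarize the chain instead of writing a file
        return "%s -> %s -> %s" % (self.filename, " -> ".join(self.ops), output)

result = ImagePipeline("cat.jpg").rotate(90).watermark("Monkey").save("test.png")
print(result)  # cat.jpg -> rotate(90) -> watermark(Monkey) -> test.png
```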

What else could you do with a fluent interface?

Joshua Flanagan wrote a very nice regular expression wrapper which allows you to define a regex using a readable syntax, exposed via a very elegant fluent interface:

Pattern findGamesPattern = Pattern.With.Literal(@"<div") .WhiteSpace.Repeat.ZeroOrMore .Literal(@"class=""game""").WhiteSpace.Repeat.ZeroOrMore.Literal(@"id=""") .NamedGroup("gameId", Pattern.With.Digit.Repeat.OneOrMore) .Literal(@"-game""") .NamedGroup("content", Pattern.With.Anything.Repeat.Lazy.ZeroOrMore) .Literal(@"<!--gameStatus") .WhiteSpace.Repeat.ZeroOrMore.Literal("=").WhiteSpace.Repeat.ZeroOrMore .NamedGroup("gameState", Pattern.With.Digit.Repeat.OneOrMore) .Literal("-->");

As Joshua notes, Ayende's Rhino Mocks uses a fluent interface as well. Milan Negovan applied this approach to his Fluent Control Container, which simplifies the task of properly instantiating ASP.NET controls.

When would you apply a fluent interface? I think it makes sense when the routine use of your API requires multiple sequential method calls or property settings. I think a fluent interface makes a lot of sense on top of a richer API; simple use cases can use the fluent interface and stay simple, while complex use cases can call into the base API.

The whole definition of a fluent interface is a little vague, but as Martin Fowler says, "The more the use of the API has that language like flow, the more fluent it is."

Tips for a Technical Presentation

I spoke at the local .NET user group meeting last week. I've done my share of software demos and short presentations to small groups, but this was the first time I've given an hour long technical talk to a large group. I'm making a list - partly for my future reference[1] - of some presentation tips while it's still fresh in my mind.

Scott Hanselman has several great posts about presentation tips, so that was the main place I looked to brush up.


Allow a lot of time to prepare

I've put together plenty of 15 minute presentations and had naively figured a one hour talk would take around four times as long to prepare. It may have been the subject matter, but putting together a full hour of content took a lot more time. From what I've heard, the trend continues the longer you'll be speaking, so preparing a full day of training can take several days or weeks.

Pick demos for effect

I put some time into this, and in the future I'd put even more into it. The demonstration is where everything comes together - where we see that it's worth learning something new because it will help us do something cool (more on that later). So, early in your planning, think about how to make your demonstration exciting. I picked a rough topic for this talk, but I liked showing that (admittedly unsexy) tools like SMO and INFORMATION_SCHEMA let you build really cool stuff like SubSonic. In the future, I'll make this an even bigger part of my planning.

Set up a separate Windows account with large fonts and other presentation specific settings

This was probably the most important tip I took away from Scott's list. I set up a new Windows logon (I called mine Presenter), then did the following:

  • Put all the files or shortcuts I'd need on the desktop
  • Set the font size to "Extra Large Fonts" (display options / Appearance / Font Size)
  • Set the font size for all the applications I'd be using to 16 Point Consolas - pick what you want, but make sure it's big and readable
  • Change any Visual Studio display settings

Set the screen resolution to something you're sure the LCD projector will support2

1024x768 is the best bet if you can't find out ahead of time. Learn from my example here - I forgot this step, and when I changed resolutions while testing before the meeting, Windows required a reboot. I can't remember ever seeing that before, but I couldn't change the resolution without rebooting, which was a problem since I had a lot of demonstration applications running in debug mode. Fortunately it happened before the meeting started, so I was able to set everything back up.

Test everything

I know that should go without saying, but it's important to test all sample applications, slides, etc. on the day of the demo. This matters because things you've tested under your main login might not work under the new presentation user.

Have a zoom tool ready and practice using it

Scott recommends Magnifixer. I think I prefer the shiny new ZoomIt tool from Sysinternals, since it includes some other goodies, like a Draw tool that lets you mark up the screen like a football commentator. I didn't end up using this since I was mostly showing off code, but I should have used it when demonstrating my WinForms applications.


Tell a story in three acts

A good presentation should clearly state the following:

  1. Problem
  2. Solution
  3. Result2

This is another Hanselman tip to which I should have paid more attention. I thought I had some good structure to my slides, but I didn't do justice to the third act. I presented the background, some cool technologies (SMO and INFORMATION_SCHEMA), and finished by showing some exciting tools that leverage these technologies (most notably SubSonic). I think that made for a good story, but the ending needed a summary. Without that final conclusion slide (or portion of the presentation), the ending was a bit muddy. It's important to sum up what the presentation covered in a way that brings it all together.

Prestage but don't stack the deck

No one wants to see Visual Studio start up. We've seen that before. It's good to have things set up - solutions loaded, files you'll be showing open in tabs, etc. I didn't set bookmarks in my code, but I think I'll try that next time.

But... don't take it too far. There's a difference between prestaged and canned. Developers are used to seeing magical demos that don't really pan out when you try them at home, and their BS detectors start going off when you've got everything magically working due to some sleight of hand. So, prestage things that are a waste of time, show things that might be tricky, and have an escape plan in case things go wrong. For instance, rather than setting up a SubSonic website beforehand, I demonstrated from a freshly unzipped SubSonic download folder. SubSonic is easy to set up - just a simple edit to the web.config file - but I thought it was important to demonstrate that. If I had a more difficult edit to make, I'd prestage a file (named something like web.config.disaster) so I could attempt the edit on the fly; then, if it didn't work, I could delete the web.config and replace it with web.config.disaster.
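The escape plan boils down to keeping a known-good copy next to the live file and swapping it in if the on-stage edit goes wrong. A quick sketch in Python (hypothetical helper, using the web.config.disaster naming idea):

```python
import os
import shutil

def restore_config(site_dir):
    """Escape plan: if the live edit to web.config goes bad, delete it
    and promote the prestaged known-good copy (web.config.disaster)."""
    live = os.path.join(site_dir, "web.config")
    backup = os.path.join(site_dir, "web.config.disaster")
    if os.path.exists(live):
        os.remove(live)       # throw away the botched edit
    shutil.copyfile(backup, live)  # known-good copy goes live
    return live
```

One call and the demo is back on the rails, with no frantic typing in front of the audience.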

Have a plan for your demos2

It's important to have a plan for what you want to show in your demo. Of course, you'll want a basic script for demonstrating the application, but you also want a script for how you'll explain the code. I think I'll try that bookmarking thing next time to see if it helps, probably with numbered bookmarks using DPack. The demonstrations are the payoff to the audience for sitting through all those slides, so it's important to get as much value out of them as possible.

Have some backup material

I'm a "just in case" kind of person (nuclear power training kind of does that to you), so I had an extra slide and accompanying material which I didn't plan to have time to present. Sure enough, a simple database insert during one of the coolest parts of my final demo took a really long time, and it was nice to be able to say, "Hmm... Well, let's let that sort itself out for a bit and take a look at something else while we're waiting..." I'd want to be careful with that, so you don't confuse the audience by jumping around too much, but a little "by the way" sidebar kind of thing is nice to have in your back pocket.

Have a way to keep track of the time2

Scott included a speaker timer in the downloads from his Presentation Tips PPT post. It's pretty simple, but it shows the remaining time in the window title so you can minimize it and get a countdown display in the taskbar. I've been using TimeLeft recently and like it - its timer feature is quite customizable, so you can keep the time readable without it being in the way. Neither is exactly what I'd want, but either will do the trick. Of course, you could just bring a stopwatch, but then you'd look like a goof.


Make sure to run through everything (beginning to end) at least once, but preferably more.

Have fun

Go with it. Some things may bomb, just do your best. You're speaking to developers, and they know that computers delight in causing unexpected misery. They'll understand. Hopefully.

Extra Credit - Record it

I'd planned to have TimeSnapper and a voice recorder running during the presentation so I could review it later and perhaps turn it into a webcast. Unfortunately, I didn't start TimeSnapper after my forced reboot so I missed it. Camtasia might be better for this, but I'd be worried that it might get in the way of the presentation a bit too much.

What about you? Got any presenting tips or presentation software suggestions?

UPDATE: Some great presenting tips from Lifehacker.

1 Hello, future Jon!
2 These are things I didn't do, but should have.

Posted by Jon Galloway | 4 comment(s)

Notes for 11/28 Talk - SMO and INFORMATION_SCHEMA

I spoke at the San Diego .NET User Group meeting on 11/28 on SQL Server Metadata (SMO and INFORMATION_SCHEMA). The complete notes should be available on the user group downloads page, but here are the rough notes:

Extract of the PowerPoint outline:


Here's my PowerSMO (SMO in PowerShell) script:


$executionPolicy = get-ExecutionPolicy
set-ExecutionPolicy "Unrestricted"

$server = SMO_Server "(local)\SQL2005"
$server.ConnectionContext.SqlExecutionModes = 3 #ExecuteAndCaptureSql
$database = $server.Databases["AdventureWorks"]

$database.Tables | %{ $_ }

$orders = SMO_Table $database "Orders"
$order_number = SMO_Column $orders "Order Number" (SMO_DataType "Int")
$customer_name = SMO_Column $orders "Customer Name" (SMO_DataType "Varchar")
$value = SMO_Column $orders "Value" (SMO_DataType "Int")

$orders.Columns | format-Table Name, DataType -auto

$orders.Columns["Value"].DataType = (SMO_DataType "Money")

$orders.Columns["Customer Name"].Drop()

$server = ''
set-ExecutionPolicy -executionPolicy $executionPolicy


Here's the code from my WinForms sample:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Common;

namespace SMO_Overview
{
    public partial class Main : Form
    {
        Database db;

        public Main()
        {
            InitializeComponent();
        }

        private void btnConnect_Click(object sender, EventArgs e)
        {
            Cursor = Cursors.WaitCursor;
            GetTables(txtConnectionString.Text); // connection string textbox name assumed
            Cursor = Cursors.Default;
        }

        private void GetTables(string connectionString)
        {
            SqlConnection connection
                = new SqlConnection(connectionString);
            Server server
                = new Server(new ServerConnection(connection));
            db = server.Databases[connection.Database];

            foreach (Table table in db.Tables)
                if (!table.IsSystemObject)
                    ddlTables.Items.Add(table.Schema + "." + table.Name);

            if (ddlTables.Items.Count > 0)
                ddlTables.SelectedIndex = 0;
        }

        private void ddlTables_SelectedIndexChanged(object sender, EventArgs e)
        {
            Cursor = Cursors.WaitCursor;
            if (ddlTables.SelectedIndex >= 0)
            {
                try
                {
                    // dropdown items are "schema.table", so split and look up by (name, schema)
                    string[] tableSelection = ddlTables.Text.Split('.');
                    Table table
                        = db.Tables[tableSelection[1], tableSelection[0]];
                    DataTable columnList
                        = new DataTable();
                    columnList.Columns.Add("Column");
                    columnList.Columns.Add("Datatype");

                    foreach (Column column in table.Columns)
                    {
                        DataRow row
                            = columnList.NewRow();
                        row["Column"] = column.Name;
                        row["Datatype"] = SmoUtil.GetDatatypeString(column);
                        columnList.Rows.Add(row);
                    }

                    gridColumns.DataSource = columnList; // grid control name assumed
                }
                catch { }
            }
            Cursor = Cursors.Default;
        }
    }
}

I previously posted most of the INFORMATION_SCHEMA SQL Scripts: Some of my favorite INFORMATION_SCHEMA utility queries
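For contrast with the sys.objects checks SMO generates, an existence check built on INFORMATION_SCHEMA reads the same on SQL Server 2000 and 2005. A small helper in Python (hypothetical, just assembling the T-SQL string) shows the shape of the query:

```python
def exists_check(schema, table):
    """Build a version-independent table-existence check using the
    INFORMATION_SCHEMA.TABLES view instead of sysobjects/sys.objects."""
    return (
        "IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES "
        "WHERE TABLE_SCHEMA = N'%s' AND TABLE_NAME = N'%s')" % (schema, table)
    )
```

For example, `exists_check("dbo", "Orders")` produces a check you can prepend to a CREATE or DROP script without caring which version of SQL Server it runs against.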

Thanks to Justin Angel, who sent me hundreds of slides about SQL Server 2005 from his upcoming talk on December 4th at the Israeli SQL Server Usergroup. I'm sorry I won't be able to see his talk; it looks like it will be incredible.

Posted by Jon Galloway | 1 comment(s)
More Posts