May 2005 - Posts - Jon Galloway


Free Opera Browser v7.54 License

It's nice to test drive the top alternative browsers, especially if you're involved in web development and want to see how your sites are displayed. I've never used Opera before because I'm terminally cheap and very happy with Firefox.

Well, here's a free offer from a German magazine for a license of Opera 7.54:

Enter code OJD000MN to get Opera Browser v7.54 for free. An email with the license key will be sent. Final v8.0 is out but the license only works on v7.54 or v8.0 beta. Thanks inevitableDUDE.

Download links:


Posted by Jon Galloway | with no comments

[OT] Teaching a child to read

I'm teaching my 4-year-old, Rosemary, how to read. I thought I'd share a few of my experiences in case it's helpful to others.

On a friend's recommendation, I'm using Teach Your Child to Read in 100 Easy Lessons. It's phonics based, and it's pretty foolproof. It uses a special orthography to simplify the complexities of pronouncing English words - each symbol makes one sound:

It looks weird, but it abstracts away the illogical nature of the English language a bit at the beginning, then removes the crutches once they're used to the idea of reading. It's working really well for Rosemary - I tried teaching her with refrigerator magnet letters before, but it just didn't make enough sense for her. The systematic lessons are geared for young children and have been working very well.

I've read some people's strong opinions that this method will ensure my child's illiteracy, and potentially induce a magnetic pole flip. All I can say is it's working well for us, so I'm not too interested in the Ph.D. educator bickering.

I've been surprised that the concepts aren't the hard part - the hardest part of reading is paying attention. We actually repeat that before and during every lesson: "the hardest part of reading is paying attention." Children are wired for rapid learning, but it's hard to sit still for 20 minutes.

Bribery helps. I've used a mixture of bribery (a little bit of candy or sugarless gum), goals (a trip to Chuck E. Cheese's every 10 lessons), and artificial scarcity à la Gmail. The artificial scarcity has probably been the most effective, which cracks me up - the bribery and goals weren't doing the trick, and Rosemary was losing interest. So I started hiding a few of my dry cleaning tickets around the house, and told her the reading lesson people had a new rule that she could only have a reading lesson if we sent in three tickets. All of a sudden, she was scouring the house for tickets every day. It's turned into a bit of a game - I hide them in her dollhouse or next to her cereal bowl when I leave for work in the morning.

This has been challenging, educational, and fun for both of us. Rosemary is very proud and excited to read the little stories at the end of each lesson, and watching her sound out her first sentence felt like watching her take her first steps.

And I've become conscious that the hardest part of learning - for me as well as my four-year-old - is paying attention.

Posted by Jon Galloway | with no comments
Filed under:

Hiding that "Server Application Unavailable" error message - Answered

Time's up, pencils down.

Almost a year ago, I asked if anyone had figured out how to hide that awful Server Application Unavailable message:

Server Application Unavailable

The web application you are attempting to access on this web server is currently unavailable.  Please hit the "Refresh" button in your web browser to retry your request.

Administrator Note: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.

That message overrides whatever pretty custom 404 / 500 error pages you may have defined in IIS and / or your ASP.NET web.config, since it occurs after the page request has been passed from IIS to the ASP.NET ISAPI filter, but ASP.NET isn't able to process the request.

I figured it out - like a lot of things, it's built into ASP.NET, but it's not easy to find unless you know where to look.

The machine.config <processModel> node has a serverErrorMessageFile attribute, which can point to an HTML page. HTML is the safe bet here rather than trying any kind of server processing, since the fact that the Server Error Message is showing means you probably can't process requests.
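As a sketch, the relevant bit of machine.config looks something like this (the file path is a placeholder - point it at a real static HTML page on your server):

```xml
<!-- machine.config (ASP.NET 1.1): serverErrorMessageFile replaces the default
     "Server Application Unavailable" page with your own static HTML.
     The path below is a placeholder. -->
<processModel
    enable="true"
    serverErrorMessageFile="C:\Inetpub\CustomErrors\ServerError.htm" />
```

Keep the page pure HTML with no external dependencies if possible - if this page is being served, not much else is working.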

An easy way to test your custom error message page is to edit your machine.config and temporarily change the processModel/password to "DoNotAutoGenerate". After you've tested, change it back to "AutoGenerate".

You can override the <processModel> in your application's web.config file, but I recommend also setting the custom error message in the machine.config, since a severe server error may mean that your web application can't be started, the web.config file can't be read, etc.

Posted by Jon Galloway | 2 comment(s)
Filed under:

ASP.NET 1.1 database cache dependency (without triggers)


Here's a way to implement a database cache dependency in an ASP.NET 1.1 application. At a high level, it uses a timer on a background thread (in an HTTP Module) that checksums database tables. No database triggers are required.


Way back in December I posted about a CMS driven ASP.NET site I'd worked on that's using a SQL cache dependency. I based my code on some posts by Milan and Peter Bromberg, Ph.D., which were partly derived from Rob Howard's technique.

Why and How?

Why use a database-dependent cache?

Many websites make one or more database requests for relatively static data on every page hit - if you're componentizing your pages with user or server controls, each one may make a separate trip to the database. You can scale a site out at the webserver level, but if all the webservers are going to the same database, you've got a bottleneck that you can't horizontally scale out of. It's more efficient to distribute some of that data to your webservers, and only hit the database when needed. Setting a database dependency on cached data can reduce your database load and network traffic dramatically - the database load is no longer proportional to the site load.

Why not use ASP.NET OutputCache to cache your pages or page fragments?

The ASP.NET Output Cache (both Page and Fragment) is a great, simple way to reduce your webserver processing and database server load. You can set a cache timeout, optionally specify parameter dependencies, and ASP.NET will return the same HTML without processing or hitting the database as long as the cache is valid. The problem is that your dependency is set for a period of time, so your cached page can be showing stale data, which is often unacceptable. If you crank down on the cache time to keep the page relatively current, you decrease the benefit the cache was giving you.
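For reference, page-level output caching is just a directive at the top of the .aspx - Duration is in seconds, and this example caches a single copy regardless of parameters:

```aspx
<%@ OutputCache Duration="60" VaryByParam="none" %>
```

The time-based expiration is exactly the limitation discussed here: for 60 seconds, everyone sees the cached copy, stale or not.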

Output caching doesn't always work so well if your page takes a while to generate, because the first page hit after the cache has expired leaves your user waiting for the page to load. The site is pretty quick most of the time, but every now and then a page takes forever to load. That kind of unpredictable user experience is never appreciated.

Is this method efficient?

It depends. This can work well for a relatively static, high volume content site. As an example, I'm using it on a website that delivers about 100K page impressions per day to about 10K users. The site is driven by a CMS database that is updated about 50 times a day by a group of about 100 content editors. The site is database driven, but delivers roughly 20K virtual pages. The CMS database hierarchy defines the site structure - navigation, breadcrumbs, etc. The navigation is dependent on both the site structure and the content, since deactivating all the news articles on a page must cause the page to disappear from the site structure. This means that building the pages on the fly for each page hit is impractical, since each page hit would require rebuilding the entire site structure. This is time sensitive financial information, so setting an arbitrary cache timeout is not an option. Additionally, the database is shared with other applications and is already overloaded.

This technique wouldn't make sense on large tables. In my case, we're checksumming 3 tables totaling 65K rows, and the query returns in less than half a second on average. Of course, testing the response time of one query gives a very incomplete picture - the database may be able to cache results when you bang away at one query, but under normal load the cache availability may be quite different. We stress tested the heck out of this, and I'd recommend you do too. I would not use this for large tables - for instance, computing the aggregate checksum on a 1.3 million row table took over 10 seconds. If you're dealing with large tables you'll probably need to use the trigger based approach, but if you're dealing with that much information you probably won't have the memory on your webservers to cache a meaningful portion of the data anyway.

In our case, moving the data needed for the most commonly used content over to the webservers made a lot of sense. The webservers keep an XML cache of the site structure and content for the top four levels of the site hierarchy, and only hit the database when content changes or users navigate to deep content.

Why not use the Trigger based approach?

Rob Howard's trigger based approach is a workable solution. This is pretty much how database dependencies are implemented in ASP.NET 2.0 on pre-Yukon databases (Yukon uses SQL Notification Services to push data dependency update notifications to ASP.NET). The thing is, you still have to hit the database to check the change notification table. All the triggers do is update a table, which your application still has to ping to check whether there have been changes. I've got some reservations about triggers, and I'm not alone - see Milan's arguments. It works, but since the SQL checksum functions are so efficient, why bother with triggers and notification tables when you can just checksum the tables?

What the heck is an HTTP Module? Why use it?

HTTP Modules are cool. They're the ASP.NET equivalent of ISAPI filters, but they're really easy to write and configure. They start up when your ASP.NET application starts, and receive all the application and request events. A lot of the cool plumbing of ASP.NET uses HTTP Modules; it's the ASP.NET way to componentize background operations. You could also put this kind of code in Global.asax.cs, but there it's mixed in with any other application specific code; using a module keeps the code simple and reusable. Also, HTTP Modules are configured via a web.config setting, so you can disable or repoint a module without recompiling your application.
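If you haven't built one before, the shape of an HTTP module is pretty small - implement IHttpModule, hook whatever events you need in Init, and register it in web.config. A minimal sketch (class, namespace, and assembly names are placeholders):

```csharp
using System;
using System.Web;

namespace CompanyName.SiteName.Website
{
    // Minimal HTTP module sketch - names here are placeholders
    public class MinimalModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            // Subscribe to whichever application/request events you need
            application.BeginRequest += new EventHandler(OnBeginRequest);
        }

        private void OnBeginRequest(object source, EventArgs e)
        {
            HttpApplication application = (HttpApplication)source;
            application.Context.Trace.Write("MinimalModule", "Request started.");
        }

        public void Dispose() { }
    }
}

// web.config registration, inside <system.web>:
// <httpModules>
//   <add name="MinimalModule" type="CompanyName.SiteName.Website.MinimalModule,DLLName" />
// </httpModules>
```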

Why use the Application object rather than the Cache?

I lost this argument to my friend and co-worker Michael Ravan:

"The cache is faster, and the Application object is essentially deprecated," I said.
"The performance difference is minimal," he shot back.
"But we're just storing the data to speed up the site, so it should be in the cache," I bellowed.
"If the cache is expensive to create and it's unacceptable for a site user to wait for the cache to build, it's not a cache - it's an application resource," he trumpeted.

Trumpet beats bellow. Who knew?

This would be different if cache remove callbacks fired when the cached object becomes invalid, but they don't - they fire when an invalidated object is accessed. So instead of going to the store when the last loaf of bread is used up, it waits until I ask for toast and there's no bread. This means that expensive cached objects can cause slow response and can lead to cache thrash.

I looked at using a cached object with cache remove callbacks, but they have the same problem - you don't get the callback until it's too late. To quote Steve Smith:

One potential use for this feature would be to refresh cached data in the background so that users never need to wait for the data to be populated, but the data is kept relatively fresh. Unfortunately in practice, this doesn't work very well with the current version of the caching API, because the callback doesn't fire or complete execution prior to the cached item being removed from the cache. Thus, a user will frequently make a request that will try to access the cached value, find that it is null, and be forced to wait for it to repopulate. In a future version of ASP.NET, I would like to see an additional callback, which might be called CachedItemExpiredButNotRemovedCallback, which if defined must complete execution before the cached item is removed.
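For what it's worth, here's roughly what wiring up that callback looks like against the .NET 1.1 caching API (the cache key and timings are placeholders) - note that by the time the callback runs, the item is already out of the cache, which is exactly the limitation Steve describes:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class NavCacheLoader
{
    public static void CacheNavXml(HttpContext context, object navXml)
    {
        // Insert with a remove callback - key name and 5 minute expiration
        // are placeholder values for illustration.
        context.Cache.Insert("NavXML", navXml, null,
            DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration,
            CacheItemPriority.Default,
            new CacheItemRemovedCallback(OnNavXmlRemoved));
    }

    private static void OnNavXmlRemoved(string key, object value,
        CacheItemRemovedReason reason)
    {
        // This fires after removal - the next request will find a null
        // cache entry and has to wait while the data is rebuilt.
    }
}
```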

How can you communicate with it since it's in an HTTP Module?

Since the HTTP Module runs on a background thread, you can't talk to it directly from a web page (or if you can, I wouldn't advise it). Probably the simplest threadsafe method of communication between threads in an ASP.NET application is with application variables, so that's what I went with.

Getting information out of the module is a bit easier - the module reports on what it's doing via the ASP.NET trace. Trace.Write statements in the Application_BeginRequest event worked well for me.
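As a sketch, the inbound half of that conversation - an admin page signaling the module - is just a couple of lines in a page's code-behind (the button handler name is a placeholder; Application["StopTimer"] is the variable the module's comments mention; Lock/UnLock serialize access across threads):

```csharp
// Inside an admin Page class: signal the module to stop its polling timer.
private void StopTimerButton_Click(object sender, EventArgs e)
{
    Application.Lock();
    Application["StopTimer"] = "true";
    Application.UnLock();
}
```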

I changed the table but the checksum didn't change. Why?

I'll let Milan field this one for me:
You're right, checksums do not reflect changes to certain fields:

"BINARY_CHECKSUM ignores columns of noncomparable data types in its computation. Noncomparable data types are text, ntext, image, and cursor, as well as sql_variant with any of the above types as its base type." - Books Online

However, most tables contain more than an ntext field. Most of them come with some sort of "name", "description", "date last updated" fields, etc, and those SQL Server does track changes to.
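If you want to see this for yourself, a quick sketch (table and values are made up; the behavior follows the Books Online quote above):

```sql
-- BINARY_CHECKSUM ignores the ntext column, so changing only the body
-- doesn't move the aggregate checksum; changing the title does.
CREATE TABLE article (id int, title varchar(100), body ntext);
INSERT INTO article VALUES (1, 'Hello', 'Original body');

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM article;  -- note the value

UPDATE article SET body = 'Edited body' WHERE id = 1;
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM article;  -- unchanged

UPDATE article SET title = 'Hello, world' WHERE id = 1;
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM article;  -- changed
```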

I'm getting locked file warnings in Visual Studio when I rebuild and the website is running. What gives?

This was a bit tricky to figure out. If the application is running on your development box and you switch between debug and release builds, the assembly containing this module is locked by the ASP.NET worker process and can't be overwritten. Sneakily enough, Visual Studio buries this information in the routine build messages, so it's easy to miss - you end up with an old module DLL and invalid dependencies which may or may not work when deployed. When you're doing your build, it's best to close Visual Studio, delete the contents of the obj and bin folders, then start VS back up and do your build. Big ups to Ravan for figuring that out.

This is putting heavy load on my dev database - there's no way I'm putting this into production!

This was a fun one - we had eight developers working on the application, and the dev database server was dog slow. There was concern that if the site was slow in development with no web traffic, there was no way it could handle thousands of users in production. I had a hard time explaining that the database load was proportional to the number of webservers pinging the database, not to the web traffic, so the load was actually much worse in development, where each developer was running a webserver. We changed the ping frequency to every 5 minutes in development and things calmed down. Truth be told, this did point out an error I'd made in the SQL that was locking the tables, which led to using "WITH (NOLOCK)" on each table.

What are you doing with the data?

My sample code shows the guts of the dependency and calls out to NavigationXML.UpdateNavXML(). That's where the application specific code comes in - in my case, we build an XML document and shove it in the Application object. The base page pulls out the chunk of XML that's relevant depending on where the user is in the site and pops it in the HttpContext, and user controls either run XSL transforms on the page specific XML or bind to it. Contextualizing the XML was important in our case, since running XSL against the entire 3MB XML site digest was slow and CPU intensive; doing a one time XSL transform on page load to contextualize it slimmed it way down and made the individual user control XSL very fast. If you want to stay out of XML land, you could probably cache a datatable or something, but we needed the hierarchy information and XML made that easy.

This part is up to you, though - this will tell you when your data has changed, and you need to figure out what you want to do with that information.

Isn't this newfangled crazy talk?

Not so! People were doing this kind of caching back in 2000 on classic ASP. It's a proven technique; this just modernizes it for ASP.NET.

So does it really work?

I delayed posting this since the application was deployed to production in December, in case there were problems. We haven't had any. Database load has decreased (compared to the previous ASP version, which didn't use any caching), stress testing shows we can support many more users with faster response times, the data is fresh, and the application is very responsive.

And now the code:

using System;
using System.Web;
using System.Threading;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
//using CompanyName.SiteName.Website.Navigation;
//using CompanyName.SiteName.Website.Instrumentation;

//Place the following (uncommented) in your web.config <httpModules> section to enable this module.
//    <add name="BackgroundService" type="CompanyName.SiteName.Website.BackgroundService,DLLName" />

namespace CompanyName.SiteName.Website
{
    public class BackgroundService : IHttpModule
    {
        static Timer timer;
        int interval = 5000;
        static Int64 sum = 0;
        static DateTime processingStarted = DateTime.MinValue;

        public string ModuleName
        {
            get { return "BackgroundService"; }
        }

        public void Init(HttpApplication application)
        {
            application.BeginRequest += new EventHandler(this.Application_BeginRequest);
            application.EndRequest += new EventHandler(this.Application_EndRequest);

            //Content update configuration is listed in seconds
            if (ConfigurationSettings.AppSettings["ContentUpdate.Interval"] != null)
                interval = Convert.ToInt32(ConfigurationSettings.AppSettings["ContentUpdate.Interval"]) * 1000;
            else
                interval = 5000;

            if (timer == null)
                timer = new Timer(new TimerCallback(ScheduledWorkCallback), application.Context, 0, interval);
        }

        private void Application_BeginRequest(Object source, EventArgs e)
        {
            HttpApplication application = (HttpApplication)source;
            try
            {
                //An admin page sets Application["StopTimer"]="true" to disable the timer
                if (application.Context.Application["StopTimer"] != null)
                {
                    if (timer != null)
                    {
                        timer.Dispose();
                        timer = null;
                        application.Context.Trace.Warn("Nav Data","Content update timer stopped.");
                    }
                    return;
                }

                if (processingStarted == DateTime.MinValue)
                {
                    if (application.Context.Application["CacheTimestamp"] == null)
                    {
                        application.Context.Trace.Warn("Nav Data","Not updating, not available.");
                    }
                    else
                    {
                        DateTime lastUpdated = (DateTime)application.Context.Application["CacheTimestamp"];
                        application.Context.Trace.Write("Nav Data","Up to date. Last updated: " + lastUpdated.ToLongTimeString());
                    }
                }
                else
                {
                    application.Context.Trace.Write("Nav Data","Updating, started at " + processingStarted.ToLongTimeString());
                }

                if (timer == null)
                {
                    timer = new Timer(new TimerCallback(ScheduledWorkCallback), application.Context, interval, interval);
                    application.Context.Trace.Warn("Nav Data","Content update timer restarted, interval: " + interval.ToString());
                }
            }
            catch {}
        }

        //Not using this, leaving stub code in place in case an EndRequest event is needed.
        private void Application_EndRequest(Object source, EventArgs e)
        {
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;
        }

        public void Dispose()
        {
            timer = null;
        }

        private void ScheduledWorkCallback(object sender)
        {
            HttpContext context = (HttpContext)sender;
            Poll(context);
        }

        #region DB Poll
        private void Poll(HttpContext context)
        {
            SqlConnection connection = new SqlConnection(ConfigurationSettings.AppSettings["ConnectionString.CMS"]);

            //The production application uses a stored procedure. This shows inline SQL for clarity.
            SqlCommand command = new SqlCommand(
                "DECLARE @sum bigint;" +
                "SELECT @sum = CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM distribution WITH (NOLOCK);" +
                "SELECT @sum = @sum + CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM content WITH (NOLOCK);" +
                "SELECT @sum = @sum + CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM zone WITH (NOLOCK);" +
                "SELECT @sum;", connection);

            command.CommandTimeout = 5;

            try
            {
                connection.Open();
                Int64 current = (Int64)command.ExecuteScalar();

                if (mustUpdate(context, current))
                {
                    processingStarted = DateTime.Now;

                    //Back the timer off while the update runs so polls don't stack up
                    timer.Change(10 * interval, interval);

                    //UpdateNavXml calls another more database intensive stored procedure that pulls
                    //data from the above tables and builds an XML document which is then stored in the
                    //application object.
                    NavigationXML.UpdateNavXML(context);
                    sum = current;

                    //This method is in the website instrumentation code
                    //Log.Trace("Nav XML updated",context);
                }
            }
            catch {}
            finally
            {
                connection.Close();
                processingStarted = DateTime.MinValue;
            }
        }
        #endregion

        private bool mustUpdate(HttpContext context, Int64 current)
        {
            //Check if the Nav XML exists and the checksum is current
            return (NavigationXML.NavXmlExists(context) == false) || (current != sum);
        }

        #region "Commented - Cache code"
//        private void GetXMLFromCache(string zone, System.Web.HttpContext context)
//        {
//            string navXmlKey = "WebsiteName.APP.CACHE." + zone + ".NavXML";
//            System.Xml.XPath.XPathDocument navXml = context.Cache[navXmlKey] as System.Xml.XPath.XPathDocument;
//            if (navXml==null)
//            {
//                // get new xml Cache
//                navXml = NavigationXML.CreateXMLFromZone(zone);
//                context.Cache.Insert(navXmlKey,navXml,null,DateTime.Now.AddSeconds(30),System.TimeSpan.Zero);
//                context.Application["CacheTimestamp"] = System.DateTime.Now;
//            }
//            return;
//        }
        #endregion
    }
}


Posted by Jon Galloway | 4 comment(s)
Filed under:

[OT] Let's crank this blog back up

Hmmm... no posts for more than a month. A reorg at work got me a small promotion and a ton more work. Bunch of other non-computer related stuff.

Enough of the bloguilt - let's crank this sucker back up. Got 275 notes on blog post topics and several code samples that are just about complete, so here goes...
Posted by Jon Galloway | with no comments
Filed under: