Prior to the release of SQL Server 2005, there was a lot of chatter about SQL cache invalidation. Then once it was released, the conversation kind of just stopped. If any of it actually shipped, hell if I can find any documentation on it. A quick look at the stacks at my local Borders turned up nothing in the SQL or .NET books. This page says what it does, but it sure seems a little vague. It lacks context, and I wouldn't leave any performance implications to chance.
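For what it's worth, here's roughly what the notification-based flavor is supposed to look like in code. This is a sketch, not something I've battle-tested: the connection string, table, and cache key are made up, and it assumes you've called SqlDependency.Start() at application startup and have Service Broker enabled on the database.

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    // Hypothetical connection string -- swap in your own.
    const string ConnString =
        "Data Source=.;Initial Catalog=Store;Integrated Security=true";

    public static DataTable GetProducts()
    {
        DataTable products = (DataTable)HttpRuntime.Cache["Products"];
        if (products == null)
        {
            using (SqlConnection conn = new SqlConnection(ConnString))
            // Query notifications are picky: two-part table name,
            // explicit column list, no SELECT *.
            using (SqlCommand cmd = new SqlCommand(
                "SELECT ProductID, Name, Price FROM dbo.Products", conn))
            {
                // Tie the cache entry to this command's result set.
                SqlCacheDependency dependency = new SqlCacheDependency(cmd);

                products = new DataTable();
                new SqlDataAdapter(cmd).Fill(products);

                // Any change to the result set evicts the entry.
                HttpRuntime.Cache.Insert("Products", products, dependency);
            }
        }
        return products;
    }
}
```

The catch, as far as I can tell, is that "any change" is exactly that: touch any row the query covers and the whole cached table gets thrown out.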
Another thing that has largely faded into obscurity is the SQL CLR implementation. This one is clearly a cultural issue, not one of lacking documentation and books. Bring up CLR code in any traditionally structured organization, and the database guys will immediately start talking about how they won't let code run on "my" server.
All that said, there are some interesting possibilities in using the CLR for things like caching beyond the limitations of SqlCacheDependency. The existing implementation will trigger invalidation just because you incremented one number in a record that has a dozen other values. Seeing as how the application knows it made that change, it'd be silly to throw all of that data away.
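To make the idea concrete, here's a purely hypothetical sketch of a SQL CLR trigger that only bothers the Web farm when a column the farm actually caches has changed. Nothing here is a shipped feature: the table name, column ordinals, and invalidation endpoint are all invented, and the assembly would need to be cataloged with EXTERNAL_ACCESS permission for the HTTP call to work.

```csharp
using System.Net;
using Microsoft.SqlServer.Server;

public class CacheNotification
{
    [SqlTrigger(Name = "Products_CacheInvalidate",
                Target = "dbo.Products", Event = "FOR UPDATE")]
    public static void ProductsUpdated()
    {
        SqlTriggerContext ctx = SqlContext.TriggerContext;

        // In this made-up schema, ordinal 2 is Price (cached) and
        // ordinal 5 is a hit counter (not cached). If only the counter
        // changed, stay quiet and let the farm keep its cached copy.
        if (ctx.IsUpdatedColumn(2))
        {
            using (WebClient client = new WebClient())
            {
                // Hypothetical endpoint each Web server would expose.
                client.DownloadString(
                    "http://web1/cache/invalidate?key=Products");
            }
        }
    }
}
```

The interesting part is the IsUpdatedColumn() check: the trigger gets to be as surgical about invalidation as the application is about updates, which is exactly what SqlCacheDependency can't do.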
In a single Web server situation, of course, you don't need to worry about this, because you can easily roll your own cache scheme: the box knows about every change that will ever happen to the data. The Web farm situation is where it gets interesting, and that's where I can see some of these great opportunities.
I thought architect types would really pooh-pooh this kind of thing, but in talking to people much smarter than me, I'm surprised to find they're very open to the idea. Again, the barriers seem more cultural than anything. Computer-sciencey types dig it.
Do you have any adventure stories from using the SQL CLR?