September 2004 - Posts

Viewing MSDN code samples in Firefox or Mozilla browsers

UPDATE: This should no longer be required - apparently the MSDN CSS has been fixed to support Firefox.

The MSDN facelift (not MSDN2) hosed the formatting of code samples in Firefox and Mozilla browsers. To fix it, add this to your userContent.css file (ChromEdit is the easiest way to modify this file):

/* Microsoft MSDN code stylesheet */
pre
{
 white-space: pre !important;
}

[Via WebLogs @ DotNetJunkies.com]
Posted by Jon Galloway | with no comments

[tip] Speed up VPC under XP SP2

[Update: Virtual PC 2004 SP1 is out and is the official, supported way to speed up VPC under XP SP2. This hack is no longer required. Get VPC 2004 SP1 here: http://www.microsoft.com/windows/virtualpc/downloads/sp1.mspx ]

I've been using Virtual PC pretty heavily lately. I'm verifying the upgrade path for a major website project by upgrading it to ASP.NET 2.0 / SQL Server 2005 / VS 2005. I've also been testing how Monoppix works under VPC so I could try to give some decent info when it was released.

As some other people have noticed, installing XP SP2 on a VPC image really slows performance. I picked up a pointer from Robert Scoble's link blog recently that explained the problem a bit. It seems that you're fine if you install SP2 on the actual machine, but if you install it on the virtual machine, the VM Additions accelerations stop working. By the way, if you're running virtual Windows sessions on VPC and you're not using the additions, you are a bad person. Look here.

What I didn't notice on my first reading of the above explanation was that there's a way to fix the problem if you can't wait for VPC 2004 SP1 (which should be out any day now).

You can install the VM Additions from Virtual Server 2005, which will speed your VPC instance back to the pre-SP2 salad days. Here's how:

  1. Download the Virtual Server 2005 free 180-day eval here (it's only 17.4 MB!!!)
  2. Install Virtual Server 2005. You've been meaning to for a while anyways, and this is the perfect opportunity.
  3. Start up your VPC virtual machine.
  4. In the VPC menu, select CD / Capture ISO Image...
  5. Browse to the VMAdditions.iso file inside the installation directory. The default location of the ISO is C:\Program Files\Microsoft Virtual Server Trial\Virtual Machine Additions\VMAdditions.iso. The VM Windows instance will autorun the ISO and install the speedy new VM Additions.
  6. You will have more time to motor with Hello Kitty.
Posted by Jon Galloway | 6 comment(s)

[utils] slayer office favelet suite / Firefox Web Developer extension

I agree with secretGeek - this may be the coolest thing I've seen this week:

Description from slayer office site:

This is a favelet that combines most of my development favelets. When invoked, a div element will appear in the top left corner of your browser window with a list of all the favelets I've included. Simply click the link of the favelet you want to invoke. An "info" icon is available to take you to the favelet's information page here on slayeroffice.

Included in the suite:

  • Color List
  • Document Tree Chart*
  • HTML Attribute Viewer
  • HTTP Header Viewer
  • Hidden Field Modifier
  • Javascript Object Tree*
  • Mouseover DOM Inspector
  • Object Dimensions
  • Page Info
  • Remove Children
  • Resize Fonts
  • Ruler**
  • Show Source
  • Style Sheet Tweak*
  • Style Sheet Viewer*

This isn't nearly as good as the Web Developer extension for Firefox, but for IE this is really handy. And if you're doing web development and have Firefox installed, you just gotta get the Web Developer extension. It's absolutely amazing.

Download slayer office favelet suite here.
Download Firefox Web Developer extension here.

Posted by Jon Galloway | with no comments

RSS is out of order? The whole system is out of order!

Summary

RSS doesn't scale. As blogs and RSS aggregators get more popular, they overwhelm web servers. The kind folks who run weblogs.asp.net and blogs.msdn.com were getting slammed for bandwidth and made a bad decision - they chopped all posts in the RSS feeds to 500 characters, and cut the aggregated web page feed to 500 characters, unformatted.

After a lot of complaints, they've enabled HTTP compression and gone back to full text in the RSS feeds and 1250 characters on the main page feed. It's not what it was - the RSS only lists the last 25 posts, and the site only shows 25 posts trimmed to 1250 characters with no links to the individual blogs - but it's a lot better. I think compression may just be a stopgap, though, so I'd like to suggest some next steps. I'll start with a recap of some ideas others have talked about, throw in a gripe about treating community sites like they're "check out my band" web pages, and finish with a modest proposal for a new RSS schema that scales.

Background: RSS Doesn't Scale

Weblogs are becoming an important source of timely information. RSS aggregators have made it possible to monitor hundreds of feeds - weblog and otherwise - without checking them for updates, since the aggregator checks all your feeds for updates at set intervals. This is convenient for end users, and gives authors a way to broadcast their information in near real-time.

The problem is that RSS feeds are an extremely inefficient (i.e. expensive) way of sending updated information. Let's say I had a list of 500 children in a class and you wanted to be informed whenever a new child joined the class. Here's how RSS would do it: you'd call me every hour and I'd read the entire class roster to you. You'd check them against your list and add the new ones. Now remember that RSS is used for articles, not just names, and it gets a lot worse. The only way to find out that there's nothing new is to pull down the whole feed - headlines, article text, and all.

There are two possible cases here, and RSS doesn't scale for either of them. If the RSS contents change rarely, I'm downloading the same document over and over, just to verify that it hasn't changed. If the RSS changes frequently, you've probably got an active community which means you've got lots of readers, many of whom may have their aggregators cranked up to higher refresh rates. Both cases have bandwidth efficiency issues.
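
To put rough (and admittedly hypothetical) numbers on it: my own feed runs about 40K. If 1,000 subscribers poll it hourly, that's 40K x 24 x 1,000 - nearly a gigabyte a day for one blog, almost all of it spent confirming that nothing changed. Multiply that across a few hundred blogs on a community server and the bandwidth problem explains itself.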

What's Been Suggested

HTTP Compression

The solution taken in this case was HTTP compression. Scott G. has been recommending this for months; it's good to see it finally being implemented. That's a reasonably simple solution, especially in IIS 6.

Conditional Gets

The next step is most useful for infrequently updated RSS feeds: Conditional Gets. The idea is that you tell the web server what you've got, and it only gives you content if it's changed. Otherwise, it returns a tiny message saying you've already got the latest content. HTTP has included this support since HTTP 1.1. It's the technology that lets your browser cache images, and it makes a lot of sense for RSS.

The plumbing of the system involves the HTTP/1.1 ETag, If-None-Match, and If-Modified-Since headers. The ETag (Entity Tag) is just a resource identifier, chosen by the server. It could be a checksum or hash, but doesn't need to be. This isn't hard to implement - the client just needs to save the ETags and send them as If-None-Match headers on the next request for the same resource. The server can check if the client has the latest version of the file and just send an HTTP 304 if they're current. This has been suggested many times:

http://nick.typepad.com/blog/2004/09/rss_bandwidth_c.html
http://fishbowl.pastiche.org/2002/10/21/http_conditional_get_for_rss_hackers
http://www.pocketsoap.com/weblog/stories/2002/05/0015.html

More on the HTTP headers:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/act/htm/actml_ref_href.asp
http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3.4
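
To make the plumbing concrete, here's a minimal C# sketch of the client side of a conditional GET, roughly as an aggregator might issue it (the feed URL, cached ETag, and timestamp are placeholder values an aggregator would have saved from its previous poll):

using System;
using System.Net;

class ConditionalGetSketch
{
    static void Main()
    {
        // Placeholder values cached from the last poll of this feed.
        string feedUrl = "http://weblogs.asp.net/jgalloway/rss.aspx";
        string savedETag = "\"8b7538a34e0156c\"";
        DateTime savedLastModified = new DateTime(2004, 9, 20, 8, 0, 0);

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(feedUrl);
        request.Headers["If-None-Match"] = savedETag; // "only send it if the ETag changed"
        request.IfModifiedSince = savedLastModified;  // "only send it if it's newer than this"

        try
        {
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            // 200 OK - the feed changed; read it and cache the new validators.
            Console.WriteLine("Feed changed; new ETag: " + response.Headers["ETag"]);
            response.Close();
        }
        catch (WebException ex)
        {
            HttpWebResponse response = ex.Response as HttpWebResponse;
            if (response != null && response.StatusCode == HttpStatusCode.NotModified)
            {
                // 304 Not Modified - a few hundred bytes instead of the whole feed.
                Console.WriteLine("Nothing new; skipped the download.");
            }
            else
            {
                throw;
            }
        }
    }
}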

The HTTP approach can be taken further with a little imagination. Justin Rudd wants customized feeds per user (based on ETag and Last-Modified, or unique request keys), for example.

What's Been Done (on weblogs.asp.net)

Well, HTTP Compression did eventually get implemented, but the first fix was to chop the RSS feed to 500 characters and to cut the HTML main feed down to something that was (and still is, in my opinion) barely usable.

What's worse is that this was done with no warning and with no request for suggestions. That kind of support for a growing community site is bad manners and bad business. It's bad manners because it's disrespectful of the effort the authors put into their content, and it's bad business because it drives away both authors and readers. Starting a community site represents a commitment to respect and support the community, and that's not what we saw last week.

What makes this worse is that it disregarded the input of those who could likely have helped. The authors on weblogs.asp.net and blogs.msdn.com represent many thousands of years - at least - of varied software development experience. Include the active readership and you're in the hundreds of thousands to millions of years of development experience. That's a mind-boggling resource. A general post or a message at the top of the HTML feed - "We're hitting the wall on bandwidth and are planning to trim the feeds in two weeks" - would probably have elicited the comments that brought about the HTTP compression implementation before, rather than after, the weeping and gnashing of teeth. If not, at least we'd have known it was coming...

This unilateral action on a blog is reminiscent of Dave Winer's stunt with weblogs.com. Dave's done a lot for blogging, but shutting off thousands of blogs without warning was a really bad idea. Chopping the feeds was not nearly as bad, but it's the same sort of thinking. The value in weblogs.asp.net is not in the .Text (oops, I mean Community Server:: Blogs) engine, it's in the content. I'm not pretending to be half as smart as Scott W. or anyone else at Telligent, but the weblogs.asp.net community (both authors and readers) definitely is.

What I'm Suggesting

Fixing RSS

Nothing big, just restructuring RSS a bit. Yeah, it's a big deal, but I think it's got to happen eventually. HTTP tricks are great, but they won't work on busy feeds that are regularly updated.

The idea is to normalize RSS a bit by separating the items into separate resources that can be requested individually. That turns the top level RSS into a light document which references posts as resources (similar to image references in HTML). The master RSS document could include ETag and If-Modified-Since info (preferable), or it could be returned on request for each individual item. Regardless, this would limit the unnecessary resending of largely unchanged RSS files. Here's a simplified example.

RSS today is one big blob of XML with everything in it:

<rss>
<channel><title>example blog</title>
<item>lots and lots of stuff</item>
<item>lots and lots of stuff</item>
<item>lots and lots of stuff</item>
<item>lots and lots of stuff</item>
</channel>
</rss>

I'm proposing we normalize this, so the main RSS feed contains a list of external references to items which can be separately requested:

<rss>
<channel><title>example blog</title>
<item id="1" />
<item id="2" />
<item id="3" />
<item id="4" />
</channel>
</rss>

<item id="1">lots and lots of stuff</item>

<item id="2">lots and lots of stuff</item>

<item id="3">lots and lots of stuff</item>

<item id="4">lots and lots of stuff</item>

The benefit is that an aggregator can just pull down the light RSS summary to see if there's anything new or updated, then pull down just what it needs. My current RSS is about 40K, and this would chop it down to just over 1K.

This is even technically possible in RSS 2.0 through the RSS extensibility model. Here's a rough example (I've used the namespace RSSI for "RSS Index"):

<rss xmlns:rssi="http://www.tempuri.org/rssi">
<channel><title>example blog</title>
<item>
<title>Simple RSS feed not supported</title>
<description>This feed requires RSSI support. Please use an RSSI compliant aggregator such as...</description>
</item>
<rssi:item checksum="8b7538a34e0156c" ref="http://www.tempuri.org/blogs/rss.aspx?item=8924" />
<rssi:item checksum="a34e0156c8b7538" ref="http://www.tempuri.org/blogs/rss.aspx?item=8925" />
</channel>
</rss>

Eventually - maybe RSS 3.0 - we add checksum and ref as optional attributes of the <item> tag.
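
On the aggregator side, the whole update check against an index like that could be a few lines. Here's a rough C# sketch - keep in mind the rssi namespace URI, the checksum comparison, and the FetchItem helper are all part of this proposal, so nothing here exists in any current aggregator:

using System;
using System.Collections;
using System.Net;
using System.Text;
using System.Xml;

class RssiAggregatorSketch
{
    // Maps item ref URL -> last seen checksum; would be persisted between runs.
    static Hashtable seenChecksums = new Hashtable();

    static void CheckFeed(string indexUrl)
    {
        // Pull down the ~1K index document instead of the full 40K feed.
        XmlDocument index = new XmlDocument();
        index.Load(indexUrl);

        XmlNamespaceManager ns = new XmlNamespaceManager(index.NameTable);
        ns.AddNamespace("rssi", "http://www.tempuri.org/rssi");

        foreach (XmlElement item in index.SelectNodes("//rssi:item", ns))
        {
            string itemRef = item.GetAttribute("ref");
            string checksum = item.GetAttribute("checksum");

            // Fetch only items we've never seen or whose checksum changed.
            if ((string)seenChecksums[itemRef] != checksum)
            {
                FetchItem(itemRef);
                seenChecksums[itemRef] = checksum;
            }
        }
    }

    static void FetchItem(string url)
    {
        // One small request per new or updated post.
        WebClient client = new WebClient();
        byte[] data = client.DownloadData(url);
        Console.WriteLine(Encoding.UTF8.GetString(data));
    }

    static void Main()
    {
        CheckFeed("http://www.tempuri.org/blogs/rss.aspx");
    }
}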

This would also simplify some other things, like synchronizing RSS information between computers. Dare has proposed something similar to what I've recommended with SIAM (Synchronization of Information Aggregators using Markup), but he's just looking at how to synchronize the RSS at the aggregator level once it's come down. If RSS supported this natively, SIAM would be even simpler to implement at the aggregator level.

Another Idea I Considered, Then Rejected

I was thinking about other HTTP tricks that could reduce the amount of RSS going across the wire, and I briefly considered partial downloads. The idea is that the server would return a complex ETag that would give individual checksums and byte ranges for each post. This is possible since the ETag is very loosely defined. Then the aggregator could request specific byte ranges in the RSS using the HTTP Range header. This would work just fine for static XML based RSS, but would be inefficient for dynamic RSS since it would cause multiple individual requests to the same big RSS page.
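
For what it's worth, the client side of that rejected idea would have been simple enough. A C# sketch, with byte offsets that are purely hypothetical stand-ins for what the server's complex ETag would have advertised:

using System;
using System.Net;

class RangeRequestSketch
{
    static void Main()
    {
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://www.tempuri.org/blogs/rss.aspx");

        // Hypothetical offsets of the one changed <item> within the big feed.
        request.AddRange(3120, 4890);

        // A Range-aware server answers 206 Partial Content with just that slice.
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Console.WriteLine(response.StatusCode); // PartialContent
        response.Close();
    }
}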

Another technology I considered and rejected was using XInclude and XPointer to assemble the RSS. MSDN Article here. Here's a syntax sample:

<xi:include href="http://www.tempuri.org/rss.aspx" xpointer="xpointer(//feed[@author='WTDoor'])"/>

It's interesting, but doesn't help when all the data's in one big XML file.

And what's the deal with OPML?

While I'm at it, someone needs to Pimp My OPML. OPML is a great way to pull down a list of feeds, but once you've got them you can't keep them in sync. It'd be great if aggregators could subscribe to OPML feeds, and if someone would write an OPML Diff tool, and...
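
To show how small that diff tool could be, here's a C# sketch that compares the xmlUrl attributes in two OPML files and prints what was added or removed (the file names are placeholders):

using System;
using System.Collections;
using System.Xml;

class OpmlDiff
{
    static void Main()
    {
        // Placeholder file names - point these at yesterday's and today's OPML.
        Hashtable oldFeeds = LoadFeedUrls("subscriptions-old.opml");
        Hashtable newFeeds = LoadFeedUrls("subscriptions-new.opml");

        foreach (string url in newFeeds.Keys)
            if (!oldFeeds.ContainsKey(url))
                Console.WriteLine("added:   " + url);

        foreach (string url in oldFeeds.Keys)
            if (!newFeeds.ContainsKey(url))
                Console.WriteLine("removed: " + url);
    }

    static Hashtable LoadFeedUrls(string path)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);
        Hashtable urls = new Hashtable();
        // OPML subscription lists keep the feed address in outline/@xmlUrl.
        foreach (XmlElement outline in doc.SelectNodes("//outline[@xmlUrl]"))
            urls[outline.GetAttribute("xmlUrl")] = true;
        return urls;
    }
}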

And also... Happy Birthday to me. and my mom. and my daughter. I was born on my mom's birthday, and we cheated a bit by scheduling a c-section for our daughter, Esther. She's one today, I'm 34, and my mom's not telling.
Posted by Jon Galloway | 4 comment(s)

WQL / Reflection DataProviders for ADO.NET

WQL DataProvider for ADO.NET WQL DataProvider is an ADO.NET data provider, so you can use it just like the usual MS SQL DataProvider, but for WQL (WMI + SQL = WQL). This DataProvider supports SELECT, ASSOCIATORS and REFERENCES queries. It doesn't support event queries. [Via The Code Project Latest Articles]
Having written a custom ADO.NET data provider before, I'm appreciative of the amount of work that this entails. Downloading just the source didn't go smoothly for me, but the dependencies were in the demo project so I can't complain. I didn't see the actual code for WQLDataProvider.dll in the project, but that's what Reflector's for. And if you just want to use it and don't care about the code, use the DLL and you're set.

This allows you to databind against Windows Management Instrumentation (WMI), which employs a SQL-like syntax to query systems, applications, networks, devices, and other managed components. The project includes a functional WMI Query Analyzer.
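
If you just want a taste of WQL itself without the custom provider, the stock System.Management classes will run the same sort of query. This sketch lists local disks and their free space (it bypasses the DataProvider entirely - no databinding here, just the underlying query):

using System;
using System.Management; // reference System.Management.dll

class WqlDemo
{
    static void Main()
    {
        // The same sort of WQL the provider wraps: a SQL-like query over WMI.
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(
            "SELECT Name, FreeSpace FROM Win32_LogicalDisk WHERE DriveType = 3");

        foreach (ManagementObject disk in searcher.Get())
        {
            Console.WriteLine("{0}: {1} bytes free", disk["Name"], disk["FreeSpace"]);
        }
    }
}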

And while I'm at it, if you haven't seen Matthew MacDonald's article on Creating a Custom ADO.NET Provider, it's worth a look. The cool thing about the article is his sample - he implements a Reflection Data Provider [source]. Cool idea. I tested it out by databinding on the types and members of the Reflection Data Provider itself. Talk about dogfooding...
Posted by Jon Galloway | with no comments

Miguel de Icaza on Longhorn changes

Interesting post by Miguel de Icaza on Microsoft's recently announced changes to Longhorn. Good to read if only to get a better understanding of the complexities of Avalon's implementation.

My prediction is that Avalon v1 will be a throw-away: it is not really the foundation on which you will build applications: V2 will likely not be backwards compatible, they will have to re-architect bits of it: which means that people will end up with two frameworks running side-by-side: Avalon V1 and Avalon V2.

The above problem is compounded with the fact that the CLR has not really sorted out a good model for sharing components across versions of the framework: the GAC solution today is a bit of a hack to keep things separate and allow for multiple installations, but does not solve

This is like cooking, you can not rush a good steak, or a cake or an omelette by cranking the heat. Some things just take time. [Via Miguel de Icaza]

Posted by Jon Galloway | with no comments

Monoppix Preview (0.2.2.3) Release

[UPDATE: Monoppix 1.0 has been released]

Summary

Monoppix is a GNU/Linux distribution which includes Mono, XSP, and Monodevelop, and runs completely off a CD. It allows you to get familiar with Mono development in Linux without installing anything on your computer.

Monoppix was based on Knoppix and Miniknoppix and was developed by Roiy Zysman (zroiy at spymac dot com).

Disclaimers

  1. This is not an official release; it is a public preview intended for testing and feedback prior to the official release. [1]
  2. I'm a Linux lightweight. I've put a lot of time into testing and the XSP walkthrough, but there may be better or different ways of doing what I've listed. Please comment and I'll correct / update.

What is it?

Monoppix is a Live CD Linux distribution, which means you pop it in your CD drive, reboot, and you're running Linux. It works without installing a thing on your hard drive - it runs completely off the CD and RAM.

History

I'm really excited about this - I blogged back in April about how cool a Knoppix / Mono distribution would be. Roiy commented back in May asking me what it should include, and we've been corresponding since then. Roiy's done all the work, and I've done the easy stuff - testing, feature requests, and a little IT support (FTP, Freecache, etc.).

I've been most impressed by XSP (Mono's version of ASP.NET). Winforms are not yet supported by Mono (planned for release Q4 2004 according to the Mono release roadmap - see the Mono System.Windows.Forms page for info). Console apps work great, but it's tough for me to get too thrilled about a console app. XSP, on the other hand, just needs to render the same HTML as ASP.NET and it's worked great in my testing.

Downloads / Links

If you're familiar with Knoppix, download the ISO here and get to it.

If not, don't worry, it's a simple process:

1. Download the Monoppix ISO

The Monoppix ISO is available from http://monoppix.url123.com/download [404MB]. The download goes through Freecache, which should ensure a high-speed download. [2]

2.  Burn the ISO to a bootable CD

The easiest ways to burn an ISO to a bootable CD in Windows are:

  1. Nero, using the CD-ROM (BOOT) option
  2. ISORecorder (free; XP SP2 and Windows 2003 users need the ISORecorder v2 beta release)
  3. CDBurn (part of free Windows 2003 Resource Kit, works on Windows XP, command line only)
3. Running Monoppix off the CD

Make sure the CD is in the CD drive and restart your computer. If it boots into your normal operating system, reboot again and change the boot device priority in your BIOS so the CD drive comes before the hard drive. At the prompt, you'll need to enter a Knoppix cheat code to tell Knoppix what hardware to use. If you're using standard equipment, you're probably okay with just "knoppix" (without the quotes). If you're using a laptop or LCD monitor, "fb1024x768" will probably work.

Walkthroughs

An introductory Mono CSC Quickstart (by Roiy) is available on the desktop, and is echoed here:  CSC Quickstart Walkthrough. I set up a walkthrough on XSP (Mono's version of ASP.NET) here: Basic XSP (ASP.NET) and Monodevelop Walkthrough.

If you're chicken or just lazy, the walkthroughs have plenty of screenshots and can give you a quick overview of what it looks like to develop in Mono on Linux as if you were brave and industrious.
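
And if you want a thirty-second taste before diving into the walkthroughs: save something like this as hello.aspx, run xsp from that directory, and browse to http://localhost:8080/hello.aspx (8080 is xsp's default port; the page below is just a minimal sketch):

<%@ Page Language="C#" %>
<html>
<body>
<h1>Hello from Mono XSP</h1>
<!-- Inline C# just to prove the page is actually executing -->
<% Response.Write("Server time: " + System.DateTime.Now); %>
</body>
</html>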

Other info

Access to your local files

Knoppix mounts your hard drive and places a link to it on your desktop, so you can execute your .NET / ASP.NET code off your hard drive. Linux only mounts NTFS partitions as read-only, though, so if you want to save your work you'll need to be creative (save to a floppy, upload to an FTP server, e-mail yourself, create a VFAT partition, etc.).

Virtual PC

Virtual PC seems like it would work well here, but the Virtual PC emulated video device is 256 colors, so many of the applications were unreadable / unusable. Konsole and Konqueror looked fine, but Monodevelop and Mozilla didn't. I'm guessing the KDE elements can handle 256 colors, while Gnome / GTK apps require higher color depth, but that's just speculation. I never got it working. If you'd like to prove me wrong, here are some links to get you started:

  • Virtual PC 45 day free demo
  • Running Linux on VPC
  • Those Knoppix cheat codes

Suggestions / Feedback?

You can post suggestions as comments on this post, or e-mail Roiy Zysman (zroiy at spymac dot com). He's on reserve duty for the next week, so if you e-mail him you might not hear back immediately.

I think my biggest requests are mySQL and GTK#. What would you like?

[fin]

[1] I've been testing it on 4 Windows machines for months, and thousands of others are using the basic Knoppix distribution. The disclaimer is to indicate that there are features that may not be working or are missing, but since this doesn't install anything on your hard drive it's not a risky venture. Try it out!

[2] A download manager comes in handy when you're downloading ISOs - I've used Download Express; others have recommended FlashGet and LeechGet.

Posted by Jon Galloway | 20 comment(s)

ISORecorder v2 (beta) - Works in XP SP2 and W2K3

I was happily using ISORecorder to burn ISO images to CD-ROMs / CD-RWs until it stopped working under XP SP2. CDBurn has worked since then, but it's just a command-line app.

Alex Feinman just released a beta of ISORecorder v2, which works in XP SP2 and W2K3.

Careful - v2 doesn't work on pre-SP2 machines, and you'll need to uninstall the old ISORecorder before installing v2.

more info (new features / known issues / screenshots)


 [p.s. 6 GMail invites - contact me if you'd like one]
Posted by Jon Galloway | with no comments
More Posts