September 2007 - Posts - Jon Galloway

Avatars? Isn't that some kind of D&D comic book stuff?

My previous post dug into using the Gravatar service to add avatar images to your community website. Afterwards, I realized that I didn't really make the case for why you should care about avatars. Yeah, the word avatar may make you think of fantasy gamers jabbering about Yoda's lineage on some forum.

Not so! All the cool kids¹ are using avatars!

Gravatars make the comments on Scott Hanselman's blog (running on dasBlog) more interesting:

[Screenshot: HanselmanGravatars]

Gravatars spice up the comments on Phil Haack's SubText blog:

[Screenshot: HaackedGravatars]

And check out how Gavin's made extensive use of Gravatars in DotNetKicks:

[Screenshot: DotNetKicksGravatars]

So don't let the "avatar" thing throw you - this is about making websites look like conversations between people rather than documents stuffed with boring text.

¹ For small values of cool? Maybe I need to get out more.

Adding Gravatars to your ASP.NET site in a few lines of code

Summary

Gravatar (Globally Recognized Avatar) provides a simple way to add avatars to community-based sites. Users set up an account at http://site.gravatar.com with an avatar image and an e-mail address, and their avatar then shows up on any site that supports Gravatars - blogs, community sites, etc. Gravatar takes care of hosting and resizing the images, handles things like decency ratings, and provides a nice UI for image upload and cropping.

It's also nice for your users, since they don't have to upload avatar images over and over again, and they can update their avatar for all their favorite sites in one place.

Disclaimer - Gravatar images sometimes load slowly

I'll talk about the simplest case here, and in a subsequent post I'll point you to some code which works around the one major problem with Gravatar.com - the avatar images are sometimes slow to download. Hey, live.com guys, how about avatar.live.com? You've got that great content distribution network; this seems like a natural service to provide...

Really easy account setup

The best part of Gravatar is that account setup is really simple. After the standard "enter your e-mail, they e-mail you a confirmation link" dance, you either upload an image or point at an image URL. You have the option to crop your image, and then you're done. That's it. I think the AJAX cropping thing is pretty slick:

[Screenshot: Gravatar-crop]

Setting up a website to show Gravatars

It's pretty easy to add Gravatars to your site. The Gravatar help page has sample code for a lot of languages and community sites; they don't have ASP.NET code, but it's a pretty trivial exercise. Gravatars are keyed by e-mail address, but to provide some basic protection, the e-mail addresses are passed as MD5 hashes. So the URL for my Gravatar looks like this (with the MD5 hash of my e-mail address as the gravatar_id):

http://www.gravatar.com/avatar.php?gravatar_id=<md5-hash-of-email>&size=80

Here's the avatar. I mostly use it because it really bugs Jeff Atwood.
Note that I've specified the size in the URL as 80px, so Gravatar automatically resized the image for me.

Calculating an MD5 hash doesn't take a lot of code, but it's not intuitive code. I call this sort of API the Bring me the Jade Monkey API. "And one more thing...  bring me a byte array before the next full moon..."
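Here's roughly what the Jade Monkey version looks like with System.Security.Cryptography - a minimal sketch, with the helper class name made up purely for illustration:

using System.Security.Cryptography;
using System.Text;

public static class Md5Utility // hypothetical helper, just for illustration
{
    public static string GetMd5Hash(string input)
    {
        using (MD5 md5 = MD5.Create())
        {
            // String -> byte array ("bring me a byte array before the next full moon...")
            byte[] inputBytes = Encoding.UTF8.GetBytes(input);
            byte[] hashBytes = md5.ComputeHash(inputBytes);

            // Byte array -> lowercase hex string, two digits per byte
            StringBuilder builder = new StringBuilder();
            foreach (byte b in hashBytes)
            {
                builder.Append(b.ToString("x2"));
            }
            return builder.ToString();
        }
    }
}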

Anyhow, there's an easier way provided by the System.Web.Security namespace (it returns the hash as an uppercase hex string, so we'll lowercase it before handing it to Gravatar):
System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(stringToHash, "MD5");
 
So, with that out of the way, we can set an image source to the result of our GetGravatarUrl() function. My page is databound to a datasource which contains an Email field, so I'm passing in Container.DataItem to be evaluated.
<img alt='<%# Eval("UserName") %>' src="<%# GetGravatarUrl(Container.DataItem) %>" />
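For context, that <img> tag sits inside a databound template - something like this hypothetical Repeater (the UserName, Email, and CommentText fields are assumptions about your data source):

<asp:Repeater ID="CommentsRepeater" runat="server">
    <ItemTemplate>
        <!-- Each comment row gets the commenter's Gravatar -->
        <img alt='<%# Eval("UserName") %>' src='<%# GetGravatarUrl(Container.DataItem) %>' />
        <p><%# Eval("CommentText") %></p>
    </ItemTemplate>
</asp:Repeater>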
 
Here's the GetGravatarUrl function, which grabs the e-mail from the DataItem, calculates an MD5 hash, and formats it into a simple Gravatar image URL.
protected string GetGravatarUrl(object dataItem)
{
    string email = (string)DataBinder.Eval(dataItem, "email");
    string hash = System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(email.Trim(), "MD5");
    hash = hash.Trim().ToLower();
    //TODO: Include a default image. Querystring parameter example: &default=http%3A%2F%2Fwww.example.com%2Fsomeimage.jpg
    string gravatarUrl = string.Format("http://www.gravatar.com/avatar.php?gravatar_id={0}&rating=G&size=60", hash);
    return gravatarUrl;
}

Note: if you want this basic functionality wrapped up in a control, check out Sean Kerney's Gravatar control.

Recap

Gravatar images are an easy way to add some personality to the members of your community website. Since the Gravatar images sometimes download slowly, however, you might want to look into some more sophisticated solutions such as local caching; I'll talk about that in a follow-on post.

CSSVista - Edit your CSS code live on both Internet Explorer and Firefox

CSS editing is extremely frustrating without immediate feedback. Until a few years ago, you didn't really have a choice: you typed some CSS, refreshed the page, tried to figure out what was wrong, and repeated until you hopefully got it working.

Then came the Firefox Web Developer Toolbar, and things started to get a whole lot easier. The Firefox Firebug extension and the IE Web Developer Toolbar let you work interactively with the two main browsers, albeit one at a time. I usually pick one or the other, get a design working, and verify / fix it in the other browser.

Today Mike linked to something that might change that: CSSVista, a free tool which lets you edit CSS and immediately see the results in both IE and Firefox. I tested it out on the Silverlight website, modifying the CSS to include a content injection declaration (which IE doesn't support) so you can see the cross-browser difference in action:

[Screenshot: CssVista]

Here's that CSS block in case it's hard to read in the screenshot:

div.homeModule:before 
{
    content: "monkey";
    font-size: 40px;
    color: red;
}

I think this one's a keeper, and I'll be watching for future releases.

Taking CSS beyond a simple style library

Summary

CSS-based design is really all about your HTML structure. We'll look at bad examples, then good examples. Finally, I'll point out some resources for generating stylable HTML in ASP.NET.

Ugly HTML, bad CSS

A lot of web developers haven't really gotten the point of CSS, so they're not getting the full benefit. They see CSS as a kind of global style constants file, in the same way they'd pack all their string constants into a resource or config file. Need a bold blue heading? Yep, I think I've got one of them... There it is:

.BigBlueText { color: #99f; font-size: 16px; }

No... that's not big enough. Time for BiggerBlueText!

.BiggerBlueText { color: #99f; font-size: 22px; }

The HTML reflects this mentality: ID's and classes are assigned as necessary to apply the styles, often with no rhyme or reason to them other than mapping the styles to HTML elements. Unfortunately, that completely misses the point. It's bad enough that you aren't getting the maximum value out of CSS-based styling, but the real crime is that those who want to restyle your work in the future don't have the hooks they need.

A classic example of that kind of unstylable HTML (impervious to CSS) is that returned by SQL Server Reporting Services. I've made some attempts to style SSRS and make it work cross-browser, but there are major limits to what can be done. Often parent elements have no class names assigned; other times class names are assigned but are re-used based on style, which makes it impossible to change the style of a class without destroying the entire page. Here's a short snippet (far from the worst!) to show you what I mean - notice that the class "msrs-normal" has nothing to do with structure or content; it's only included to simplify styling the page.

<td valign="top" width="50%">
  <table width="100%" class="msrs-normal" cellpadding="0" cellspacing="0">
    <tr>
      <td valign="top" width="50%">
        <table width="100%" class="msrs-normal" cellpadding="0" cellspacing="2">
          <tr>
            <td valign="top" width="18px">
              <a href="..."><img src="..." height="16" width="16" alt="Report" border="0" /></a>
            </td>
            <td valign="top"><a href="..." id="ui_a0">...</a></td>
          </tr>
          <tr>
            <td valign="top"><img src="..." height="2" width="1" border="0" /></td>
          </tr>
        </table>
      </td>
    </tr>
    <tr>
      <td valign="top" width="50%">
        <table width="100%" class="msrs-normal" cellpadding="0" cellspacing="2">

Once we talk about how this SHOULD work, the problem with doing it wrong will be more obvious.

Stylable pages don't mean you have to go nuts with classes

You might think that really stylable HTML needs classes all over the place. That's not true, thanks to descendant selectors, which let you target elements inside a parent element. For instance, a descendant selector lets you style all <a> elements which appear inside a <div> with an id of "nav":

div#nav a { font-weight:bold; }

This is great because we're able to target specific elements (only <a> tags inside <div id="nav">) without a lot of extra work or code.
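For example, given markup like this hypothetical navigation block, every link picks up the bold style without a single class attribute:

<div id="nav">
    <a href="/">Home</a>
    <a href="/archive">Archive</a>
    <a href="/about">About</a>
</div>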

CSS that's based on page structure

The classic example of great CSS is CSS Zen Garden. It's a beautiful example of what good designers can do with CSS, but most people don't consider the unsung hero: cleanly structured HTML. The page structure is designed for styling via descendant selectors:

  • The HTML is very simple, structured around the page's information rather than its design
  • ID's are assigned to all of the container elements

So now let's look at the structure of the HTML used in CSS Zen Garden.

<body id="css-zen-garden">
  <div id="container">
    <div id="intro">
      <div id="pageHeader">
        <h1>...</h1>
        <h2>...</h2>
      </div>
      <div id="quickSummary">
        <p>...</p>
        <p>...</p>
      </div>
      <div id="preamble">
        <h3>...</h3>
        <p>...</p>
        <p>...</p>
        <p>...</p>
      </div>
    </div>
    <div id="supportingText">
      <div id="explanation">
        <h3>...</h3>
        <p>...</p>
        <p>...</p>
      </div>
      <div id="participation">
        <h3>...</h3>
        <p>...</p>
        <p>...</p>
        <p>...</p>
      </div>
      <!-- Okay, Jon, we get the point. You can skip the rest. -->
    </div>
  </div>
</body>

So we've got a really simple page that's been restyled hundreds of times by a wide variety of cutting-edge designers. It's all possible because the HTML was constructed to expose the page elements as something like a design API (application programming interface). There are no ID's or classes named "big text" or "red footnote", because they're not at all necessary for a CSS developer to get at the elements they need. For example, let's say we want to modify the heading inside the "participation" div. No problem:

div#participation h3 { margin-top: 2em; }

Writing CSS Friendly HTML in ASP.NET

The default rendering of ASP.NET controls isn't CSS friendly, but you can build CSS friendly sites in ASP.NET without too much effort. Take a look at the remix gallery for the VisitMix site, the most remix-friendly ASP.NET site I'm aware of.

By far, the nicest thing you can do for your markup is to use the ASP.NET CSS Friendly Control Adapters. Scott Guthrie has a tutorial on them, as does Scott Mitchell. They're really easy to set up and use - you drop some files in your site, and your controls go from <span><table><tr><td><table><tr> parties to using stylable elements like <ul>'s. They went a little nuts with spraying classes on every single element (pretty redundant when you can do the same thing with descendant selectors), but it's a major improvement. Steve Harman and I used them on a project this past year, and it worked great; the sketch below shows how they're wired up.
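For reference, the adapters get registered in a .browser file in the App_Browsers folder, something like the sketch below. The controlAdapters mapping is the standard mechanism; double-check the exact adapterType names against the version of the adapter kit you download, since I'm writing this from memory:

<browsers>
    <browser refID="Default">
        <controlAdapters>
            <!-- Map the stock Menu control to its CSS friendly adapter -->
            <adapter controlType="System.Web.UI.WebControls.Menu"
                     adapterType="CSSFriendly.MenuAdapter" />
        </controlAdapters>
    </browser>
</browsers>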

When you start using Visual Studio 2008 (if you're not already), be sure to look at the greatly improved CSS editing features. You'll also want to abandon the GridView for the new ListView control, which gives you a lot more control over the HTML your page generates. Here's how Rick Strahl describes the ListView:

The ListView is a sort of hybrid between a DataGrid and Repeater that combines the free form templating of the Repeater with the editing features of the data grid. It looks interesting because it basically allows you much more control over the layout than a DataGrid does while still giving you many of the more advanced features of the data grid.
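To make that concrete, here's a minimal ListView sketch - the control and template names are standard ASP.NET 3.5, while the data source and field names are made up for illustration. The LayoutTemplate defines the outer markup, and each item renders in place of the itemPlaceholder control:

<asp:ListView ID="ProductList" runat="server" DataSourceID="ProductsDataSource">
    <LayoutTemplate>
        <ul id="products">
            <asp:PlaceHolder ID="itemPlaceholder" runat="server" />
        </ul>
    </LayoutTemplate>
    <ItemTemplate>
        <li><%# Eval("ProductName") %></li>
    </ItemTemplate>
</asp:ListView>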

Note: ID's and Classes

An ID can only exist once on a page; a class can exist multiple times. Generally, it's best to use a class rather than an ID if you're not sure. ID's are best used for structural containers like headers and footers; classes are used everywhere else. Every ASP.NET server control is assigned an ID, but it may be munged a bit due to the effects of naming containers on nested controls. You can assign classes to any element you'd like - even the <body> tag - and ASP.NET won't touch them.
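A quick illustration of the difference, with hypothetical selectors:

/* An ID targets the one-and-only header container on the page */
div#header { background-color: #336; }

/* A class can style any number of preamble blocks */
div.preamble { margin: 1em 0; }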

David Shea's gone with a lot more ID's than classes in the CSS Zen Garden HTML, which makes sense: the page is completely static - there will never be another "preamble" there - and the ID's allow graphic designers to tune their look down to the pixel. We're not so lucky, though. The client always wants to add another "preamble", usually late on a Saturday night. For that reason, I stick with classes over ID's most of the time.

[Tip] Use RUNAS to set your Windows Auth domain for database connections

Ever run into problems connecting to a database using Windows Authentication when you're not on that domain? I sure have - I was connecting over VPN, wanting to use SQL Server Management Studio, but my VPN account wasn't in the correct domain to authenticate. I ended up just using Remote Desktop whenever I needed to connect to that database for the length of that project... which dragged out more than a year. Johnny Coder ran into it, too, but he's got the solution:

Please consider the following scenarios: 

  1. An instance of SQL Server is setup in the Development Environment and it isn't running in Mixed Mode. This implies that one needs to be a member of the appropriate domain and have appropriate permissions in order to access the SQL Server. Not a big deal, right? Well, I would agree if I were a developer working onsite and the Dev Network was readily available to me. But I happen to work remotely on occasion and there isn't an entrance point into this particular domain through VPN. Boy, SQL Authentication sure would come in handy in this case.
  2. Let's say your environments (perhaps Dev, QA and Production) are hosted in separate domains. It would be painful to switch between domains in order to access each SQL Server, wouldn't it?

...

an even better solution in the appropriately named RunAs command, which allows a user to run specific tools and programs with different permissions than the user's current logon provides. The following are a few of my favorite commands, which I've wrapped up neatly in their own .cmd file for quick execution (you will need to update the domain and user values accordingly):

  • runas /user:domain\user "C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\ssmsee.exe"
  • runas /user:domain\user "C:\WINDOWS\system32\mmc.exe /s \"C:\Program Files\Microsoft SQL Server\80\Tools\BINN\SQL Server Enterprise Manager.MSC\""
  • runas /user:domain\user isqlw

[via Johnny Coder - Coping with Windows Auth]

That's one of those solutions that's not obvious, but makes you slap your head when you hear it. Of course! I'll be sure to keep RUNAS in mind for more than installing software on my kids' computer.


Why aren't Windows file copies restartable?

Windows has supported restartable file copies for a while: CopyFileEx() has supported COPY_FILE_RESTARTABLE for a long, long time, and ROBOCOPY has handled restartable copies since Windows NT4 - maybe eight years ago? So you might think Windows Explorer would handle restartable file copies. Unfortunately, here's what you get when your connection drops for even a second:

[Screenshot: Windows Explorer file copy error]

Same problem in Vista, despite the fact that file copies take eons. In the sixth major release of an operating system, is it too much to expect that the file system GUI handle network copies, especially when the base operating system's had the capabilities for almost a decade? Robocopy, a simple console application, tells you that the connection has dropped and that it will try again in 30 seconds. How about Explorer?
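For what it's worth, here's a hedged C# sketch of calling that API via P/Invoke - the CopyFileEx signature and COPY_FILE_RESTARTABLE value come from the Win32 documentation, and error handling is trimmed to the bare minimum:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class RestartableCopy
{
    // Track progress in the destination file so an interrupted copy can resume
    const uint COPY_FILE_RESTARTABLE = 0x00000002;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool CopyFileEx(string lpExistingFileName, string lpNewFileName,
        IntPtr lpProgressRoutine, IntPtr lpData, ref int pbCancel, uint dwCopyFlags);

    static void Main(string[] args)
    {
        int cancel = 0;
        // No progress callback (IntPtr.Zero); just ask for a restartable copy
        if (!CopyFileEx(args[0], args[1], IntPtr.Zero, IntPtr.Zero, ref cancel, COPY_FILE_RESTARTABLE))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}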


Easier VPN connections from a Windows desktop

Connecting to a VPN in Windows is a bit painful if you do it frequently. It's a relatively simple task, but it's "death by a thousand right clicks", and it takes a good fifteen seconds. If you're on and off VPN's frequently, it's worth taking the time to improve the process.

Disable VPN Progress Updates

The simplest thing you can do is disable progress updates while connecting. When updates are shown, even fast VPN's take about 15 seconds to complete the connection process; with progress updates disabled, the connection is nearly instantaneous. I know there's still networking stuff going on, but I'm generally able to use the network right away.

[Screenshot: VPN-Properties]

Skip the Windows UI and VPN from the command line with RASDIAL.EXE

That's better, but it's still a slow clicky-clicky process. You can create a shortcut to the VPN connection, but I usually skip that and just create a one-line batch file for regularly used VPN's. That's easy, because RASDIAL.EXE is very simple. Old-timers may remember RASDIAL for its original purpose - handling dialup internet connections. It still does that, but nobody's using oldschool dialup anymore, are they? Dialup 2.0 is just phone-a-friend.

Keep in mind that RASDIAL won't allow you to set up a new VPN connection, which is fine because we don't need to do that. Set up the VPN as you always have, then use RASDIAL to connect and disconnect. There are really just three things you can do with RASDIAL - connect, disconnect, and check connection status.

Connecting

Here's the syntax to connect with RASDIAL: rasdial entryname [username [password|*]] [/DOMAIN:domain] (there are also dialup-specific switches, which I haven't listed)

Note that you've got the option of saving your password. If you're connecting from a secure computer with a strong password, I don't see any problem with that... just keep in mind that you're saving your password in plain text. The benefit of saving your password is that you can just run the batch file and you're connected - there's no Windows UI to deal with at all. Here's an example of the sort of command I've used for this:

rasdial megacorp jgalloway !tricky®password! /domain:megacorp

Disconnecting

Disconnecting is even easier: rasdial [entryname] /DISCONNECT

So in the case above, I'd disconnect with this:

rasdial megacorp /disconnect

Checking connection status

That's super simple - running the command with no switches while connected will show you something like this:

C:\Documents and Settings\Jon>rasdial
Connected to
MegaCorp
Command completed successfully.

Setting up shortcuts

You can either set up shortcuts to rasdial with the above parameters, or you can create two batch files (vpn-megacorp-connect.bat and vpn-megacorp-disconnect.bat) and create shortcuts to them.
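Each batch file is a single line (same hypothetical MegaCorp connection as above):

rem vpn-megacorp-connect.bat
rasdial megacorp jgalloway !tricky®password! /domain:megacorp

rem vpn-megacorp-disconnect.bat
rasdial megacorp /disconnect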

Script potential

I don't see any reason why you couldn't work this into a script that sets up whatever you want to do once connected. For instance, you could follow rasdial with mstsc to open a remote desktop session on a machine. Remember to throw the /console switch onto your mstsc call if you're connecting to a desktop that may have other sessions in use.

rasdial megacorp username password /domain:megacorp
mstsc megaserver.rdp /console

No rocket surgery, but if you're connecting to a VPN frequently then it's worth taking 30 seconds to lessen the pain.


We should be virtualizing Applications, not Machines

One of the benefits of my new job at Vertigo Software is that I have more frequent opportunities to talk with my co-worker, Jeff Atwood. If everything goes right, we argue... because if we agree, neither of us is going to learn anything. Recently, we argued about virtual machines. I think machine virtualization is hugely oversold. We let the technical elegance (gee whiz, a program that lets me pretend to run another computer as another program!) distract us from the fact that virtual machines are a sleazy, inelegant hack.

I was a teenage VM junkie

I'm saying this as someone who used to be a big VM advocate. A few years ago, I noticed that it took new developers as much as a week to get set up to develop on some of the company's more complex systems, so I worked with application leads to set up Virtual PC images which had everything pre-installed. Sure enough, I had new developers working against complex DCOM / .NET / DB2 / "Classic" ASP environments with tons of dependencies in the time it took to copy the VPC image off a DVD-ROM.

One problem: it was a lousy developer experience. Most of the developers kept quiet, but a new developer let me have it: "This stinks! It's totally slow! You really expect me to show up to work every day and work eight hours on a virtual machine?" He was right, it did stink. Even with extra memory and a beefy machine, virtual machines are fine for occasional use, but they're not workable for full time development. For instance, I frequently had to shut down all other applications (Outlook, browsers, etc.) so that the virtual machine would be responsive enough that I could get work done.

Let's admit that the virtual machine emperor has no clothes on. Virtualizing machines is a neat parlor trick. It's tolerable for a few short-term uses (demonstrations, training, playing with alpha software, or testing different operating systems) but it's not a viable solution for real development.

Let's take fast computers and make them not

Resources

The problem is that virtualizing a computer in software is just incredibly inefficient. Virtual machines waste system resources like crazy - they're so inefficient that they make today's multi-core machines packed full of RAM crawl. Their blessing - they let us simulate an entire machine in software - is also their curse. Simulating an entire machine in software means they're running a separate network stack, drawing video on a pretend video card, managing pretend system resources on a pretend motherboard, playing sound on pretend soundcards, etc. Even if you spend some time and/or money optimizing your virtual machine image, it's still a pig compared to working on a real machine.

Note: Machine Virtualization is a complex subject, and I'm glossing over things a little here. Virtual PC and VMWare both use native virtualization combined with virtual machine additions to leverage physical hardware as much as possible, trapping and emulating only what's necessary. So, VM's don't emulate all hardware resources, but the fact that they need to essentially filter all CPU instructions isn't a very efficient use of system resources.

Size

Then there's the problem of space. Even an optimized image with a significant amount of software - say, a development environment and a database - runs a few GB. While we can make space for them on today's large drives, the real problem is transportation and backup. Multi-GB virtual drive files are a pain to move around. And even if you do optimize them, you're keeping multiple copies of things like "C:\WINDOWS\Driver Cache\", "C:\WINDOWS\Microsoft.NET\", etc.

Updates and virus scanning

Many people mistakenly believe that they don't need to worry about safety (patches, automatic updates, virus scanners) on virtual machines since the host machine is taking care of those things. Not so. Each virtual machine with internet or network access is more than capable of becoming infected or compromised in the browser over port 80, or of spreading viruses which exploit network issues (e.g. SQL Slammer on port 1433). A virtual machine needs to be patched and protected, but since it's not on regularly, it's not as likely to have been auto-updated. So, if you're planning to work on a virtual machine, you should plan to spend twice as much time and effort on operating system maintenance.

Host machine services

A personal pet peeve is the vampire services that VMWare runs. Yes, their VMWare Player is free, but just by installing it you install a bunch of Windows Services which autostart with your computer and run until you uninstall the VMWare Player, even if you never open a single VM.

Isolating your work environment makes it harder to get work done

By its very nature, virtualizing an environment means that your files run on what might as well be a separate computer. That means that it's difficult to take advantage of programs on the base machine. Yes, you can share clipboards and map drives to a virtual machine (while it's running), but that's about it. If, for instance, you run Outlook on the base machine and Visual Studio in a virtual machine - you end up jumping through some hoops to send the output of a program via e-mail, or view tabular data from SSMS in Excel, or add an emailed logo to your web application. These are all simple enough to get around, but add significant friction to your daily work.

Why are we virtualizing entire machines again? Isn't there a better solution?

I sure think so, or this post would just be a useless rant. I try to avoid those.

How about running unvirtualized software?

For instance, I've been developing on Visual Studio 2008 since Beta 1, and I've got it installed side by side with Visual Studio 2005. No problems. I recently upgraded to Visual Studio 2008 Beta 2, again with no problems. Truth be told, this wasn't my gutsy idea - Rob Conery did it, and (as with a few other things) I followed him over the cliff - fortunately, in this case, not a real actual cliff. As software consumers, we should be able to expect software that we can trust enough to, you know, actually install and use. I understand that beta software is beta software, and I'm a fan of releasing software early and often, but I should be able to expect that beta software won't trash my machine. As a side note, I continue to be really impressed by the ever-increasing quality of Visual Studio releases.

The best way for developers to better the situation is to write software which is well compartmentalized, so we don't need to put it in an artificial container. This is commonly done by making good use of an application virtual machine (like the common language runtime, the Java language runtime, etc.). These technologies tend to steer you towards writing well isolated software, but for the sake of legacy integration they don't prevent you from going back to old, bad habits like writing to the Windows registry, modifying or installing shared resources, installing COM objects, or storing files and settings in the wrong place. Software written to run on a software virtual machine isn't guaranteed to be isolated, but it's generally more likely to play nice.
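As a small example of "storing files and settings in the right place" on the CLR, here's a hedged sketch that writes a per-user setting to isolated storage instead of the registry - the isolated storage API is real; the file and setting names are made up:

using System.IO;
using System.IO.IsolatedStorage;

class SettingsDemo
{
    static void Main()
    {
        // Per-user, per-assembly storage: no registry writes, no shared system state
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("settings.txt", FileMode.Create, store))
        using (StreamWriter writer = new StreamWriter(stream))
        {
            writer.WriteLine("favoriteColor=blue"); // hypothetical setting
        }
    }
}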

Folks talk about running under a non-administrator account, but I'm not completely convinced. While that solution does prompt you before modifying your system, it hasn't really been that helpful in my experience. Too much software just stops working under a lower privileged account. At that point you're back to the choice - install the software as an administrator or don't install it, but once you install as administrator you've pretty much said "Sure, do whatever you want to my system, please don't trash it." Hopefully software and users will both move together into an "administrator not required" world, but it's difficult as a software user to make that move before software vendors have fully embraced it.

Those other applications that won't play nice.

So what do you do with applications that insist on modifying your system? Short of the Virtual Machine Nuclear Option, is there a way to handle software which, by design, makes fundamental changes to your system?

Yes, I'm thinking of you, Internet Explorer, and I'm shaking my head. You don't play nice - IE7 won't share a computer with IE6 without a fight. I know you've got a lot of excuses, and I still don't buy them (check out this video at 1:05:30, that's me bugging the IE team about this at MIX06). To my way of thinking, IE is a browser, and if the .NET Framework folks have figured out how to run entire platforms side by side, you can manage to do that with an application which renders web pages. I felt strongly enough about this to put a significant amount of time (okay, a ridiculous amount of time) into developing and supporting workarounds - not really for my use, but for thousands of web developers (judging from the downloads and hits). I thought the IE team's suggestion to go and use VPC was... well, pretty unhelpful, especially considering that the need to run IE6 and IE7 on the same machine was to try to write forward compatible HTML which still worked with IE6's messed up rendering. I'll move off the IE example before this turns more rantlike...

The point there, I guess, is that virtual machines are sleazy hacks which users can decide to use, but software vendors should be ashamed to require. So... no matter how much we wish we could install all our applications on one machine, some applications won't play nice. As it is now, the only out we've got is to virtualize the entire machine. Is there a way to sandbox applications which won't sandbox themselves?

Waiting for an Application Virtualization host

Yes - there's a better solution which has gone almost completely unnoticed: application virtualization. I've been reading about application virtualization technology for a while, but have been frustrated with how long it's taken to actually see it in practice. The idea is to host a program in a sandbox that virtualizes just the things we don't want modified - for instance, the registry and DLL's.

GreenBorder and SoftGrid are two commercial solutions to this problem. The good news and bad news about both of these programs is that they've been bought in the past year - GreenBorder was bought by Google, and SoftGrid was bought by Microsoft. The good news part of that is that the technology is backed by major, established companies; the bad news is that the technology may be folded into larger product offerings (Hello, FolderShare? You guys got eaten by SkyDrive, but the best features haven't made it over yet... And Groove, sure miss the file preview in Groove 2007...).

SoftGrid

Here's a diagram from Microsoft's SoftGrid documentation:

[Diagram: SoftGrid]

Pretty cool - that's exactly what I'd like to be able to do. The problem (at least as I see it) is that this cool SystemGuard stuff is wrapped up inside a bigger Application Streaming system. The benefit is that existing applications can be wrapped up into packages which deploy kind of like ClickOnce applications - they're sandboxed, automatically updated, and centrally managed:

[Screenshot: SoftGrid]

That's neat if you're looking at this as a complement to SMS from an enterprise IT point of view, but it's not helpful for individuals who just want to be able to install software on their local machine. I'm especially un-excited about application streaming after having to use Novell ZENWorks at a former job at a financial company. Virtualized Microsoft Office must sound great to IT folks, but it wasn't very productive for end users.

GreenBorder

GreenBorder takes a smaller scope - it just sandboxes browsers, making the border of the "safe" browser green (get it?).

That's a cool idea, but it's completely web-centric. I'd expect Google's use of it to stay that way. So, it's nice for safe browsing, but not helpful if you want to sandbox an arbitrary application on your computer.

Sandboxie

UPDATE: I just remembered another application virtualization program I've been meaning to look into: Sandboxie. The unregistered version of Sandboxie is free, and registration is only $25. From their site:

Sandboxie changes the rules such that write operations do not make it back to your hard disk.

The illustration shows the key component of Sandboxie: a transient storage area, or sandbox. Data flows in both directions between programs and the sandbox. During read operations, data may flow from the hard disk into the sandbox. But data never flows back from the sandbox into the hard disk.

If you run Freecell inside the Sandboxie environment, Sandboxie reads the statistics data from the hard disk into the sandbox, to satisfy the read requested by Freecell. When the game later writes the statistics, Sandboxie intercepts this operation and directs the data to the sandbox.

If you then run Freecell without the aid of Sandboxie, the read operation would bypass the sandbox altogether, and the statistics would be retrieved from the hard disk.

The transient nature of the sandbox makes it easy to get rid of everything in it. If you were to throw away the sandbox, by deleting everything in it, the sandboxed statistics would be gone for good, as if they had never been there in the first place.

I'll have to give that a try the next time I want to test something... but since I'm not completely sure it'll work well, should I test it in a Virtual Machine? Hmm...

MORE UPDATES:

Xenocode - I got an e-mail from Kenji Obata at Xenocode, letting me know that Xenocode virtualizes applications inside standalone EXE's. It looks like a well thought out solution, but a single developer license costs $499. It's worth a look for professional or enterprise application virtualization without buying into the whole SoftGrid-style application streaming deployment thing.

In the funny timing department, my copy of Redmond Magazine just arrived today. The September issue has an overview of application virtualization, and compares Altiris Software Virtualization Solution (SVS) with LANDesk Application Virtualization.

Let me know if there are more Application Virtualization systems I've missed, and if you've used any of these, please comment with your experiences.
