October 2009 - Posts - Jon Galloway


My Boot-to-VHD experiment: found some tips, like it, but still haven’t found VM nirvana

Summary

  • Windows 7’s Boot to VHD works as advertised – native speed, virtual machine flexibility.
  • I came up with some tips and tricks which you might find useful.
  • Having to reboot without hibernate to switch to the VHD machine means it’s a lot less useful than I’d hoped.

Background

I’ve recently been running some early releases of developer tools which came with the “install on VMs if you don’t want your computer to catch fire” warning. That seemed like a good time to back off on my “VMs are for sissies” stance and get my VHD on. After verifying that there wasn’t already a suitable VHD available for download, I decided to follow Scott Hanselman’s directions to set up a Boot To VHD instance.

Here’s a very high level overview:

  1. Download the wim2vhd script from CodePlex
  2. Install the Windows 7 Automated Installation Kit (AIK)
  3. Copy ImageX.exe from the AIK install into the same folder as the wim2vhd script
  4. Run the script
  5. Mount the VHD in the Windows 7 disk management screen
  6. Run some funky commands to make the disk bootable (sketched below)

There's a little goofing around at the command line, but it's only a few minutes if you follow the directions. Then your new VHD shows up as an option on boot.
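For reference, the “funky commands” in step 6 boil down to a few BCDEDIT calls from an elevated command prompt. Here’s a rough sketch – the description and VHD path are placeholders for whatever you chose, and the first command prints the {guid} the rest of them need:

bcdedit /copy {current} /d "Windows 7 VHD"
bcdedit /set {guid} device vhd=[C:]\VHDs\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\win7.vhd
bcdedit /set {guid} detecthal on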

BUT WAIT! I have a few minor modifications. Rather than write a complete walkthrough of the process – since Scott and others have done such a good job there – I’m just going to list some footnotes to it. I’ve very roughly outlined the steps above; read through the following tips, then go follow Scott’s walkthrough and apply whichever of them you find helpful.

Mod #1: Getting ImageX.exe without installing the AIK

There are copies of ImageX.exe floating around on the internets. I normally wouldn’t recommend using them; I only mention it because the AIK is 1.5 GB. If you’re one of those irrational people who think downloading 1.5GB to get a 471KB program is ridiculous, you could search around for “download imagex.exe”. If you do that, the CRC for my 32-bit ImageX.exe – version 6.1.7600.16385 – is 54 BF FA D5. Not recommended, but it is an option.
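If you go that route, you can at least verify the CRC32 of whatever you downloaded. One way – assuming you have 7-Zip’s command-line tool handy – is its hash command, which prints CRC32 by default:

7z h ImageX.exe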

Note: Dear Microsoft folks that make gigantic SDKs, please stop. Those might have made sense before the internet, but… c’mon now. Here’s how utilities should be done: live.sysinternals.com.

If you end up downloading the entire (1.5GB!!!) AIK ISO but hate installing a bunch of junk just to use one thing, you can open the ISO in 7-Zip and find the ImageX.exe file by looking for F1_imagex in Neutral.cab, like so:

ImageX 

Then you can just extract that file and rename it to ImageX.exe.
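If you’d rather do that from the command line, the same extraction looks roughly like this – a sketch, where the ISO file name is a placeholder for whatever your AIK download is called:

7z e KB3AIK_EN.iso Neutral.cab
7z e Neutral.cab F1_imagex
ren F1_imagex ImageX.exe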

Mod #2: Changing the VHD size

The wim2vhd script defaults to a 40GB dynamic disk. Normally, I don’t really care much about the size of a dynamic disk, because the actual size of the VHD is only as big as the actual used space, and you can compact a disk to recover space as needed (there’s a sketch of that at the end of this section). However, when you mount a dynamic drive, the boot manager and host filesystem appear to reserve the maximum possible size of the disk – 40GB. In my case (on a laptop), that wasn’t going to work.

It’s not just a convenience thing, either – if you have a VHD whose maximum size exceeds the physical disk space available, you’ll get a blue screen of death:

BSOD - Windows 7 Boot From VHD

(photo credit: Bart Lannoeye, see his post about the BSOD issue)

You can change the created VHD disk size using the /size parameter. For my Windows 7 + Visual Studio 2010 testing purposes, a 16GB disk seemed to work well. To do that, you’d call wim2vhd with this command:

cscript wim2vhd.wsf /wim:e:\sources\install.wim /sku:ultimate /size:16384

The size is specified in MB, so multiply the number of GB by 1024. A 20GB disk would use /size:20480.
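As an aside, if you ever want to compact a dynamic VHD to reclaim host disk space, it’s a short DISKPART session – a sketch, with the path as a placeholder (the VHD needs to be detached first):

diskpart
select vdisk file="C:\VHDs\win7.vhd"
compact vdisk
exit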

Mod #3: Rearming Windows to extend the evaluation time

If you’re using a virtual machine installation of Windows for temporary testing, you can use it without activation for 30 days. However, you can extend that evaluation period 3 times, giving you a total of 4 months, which is plenty of time for most evaluation purposes. It’s really simple:

Run "slmgr -rearm" from a command prompt with admin rights

This isn’t a hack – it uses a command that’s been shipped with Windows since Vista first came out. It’s not really news – Jeff Atwood wrote about it a while ago, and Ed Bott followed up with a cool tip on scripting that command to run every 30 days so you don’t forget. But it’s a really handy note, and it bears repeating.
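If you want to set up something along the lines of Ed’s scheduled-task tip yourself, SCHTASKS from an elevated prompt can do it – a sketch, untested on my machine, that re-runs the rearm every 30 days:

schtasks /create /tn "Rearm Windows" /tr "cscript //b C:\Windows\System32\slmgr.vbs -rearm" /sc daily /mo 30 /rl highest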

Note: Apparently you can use the SkipRearm registry setting to extend that even further, but I don’t know if that’s covered by the EULA. I haven’t had the need to use a VM that long, so I’ve never run into that.

Mod #4 (untested): EasyBCD

You can apparently skip the rigmarole with BCDEDIT by using EasyBCD, since beta builds of EasyBCD 2.0 support Windows 7’s VHD features.

Note: I haven’t done this. I’ve used previous releases of EasyBCD and haven’t had a problem, but I haven’t used EasyBCD 2.0 as it wasn’t out when I set up my VHD.

Mod #5 (untested): Disk2vhd

The SysInternals team recently released Disk2vhd, which can capture a disk image (while running) and create a VHD. I believe that in order to use the created VHD on the same machine it was created from, you’d first need to sysprep it, since otherwise you’re essentially trying to run two identical copies of the same operating system on the same computer, and you run into problems with drive paths. I haven’t tried this yet.
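If you try it, the sysprep step would look something like this – a sketch; /generalize strips the machine-specific identity, and /shutdown powers the machine off so you can grab the VHD afterwards:

%windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown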

Gotcha #1: Go with Windows 7 Ultimate

Boot to VHD only works in Windows 7 Ultimate or Enterprise, not Windows 7 Professional. But the Windows Activation re-arm trick we just talked about doesn’t work in Enterprise (it uses a different licensing model). So I recommend that you go with Windows 7 Ultimate Edition.

Gotcha #2: Dual Boot means no hibernation

I use hibernation quite a bit, and only do a full reboot when I have to. So, for me, dual booting was inconvenient. It meant shutting everything down – including saving any tabs I happened to have open in IE8 (since tab saving in IE8 has been pretty unreliable for me) – in order to use the VHD partition, then shutting everything in the VHD partition down to switch back to the main one. That’s a lot of friction, and I ended up using it less often than I thought I would.

As I write this, I’m setting up a VHD which I’ll just run under Virtual PC, because I can start it up without shutting everything else down. I still feel like it’s a great feature, just one that I’ll use less often than I thought.

Side note: Fast switching between boot instances would be a killer feature for Windows. I’d settle for multiple hibernation instances. I’ve read that it’s not enabled because of concerns over invalidating one hibernation instance while running the second machine instance, but I disagree – let me make that decision. At least give me a registry setting or something to enable it.

Did you know about protocol-relative hyperlinks?

Summary:

  1. (For normal humans) IE and Firefox show perplexing messages on some pages due to a potential security vulnerability in the site you’re visiting. I’ll talk about what it means and how you can get it to go away.
  2. (For web developers) Don’t perplex your users with mixed content warnings. Use protocol-relative hyperlinks to deliver your page resources (images, CSS, JavaScript) over the same protocol (HTTP/HTTPS) as the page.

Do you want to only read about this puzzling webpage prompt?

If you use IE8, you’ve probably puzzled over this dialog dozens of times:

Do you want to view only the webpage content that was delivered securely?

It’s kind of an odd question: “Do you want to view only the webpage content that was delivered securely?” Yes, of course! I mean, no. Well, what’s that “only” bit mean?

Fortunately, that dialog is explained in more detail in a post on Eric Law’s IE Internals blog. It’s a warning about a webpage which displays mixed content, meaning both HTTP and HTTPS. Eric explains the weird wording a bit, too: the old dialog said “This page contains both secure and non-secure items. Do you want to display the nonsecure items?” That’s almost a variant of the classic dancing bunnies problem – I clicked on the page and it’s asking me if I want to see it. Of course I do. The new prompt kind of guides you towards only viewing the secure content.

In general, the warning is a good thing. Mixed content pages allow passing content between zones. That’s bad.

If added to the DOM, insecurely-delivered content can read or alter the rest of the page even if the bulk of the page was delivered over a secure connection.  These types of vulnerabilities are becoming increasingly dangerous as more users browse using untrusted networks (e.g. at coffee shops), and as attackers improve upon DNS-poisoning techniques and weaponize exploits against unsecure traffic.

Tampering with your HTTPS web page doesn’t just mean JavaScript, either. An insecure, tampered CSS file could do just about anything it wanted with how the user sees the page.

But this prompt is annoying!

It is annoying, yes. If it’s a site you use frequently, you’ve got some options.

  1. You can disable the prompt (Tools / Internet Options / Security / Custom / Misc / Display Mixed Content / Disable). This would generally be a bad idea since the mixed content warning is trying to help you.
  2. You can trust the non-secure domain (if you do trust it) and then only disable the mixed content prompt from the trusted zone. Remember that this is still a security risk, since HTTP content can be read and modified anywhere between your browser and the server.
  3. If it’s a site that’s under your control, you can fix it.

Fixing the real problem with protocol-relative hyperlinks

The real way to fix the problem is for web devs to use protocol-relative hyperlinks, such as <img src="//www.google.com/intl/en_ALL/images/logo.gif" /> – that will use HTTPS if the page is HTTPS and HTTP if the page is HTTP, preventing both the security vulnerability and the security prompt. Rather than trying to fix the links in code, we’re relying on a specified and supported HTML feature (RFC 1808, Section 2.4.3, circa 1995).
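As a concrete example, here’s that fix applied to a stylesheet reference – assuming (hypothetical URL) the CSS is served from the same host over both HTTP and HTTPS:

Before: <link rel="stylesheet" href="http://www.example.com/styles/site.css" />
After: <link rel="stylesheet" href="//www.example.com/styles/site.css" />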

As Eric points out, you can find out which content is causing the problem with an HTTP monitoring program like Fiddler.

Adding users to a TFS project when you’re not on the domain

Visual Studio Team System was obviously designed for user groups who are all members of a Windows Active Directory domain, all working in the same local network. I’m able to work remotely (without VPN, even) as long as I’m just checking files in and out, but the Visual Studio / TFS UI won’t let me grant users permission to contribute to my projects. I messed around with TFS Power Tools, but that didn’t work either.

I ended up running TFSSecurity.exe /g+ from the command line – you can find it (by default for Visual Studio 2008) in C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. Here’s the command I ran (substituting the correct server/projectname/domain/username, of course):
TFSSecurity.exe /server:servername.domain.com /g+ "[PROJECTNAME]\Contributors" n:"DOMAIN\username"

C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE>TFSSecurity.exe /server:servername.domain.com /g+ "[PROJECTNAME]\Contributors" n:"DOMAIN\username" 
TFSSecurity - Team Foundation Server Security Tool 
Copyright (c) Microsoft Corporation.  All rights reserved.

The target Team Foundation Server is SERVERNAME.DOMAIN.COM. 
Resolving identity "[PROJECTNAME]\Contributors"... 
  [A] [PROJECTNAME]\Contributors 
Resolving identity "n:DOMAIN\username"... 
  [U] USERNAME\username (User Name) 
Adding User Name to Contributors... 
Verifying...

SID: S-1-9-1233567890-1233567890-1233567890-1233567890-1-1233567890-1233567890-1233567890-1233567890

DN:

Identity type: Team Foundation Server application group 
   Group type: Generic 
Project scope: PROJECTNAME 
Display name: Contributors 
  Description: A group for those with general read/write permissions across the project

6 member(s): 
  [U] DOMAIN\username (User Name) 
  [U] DOMAIN\username2 (User Name 2) 
  [U] DOMAIN\username3 (User Name 3) 
  [U] DOMAIN\username4 (User Name 4) 
  [U] DOMAIN\username5 (User Name 5) 
  [U] DOMAIN\jong (Jon Galloway) 
Member of 1 group(s): 
  [A] [SERVER]\Team Foundation Valid Users

Done.

Done and done.


The Designer/Developer Workflow Crisis (That Everyone’s Ignoring)

Let’s take an honest look at what passes for developer/designer workflow these days:

Designer / Developer Workflow - The Old Way

Why are we okay with this?

Sure, designers are fond of using the programs they’ve used for years, and developers are busy debating DI vs. IoC, but we’re missing a bigger point. We’re ignoring how ridiculous the entire workflow has become.

I argued with some folks on Twitter about this a while ago; here’s the short version:

Here’s a bit more detail on what I was thinking.

Approaching design and development separately is horribly inefficient

I’ve been privileged to work with a lot of very talented web designers over the past decade. Most of them spoke Photoshop. To quote Scott Koon, these folks see developers as compilers who turn Photoshop into websites. I’ve also had frustrating conversations with developers over the years who just didn’t see the point in this annoying standards stuff and were perfectly happy to just give up and use tables. And it all works, in the very very short term. But it only works because the people who pay the bills don’t know how ridiculously inefficient it is.

The flowchart above is funny because it’s true; the more you experience how true it is, the less funny it becomes.

Production workers need to understand - deeply understand - what they’re producing

There’s a continual flow of developer products and frameworks which all purport to sell one product: we let you write code in languages you like, so you don’t have to mess with that yucky web plumbing, cross-browser testing, and other yucky stuff – just write code all day! And many designers prefer to work at the purely visual level, preferring to live in a world of hip fonts, color schemes, and favorite Photoshop filters. At the micro level, it all makes sense.

And yet, it’s all so wrong. As members of web development teams, our jobs exist to deliver HTML. And some images, too, but really the information’s in the HTML, so that should be the focus, right? The longer I work in web development, the more appalled I become at how little professional web developers know about the core technologies of their craft: HTML and CSS. HTML and CSS should be the lingua franca of web development teams. Designers and developers should huddle around a CSS rule, both feeling at home. Instead, I hear lip service – “Of course I know HTML! And I know enough CSS to get by…”

One of the top reasons developers and designers need to be better informed about their core technologies is that they define the natural laws of the world we inhabit. For example, architects don’t ignore physical limitations when they design buildings, leaving it to engineers to make it work, and (good) engineers don’t produce ugly buildings, hoping someone else can fix it with a paint job. No, beautiful and functional buildings are built by teams that have a deep understanding of what the available materials can support, and they push them to the limit. So, too, with most other professions. Why is web development an exception to this rule?

A comp is just another word for a specification. Why are professional web designers writing specifications, when they should be designing user experiences for the web?

Server code is of no concern or value to a website user, outside of the effect that it creates in their browser. Why, then, do so few experienced web coders care about things like CSS techniques and semantic markup?

We’re doing this today

A good designer/developer workflow is standard practice where I work (Vertigo), and I’m certain it makes a huge difference in both the efficiency and quality of the end result. It requires investments (hiring, training, educating clients, etc.), but I know they pay off many times over. For instance, we’ve been able to respond to changing requirements under very short timeframes in ways that just wouldn’t have been possible if we had designers and developers working in different silos.

So when people tell me that this whole developer/designer workflow thing is just a marketing strategy, I have to disagree. I think it works in places that have tried it, and can be developed in places that haven’t.

Today, tomorrow

One great thing about developing this skill in the traditional (HTML based) world is that it’s very transferable to RIA technologies, like Silverlight. Designers who really get HTML+CSS can pretty quickly tear into Silverlight, often finding it easier because they can substitute vectors for images.

And I really believe that the HTML story is headed that way, too. We can already approximate things by using Canvas or SVG in all leading browsers, then shimming it into IE with things like VML. Eventually I expect IE will (finally) support SVG, and we’ll see the vision of image-less pages fully realized. And then what? Well, at that point, Photoshop comps will be more obviously pointless. It’ll be clear that they’re no more than specifications, and not even very efficient in that job.

My point: an investment in the whole “designer/developer workflow” idea is, I think, a good short term and long term bet.

And it’s an excellent career bet, too. I’m seeing a very clear trend: integrators – those who don’t limit themselves to just designing or developing – are in high demand.
