December 2011 - Posts - Jon Galloway


My wife and I recorded a Christmas song: Twelve Days of the Partridge

Since 2007, my wife and I have been contributing a song to the Familyre Christmas compilation albums. You can download this year's for free, and you can play them all online on Bandcamp. My wife had written original songs each previous year; this year I suggested we just record a classic like The Twelve Days of Christmas.

You can stream or download our song - Twelve Days of the Partridge - here (perhaps while reading this post):

You can stream or download the full album here:

Twelve Days: trickier than you'd think

I originally suggested Twelve Days of Christmas so we could just take it easy this year; I probably should have thought about that a bit first. As an old traditional song, The Twelve Days of Christmas has an interesting song structure. It bounces between 4/4 and 3/4 time, with some pickup notes just to mix things up a bit. It's odd enough that Wikipedia actually has a section talking about the song structure, irregular meter, and other things that give electronic recording people headaches:

The time signature of this song is not constant, unlike most popular music. This irregular meter perhaps speaks for the song's folk origin. The introductory lines, such as "On the twelfth day of Christmas, my true love gave to me", are made up of two 4/4 bars, while most of the lines naming gifts receive one 3/4 bar per gift with the exception of "Five gold(en) rings," which receives two 4/4 bars, "Two turtle doves" getting a 4/4 bar with "And a" on its 4th beat and "Partridge in a pear tree" getting two 4/4 bars of music. In most versions, a 4/4 bar of music immediately follows "Partridge in a pear tree." "On the" is found in that bar on the 4th (pickup) beat for the next verse. The successive bars of 3 for the gifts surrounded by bars of 4 give the song its hallmark "hurried" quality.

The second to fourth verses' melody is different from that of the fifth to twelfth verses. Before the fifth verse (when "five gold(en) rings" is first sung), the melody, using solfege, is "sol re mi fa re" for the fourth to second items, and this same melody is thereafter sung for the twelfth to sixth items. However, the melody for "four colly birds, three French hens, two turtle doves" changes from this point, differing from the way these lines were sung in the opening four verses.

Credits and stuff

My wife Rachel did most of the work on this one: singing, acoustic guitar, ukulele, flute, xylophone.

I played bass, organ, electronic drums (most of which I decided to remove in the mix), and ran most of the recording and mixing. My favorite part was the organ - I used my brother Glen's Nord Electro 2 on a drawbar organ setting with an auto-wah. I love that sound.

We had a bunch of help from our friends:

  • Brian Galloway provided some tasty electric guitar licks.
  • John Ciccolella hooked us up with some awesome fiddling (and help with the arrangement).
  • Andrew Smith did the work of twelve men (twelve drummers drumming).
  • Our daughters and friends sang the children's choir bit on the 12th day.

Nerdy recording stuff

I decided to do the editing and mixing in Audacity. It's picked up a lot of features over the past few years that really helped, including track groups, envelopes, and solid VST effect support. We ended up with a few dozen live tracks, so there were two main jobs in the mix - fixing little timing issues to tighten it up, and getting the levels right so all the parts were audible.

2011-12-25 11h54_27

Handling timing

With a big song like this, a click track is a must. I also made use of a nice plugin called Regular Interval Labels to visually align audio clips. The big benefit it provides is that clips snap to the nearest label, which makes them much easier to align.

Handling volume

For a mix like this, the trick is to make everything audible without clipping. The trap to avoid is saying "Hmm, can't hear the second violin track, better turn it up." Instead, I focus on listening for which tracks can be turned down and still be audible. For a dynamic mix which responds to new instruments joining partway through the song, I really rely on the volume envelopes:

2011-12-25 12h07_46

VST Plugins

I made some electronic drum loops which sounded cool in parts, but it was a pretty busy mix and we ended up pulling most of them out. Rachel and I joked around with putting one halfway through the song, as a brief intermission. It cracked us up, so we decided to keep it. The problem was that it kind of jumped out since it was really punchy, so we ended up thinning it out so it sounded like an old record. We used iZotope Vinyl (free), a classic EQ setting, and dBlue Tape Stop effect (also free). I made the loop in FL Studio Pro - it's intentionally a little out of time so it sounds more natural.

2011-12-25 12h40_33

Last year's track - Glory

If you like this one, you might like the song we did last year:

Merry Christmas!


Working around a PowerShell Call Depth Disaster With Trampolines

I just posted about an update to my NuGet package downloader script which included a few fixes, including a fix to handle paging. That sounds boring, but wait until you hear about the trampolines.

Recursion: Seemed like a good idea at the time

The whole idea of this script is to download package files referenced in the public NuGet package feed. The NuGet package feed is in OData format, and it's paged. That's a good thing - requesting the feed shouldn't make you wait while up to 16,000 (and growing) package descriptions are downloaded, and it shouldn't tie up the server with that either. Each page currently returns 100 items, but that's a service implementation detail that could change at any time.

Paging in OData is done using a <link rel="next" href="http://url.com?$skiptoken=sometoken" /> element at the end of each page, like this:

<?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
<feed xml:base="http://packages.nuget.org/v1/FeedService.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" 
xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
  <title type="text">Packages</title>
  <id>http://packages.nuget.org/v1/FeedService.svc/Packages</id>
  <updated>2011-12-08T23:46:46Z</updated>
  <link rel="self" title="Packages" href="Packages" />

  <entry>
    <id>http://packages.nuget.org/v1/FeedService.svc/Packages(Id='adjunct-Some.Package',Version='1.0.0.0')</id>
    <title type="text">Some.Package</title>
    <summary type="text">First!!!! w00t!!!</summary>
    < more stuff here />
  </entry>
  <entry>
    < ... entry stuff for another package ... />
  </entry>

  < etc - 98 more entry elements />

  <entry>
    <id>http://packages.nuget.org/v1/FeedService.svc/Packages(Id='adjunct-Last.On.The.Page',Version='1.0.0.0')</id>
    <title type="text">Last.On.The.Page</title>
    <summary type="text">Last package on the first page</summary>
    < more stuff here />
  </entry>
  
  <link rel="next" href="http://packages.nuget.org/v1/FeedService.svc/Packages?$skiptoken='adjunct-Last.On.The.Page','1.0.0.0'" />
</feed>

The important bit is that the link element tells you how to request the next page of information. Here it's a link to the same feed with a skip token indicating the last item we've seen. I'm simplifying this a bit - the feed also handles searching, ordering, etc., but I'm just focusing on the paging bit. If you make a big request, the server returns you a chunk of the results with a link to get the next chunk.
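
To make that concrete, here's a rough sketch (my own simplified illustration, not the actual downloader script) of pulling one page of the feed with PowerShell's built-in XML support and finding the rel="next" link:

$feedUrl = "http://packages.nuget.org/v1/FeedService.svc/Packages"
$webClient = New-Object System.Net.WebClient

# Download one page of the feed and let PowerShell parse it as XML
[xml]$page = $webClient.DownloadString($feedUrl)

# Each <entry> element describes one package
$page.feed.entry | ForEach-Object { $_.title.'#text' }

# The rel="next" link (if present) points at the next page of results
$nextLink = $page.feed.link | Where-Object { $_.rel -eq "next" }
if ($nextLink) { $nextLink.href }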

My original implementation was to do something like this:

function DownloadEntries {
    param ([string]$feedUrl)

    # Download all items on the page

    $nextUrl = href from last link on the page

    DownloadEntries $nextUrl
}

DownloadEntries $firstPage

Note: If you're new to PowerShell, that's the syntax for writing a function called DownloadEntries which takes a string parameter called feedUrl.

So DownloadEntries gets passed a URL, downloads all entries on that page, and calls itself with the URL to the next page in the list. This works just fine for a while, but the problem is that every page adds another level to the recursive call depth, meaning we'll run into trouble soon.

Recursion, Call Stacks, and Stack Overflows

Note: I know I'm going to get #wellactually'd to death. I'm bracing for it. I'm not a Recursion Expert of Recursions, and if you are I'm sure I'll hear about it in the comments. Also, you're a big, mean jerk. Actually, I'm happy to learn more, so fire away. Now with that out of the way...

The most common example of recursion in computer science is when a function calls itself. I first stumbled across it as a junior programmer, when I "re-invented" it to solve a tricky problem. I needed to generate a massive Word document report based on a complex object model, and eventually came up with a function that took a node object, wrote out that node's report information, and called itself for each child node. It worked, and when my boss said, "Hey, nice use of recursion!" I hurried off to read what he was talking about.

Recursion can be really useful, but things can get out of hand. In stack-based languages, recursion requires maintaining the state for everything in the call stack. That means that the memory used by a function grows with the recursion call depth. Since memory is limited, it's easy to write code that will exhaust your available memory, and you'll get a dreaded stack overflow error. Wikipedia lists infinite recursion as the most common cause of stack overflows, but you don't actually have to get all the way to infinity to run out of memory - a lot of recursion is enough.

PowerShell's Call Depth Limit

Rather than wait for a stack overflow to happen - or even for a script to use up an obscene amount of memory - PowerShell caps the call depth at 100. That means that, even if there were no memory overhead to each call, the above function would fail at 100 pages. Doug Finke shows a very easy way to demonstrate this:

function simpleTest ($n)
{
    if($n -ne 0) { $n; simpleTest ($n-1) }
}

PS C:\> simpleTest 10
10
9
8
7
6
5
4
3
2
1

PS C:\> simpletest 99
99
98
97
96
95

The script failed due to call depth overflow. 
The call depth reached 101 and the maximum is 100.

Side Note: PowerShell has the same call depth limit (100) on x64 and x86

Also from Doug's post:

Recursion depth limit is fixed in version 1. Deep recursion was causing problems in 64bit mode because of the way exceptions were being processed. It was causing cascading out-of-memory errors. The net result was that we hard-limited the recursion depth on all platforms to help ensure that scripts would be portable to all platforms.
- Bruce Payette, co-designer of PowerShell

Tail-Recursive Calls and Trampolines

There are ways to work around this problem, but to me they all seem to boil down to replacing deep call stacks with loops - either automatically at the language level or via fancy pants programming. In my case, I needed to convert from recursion to trampoline style - albeit a very simple conversion. With the disclaimer that I'm not an expert here, I thought I'd give an overview of what I read about trampolines, then simplify it down to the basic trampoline solution I ended up with.

Tail Call Optimization

In some cases, languages can optimize their way around this problem. One example is tail call optimization. A tail call is a call made immediately before returning from the function, meaning that there's no remaining logic in the calling function and thus no need to retain its stack frame. A tail-recursive call is simply a tail call a function makes to itself.

Languages can effectively unroll this, since in a tail call situation they don't need to maintain the nested call stack - the calling function has completed all of its logic. If you're using one of those languages and make sure that your recursive calls are tail-recursive (so no additional logic is performed after the recursive call), the language will handle this for you and you'll end up with the benefits of recursion without the baggage of building a big call stack.
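
Just to illustrate the structural difference (this is my own example, and PowerShell won't actually optimize either form), here's what a tail-recursive call looks like versus one that still has work left to do after the recursive call:

function SumTail ($n, $acc) {
    if ($n -eq 0) { return $acc }
    return (SumTail ($n - 1) ($acc + $n))   # tail call: nothing left to do after it returns
}

function SumNotTail ($n) {
    if ($n -eq 0) { return 0 }
    return $n + (SumNotTail ($n - 1))       # not a tail call: the addition still runs afterwards
}

A language with tail call optimization could run the first form at any depth in constant stack space; the second form has to keep every frame around just to finish the addition.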

Matthew Podwysocki runs through this in more detail with a comparison of how this works in F# and C# (with an interesting difference between 32 and 64 bit operation). He shows an example of a case where a single line of code after the recursive call breaks the tail call optimization, resulting in a stack overflow.

It's interesting to note that the common language runtime supports tail calls, so it comes down to whether the language and compiler make use of it. PowerShell doesn't support tail call optimization, so I needed to alter my code. That's where I learned about trampolines.

Trampolines

The common pattern I read about for dealing with this is to manage state yourself using a trampoline: a piece of code that repeatedly calls functions. And, depending on the language and the recursive requirements, that little "piece of code that repeatedly calls functions" can get pretty complex.

If my implementation were making deep use of recursion in a way that truly needed to emulate nested calls, that trampoline implementation would need to be pretty sophisticated. While that's implemented differently in different languages, in modern .NET programming I'm reading it's usually done using a function factory or trampoline class which calls functions for you; there are several posts out there showing how to do that.
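
Just to make the idea concrete, here's a toy version in PowerShell (my own illustration, not taken from those posts): each step returns the next step as a scriptblock instead of calling it directly, and the trampoline is the loop that keeps invoking whatever comes back.

function Invoke-Trampoline ([scriptblock]$next) {
    # Keep bouncing: invoke the current step, and if it hands back another
    # scriptblock, invoke that too - a flat loop instead of a deepening stack.
    while ($next -is [scriptblock]) {
        $next = & $next
    }
    return $next   # whatever the final step returned
}

function CountDownStep ($n) {
    if ($n -eq 0) { return "done" }
    Write-Host $n
    # Return the *next* step rather than recursing into it
    return { CountDownStep ($n - 1) }.GetNewClosure()
}

Invoke-Trampoline { CountDownStep 1000 }   # runs well past the 100 call depth limit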

In my simple case, though, I could just convert the recursive call so that it's called from a looping construct, returning state from each call. I'd call this option a strategic retreat, since it's no longer recursive at all. But, by calling it a refactoring to implement trampoline-style programming, I'll still keep my pride.

Less Talk, More Code

Right.

So since the reason I was using recursion was just to continue calling the next page after completing the work on the current one, my trampoline can be implemented as a simple loop around a function that returns the URL of the next page. Remembering that we started with something like this:

function DownloadEntries {
    param ([string]$feedUrl)

    # Download all items on the page

    $nextUrl = href from last link on the page

    DownloadEntries $nextUrl
}

DownloadEntries $firstPage

We can rewrite that as this:

function DownloadEntries {
    param ([string]$feedUrl)

    # Download all items on the page

    $nextUrl = href from last link on the page

    return $nextUrl
}

$feedUrl = $firstPage
while($feedUrl -ne $null) {
    $feedUrl = DownloadEntries $feedUrl
}

This can now handle unlimited pages, since it's just a simple loop. The only downside is that I can no longer feel quite as sophisticated, since while I call that a trampoline, you'd probably call it a boring loop that calls into a function.

Here's the full script so you can see it in context:

NuGet PowerShell Downloader Update - Adding Failed Download Retries, Better Paging Support

I previously posted a NuGet PowerShell downloader script, which is handy for downloading a local NuGet repository. There are several common uses:

  • It's used in corporate environments where network policies prevent developers from accessing NuGet.org
  • It's useful in cases where development teams want to build a customized feed with specific packages
  • It's a great backup for presentations involving NuGet, especially on overloaded conference wi-fi

Note: Now, look, I'm happy if the first two help you out, but that's not what I wrote it for. I wrote this because I was tired of watching speakers at conferences have problems installing NuGet packages and complaining about the slow conference wi-fi. Of course it's slow, it's always slow (it's not lupus, it's never lupus!). If you're doing a presentation that involves you installing a NuGet package, part of your prep needs to be verifying that you have the required packages on your machine.

Matthew Podwysocki recently told me he was getting a call depth overflow error with the script. We looked at it, and it turned out that the problem was that he was downloading significantly more packages than had been available when I first wrote the script. I'd originally tested against several hundred, but there are over 16,000 available now (3,900 unique). I updated the script to fix that, and while I was at it I added in support for retry if a package download fails for some reason. I'll describe the retry mechanism here and save the paging bit for the next post, since that's a bit more in-depth.

Handling Download Retries in PowerShell

Sometimes, downloads fail for any number of reasons. When you're downloading hundreds of files, chances of a failed download go up a bit, and I'd gotten a few requests to handle that better. The dumb-but-working workaround was to just run the script again, since it only downloads packages you don't have locally, but that's not elegant - plus, you might not notice that some of them had failed.
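
As an aside, the "only downloads packages you don't have" behavior is really just a file-existence check before the download, something along these lines (the variable names here are placeholders for illustration, not necessarily the ones in the script):

if (-not (Test-Path $saveFileName)) {
    $webClient.DownloadFile($url, $saveFileName)
}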

A slightly less dumb workaround is just to retry the download a few times, with a limit to prevent an infinite loop if the file just can't be downloaded (e.g. bad link or missing file). This is pretty easy to implement with a do/while loop around a try/catch, like this:

[int]$trials = 0
do {
    try {
        $trials +=1
        $webClient.DownloadFile($url, $saveFileName)
        break
    } catch [System.Net.WebException] {
        write-host "Problem downloading $url `tTrial $trials `
                   `n`tException: " $_.Exception.Message
    }
}
while ($trials -lt 3)

The general idea:

  1. Start a counter ($trials)
  2. Increment the counter
  3. Try to download the file
  4. If the download succeeds, break out of the retry loop - we're done here
  5. If we got a WebException, write out a message
  6. If the counter's less than 3, try again. Otherwise, give up.

I was conflicted on the exception handling, but decided that I really only want to retry if I know that the failure was due to a WebException. If there were an unanticipated DownloadHasSetTheBuildingOnFireException or a BotNetUnleashedSecurityException on the first try, I'd rather not blindly repeat it. Thoughts?

PowerShell 1.0? Forget about that try/catch part.

Try/Catch requires PowerShell 2.0 or later. If you're on PowerShell 1.0, you've got three options:

  1. Just use a trap and don't do retries (a minimal sketch of this follows the list)
  2. Rewrite this to use logic inside the trap block
  3. Use one of the clever solutions out there that simulates try/catch in PowerShell 1.0
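
To show what option 1 might look like, here's a minimal sketch (my own illustration, not the full solution linked below) that just traps the WebException so one bad download doesn't stop the whole run - the URL and path are hypothetical placeholders:

$webClient = New-Object System.Net.WebClient
$url = "http://example.com/packages/Some.Package.1.0.0.nupkg"   # hypothetical
$saveFileName = "C:\LocalNuGet\Some.Package.1.0.0.nupkg"        # hypothetical

trap [System.Net.WebException] {
    Write-Host "Problem downloading $url `n`tException: $($_.Exception.Message)"
    continue   # log it, suppress the error, and move on to the next statement
}

$webClient.DownloadFile($url, $saveFileName)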

If you are on PowerShell 1.0, as is the hapless Matt Podwysocki, you can see what we came up with here.

Converted to a gist

After having made a couple of updates to this script and listing it in more than one place, it became obvious that this is exactly what a gist is for. If you haven't seen them, a gist is a single snippet of code that's posted on GitHub. It can be embedded in a blog post, but more importantly other users can comment on it and fork it, and I can update it. I've updated the original post to reference this gist as well, so if people stumble across the older blog post they'll automatically get the latest, greatest version of this script.

Script and Usage Instructions

I wrote up a walkthrough on the original post; usage hasn't changed with this update.

The paging part is interesting in its own right. We'll talk about that in the next post.

A look back at the ASP.NET site through the years

While writing up the release post for the new ASP.NET website, I started thinking about how the site's changed over the years, and that led me on a brief excursion through the Wayback Machine history of the ASP.NET home page. Here are some highlights:

Prior to 2001, the site was owned by ASP Computer Products, who appear to have been marketing an External Pocket Print Server for Fast Ethernet. Sadly, I missed my opportunity to go see it at Comdex 2000.

The first ASP.NET-related page available is from 2001, and it's pretty much just a splash page.

ASP.NET site - original

By 2002, the site had daily articles, Forums, Tutorials, a Control Gallery, and was advertising Web Matrix - the first one, built to help develop ASP.NET Web Forms.

ASP.NET Site - 2002

2005 saw some big changes in November, for the release of Visual Studio 2005 and ASP.NET 2.0. I fondly remember that release - it was huge! I drove from San Diego to San Francisco to attend an early adopter training week, then spent a good year trying to get my company to buy into updating some of our sites to ASP.NET 2.0.

ASP.NET site - 2005

In 2007, there was an update which brought in the blue color as well as the rounded corners and gradients I was complaining about earlier.

ASP.NET site - 2007

After that, things didn't change much until March 2010, our previous release.

ASP.NET - Home - New

And of course, our newest release just went live today, December 1, 2011.

2011-12-01 17h00_36


ASP.NET website redesign: Now with less Beta, more Live

Back in October, I posted about the beta release of the ASP.NET website redesign at beta.asp.net. Since then, we've listened to a lot of great feedback, ruthlessly evaluated a huge catalog of content, and continued to reorganize things so you can more easily find useful content. That redesign just graduated from beta to live today.

2011-12-01 17h00_36

This redesign was focused on the following goals:

  • A newer Information Architecture (IA) that scales with different types of content. Trying to get you somewhere useful quickly.
  • Content organized into relevant topic areas (Overview, Videos, Tutorials, etc.) to make information easier to find and to learn a technology.
  • Improved onboarding experience – Developers new to ASP.NET should find it easier to get started and download what they need.
  • Important Samples and Tutorials are positioned prominently in the structure of the site so that they are easier to find.
  • Textual Tutorials are as important as videos - We've heard people want text tutorials more than videos, so we're finding a balance between these two kinds of content.
  • Improved Social Integration – Community info, pulling from Twitter, Facebook and blogs.
  • A less cluttered user experience to get you where you need to go in fewer clicks.
  • Open Source and Samples - We're looking for new ways to showcase great open source projects and excellent samples.

Scott Hanselman and I planned that his blog post would cover the site overview while I'd focus on some specific details, so be sure to read Scott's post about the new site for the big picture.

Note: It's going to sound like I'm taking credit for actually doing all this work. I've been very busy, but there were a lot of people involved. There's a dedicated development team working on it, and the closest I got to code or production access at any time was when I updated content in the CMS admin interface (yes, still Umbraco, and I'm very happy with it).

Information Architecture focus

Back in the early 2000's (maybe as early as 2004?) someone pointed me at some online workshop materials from Adaptive Path which explained the process of restructuring a website's navigation so that it's focused on the user's needs rather than the business's content. This was a revelation to me, and incredibly helpful, as I was a tech lead on a project which moved a 20,000-page website (comprised of 20,000 classic ASP pages, each a separate file) to a home-grown CMS running on ASP.NET 1.1.

The point is that what's behind the curtain - the different business units involved, the work that went into producing some aging content a few years ago, the work (administrative and development) involved in changing the navigation structure - none of it matters to me when I'm trying to find information I care about. When I was interviewing for this job two years ago, I had plenty of experience - and frustration - with the ASP.NET site. I actually suggested that if the site couldn't be significantly improved, it was time to get rid of it completely. They still hired me, so I've continued to approach the ASP.NET site in that way - it needs to be useful, or it's not worthwhile.

The steps:

  1. Get the site on a modern CMS (Umbraco, powered by ASP.NET of course) which could allow us to begin efficiently moving and editing content.
  2. Restructure the site navigation around user tasks rather than how our content had been historically grouped.

We took the first step with a site update in May 2010, and began on the second step. We broke out separate content around different technology focus areas (Web Pages, Web Forms, MVC) and started putting content maps on the landing pages for each of those technology areas. Those evolving content maps began to define how the site should be structured.

So, here's a look at how the actually useful content in a page on the ASP.NET site has evolved over the past year, leading up to this new release.

Note: Phase 2 below is not the final state, so hang on a second.

  • Start: Seven links at the top, each of which took you to pages with lists of content, usually organized in a way that made them hard to navigate.
  • Phase 1: Surfacing some top links across several categories, but if you click on a "more" link, you go to a huge list of stuff, often hard to navigate. Cluttered with a lot of thumbnails which don't really add any value.
  • Phase 2: Content map added, which is kind of a wall of text, but at least lists out some of the top content by topic.

The content map was kind of a band-aid - it tried to show the structure we wished the site had, but once you clicked through on any of the links the dream collapsed, and there was no real way to navigate around through the pretend structure.

We refined that pretend structure while it was just content in a page, focusing it on subject matter in a context (e.g. Security in ASP.NET MVC, Performance and Caching in ASP.NET Web Forms). Then, with this new release, we've made that pretend structure the real content structure.

The Navigation Tab Structure

The information architecture starts with a common tab format across the three technology focus areas:

Within each of these sections, we've divided content by use:

  • Overview - This provides the main content map, linking to content in other tabs if appropriate (e.g. if you're interested in MVC Security and there happens to be an MVC Security section in the Videos tab, we'll link you to it). This content map also links to top content on MSDN as appropriate.
  • Tutorials - These are generally multi-part walkthroughs which will teach a subject step by step.
  • Videos - We separated videos out for each technology area, because we consume video differently than we consume textual content. If I'm looking for reference information on a topic and am in skim mode, videos aren't relevant. If I have some time to sit back and watch as someone demonstrates something, I only want to see videos. So, we separated them out, but we linked to them from the overview where appropriate.
  • Samples - This tab lists sample applications with working source code, so you can see things in context. The samples include content from both inside and outside of Microsoft as appropriate, so we include popular open source applications which we think are useful as samples.
  • Forum - The ASP.NET forums have been a great way to discuss issues and get help for a long time, but they've been a bit buried. I'll confess I don't think about them as the resource they are until I stumble across them in search results. We've included a tab which shows the dedicated forum for each technology area.
  • Books - The books section was kind of off on its own before, too. I updated our books list and broke it into chapters which I thought were useful. Since this is now quick to update, I can go in and add new books as they become available.
  • Open Source - We listed some top open source resources in each technology area which will help you in building your applications. This isn't a listing of open source applications running on ASP.NET, it's libraries and tools which will help you as you build your applications. It's of course not complete and can't include every open source resource available, but Scott Hanselman made a best effort at listing open source resources we'd recommend to fellow developers.

Oh, and the structure isn't pretend anymore

The landing page for a technology focus is the Overview tab, and you can see that we've subdivided the content in these areas into Chapters. Here's that same MVC page with that change:

You can really see it as you click into the content, though. That structure follows you around through the site, visible in the breadcrumb and table of contents in the right rail (highlight added to the image to point it out).

Multi-Part Article Navigation

For most of the content, the chapter format worked just fine. For a page showing a video about a Web Pages Security topic, we'd show the Web Page TOC in the right rail with that part of the Security section highlighted. There was one place where we needed a third navigational level, though: multi-part articles.

Most of the multi-part articles are in the tutorials section, since tutorials are often broken into a series of pages. As you work through one of these tutorials, you'll find that your breadcrumb looks something like this: Home/MVC/Tutorials/Chapter 4. Getting Started with EF using MVC/Creating a More Complex Data Model for an ASP.NET MVC Application (4 of 10)

In this case we're in the MVC technology area, in the Tutorials tab, in the Chapter 4. Getting Started with EF using MVC section of the Tutorial table of contents, and we're on the 4th page of that tutorial.

To make the context clear, we mark that kind of content as multi-part, which shows a different kind of navigation in the right rail:

2011-12-01 13h38_16

This shows that you're in a multi-part article, and clearly shows your context.

Content Review

The next step was mapping a lot of content to this new structure and adjusting the structure when we found gaps or additional good content to surface. The site dates back to 2002(!) and has hosted a ton of content during that time. Some of that old content is irrelevant or obsolete, but some of it continues to be useful to developers who are maintaining sites running on previous versions. Additionally, there was useful content which had become orphaned - unreachable via the site navigation - as a result of previous redesigns. We had literally thousands of pages of content, some filled with long lists of content, to evaluate.

My thought process was this:

  1. Is this information useful? If a developer were asking me for information about this subject today, would I recommend this particular page / video / tutorial? If not, delete it.
  2. Where does it fit in our site structure? If it doesn't, does the site structure need to change?
  3. Is this information complete, or would I recommend more information? If there's some available - including on MSDN - should we reference it?

I worked with Scott Hunter, Scott Hanselman, ASP.NET team PM's for each of the focus areas, and the excellent writers and editors on the Web Platform & Tools Content Team (who contribute to and curate content both on MSDN and ASP.NET) to evaluate literally thousands of pages. We did this in two passes:

  1. We split up the lists and evaluated each of the individual pages, recommending cut or keep, and if we were to keep it, where it should go.
  2. After completing that step, we had a series of bug bashes, in which individuals walked through a content area and reported everything they thought should be changed. This was iterative - bug bash / fix / repeat.

The site was open during this whole cycle, and we got a lot of good feedback via social networks, the dedicated ASP.NET website UserVoice forum, and from MVP's and ASP.NET Insiders. There's a lot of content here, and it will always be a work in progress, but we think the combined effort has helped to surface a lot more useful content in a way that's easier to find. Please let us know how we're doing - this site's only useful if it's providing you the content you're looking for.

Cleaner Markup, Faster Page Loads

The site was already pretty highly optimized, but we paid attention to the site performance and were happy to see page load times improve across the board.

More Semantic Markup

The previous design had a lot of dated (quaint?) design elements, like rounded corners and gradients, which at the time required a lot of extraneous markup for support across a wide range of browsers. There were a lot of wrapper divs which were there only for styling - kind of ignoring the most important element in HTML: text!

Side note: I find it funny that, now that modern standards natively support rounded corners and gradients, they're not cool anymore.

You can see a big difference in the markup just by looking at the beginning of the body tag through the end of the header items. Here's the old design:

<body>
    <form method="post" action="/webmatrix/tutorials/15-caching-to-improve-the-performance-of-your-website?" id="form1">
    <div class="aspNetHidden">
        <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUENTM4MQ9kFgJmD2QWAmYPZBYCZg9kFgJmD2QWAgIFEGRkFgICDQ9kFgJmD2QWAmYPZBYCZg8WAh4HVmlzaWJsZWdkZKrDY1NG1HjNDVKzlZSX2Dy7poMS" />
    </div>
    <div id="content_container" class="content_container">
        <div class="header_container">
            <div class="header_top">
                <div class="header_top_right">&nbsp;</div>
            </div>
            <div class="header_content">
                <div class="header_content_right">
                    <a href="http://www.asp.net" title="Home Page">
                        <img class="logo" style="border-width: 0px;" alt="" src="http://i2.asp.net/common/header/logo.png?cdn_id=22" title="Microsoft ASP.NET" />
                    </a>
                    <div id="WLSearchBoxDiv">
                        <div id="WLSearchBoxPlaceholder">
                            <input class="search_box" id="WLSearchBoxInput" disabled="disabled" name="WLSearchBoxInput" value="Search" /><input class="search_button" id="WLSearchBoxButton" type="button" value="" name="WLSearchBoxButton" />
                        </div>
                    </div>
                    <div id="mainnav">
                        <ul class="nav_main">
                            <li class="first"><a href="/home">Home</a></li><li><a href="/get-started">Get Started</a></li><li><a href="/downloads">Downloads</a></li><li><a href="/web-pages">Web Pages</a></li><li><a href="/web-forms">Web Forms</a></li><li><a href="/mvc">MVC</a></li><li><a href="/community">Community</a></li><li><a href="http://forums.asp.net">Forums</a></li>
                        </ul>

Yes, that was a bit of VIEWSTATE in there... Also, the entire page was full of wrapper divs that were there just for formatting, and if you accidentally messed them up when editing content (oops), the whole page got wacky.

Here's the header markup for that same page with the redesign:

<body class=''>
    <div class='allcontent '>
        <div class="header-wrap">
            <div class="header">
                <a href="/" class="logo" title="The Official Microsoft ASP.NET Site">The Official Microsoft
                    ASP.NET Site</a><ul class="nav-main">
                        <li><a href="/">Home</a></li><li><a href="/get-started">Get Started </a></li>
                        <li><a href="/downloads">Downloads </a></li>
                        <li><a href="/web-pages" class="selected">Web Pages </a></li>
                        <li><a href="/web-forms">Web Forms </a></li>
                        <li><a href="/mvc">MVC </a></li>
                        <li><a href="/community">Community </a></li>
                        <li class="last-child"><a href="http://forums.asp.net">Forums</a></li></ul>

I see that three goals are achieved in unison here:

  1. Cleaner design means less HTML, so the pages load and render faster
  2. Less junk/formatting HTML means the content is easier to maintain
  3. A focus on content over superfluous design elements usually means that the content is easier to read, as well

Better Performance, YSlow Improvements

Again, the focus of this redesign was really information architecture and a design refresh, but we did pay attention to site performance and best practices throughout because... well, it's the ASP.NET site and we care about this stuff. We want to be proud of this site, and we want you to be proud of it, too.

Scott Hunter, Scott Hanselman, and I were on several calls with the development team every week, and we constantly badgered them about performance tweaks, practices, etc. We started with a B YSlow rating, which is actually pretty good as far as most sites are concerned.

2011-12-01 09h57_16

We ended up with an A, with a discussion about any recommendation we couldn't meet. The only mark against us in the page below is due to some local (non-CDN) images, which was a measured decision to allow content editors to edit them in the CMS. Check the YSlow marks across the site yourself - it's a fun exercise.

2011-12-01 09h58_56

Looking at better performance and best practices ensured we were doing things like setting correct headers, using sprites effectively, minimizing and bundling resources, etc. Again, I did absolutely none of that development work, I just sent e-mails about it. ;-)

We had a team doing weekly performance testing across the site and making recommendations on best practices and opportunities to improve. Here's one of the pretty graphs from one of those reports, showing that page load times were significantly reduced site-wide, often cut by 50% or more.

2011-12-01 15h17_44

What's Next?

We're definitely not done. We put a lot of things on the post-launch list, many of which are now possible as CMS edits. I mentioned that we've been asking for feedback and using your input to tell us what's most important. Here are some of the things we're tracking:

HTML5 Video

It's highly requested, and it's important to us, too. I prototyped a Silverlight-with-HTML5-fallback video solution which is in the "works on my machine" state. I went back and forth on which to make primary and which to make fallback, but since it's leveraging standard HTML content fallback (no Javascript or server logic required) it's easy to switch.

As written, this would continue to use the existing player (along with features like time based commenting and cross-browser full screen support), but if Silverlight's not installed or is disabled, the content's shown using native HTML5 video support. If that's not available, we'll show some sort of message that indicates you need Silverlight or a newer browser.

We know this is important so that the content's available across as many devices as possible, and it's high on the list.

Better Mobile and Smaller Device Support

I've blogged recently about how ASP.NET MVC 4 will use CSS media queries and adaptive layout to work well on different sized browsers, and adding adaptive layout is in the works for the ASP.NET site, too.

Fewer / smaller ads

Yes. We hear you, and we're pushing for it. Keep voting for it, it means a lot more when you ask for it than when I do.

Bring back old content

Aha! We've done that! Done. Taking the rest of the week off.

Your Feedback, give it to us

While we do watch for blog comments, by far the most effective way to get us feedback is to vote on our www.asp.net website feedback page on UserVoice. Let us know how we can continue to improve!
